BI 188 Jolande Fooken: Coordinating Action and Perception

Brain Inspired
May 27, 2024 | 01:28:14

Show Notes

Support the show to get full episodes and join the Discord community.

Jolande Fooken is a post-postdoctoral researcher interested in how we move our eyes and hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple, and we do it all the time to make meals for our children day in and day out, and day in and day out. But it becomes way less seemingly simple as soon as you learn how we make various kinds of eye movements, how we make various kinds of hand movements, and how we use various strategies to do repeated tasks. And like everything in the brain sciences, it's something we don't have a perfect story for yet. So Jolande and I discuss her work, thoughts, and ideas around those and related topics.

0:00 - Intro
3:27 - Eye movements
8:53 - Hand-eye coordination
9:30 - Hand-eye coordination and naturalistic tasks
26:45 - Levels of expertise
34:02 - Yarbus and eye movements
42:13 - Varieties of experimental paradigms, varieties of viewing the brain
52:46 - Career vision
1:04:07 - Evolving view about the brain
1:10:49 - Coordination, robots, and AI


Episode Transcript

[00:00:03] Speaker A: But as I was looking at all these players doing the task, I was like, some hit it like if you think of a fly ball, it comes down, like way when it comes down, and others hit it like on the way still up as it was entering this zone where they could intercept it. And so I was looking at this and I thought, that's just interesting. I think that's that whole circle idea I don't like anymore, because to me it sort of implies sequentiality, like it implies this. And I think it's a coordination. I think we're constantly coordinating these processes. [00:00:49] Speaker B: This is Brain Inspired. Hey, everyone, I'm Paul. Jolande Fooken is a post-postdoctoral researcher, or soon to be, anyway. She is just completing her first postdoctoral position and will be starting a new one. Jolande is interested in how we move our eyes and how we move our hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple and that we do all the time, for example, to make meals for our children day in and day out and day in and day out. But hand-eye coordination becomes way less seemingly simple as soon as you learn that we make various kinds of eye movements, slow tracking eye movements, fast, jumpy eye movements, tiny jittery eye movements, and that we make various kinds of hand movements. There's lots of degrees of freedom for how we move our limbs and joints and fingers through the world to accomplish tasks. And of course, we use various strategies to accomplish various tasks. And therefore, like just about everything in the brain sciences, it's something that we don't have a perfect story for yet. So Jolande and I discuss her work, thoughts and ideas around those and related topics.
As always, you can learn more about Jolande in the show notes at braininspired.co/podcast/188, and as always, you can learn how to help support Brain Inspired via Patreon to get full episodes, join our Discord community or just show your appreciation. Go to braininspired.co to find the link for Patreon support. Thank you, Patreon supporters. Okay, here's Jolande. Hello, old friend. Are we friends at this point? Yeah. Are we beyond friendly acquaintances? We've hung out at conferences. I've, I've played with your kids. That makes us friends, right? [00:02:47] Speaker A: Yeah. That makes us friends. Yeah. [00:02:49] Speaker B: Okay. We're finally doing it. We've been planning this for a long time and for numerous reasons. As always, it got kicked down the road. So we most recently hung out, I believe, at, like, last year, at a conference specifically for eye movements. Isn't that where we last physically hung out or. Oh, no, it was at SfN. That's right. Yeah. Okay. Well, in any sort of, like, more professional capacity, we hung out at the eye movements conference. And the reason why I wanted to point to that is because my background is in the eye movements. That's why I was invited to an eye movement conference to do what I was doing there. But I don't do eye movements anymore, and so part of what I want to discuss with you today is eye movements in general. But also, I was thinking more about this. These days, I work with a set of, quote unquote naturalistic data. A dataset that I'm working with is recorded in various brain areas, while a mouse is doing a quote unquote naturalistic thing. And naturalistic tasks are all the rage these days. But you've been doing it since the beginning, pretty much, right? [00:04:00] Speaker A: Yeah, I think so. I mean, that depends what you call the beginning, but definitely my eye movement beginning. [00:04:07] Speaker B: Not your. Not your physics degree, but, yeah.
[00:04:11] Speaker A: So, yeah, it took me a while to find what, what I wanted to do, like, whether I wanted to do research and what I wanted to do. But when I started working on eye movements, I immediately liked it because it sort of bridges several areas. And it's also relatable when I talk to people outside of academia, which is something I like to do. So. [00:04:36] Speaker B: Well, your specific research is relatable. Not all eye movement literature, not all. [00:04:41] Speaker A: Eye movement research, and that's actually true. And that's always fun. At that eye movement meeting that you mentioned, when you go, you realize all the ins and outs that people work on when they work on eye movements, from the muscles and the synapses on the muscles, all the way to brain areas in monkeys, mice, and so on. Yeah, but I work on humans and on eye movements in naturalistic behavior and on eye hand coordination quite a bit. [00:05:11] Speaker B: Yeah. Well, it's fun to be able to just bring up baseball, probably as. [00:05:17] Speaker A: And so that's sort of how I got started. So the lab I joined to do my PhD, or I guess my master's already, was an eye movement lab, looking at smooth pursuit, in particular. So, smooth pursuit is when we look at a moving target and the eyes sort of follow it really smoothly. And it's actually something you can't do without visual input. So if you try to do it, you'll be jumping around. Your eyes will make those fast movements of the saccades that we make all the time. [00:05:48] Speaker B: Let's just define saccade for people because my parents came to my PhD defense. And I use the word saccade just liberally, you know, without defining it. And then afterwards, they're like, it was great, but I didn't know what a saccade was until halfway through, you know? [00:06:04] Speaker A: Yeah. [00:06:05] Speaker B: So, so it. 
And even backing up more, maybe we'll just get into, like, why eye movements have traditionally been interesting, I mean, from my perspective. So I studied saccadic eye movements, and a saccade, as opposed to smooth pursuit, is when your eye is. It's like a ballistic quick eye movement, right? [00:06:23] Speaker A: Yeah, although. Although there are people in the field that would hang you for the term ballistic, so. [00:06:29] Speaker B: Oh, okay. [00:06:30] Speaker A: I think it's like the high, the fast velocity is sort of the key part there, and that you shift your line of sight or your gaze from one position to another position. And the odd thing that we often don't think about is we do it all the time. We do it, I don't know, the number is two to three seconds. [00:06:51] Speaker B: Two to three per second, right? [00:06:54] Speaker A: Per second. Yeah. Two to three saccades per second. So. And yet we perceive the world as stable and as sort of this one thing. And so it's amazing if you think about it. [00:07:05] Speaker B: It's crazy. Yeah. And this is kind of the thing that a lot of people start with is like, how do we have this stable perception when the visual input on our retinas is just constantly jumping around? [00:07:17] Speaker A: Well, and that is the thing. We should also mention the foveal vision. So high acuity vision is only really accurate in a very small part. And so that's about like, if you put your hand in front of you, it's about a thumb thick. Like, that's what you see, really highly accurate. And everything else is sort of blurred. And then we point this little area of high acuity around the world, and yet we feel like we see everything sort of stable. Yeah. And that's just interesting. I think even this fact is interesting. And then smooth pursuit is when you look at moving objects.
Now, your eye actually can sort of, or the line of sight, the gaze can lock onto the moving object and smoothly follow it, and the velocity matches the velocity of the moving object. And then, of course, those two things work together. So if you have something like a baseball or fast moving object, you won't be able to track it smoothly with your eyes. You'll track it and make a saccade. And also, if we're in a baseball game, you'll move your head as well, which is something eye movement researchers don't like to think about. [00:08:32] Speaker B: Traditionally, that's true. Well, and still we, they don't, I don't, I won't say we. [00:08:37] Speaker A: Even myself, I have to say I haven't done much with head movements, even though I think I should. [00:08:42] Speaker B: Well, there's always. That's all. There's already, like so much going on. So this is go, this goes back to, like, why I brought up naturalistic in the beginning because I wanted to ask you, and I'm just kind of jumping the gun here so we can come back to this. But I, I wanted to ask you how, how you feel about the term naturalistic because a lot of people, you know, recently I was at Cosyne and the term naturalistic is just thrown around now when, for example, my data set is just a mouse just hanging out in a box walking around. Yeah. I mean, what is naturalistic about that? [00:09:13] Speaker A: Yeah, I think so. I feel good about it when the task you're doing is in a somewhat controlled environment and it's mimicking some natural behavior. So for my PhD, I did a lot of work on manual interception. [00:09:29] Speaker B: And what's that? Define that. [00:09:32] Speaker A: Yeah. So you are seeing a moving target and you're moving your hand. In our case, it was the index finger to hit it and the target or the visual object moved across the virtual screen. But you're doing a hand movement. So not a cursor movement or something. So you're doing an actual hand movement and you catch it.
So it's not actually catching, but you hit it by moving your finger to the screen. And so there's aspects of, like, natural behavior that you're actually moving your hand. [00:10:04] Speaker B: So sorry to interrupt, but just to make it clear, also, this is like an analog of hitting a baseball, essentially, right? It's different essentially. [00:10:11] Speaker A: Yes. [00:10:12] Speaker B: Some of the similarities, but just to ground people in case they're. [00:10:15] Speaker A: Yeah, exactly. Of either catching. So we started by looking at something that would be more an analog of catching a fly ball, maybe, although there you have all the running and stuff and then transitioned more towards what would be hitting and integrating that decision-making process. [00:10:32] Speaker B: But you're still just as well touching. [00:10:34] Speaker A: Us, but you're still just touching the screen. But why would I argue it is naturalistic? Maybe I continue my story about starting in the lab. So I did my PhD at UBC working with Miriam Spering, who had an eye movement lab there. And Miriam published a paper from her PhD which was called "Keep your eyes on the ball." And it was outlining the benefit of smooth pursuit for motion prediction. So really briefly, her finding was that when you pursue a moving target, you were better at predicting the path it would take after it was blanked compared to when you're just seeing it with peripheral vision. And you were fixating. [00:11:22] Speaker B: So you see the beginning of the trajectory where the ball is headed, and then it disappears and you have to guess or track or. [00:11:30] Speaker A: Yeah. So in her case, it was a goal, like a visual goal, and they had to predict whether the moving ball would hit or miss this goal. And so it was blanked. And then you had to say, okay, it would actually hit the goal or miss the goal. And she found that when you're pursuing the target, you were better at this motion prediction.
And the title of that article was, "Keep your eyes on the ball," and then some, blah, blah, blah, blah. And what actually happened was that the UBC baseball coach approached her and said, I want to work with you. [00:12:06] Speaker B: Oh, I didn't know that. [00:12:08] Speaker A: Yeah. And so something that was really cool about this collaboration was that he was very aware of the academic timeline, and he didn't expect results within, like, days. But he was like, no, I want to start a collaboration between the baseball team and the lab. What a great attitude. [00:12:28] Speaker B: How does that happen? [00:12:28] Speaker A: I know. It was great. [00:12:30] Speaker B: Yeah, yeah. [00:12:31] Speaker A: And then, so eventually, he transitioned out of the coaching role and became more of a manager role. And then the collaboration, also interestingly, fizzled out because the new coaches were more like, we still haven't seen results, sort of thing. But that was just when I joined the lab. And so I started working with these baseball players, and that was just so cool because they're just. Yeah, very incredible participants. They have great eye movements to start with. They're very. [00:13:06] Speaker B: Yeah. [00:13:06] Speaker A: What does that mean? They just, when they see a moving target, their eyes lock onto it, and they track it very smoothly. They're very, like, they understand right away what's being asked of them. And so that comes back to what I think is naturalistic. So we then said, okay, we did a couple things with them. First, we did just a full vision test with them where we took their acuity and the color vision and this and that. That was interesting as well, because they were mostly above, like, average acuity as well. So that seemed to indicate that it helps. They need to have a good visual system to play baseball. Or I've always been asked this question, maybe they have a superior or superior.
I don't know if that's the right word, but a good visual acuity because they play. I don't know. But anyway, so then we designed this task where they're tracking and it was a simulated fly ball that would again, disappear after launch. And then they had the whole entire right hand side of the screen to hit it wherever they wanted to and whenever they wanted to. So all we told them was, you want to catch, in a sense, this virtual ball. And we didn't have to explain much to them. And I think then your task is naturalistic. If you don't have to say, you know, like, keep staring at this dot and do this and do that. Like, my instruction to them was very brief. It was just like, you'll see a dot that starts to move and you want to catch it in this right half of the screen and go. And they would just start doing it without really. [00:14:50] Speaker B: So it was natural for them because they're such. [00:14:53] Speaker A: It was natural for them. And, well, we did it with, of course we did it with, you know, a student population. We eventually did it in patients. And so I can tell you, almost any participant I had felt like they knew what they had to do. Like, once they got used to the equipment, and it's a little bit like you have a tracker on your. On your finger and things like that. And people are like, interestingly enough, the baseball players were always very gentle and touching the screen, whereas if you took, like, student populations, they'd be like, or myself. [00:15:26] Speaker B: Yeah, right. So baseball players have great eye movements, and the rest of us are terrible. Is that. What is that? Is that one of the take homes? Just kidding. [00:15:34] Speaker A: Well, no, no, we had, we had these, like, really fun anecdotes where we were in a computer science building, and we had, like, tested all the baseball players and we had this idea, and then we tested lab members as one does, and we were like, okay, you know, we make. 
What does it mean to have worse eye movements? You just make more catch-up saccades. You're not on the target as closely. And, you know, the hitting of the ball was just further away from the actual position. But then we were planning to do this training study, and we started recruiting, and we recruited first in our building, and we had this group of computer scientists, and all of a sudden we had this one group that was super good on the baseline test, and we're like, what is happening? And it turned out they were all gamers. And so. [00:16:29] Speaker B: Oh, okay. [00:16:29] Speaker A: Yeah, because they were doing a lot of gaming. They were also very good at this sort of gamified gaming. [00:16:38] Speaker B: Or did the gaming make them good, one could ask, as usual. [00:16:40] Speaker A: Yes, yes. But anyway, so I think just for me, it's naturalistic when there is some sort of recognition of what you would do in your natural behavior, that, of course, it's not natural to hit a screen and see a simulated dot and so on. But later, when we transitioned towards the more batting version, where you actually had a go/no-go decision, the players would actually say, oh, good pitch. They would, like, say that while they sat there. For them, it seemed like the real game situation, and so it mimicked the timing and the demands really well. And I think that that's, to me, the key to the naturalistic. [00:17:28] Speaker B: So you might not call it ecological, but you're happy with naturalistic. [00:17:32] Speaker A: Yeah. [00:17:33] Speaker B: Okay. See, I think that that's a distinction I'm okay with. Also in, for example, my data set, right, a mouse just wandering around because the mouse does its quote unquote natural behaviors. It walks, it grooms, it turns, it rears and stuff. But I think that people get hung up on the term, on the nature aspect of naturalistic. Right? [00:17:55] Speaker A: Yeah, maybe. Yeah. [00:17:57] Speaker B: Yeah. And they're probably.
I think there is some validity to that, but everything is coming under the banner of naturalistic, and that has only kind of recently been the case, I would say. I mean, you know this literature a lot better than I do. I would say that you were pretty early on in and maybe, maybe helping usher in. I mean, there's. There's the data collection abilities, the, you know, the deep learning ability to analyze the data and all these things that are just bigger, more compute power, bigger, better. But you were doing these things. It was probably pretty hard to implement these things early on. [00:18:35] Speaker A: Yeah, I mean, there's two people who really pioneered this, I would say. Or, well, yeah, Michael Land and Mary Hayhoe. They've done a lot of work really early on and with pretty cool equipment. I mean, there's more like the baseball work. There's someone called Bahill. Oh, what's his first name? I'm blanking. Anyways, he did a lot of work with baseball players, and then Michael Land did work with cricket players. So there always has been some work. What's interesting with this work is that often the results, you present them, and then people are like, oh, yeah, that makes sense. And somehow that seems to get less attention because it makes sense. [00:19:23] Speaker B: It's not surprising. [00:19:25] Speaker A: It's not surprising, though, if you ask people beforehand what their expectation would be of, let's say you want to hit a bouncing ball, where do you look? And if I ask you, what's your expectation of, how would you do it? How would you look at it? You probably would be off. Right. By whatever you predict. But once you find the results, people are like, yeah, that makes sense. That, okay, what you actually do is you look predictively at the bounce location and then you track it past bounce, and that enables you to, again, extract some really important motion information about the trajectory of the ball. And so it makes sense once you've found it.
And, yeah, I always, like, that always baffled me a little bit. Like, I often get this comment that people are like, yeah, okay, that makes sense. And then it's sort of like less interesting because it's not. [00:20:21] Speaker B: Well, yeah, I mean, it's, you're not showing them a colorful picture of a brain also. Yeah, that's true, because I want to make that distinction too, that, you know, so what we've been talking about is all psychophysics and behavioral work. And, you know, when I grew up in the eye movement community, I was in the non-human primate world, in this very controlled, you know, you're not moving your head, you're just moving your eyes. It's a very unnaturalistic task to perform. [00:20:52] Speaker A: Yeah. [00:20:53] Speaker B: Like any of the tasks that I, most of them are very unnaturalistic. And you're recording brain activity. And one of the reasons you do that, control it so much, constrain it so much, reduce your task so much, is because you want to be able to rule out other variables that the neural activity might be related to. So you want to be able to rule out as much as possible so you can say this neural activity is doing attention or whatever my cognitive function is. And in the psychophysics and behavioral world, I wonder if that's a difference, is that it sort of frees you to take some of the constraints away because you're just measuring things and then you have to infer a ton from those measurements and design the experiments well, and all that stuff too. [00:21:37] Speaker A: Yeah, yeah. I think what that can really give you, though, is a little bit of more freedom in, let's say, the monkey world or so if you find consistent patterns in your naturalistic task, let's say. And so it's not always a question of, let's say, eye movements, where do you look?
But also the timing of when do you look where you look, especially in these tasks where you have moving stimuli, but also when you manipulate an object, or that means just when you grasp something, when you prepare a sandwich, that's one of the famous examples, the sandwich preparing. So at first glance you would say, okay, this is like wild. Like, you have all these possibilities of where you can look. And of course you can just sort of quantitatively describe it and say, okay, all the eye movements go towards peanut butter and sandwich and here and there and so on. But I think if you do it well, you can come up with some principles of how you coordinate your actions in this, like, pretty open environment and then take them back and make your very lab based experiments more naturalistic. And so, yeah, so in terms of the, of the sandwich making, we, like, recently did some work where we, we put people under pressure while they're like, they're not making a sandwich, but they have to drop a ball into a slot. And so we put them under pressure by giving them a secondary task. And we find this really nice coupling of the gaze shift, so the eye movement shift to the action task, that is related to contact events. So what's going on? So if you think about, you want to grasp your cup of coffee, most people would probably say, yeah, you look at your cup of coffee before you grasp it, and then you take it and you drink it. If I ask you, do you have to look at it? [00:23:48] Speaker B: I'm about to not look at mine because I'm multitasking. [00:23:51] Speaker A: Right, exactly. So you don't have to look at it. So why do we look at it and what information do we use? Like, why do we usually do it? If you're not talking to me, you would probably look at it. So why do you do it and what information do you use to guide what. Basically what we did was we did something a little bit more difficult than grasping a cup of coffee.
We had a little ball you put into a slot, and you used either your fingers or tweezers. And so when you used the tweezers, it was really tough and you needed the vision to guide it. At the same time, you had to monitor a visual scene, and you lost points in this little game when you didn't look at the display. And so now we can say, okay, at what times do you disengage your gaze from that visual task to guide your actions? And that timing was really linked to the time where you first contacted the ball with the tweezers or when you drop it, you first contacted the slot, and so you got there a little bit earlier, but the coupling was to this point in time. So what we learn is that you want to use that visual information to guide the contact between either the object you're manipulating or when you're placing it. Right. And so now having learned that, you can go back and say, I don't know, you look at the monkey and maybe you don't know what time to align to. Because even in, like, I think these simple tasks where you have one stimulus and then you put on a different stimulus, that gives you the ability to align your data to the timing of when the second stimulus comes on. [00:25:35] Speaker B: Yeah. And then you're going to say that you're going to be able to align to behaviorally relevant actions. [00:25:44] Speaker A: Yes, exactly. So let's say you have a monkey that grasps a banana, right? Like that snatch or grasps, I don't know, something. And that's a naturalistic thing that a monkey might do. Or maybe, you know, they, like, delouse another monkey or something like that. And so now you can align your neural activity to the first point of contact of monkey hand to whatever they're grasping, and then go back in time and see. And so that gives you a different frame. Right?
Like, you go away from this timeframe that you have created by showing a stimulus and, you know, like, of course there's an expectation, if there's nothing else on the screen and you show something, monkey will probably look at it, because what else is monkey going to do? And so that gives you some validity in relating brain activity to that visual event that you've created, but it really loses this self driven behavior that is such a big part of natural behavior. [00:26:45] Speaker B: I think that's a nice example, but it also makes me think of your previous work. And I don't think that you explained this before when you had baseball players and the way that they tracked these objects over time. Refresh everyone's memory here. So there, there's a visual stimulus that comes on the screen and it starts to, it looks like a ball is kind of hit and then it disappears. And then they have, like the rest of this, the big, well, a full half of the other side of the screen to try to project. Eventually, the ball, the target is going to come back on and they're going to be able to see it. [00:27:27] Speaker A: No, no, they're not there. They're seeing it after they hit. So they hit into nothingness and then they get feedback of where the target was at the time where they hit. [00:27:39] Speaker B: Okay. Okay. Then I'm gonna let you explain. Cause I'll probably get this wrong, but the, the take home here, before you explain this, is that, um, different levels of ball player, like senior ball players versus freshmen, you know, and on through, had different strategies to complete this task. They all completed the task well, but they had very different strategies, not very, but subtly different strategies. And the reason I bring this up is because, okay, well, now how do you bring that back into the lab? Do you now study freshmen separate from. [00:28:10] Speaker A: Yeah, no, no, I think so. That's a slightly different point. I think so. Okay.
So, as you said, like, the task was just to intercept this moving object that had disappeared, and you would get feedback upon hitting it. And we gave them the entire right hand of the screen to do it. And so just, that was something we often now say we shouldn't do any research without prior hypothesis or something. And our hypothesis was about how well you track and how well you intercept. That was what we were interested in. But as I was looking at all these players doing the task, I was like, some hit it, like, if you think of a fly ball that comes down, like, way when it comes down, and others hit it, like, on the way still up as it was entering this zone where they could intercept it. And so I was looking at this, and I thought, that's just interesting. I can quickly see the benefit of hitting it right as it enters, because you're reducing the time that it's invisible, and you sort of maybe mapping it to, like, three entry points. And now you just have to recognize which of the three trajectories it is and get your timing right. So maybe you're sort of, like, debunking the task a little bit, but then you could also say, you know, as you are, like, if you wait longer, you just have more time to plan your interceptive movement, to adjust, to make online corrections to where you want to go. And so we found sort of two cool things there. The first one was everyone had the same accuracy in interception. So even though some players waited a lot longer, they still ended up at the same level of, like, they got to the target as close, and those. [00:30:09] Speaker B: Were the older players, more experienced. [00:30:12] Speaker A: And that was. That was, it seemed to relate to the level of playing experience that the ones that played a little bit longer tended to intercept later. There was a nice, almost linear mapping. And so, again, it's very hard now to draw a conclusion of why that is. You can say they played longer. So their eye movements.
So the other thing that we found is they tracked the target longer and more accurately and sort of relied on their eye movements to predict where it's gonna go, whereas the ones that intercepted it very early on, they tended to rely on the feedback that they got. So we found that they intercepted really close to the last feedback they had seen for that trajectory. [00:30:54] Speaker B: So the younger folks were, like, hacking the system, and the older folks were more patient, less impulsive, maybe even slower. I mean, there's so many different confounds, right? [00:31:04] Speaker A: There's so many different confounds. Yeah. And also you can say, you know, the younger ones, maybe they were eager and they felt like, maybe I'm being evaluated, and they wanted to do it, like, really well, whereas the other ones. [00:31:16] Speaker B: Care more what other people think of their performance and. [00:31:19] Speaker A: Yeah, yeah, yeah. But you can also say it nicely for the older folks and say they had better. Like, they were able to track it longer, so they were free to intercept it later. [00:31:30] Speaker B: Sounds like you can say it a thousand different ways. [00:31:33] Speaker A: Yes. So I think there, like, that would not be ready to be transferred to, like, a more, like, monkey task. Like, we would have to do follow up experiments and see, like, what if we now force them all to intercept later? Do we then still see that some people are just able to track it longer and they are better at intercepting it later? Or then you do behavioral work to narrow down what the underlying causes may be. But what I really like about this finding is that, first of all, it was a finding that completely fell out of me staring at 32 players just doing the task and observing their behavior.
And I think that's something, you know, like, first of all, if you do really, really heavy lab based tasks with just a couple dots or something like that, it's hard to observe much different behavior. And then I also think, like, people often don't look at, like, you know, we tend to have an undergraduate collect data or something like that. Right. Like, I've done that, too. But there's, like, something in just looking at what people do, and what you expect them to do may not be necessarily what they're doing. And I think that's sort of like a. Like, for me, I'm interested to understand human behavior. And so for me, it's important to look at human behavior and how people actually do it and sometimes maybe go a little bit exploratory and go wild and. Yeah. [00:33:14] Speaker B: I'm slowly becoming more interested in behavior only because I have to. Because, I mean, you know, in the past, what, seven, eight years, there have been calls, like, well, we don't understand behavior enough. Why are we already, why have we done 50 years of neuroscience without even understanding behavior? Because that's what you're going to eventually tie it to, or that's what you need to tie it to. And. Because I want to understand brains. Right. [00:33:38] Speaker A: Yeah. [00:33:39] Speaker B: And not like the molecules of brains, but how brains are tied to, quote unquote, cognition and behavior, but, yeah. And going back to, like, why eye movements have traditionally been popular is because you can control so much. It's a very easy story to then like tie your single neuron activity to a certain behavior. Right. That is. [00:34:02] Speaker A: Well, but it's interesting that you say that because I would say I would go the Yarbus route. So Alfred Yarbus was like a Russian. [00:34:10] Speaker B: Yeah, describe the Yarbus stuff because this is classic work. [00:34:12] Speaker A: And a Russian scientist and he like, he did actually a lot of funky work on eye movements.
But sort of his most famous study was that he showed this picture, which is, I forget who painted the picture, but it's called They Did Not Expect Him. And it's basically a stranger stepping into a room. And then he asked probably a single participant to say, what is the relationship between the stranger entering the room and the people in the room? And then he asks, are the people in the room wealthy, for example? And you could see really different patterns of scanning the picture. [00:35:00] Speaker B: So people looked at. We should just say how: those, like, clay cups suctioned to eyeballs. I mean, it was, like, ridiculous. You know, you'd think it was, like, medieval, these days, with the technology. But I mean, it was kind of an ingenious way. They had to, like, rig up little clay cups to people's eyeballs with, like, little mirrors so they could tell where they were looking. Right. [00:35:23] Speaker A: Where they were looking. Yeah. And that's why you don't have 1000 participants, but just one unlucky soul. [00:35:31] Speaker B: Yes. [00:35:34] Speaker A: But it really, I would say, coined or started this other line of research that you can use eye movements to infer these cognitive processes and these behavioral goals that people have. So when I want to find out the relationship, people just looked at faces and went back and forth between the faces. But when they look at the wealth, they look at the furniture and sort of the room to infer the information about that. And there were several examples, and the eye position would really nicely distinguish between these questions. And so I think that was sort of like a psychology route that started using eye position as an indicator of what people are thinking, essentially. And that really contrasts all this more, really low-level driven: I show a stimulus and I record single neuron activity, and it's like a simple neuronal recording. Right.
But I feel like sort of what my end goal maybe is: I want to sort of link these two, because I think, you know, we've learned so much because eye movements are a relatively simple system. You can record where the line of gaze is. You can. We know a lot about visual processing, like the eye movement areas in the brain. And then we also behaviorally know quite a bit about how we allocate gaze to different things. But I think there is, like, this other work trying to make the, how we allocate our gaze, or how we look at what we look at, make that very low-level driven as well, because that would be easier to link to the neural work. But I think that, to me, is misleading. Do you know what I'm trying to say? [00:37:45] Speaker B: Go ahead. [00:37:48] Speaker A: If you just say, okay, I just show a single stimulus, or I show two stimuli, but I make one very bright, and I look at the bright one, and then I build up my scene from there and I say, oh, even if it's more cluttered, I'll look at the most, like, the location with the most information in the statistics of the image I'm showing to a person. And then you're sort of trying to go step by step, building up from your neural work, where you have very diminished visual information, and then build it up to the rich visual information that we have. But that neglects that where we look is not actually driven by the visual information of the scene, but is driven by our behavioral goals and by what we want to accomplish and by action goals that we have. [00:38:39] Speaker B: And the visual information, but not. [00:38:41] Speaker A: And the visual information. Yes, of course. Yes, there. And, but, you know, I always think, like, they sort of, you know, one starts out low, and I, like, I think there's worth in studying it from both directions, but I feel like, like, it's starting off and then it just goes a different path that's never going to meet this Yarbus path because it's sort of going in parallel and.
Yeah, and, yeah, so I just want to try to bring that more naturalistic, or the more cognitive findings that we know about where we look, I want to bring that further down so we can bring it into more controlled environments and come at it that way. [00:39:32] Speaker B: Yeah, because when the more naturalistic. So I don't know really how this is in the eye movement world, since I haven't been in it in a while. But I mean, I remember hearing rumblings that in, like, super, quote unquote, naturalistic or high-dimensional tasks, you start to lose some of the acuity, like, the clean story of the neural responses, because you have a lot of different things overlapping. [00:40:00] Speaker A: Yeah, but so I think I, we very briefly talked about this at the last eye movement meeting, the Gordon meeting. There was this talk by Elizabeth Buffalo, and she had monkeys. [00:40:14] Speaker B: She's going to be on, by the way. I saw her at Cosyne, but, well, I have to email her again. But I just ran into her and I was like, hi. You know, I reintroduced myself and she was like, oh, I owe you an email, don't I? Yeah. So she agreed she's going to come on. [00:40:30] Speaker A: Okay. Yeah, yeah. So, I mean, so she'll probably portray her work a lot better than me just having seen her one talk. But basically they had these monkeys in VR, and she was then aligning, I think it was hippocampal data, to, like, events in VR, like behavioral events in VR. And it came out beautifully. And so I think there is sort of, you know, that goes back to. I know, yeah. But I'm just saying it goes back to, you have to know what to align it to. And so for that you have to understand a little bit more naturalistic behavior than, like, I know it's really tempting to say, but if I put the stimulus on, I know exactly the time when I put it on, so I can align it to that point.
But, you know, you don't actually know if the monkey or person, whoever, maybe is right now thinking about something else and is actually reacting to this visual stimulus slightly later on this single trial than on another one. Whereas if you see a behavior directed towards your stimulus, you know they are, oh, I don't know if I dare to say attending, but at least, you know, they're going to attain that goal, and you know the time they're attaining it, and then you can work your way backwards. And so that's sort of, yeah, I just think it's like a promising way to go. And I'm not saying don't do all the other ways, like, because we have learned a lot and we are still learning a lot. And as you mentioned before, there is always visually driven processing as well. [00:42:11] Speaker B: Sure. Yeah. [00:42:12] Speaker A: Yeah. [00:42:13] Speaker B: So there are two different ways that I'm thinking to go right now. I mean, maybe I'll start with this one of the, now I'll start with the simpler one. Right. So let's, let's go back to, like, super simple, stripped down, very, very controlled tasks. Right. I wonder how much of that is due to, and or caused us to continue to think of, the brain as, like, a sort of static information processing, input-output, computer-metaphor device. Right. Maybe that's a case of circular causality, where, because we started thinking about it like that when the cognitive, quote unquote, revolution came around, although they were doing those experiments in that way during that time anyway, maybe that kind of perpetuated us thinking about the brain in that way. And then. Go ahead. [00:43:06] Speaker A: Yeah, well, and then I just want to throw this in there, because then people started being like, oh, well, we have to show more complex stimuli. So then all this natural scene viewing came along, and, you know, that, just like, that's what I mean. Like, that sort of follows the logical step of you showing a dot.
And so now maybe you're showing a scene, but you're still in this really passive mode. And then now if you decompose the properties of your scene and you find these really, what's called, salient spots, so these spots in the scene that have rich visual information, you see that people also look there if they don't have a task, like if they're just, what's called, free viewing, where we actually don't really, like, what, I don't know. Free viewing, to this day, to me, is like, what is it even? Like, when I think about myself free viewing, the things that go through my brain are, like, ridiculous, right? [00:44:03] Speaker B: Well, most of it you're not even aware of. Your eyes are just kind of bouncing around and. [00:44:07] Speaker A: Yeah. And that, that is driven by just some sort of visual features of the scene, I believe, because, you know. [00:44:15] Speaker B: Well, okay, so this is where I was heading with, this is one of the things that I have appreciated more and more. And in fact, I'm going to have some ecological psychology-minded folks on the podcast soon who've, who have always appreciated. So Gibson's ecological psychology, one of its tenets is that there's never, like, passive perception. It's always, like, a tightly coupled action-perception loop with the environment and your body and your brain. And I've appreciated it also because of work like Henry Yin's, who views in large part the entire brain as kind of a feedback control system, which goes back to the kind of the cybernetics view of things. Yeah, but, okay, so, so I've appreciated that more and more, that, like, well, this is a, you know, action and perception are tightly linked. So it's strange to me, even the Yarbus experiments, it's looking at a static image, right. Because we're constantly moving and our visual field is constantly changing.
And this goes back to my appreciation of your early and continued work, that you've always been interested in linking these action-perception couplings, essentially, which has just been a hard thing to do. And you've done it. [00:45:40] Speaker A: Yeah. And that goes back to that eye movements aren't just in space, they're also in time. And then it just immediately becomes complicated, and often becomes sort of, like, even in visualizing it, right. You're sort of, when you're visualizing the one, you tend to lose the other. Like, it's often that I'm like, okay, so they're moving, like, they're looking here, and then, like, but when do they arrive there? And then you go do maybe a time-variant plot, and then you're like, wait, where are they looking now? And then you have to sort of add it, and then you have also, like, the space, like, the dimensions. [00:46:17] Speaker B: And, I mean, it's a crazy dynamic, continuous flow of. [00:46:22] Speaker A: Exactly. And you think, you think of brain data and how complex a system it is and all these interactions and so on, and then often I think, like, oh, eye movements are very understandable. But if you really want to understand them in a time-space continuum, it actually also gets quite complicated. But I think it is a, like, they are in a dimension that we can still understand, and should maybe be, like. I just think for me, it's just a promising way to understand, like, to quantify behavioral choices, you know, on this perception-action intersection. It gives us sort of a measure, and it's not a direct brain measure, but I mean, eye movements are controlled by the brain, so, you know, and we can continuously record them. And so I feel like it's like a pretty direct indicator.
And then if you combine it with something like force, like, if you look at grip forces or something like that when you're manipulating objects, that also is a direct result of signals sent by the brain or the spinal cord, or by the nervous system, let's say. And so we have these behavioral measures that are real time, basically. I mean, of course, there is a little bit of a delay, but, you know, I mean, almost any signal, unless you are really recording from single neurons, you have a little bit of a delay. And we know the mapping pretty well. And so why don't we use this real-time readout that we have of, like, you know, neural control to understand. [00:48:26] Speaker B: These, because it doesn't map, it doesn't play well with the current popularity of states, because then you lose the state. And, I mean, that's one of the things that, I won't talk too much about my own research here, but right now I'm studying motor cortex and basal ganglia while animals are performing things. And this recent low-dimensionality, it's-all-dynamics explosion has basically been in these, like, highly skilled tasks, when there's a goal and a reward and there's something riding on it, you know, and there's, like, deliberate actions, and then these, like, these state space approaches work really well. Like, you know, like in a reaching task, if I reach to the left or to the right, you can decode this high-dimensional population neural activity into these low-dimensional spaces. And voila, it is to the left, to the right. And, like, early returns, looking at the data that I have from a mouse just kind of wandering around and recording in these areas: oh, man, it's a real mess. And we're trying to figure out how to. Is there a way to map this onto that story? Or my kind of current thinking is, like, the state space approach is really good for when there is, like, a deliberate, goal-oriented task. Right, yeah.
And maybe, maybe even, like, the motor cortex is not doing anything when just kind of swashing around, almost, in the right places where it needs to be, if it needs to. If it needs to do something deliberate. Right. So then there's that feedback, hierarchical control, and this is all in, like, a continuous dynamics, processional, flowing action perception. What a mess state of things, right? [00:50:16] Speaker A: Yeah, yeah. And, you know, I don't know. I don't know how to bridge those two. I really don't. But I think, you know, you also need to try to understand, at least a little bit, like, you're saying the mouse is just wandering around. Okay, but maybe the mouse, you know, at some point encounters a wall. And then it has a choice, like, explore the wall or, like, turn to a different wall or something like that. And so there are, like, in the behavioral world, there are these points of decision or points of contact you have with the external world that become more demanding in a behavioral sense. Right. And so then you have, you start, you create a bottleneck. I think that's sort of, that is one thing where I'm like, I think it's very difficult if you just have a mouse walking, or even a person walking or anything, like, very free, because. [00:51:23] Speaker B: And yet that's the way our behavior normally is. I mean, not, like, mindless, because we do all sorts of goal orientation and. [00:51:29] Speaker A: We do have these bottlenecks all the time. Right? Like, you need to pass through a doorway or you have to step over stairs or you check your phone while. [00:51:41] Speaker B: You do exercise, while you block people's pathways. [00:51:45] Speaker A: Yeah, exactly. And so that creates these bottlenecks, moments where you're using the brain's resources for two things or three things or so on at the same time. And then I think that can inform you about, like, you know, now let's say you're trying to walk stairs, where we usually like to look at at least some of the stairs.
And also, you're checking your phone, like, the moments when you coordinate those two things will inform you a little bit about when to look at your brain states and so on, I think. [00:52:28] Speaker B: Yeah, but it's just, it's sort of like, we have some tools that have worked for particular things, and now. [00:52:35] Speaker A: Yeah. [00:52:36] Speaker B: You know, the tendency is to, like, well, these tools have worked for this one thing, so I'm just gonna throw my stuff at it. [00:52:41] Speaker A: Yeah. And it's 100%. Yeah. [00:52:44] Speaker B: Okay. Yeah, yeah. [00:52:46] Speaker A: It's one way to go. [00:52:48] Speaker B: Yeah. But it sounds like you're kind of somewhere in the middle, right? Where, like, do you have, like a. I mean, you started to explain or describe your kind of vision of bringing the Yarbus world down into it. Can you elaborate more on that? And, I mean, do you have, like, a five-year sort of vision or, like, an ultimate goal or vision, you know? [00:53:12] Speaker A: Yeah, I mean, I think for me, it always changes a little bit. So, I mean, I do, like, a five-year, I think. Yes. But then the ultimate goal, I think. And I think that can sort of happen when you get into your daily work and you get onto something that interests you and you just, you make your whole research about this thing. Like, at some point, maybe it's time to be like, okay, let's step back and see. I can see myself working on something not completely different, but going more, I don't know, going more into the sports world, for example, or, I don't know, whatever comes my way. But I think for me, it's like trying. In the next five years, I want to bridge a little bit more these, as I was saying earlier, when we act and we do something else at the same time. So you're in these multitasking situations. I think those can really inform a lot about, a, what do we need? Like, what visual information do we really need for our motor control?
Because that's something the motor control field says a lot about, visually guided x, but they actually often don't look at where the eye movements are. And so for me, it's, like, what visual, like, do you use peripheral vision? Do you use foveal vision? Do you need foveal vision? When do you need foveal vision? Right. Like, these seem like trivial things, but. [00:54:52] Speaker B: I think everything seems trivial until you start doing it. [00:54:56] Speaker A: Exactly. And I feel like, again, Michael Land and Mary Hayhoe were two people really starting to look at it. And then people just sort of stopped. I don't know. [00:55:06] Speaker B: It became, what did happen to that? Like, because I remember that there was a lot of talk, and at least Hayhoe's early work with the sandwich stuff? [00:55:17] Speaker A: Yeah, yeah, I mean, I think so. Of course, there's still people working on this. [00:55:26] Speaker B: Fine people doing fine research. [00:55:28] Speaker A: No, and I think it's going to pick up a little bit again, actually, with, like, the. The recent push towards naturalistic. And so, yeah, so I think for me, like, decomposing, what is the visual information? Like, what is the visual guidance that we need? Like I was saying earlier, we tend to look at something that we grasp, but so do we use foveal vision? And what do we use the foveal vision for? Do we use peripheral vision? And we already know a lot about this, but transferring this to a little bit more, like, say, like, sequential movements or something like that. Like, when do you latch off a target and onto a new one? And so on. And so that is one of my goals, characterizing that a little bit more systematically. [00:56:24] Speaker B: So you're not interested in how the brain implements these things, because there's enough to do just in the world of behavior.
[00:56:31] Speaker A: Well, I wouldn't say that, because I think where I would come back to the brain is, we have these reactive movements, visually driven, and they tend to control or lead to a certain behavioral output. That's if you have a sudden target or you have a very salient event or something like that. And then you have these more voluntary or more high-level things, like, I want to reach towards this. Right. And so those are controlled probably by a little bit higher levels. And so if you can also see, like, what is prioritized in behavior, again, you can learn how those, you can at least make hypotheses about how the brain should be coordinating those two. And then for me, I think the interest in the brain would then come more through patient studies or expertise, something like that, where I can then say, okay, if x breaks down, I would expect this behavior should maybe, or could, translate to a breakdown in this area or something like that. [00:57:47] Speaker B: If you ever get interested in flow, let me know, because I have, with my emerging, continuously changing, overarching thoughts about how brains work, you know, related to behavior, I have kind of a pet theory or a way to make sense of flow within that scope, but we can talk offline. [00:58:08] Speaker A: Yeah, I think. I actually think that's pretty cool. Yeah. And these are sort of behavioral, again, like, there are, like, behavioral states, almost, where you get into these weird things. Right. And then. But then do we know what really changes, like, behaviorally, even? Behaviorally? Yeah. Have we characterized, like, you know, flow, for example? [00:58:32] Speaker B: Right. [00:58:32] Speaker A: Like, which we should probably say, like, someone. [00:58:36] Speaker B: Oh, yeah. We should define it. [00:58:37] Speaker A: Yeah, maybe. [00:58:39] Speaker B: Well, it's not that well defined, actually, but. [00:58:41] Speaker A: No, it's not. Yeah. How would you define it?
[00:58:44] Speaker B: I would define. You're gonna make me do this. Uh, I would define flow as, well, um, the way I used to understand it is a little bit different than how I understand it now, but, uh. Because you're supposed to be doing something that is, like, challenging, that you are skilled at. So you're kind of at the edge of, um, your abilities, but you're. But then, because you are, uh, you're, like, motivated and trying very hard, but then somehow you slip into this state where you are able to kind of watch yourself do it, and you don't necessarily feel like you're deliberately doing anything. You actually can kind of enjoy your body taking over. So in a way, you can kind of perceive yourself doing something that you're really good at, and it also feels really good to be in that state. [00:59:37] Speaker A: And then you're doing it really well while you're in this. [00:59:40] Speaker B: Yes, yes. Because you're highly skilled at the. Yeah, yeah. [00:59:43] Speaker A: And then you have the choking, on the other hand, right, where you are. [00:59:47] Speaker B: I'm not familiar with choking. I don't know. I've never choked. Have you ever? No. Yeah, but go ahead. [00:59:53] Speaker A: Yeah. Where you're very good at a task, and then all of a sudden, under, like, a lot of pressure, you have this opposite, where you become very aware of, like, all your bodily positions and so on, and, like, well, like, I mostly know it from the sports literature, but then, like, let's say you're doing a free throw or something. [01:00:15] Speaker B: Oh, my God. You're. You're describing one of my worst memories, during a free throw, during basketball, and I choked, and it haunts me because. [01:00:25] Speaker A: I started feeling. You're like, why is my hand, like, is it like this and like that? And, like. Or. Or it can be in an academic sense as well, right? We.
We've probably all had that in, like, an examination or something, where you're like, all of a sudden you're aware of these, like, weird things and you're like, why am I thinking about this? [01:00:43] Speaker B: My problem is I've never been in flow in academia. I've only been in the choke state. [01:00:48] Speaker A: Yeah, well, maybe you get into flow during your podcast sometime. [01:00:53] Speaker B: I don't know about that. Anyway, where were we before I took us into the flow? Because you were talking about your vision. Right. And what you're interested in. [01:01:02] Speaker A: Yeah, yeah. And so I think I want to be one of the people picking up that naturalistic sort of Hayhoe work, let's call it that. And in fact, because we talked earlier, I'm going to start a postdoc with Constantin Rothkopf, and he trained with Mary Hayhoe, and he does a lot of modeling, like, behavioral modeling of, you know, like, I don't know. What comes to mind right now is, let's say you're doing, you're doing a visuomotor tracking task. So you're trying to, you have a dot moving across the screen and you're trying to track it with your cursor or something like that. And so then, you know, you have, you have all these different processes going on. You have, like, a motor process going on, but you also have the perceptual process going on. And maybe you also have a process going on where someone is trying to figure out, like, a cognitive. Right. Like, what is the underlying structure of the path of the moving thing or something like that. And so as far as I understand the approach right now, it's sort of like trying to build all these modules, and then by fitting the model to the behavior, you can say, okay, this much is related, this much of the variability is related to motor versus, and so on. Right. And so I think it's like a really nice, yeah.
Like, I think it's like trying to get towards more naturalistic and sequential behavior, and then trying to build models that then can help us understand, like, break down the tasks, and, you know, then when we see differences in behavior, we can sort of then try to say, okay, this may be because, you know, if you're a very, like, good motor person or something like that. [01:03:10] Speaker B: Right. Yeah, but I'm saying, but, but only because I'm having a reaction to the kind of mechanistic, componential nature of doing that and to say, like, it's this much motor. [01:03:24] Speaker A: Yeah, I mean, I don't, like, this is my probably very naive current understanding. [01:03:30] Speaker B: So you're just starting. This is your second. [01:03:34] Speaker A: Yes, I'll dive in more deeply starting in the fall, and then, like, yeah, I'm sure I'll be able to explain it better. [01:03:47] Speaker B: Um, no, I think that was, that was fine. I just was picking on, of course, jump on whatever. I mean, I, so let's talk just a little bit more in this domain. I want to then switch to AI for just a few minutes before we go into some extra Patreon time, if you're good with that. [01:04:07] Speaker A: Yeah. [01:04:08] Speaker B: Okay. So I was mentioning, and we've been discussing, how my views have kind of morphed over time. And in some sense I've felt that I've never had a view, like, a perspective, right. So I'm just continuously kind of forming it. It's not like I started out and I thought, the computer is a brain. You know, it was very much starting out, and I still don't know anything. Right. But I don't know things in different ways now. So do you have a sense of how your views on behavior or perception-action or just what you've studied over time have changed? [01:04:49] Speaker A: Yeah, I mean, I think I came in very naive, which I think was a good thing, because I didn't come from psychology or even cognitive science or something.
I did physics and then I did biomedical engineering. But really, I would say I just lived my life. I was just a student, right? I did what, like, I did my exams and my courses, but I didn't really, I wasn't really interested in any, like, I was interested in a lot of things, but, you know, I was, like, living a student life and sort of just going through it. And then I did this internship at UBC with my then-later supervisor, and that was the first sort of experience in research that really captivated me, being like, okay, I could think about this more. But I didn't have any of the psychology training, or this maybe even baggage. So I really looked at it like. And then I also didn't have any physics baggage, because I wasn't like a good physicist or, you know, like, I was just sort of, like, I think I had, like, pretty good tools training, like, you know, like, solving problems in general, but I didn't have any theoretical baggage of expectations or something like that. [01:06:06] Speaker B: But you learned that baggage pretty quickly in, I don't know, in your case, a master's program. I mean, maybe not, because I was really naive, too. It sounds like you're my long-lost sibling or maybe twin or something. Maybe throughout these things. Although I was a much worse student than you, I'm sure, in college. [01:06:23] Speaker A: Oh, I know. I don't know. But, yeah, we can battle that out later. Yeah, no, I think. But I think when I joined the lab, so the expectation was also always this. What we had before was, like, you have a stimulus and you follow, like, you look at the stimulus. Like, you started with the stimulus. Like, you show something on the screen, and then you move your eyes to whatever the stimulus is, and then you process that information. And so the perception-action thing may be a loop, but it usually starts with perception. [01:07:01] Speaker B: That was the dominant view. Still the dominant view. [01:07:05] Speaker A: Yeah.
And so I think the first thing that happened to me was that I changed that to, like, maybe it starts more with action. Like, maybe it's, you know, like, you decide you're gonna move somewhere, and then you point your eyes or your effector, like, your hand, whatever, to that point that you want to grab the information from, and then you perceive it. So that was the first change that happened in my own view. But again, I wasn't really much biased. [01:07:37] Speaker B: How far along were you when you. [01:07:39] Speaker A: That was, like, I'd say, like, second year of my PhD or something. [01:07:45] Speaker B: See, you learn faster than I do. [01:07:49] Speaker A: But I wasn't much biased to start with. Right. It was sort of just the way I read papers was always, like, perception first. But I wasn't. That's why, I mean, I didn't really have baggage. Like, for me, I was like, oh, well, if it's circular, like, I could also hop into the circle. [01:08:03] Speaker B: Yeah, yeah. [01:08:04] Speaker A: Like, at another point. But actually now, last year at SFN, or, like, shortly before, when I was preparing my talk for SFN, I was like, I think that whole circle idea, I don't like anymore, because to me it sort of implies sequentiality, like, it implies this. And I think it's a coordination. I think we're constantly coordinating these processes, and, you know, sometimes they're going to be sequential, because you have these bottlenecks where you can't actually do things at the same time. But if it's a light load, if you have, like, a light perceptual load and a light motor load, you can do them in parallel. And then if it becomes, you know, like, really critical to have information for one, then you narrow it down, and it becomes, let's say you need, I don't know, let's say you want to thread a needle, then you need the visual information for your action. And then, you know, it becomes this, and so on. Or, like, the other way around.
If you're really trying to understand a text that you're reading, you know, you may fall off your chair or something. [01:09:19] Speaker B: Because you don't fall off, do you? Oh yeah, we'll have to have a beer for that one and fall off our stools again. Are you familiar with the coordination dynamics work from people like Scott Kelso, who's been talking about metastability for a long time? [01:09:43] Speaker A: Not too much. [01:09:43] Speaker B: No. Okay, I'll send you. I've recently become. I was surprised when you said that, because I've recently become interested in this concept of coordination dynamics. And I think that he would sort of largely agree with you. Uh, okay. [01:09:55] Speaker A: Yeah. [01:09:55] Speaker B: Sense, but, yeah, yeah, that's exciting. [01:09:57] Speaker A: Yeah. [01:09:57] Speaker B: Um, yeah. So, anyway, so that's. That's a, you know, most people can't thread a narrative about, you know, have some metacognition about how they used to think about things. And. And, I mean, I can come up with a narrative, too, but I'm not confident that it would be accurate. I might be confabulating. Right. Because I didn't have a view, like you were saying. [01:10:19] Speaker A: Yeah. But, you know, and I think that's why I don't want to say, like, my long-term goal, because I know, as I'm working on new problems, it may again change, and be like, oh, coordination, like, what a wild idea. Maybe it's, like, I don't know, whatever. It's still, like, it evolves. Right. But it can evolve in several directions. [01:10:49] Speaker B: All right, so kind of going back, then, to the beginning of our conversation, when we were talking about naturalistic tasks and the reason why people have traditionally studied eye movements, and the value of what you have done, studying eye movements in coordination with hand movements and doing interception tasks and the multitasking and just having that coordination.
Speaking of the last two minutes, those coordination problems: I don't know what is really happening in the robot and AI world these days, and I don't know if you have a lot of insight either, but I'm just curious if the things that, you know, you and others like you have found, those kinds of coordination, continuous, dynamical action-perception loop principles, you know, are being applied in the robotics world. And one of the reasons why I ask is because there's none of that in a transformer, for example. [01:11:49] Speaker A: Yeah, yeah. I mean, I think so. I know there are. Going back to what you mentioned earlier, the Gibsonian view. I know, like, I haven't followed it lately, but Bill Warren, he did a lot of walking research, and he wrote a few papers that I really, really enjoyed. And there he talked about these ways where we can use information, like ecological information, to control robots. And, you know, it's like navigating down the hallway. I think you had that episode with someone talking about bees before. [01:12:34] Speaker B: Yeah. [01:12:35] Speaker A: And, like, you use the, I don't know, the. [01:12:38] Speaker B: Yeah, that's true. [01:12:39] Speaker A: Rate of change as you're moving through. And so I think going down that path, and I don't know why I'm thinking about this now, but there's something about, like, when people go through doorways or, like, slits. [01:12:56] Speaker B: Shoulder-width ratio. [01:12:58] Speaker A: Yeah, yeah. When you turn. And so, like, having these sort of, you know, relative principles of change in information, using those as a control system rather than using, like, absolute information. [01:13:17] Speaker B: Okay. [01:13:19] Speaker A: I think, like that. Like, I don't know who's working on.
[01:13:22] Speaker B: It, because it's kind of related to early Rodney Brooks work, which is kind of anti-representational work, where it's not like the robot has a computer in its brain and it processes the doorway and goes, beep, boop, boop, boop, boop, boop, boop. Yeah, I will move. It's more just, like, sensing: robot shoulder-width to door-size ratio, distance, moving, so moving through the world continuously. [01:13:49] Speaker A: And so, you know what's funny? I don't know if you read Gibson, but he actually writes that the idea that the brain is a computer is completely stupid. And then he also. [01:14:00] Speaker B: I didn't know he was so disparaging about it. [01:14:02] Speaker A: Well, I mean, that was my reading of it. But he says it's almost along the lines of the idea that the eye is a camera, which he also finds unreasonable. And so I think it's just sort of, you know, if you're a moving participant in the world, a moving agent, I guess, is the word, I feel like you need to have some sort of relational information. Like, you can't have this absolute. Like, I don't know much about it, but it just makes no sense to me to start trying to map out everything. [01:14:44] Speaker B: How do you make sense, then, of how well transformers perform, for example, or any other modern AI system, when we're giving it a task? Because they don't. [01:14:53] Speaker A: Well, you're giving it a task, right? You're giving it a task. And, you know, I think it's useful. Like, to me, most of the technology that we're using is like that, right? We're giving the technology a task. Like, humans used to have to do a lot of physical labor, and then you develop machines that take away the strains of this physical labor. [01:15:19] Speaker B: But supposedly they're based on neural networks, which are based on our brains, so they must be the same.
And you can predict brain activity with a convolutional network. [01:15:28] Speaker A: Yeah, well, because the brain is also sometimes doing a similar task. Right. The brain is also doing other tasks. Like, there are situations in which the brain is doing, I don't know, an object recognition task. Yeah, well, but baseball is like. If you build a baseball robot, it's going. [01:15:49] Speaker B: To suck right now. [01:15:52] Speaker A: Yeah. But I think there was actually some work showing that. Oh, man. Okay. I don't want to say anything wrong. [01:16:01] Speaker B: Now, but I have editing capabilities. [01:16:06] Speaker A: Yeah, I'll look it up. There is a paper using some, you know, invariant information, basically, or, like, change in it, and an agent like that can actually perform pretty well at interception. So that is actually a problem that I think is solvable. The other problem, then, is how do you implement the physics of the thing moving, and things like that? And they have to have a battery and stuff like that. [01:16:45] Speaker B: What if the wind blows? [01:16:50] Speaker A: Yeah, I mean, that, again, that'd be, like, then, if you really. If you could have a wind detector. Right. But that is going to influence the path that the moving object takes, and so that's going to influence the rate of change that it has in relation to other things. And so, in the end, it doesn't really matter if the wind blows or, I don't know what else, an earthquake happens or something. If the. [01:17:19] Speaker B: I think you cancel the game. If an earthquake happens, you cancel the baseball game, I believe. [01:17:24] Speaker A: Or, like, the spin rate. I don't know why that makes you. [01:17:28] Speaker B: It's funny. We've been taking my kids to a science museum here, the Carnegie Science Museum, which just changed names, but that's no matter.
One of the things that they have there is this giant robot arm that was made by Carnegie Mellon, I don't know, ten years ago, maybe. But its sole job is: it's fed a basketball, like, down a little set of rails, and it uses two prongs to pick up the basketball. It turns, and it shoots a free throw, and it makes ninety-something percent. And you're looking at this thing and you're thinking, that would have been super impressive ten years ago, maybe. Yeah. And now it looks so dumb, it's kind of ridiculous, right? I mean, it probably looks cool for kids, but in the sense of, like, modern technology, it just looks like, man, what is the point of this thing? And how easy would that be to implement? But back then, it wouldn't have been easy. [01:18:28] Speaker A: Yeah. [01:18:30] Speaker B: I don't know why. [01:18:30] Speaker A: Yeah. And, I mean, that's why, again, I feel like that is an area where making predictions is very dangerous, because there may be a breakthrough, and then that. [01:18:45] Speaker B: Oh, yeah, I know I've already said some things that I have regretted saying in the past five minutes. You know, like that it would be terrible at baseball, and I bet next week there's gonna be, like, some awesome. But they have those robot soccer competitions that are, you know, they're kind of. [01:19:01] Speaker A: Yeah, yeah. But I think the focus there, again, and that's because there are so many things. Like, if you design a robotic system, you're worried about the shape of the thing, and you're probably limited by the shapes you can think of, and you want to. Does it locomote, or does it drive? But then you'd say, why would you make it locomote? That's stupid. But then, like, there are stairs. Like, oh, now I have a robot that is only on wheels; it can't go upstairs. Right? Like, I don't know if you've pushed a stroller through any city. It's like a nightmare.
[01:19:39] Speaker B: So it's like, yes, I have. Come on. Did you really need to ask me that? No, my wife pushed the stroller all the way. Always. [01:19:48] Speaker A: There's always. Exactly. There are so many things where you have design decisions to make, and in the end, I think the information-seeking strategy is just one of them. And then again, it interacts with the way you've built it. Right. Like, what did you build on it? How do you have the camera? You're not going to have a fovea that's going to be moving. So if you're building a robotic device, it's going to seek information differently from how humans seek information. [01:20:21] Speaker B: I don't know why you wouldn't have a fovea. Yeah. Moving fovea. There are those. [01:20:25] Speaker A: Well, you could. There are those. But then, you know, then you go down that path, and then you have limitations again. Like, do you just want to build something that's like another human? Like, I don't know why you would. [01:20:40] Speaker B: I don't know why you would either, but. So, yeah. And now we're kind of nitpicking and thinking about the minutiae. But my original question was really, you know, whether you knew if any of these things were currently being researched or implemented, and it's an unfair question, because you're not in the robotics world. But this, ostensibly, used to be more so a podcast devoted to neuroscience and AI. So I always have to ask. [01:21:05] Speaker A: Yeah, I mean, I think, yeah. I mean, I'm sure, but I don't. [01:21:13] Speaker B: Yeah, yeah, fair enough. [01:21:14] Speaker A: I don't know the particulars. And, you know, for me, it's really the question, like, yeah, I wouldn't even know, if I wanted to build a robot, what to tackle first, basically, is my point, because I would be worried about the interaction of this device with the environment. Right.
And that is, like, bodily, but it's also, like, information processing. Like, you have both of these that you need to worry about. [01:21:47] Speaker B: But could you. I'm thinking of your work that we talked about earlier, where you're tracking a ball, right? And you see that the way that humans track the ball is they have some smooth pursuit, and then they kind of get behind and have to catch up with a saccade. Right. And then sometimes they smooth-pursue and then they can jump ahead with a saccade to predict where they think the ball's going. And so there's this interaction, and the dynamics of those, which we were talking about, the timing of eye movements and those kinds of things. I mean, could those potentially help you design a system that would work optimally? [01:22:23] Speaker A: I think so. Yeah, I definitely think so. Because the smooth pursuit allows you to extract really accurate motion information about the object you're trying to intercept. And then the saccade could be, well, first of all, it can help you catch up if your eye is too slow, and then it can also move you predictively to somewhere where you thought it was going. And then you sort of confirm, by again having your fovea on it, whether you were right with your prediction or not. And so, absolutely, that could help inform the robotic system. [01:23:10] Speaker B: Yeah. You know, I think I was going to mention this earlier, and I just wanted to appreciate just how crazily dynamical our eye movement system is, without us even appreciating it, because we're just constantly tracking, saccading. And then we don't even talk about microsaccades, which, it's like the eyes just have to, like, jitter around. Yeah, but it's just crazy. [01:23:34] Speaker A: And, which are also, like, active, probably. [01:23:37] Speaker B: And have, like, yeah, attentional roles and all those things. [01:23:40] Speaker A: Yeah, yeah, yeah. But see, that's the thing.
Like, now that we say that, again, I'm like, well, would it even make sense to build any robotic device that has a fovea? Because it's sort of. [01:23:51] Speaker B: You know, recreating the way we do it. [01:23:54] Speaker A: Yeah. And, you know, then I think it's interesting to go into, like, vision in the wild and see, you know, predators have these vertical slits as eyes, and they can have really good depth perception and all this. Right. And they're probably the ones that would be best at catching a ball. Like, if I look at my cat, it's so much better at catching things than I am. And so, you know, then for me, the approach would be: okay, I want to build a robotic device to do X. Who in the world is best at X? And then understand their visual system and the way they are designed, and then you can maybe be inspired by that. Whereas we, as humans, we have sort of a good solution for most of the problems that we face in our environment, but that doesn't mean that that'd be a good solution for an artificial system. [01:25:00] Speaker B: Jolande, we're the best at everything. That's why we need to model everything we do after humans. We need human-like AI. [01:25:10] Speaker A: I mean, you are a male specimen, so speak for yourself. I'm not on that. [01:25:16] Speaker B: Oh, my God. All right, Jolande, is there anything that we missed, that we didn't cover, that you're particularly excited about? Or did I ask you to death with questions? [01:25:28] Speaker A: No, I think we covered a lot of ground. [01:25:31] Speaker B: What we didn't cover, and maybe we can next time: we talked a little bit about some of the struggles of parenting, and you mentioned you have three kids, and I will link in the show notes to a little article you wrote about an experience you had as a parent, the decision whether or not to take them to a conference.
So I'll direct people to that, and maybe we can chat more about it next time. [01:25:55] Speaker A: Yeah, yeah. And if any of your listeners are in the situation that they're, like, I don't know, struggling with kids in academia, I'm always happy to talk, on Twitter or offline. [01:26:07] Speaker B: Yeah, yeah. I really enjoyed this conversation. I was telling my wife yesterday that I was going to have this conversation with this person who was, like, in the eye movement world. I mean, you're fully in it, but that's not, like, your only focus. And I was like, I'm afraid I've forgotten too much to have a decent conversation about this. But I kept up. Okay. [01:26:37] Speaker A: Yeah, yeah. It's a slow-moving field. [01:26:40] Speaker B: Well, it's a slow-moving field. I'll say it. I won't get in trouble. It's a slow-moving field. I mean, all neuroscience is. But I was struck, going to that research conference, by just how little has changed in the five, six, whatever years that it's been. But that's the way it goes. [01:27:00] Speaker A: That's the way it goes. Well, thank you. [01:27:02] Speaker B: All right. Thank you. I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You're hearing music by The New Year. Find their music online. Thank you. Thank you for your support. See you next time.
