BI 122 Kohitij Kar: Visual Intelligence

Brain Inspired

Dec 12 2021 | 01:33:18

Show Notes

Support the show to get full episodes and join the Discord community.

Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in James DiCarlo's lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine those models, adding important biological details and incorporating models of brain areas outside the ventral visual stream. He will also continue recording neural activity and performing perturbation studies to better understand the networks involved in our visual cognition.

Transcript

Ko    00:00:04    I kind of wake up every day thinking that maybe my research is going to help someone's life. And I know this sounds like, oh, what a great person you are, but I really mean it. I'm going to tell a small story; please, you can cut it out if it's not relevant. Let's go to 5,000 BC. I'm time traveling back then, trying to explain the motion adaptation model to them. They'll be like, go away. What are you talking about? I don't understand anything. So all these models are not real models of the brain. Like, I don't know: how is the network failing? How do we know it is failing? And what could be the additions that you make to the models that improve them? To actually have a good quantitative, tangible grasp on those questions, I think you need a platform like Brain-Score to actually be there. This is the model that tells you what the predicted neural response is going to be for any given image. I think that's where we are in terms of that. We think of this as a stronger test of the model, because there are many models, and they can come up with different images, and then you can test those as well.

Speaker 0    00:01:18    This is Brain Inspired.

Paul    00:01:31    Hello, good people. I'm Paul, attempter of good, uh, personhood, master of none. Today I bring you Kohitij Kar, who also goes by Ko, master of core visual object recognition. So Ko has been a postdoc for the past few years in Jim DiCarlo's lab. And if you remember, I had Jim DiCarlo on back on episode 75, talking about the approach that his lab takes to figure out our ventral visual processing stream and how we recognize objects. And much of the work that Jim and I talked about was done in part by Ko. Now Ko is an assistant professor at York University, where he'll be starting his lab this summer. His lab is called the Visual Intelligence and Technological Advances lab. And he's part of a group of people who were hired into a fancy new visual neurophysiology center at York that is going to be led by none other than my previous postdoc advisor, Jeff Schall.

Paul    00:02:29    So Ko and I kind of continue the conversation about using convolutional neural networks to study the ventral visual processing stream. On this episode we talk about that background a little bit, and also Ko's ideas for where it's going. So as you may know, what started out as a feedforward convolutional neural network has since been extended and expanded, and Ko continues to extend and expand both the models that account for object recognition and the experimental work that will be used in conjunction with the models to help us understand visual object recognition. And that includes adding other brain areas, and therefore models, to more wholly encompass an explanation of our visual intelligence. So I get Ko's thoughts on what's happening, what will happen, and how to think about visual intelligence, and a lot more topics. I link to his lab, and he is hiring, as he says at the end, so if you're interested in this kind of research, you should check it out. I link to it in the show notes at braininspired.co/podcast/122. Thank you as always to my Patreon supporters. If you decide you want to support the podcast for just a few bucks a month, you can check that out on the website as well. All right. Enjoy Kohitij Kar.

Paul    00:03:49    Uh, are you an electrical engineer? Are you a neuroscientist? What the heck are you?  

Ko    00:03:55    Um, yeah, I think I'm an electronics engineer, according to my undergraduate, um, education and training, uh, and then I sort of moved slowly, gradually into biomedical engineering, one step towards neuroscience maybe. And then finally did a PhD in neuroscience.

Paul    00:04:12    What was it that got you interested in neuroscience?

Ko    00:04:16    I think, like a lot of us, those were discussions about consciousness and things like that, that I kind of cringe upon a little bit now, but those were the introduction to neuroscience. And I think I particularly got influenced by a lot of these very nice storytellers. So I was doing my masters at New Jersey Institute of Technology, but I was cross-registering for courses at Rutgers, where Jackie was a professor back then, and just listening to him and the way he talks about the brain, those were sort of the initial hooks: oh, I really want to be in this field and be with these people and talk about the brain with them, things like that, sort of at a very superficial level mostly. And I remember going to one talk from V.S. Ramachandran at Princeton, and it was those kinds of things: wow, this is such an interesting system, and I want to work on it. And I think those were the initial things that drew me towards studying the brain.

Paul    00:05:16    Storytellers, huh?

Ko    00:05:18    Storytellers, pretty much. And I think now I'm kind of thinking there could be something beyond storytelling. But the storytellers are perfectly fine scientists, and they also do a lot of the stuff that I do now, so there's nothing against storytelling. That component that I sometimes feel, oh, what is the use of that? I think it's really useful, because to tell a story about your science in a way that attracts young minds, I think, is great.

Paul    00:05:46    But now, so, so consciousness and storytelling, uh, drew you in, but now you've discarded both of them as frivolous.

Ko    00:05:54    I don't think I have discarded them as frivolous. I just think my time is better spent doing other things. I don't think those are bad problems to work on, or useless things; I think they're actually very useful. But I kind of realized that that's not my forte; that's not my expertise.

Paul    00:06:18    What percentage of people do you think are drawn in because of big questions like that, and then, you know, I said discard, or whatever I said, but then go on to start asking very specific questions and kind of leave those larger things by the wayside? It's a really high percentage, isn't it?

Ko    00:06:42    I think so. I think it's a very high percentage. But I think it also probably is useful to keep reminding ourselves what the big questions are, so I think that's simultaneously very important. And it's just that the...

Paul    00:06:56    Sorry. No, no, that’s fine. I was just going to say that I think part of the reason, and I don’t really know the whole reason, but I think part of the reason is that, um, those big questions get you in, and then you realize that there are a lot of big questions that are super interesting that aren’t those questions. I don’t know. Does that seem on point?  

Ko    00:07:13    I think that's right. And that's very similar to how I feel right now. I mean, I know it's been said multiple times that it's all about asking the right questions, and the questions are very important. But at least from my perspective, I realized that the answers, and what I consider satisfactory answers to those questions, often determine how you approach your science. So to me, it's not just about the question; it's also, what kind of answers am I satisfied with, and why am I seeking that answer? I think those are the real drivers of what I actually do in the lab. Yeah, of course I would like to, you know, simulate, I dunno, consciousness in an artificial system, but I think that is going to be a very difficult objective to go for in a lab and get funding for. I'm really happy that some people are trying to do that who are probably more privileged than I am. But...

Paul    00:08:11    Congratulations on the new job. I guess that's not so new now, but where are you sitting right now? You're not at York yet, are you?

Ko    00:08:18    No, no. I'm still in Cambridge, Massachusetts, at MIT.

Paul    00:08:24    So when are you headed to York?

Ko    00:08:27    Yeah, I'm starting in July.

Paul    00:08:31    Well, congratulations.  

Ko    00:08:33    Thank you. Thank you. Yeah. Um, I'm very excited, and it was a very interesting hire, because all of this happened during the pandemic. I'm still supposed to go and see the department; to some degree it's really been virtual, remote. But I'm very happy so far with all the discussions that I've had with colleagues there, and I'm very excited to start working.

Paul    00:08:55    You'll be, uh, you'll be near my, um, my postdoc advisor, Jeff Schall, up there. So...

Ko    00:09:02    Yeah, I'm very much looking forward to working together.

Paul    00:09:05    Um, that I, you know, I’ve asked him a couple of times he’s been pretty busy cause he just moved to New York as well. Tell him that I am still waiting for him to come on the podcast, so.  

Ko    00:09:14    Okay.  

Paul    00:09:15    So, um, for your most recent work, you were a postdoc in Jim DiCarlo's lab, and Jim's been on the show. One of the reasons why I asked you about your engineering background is because you guys are, quote unquote, reverse engineering the visual system. I guess it all kicked off with convolutional neural networks and the feedforward story of convolutional neural networks. Um, I don't know how you got into deep learning, but I do know that you were discouraged at one point from studying deep learning, or using it. Can you tell that story?

Ko    00:09:55    Sure. Yeah. I mean, it's an old story; it's probably already 10, 11 years old now. This was 2008, when I started my masters in biomedical engineering. And I kind of realized, talking to a lot of people back then, that even saying something like, oh, I'm working with a computational model, and I'm in a neuroscience program, you're sort of looked down upon as a fake neuroscientist. You're not one of the real people that is doing the real neuroscience.

Paul    00:10:26    Please. You’re not doing experiments  

Ko    00:10:27    Because at that time I was not doing experiments. I was mostly working on autoencoders, or neural network models trained with backpropagation, basically looking at how the internals of these networks might match some neurophysiological data that I had, or some behavioral data. Similar to things that everybody, including me, is all excited about these days. But...

Paul    00:10:51    But that was before the quote-unquote deep-learning revolution in 2012. Right. So  

Ko    00:10:57    I think it was still popular back then among certain groups, I guess. But I just did not... I mean, I couldn't have predicted that if I had worked on that, maybe there could have been some nice papers or nice studies that I could have done. But as I was saying, I kind of got a bit discouraged, because I just started realizing, oh, this is not the real neuroscience, because I'm not sitting there with a slice of a mouse brain, patch clamping, and looking at a neuron's voltage going up and down on a stupid monitor or something. I kind of felt like, you know, that's the real deal. And I remember I prepared a poster for a conference, and I was going to present this poster, which was work done with these artificial neural networks.

Ko    00:11:41    And I was afraid that I would be ridiculed at that conference. On the morning of the day, I just got out of there: I'm not going to present this, forget about it, I'm going to go back and do real neuroscience. And look what I'm doing right now. So I have, unfortunately it's really ridiculous, kind of pathetic, but there's a paper that I wrote back then with all these ideas: oh, backprop, reinforcement learning, autoencoders, student-teacher networks. It's really badly written, don't look at it, but I kind of use it as a joke with my friends: oh, if only I had pursued this, you know, like all this work from Dan and Jim, oh, I was way before that. It was ridiculous. Uh, no, I don't think anyone would take that paper as anything but a joke. Yeah.

Paul    00:12:32    Well, yeah. You were talking about real neuroscience; that's interesting, because what you described with the mouse brain slices and patch clamping is exactly how I cut my teeth in neuroscience, because I was a real neuroscientist, right? So do you think the definition of what a real neuroscientist is has changed now, so that people doing what you do... do you feel like a valid neuroscientist now?

Ko    00:12:57    Yeah. Well, I kind of validated myself by doing monkey physiology and the perturbations. So whenever I'm doing that, whenever I'm living that life, I feel like a real neuroscientist. I mean, I still think that it actually helps to look at the brain and biological data to get the right perspective on the system, so I definitely value that. But I think with time, computational techniques and analysis techniques have become so important. As we were discussing, there's an answer that you're seeking, and the answer, to me, is going to be in the form of those models. And so if you're not talking that language, it becomes difficult to communicate; it will become difficult to communicate any neuroscientific finding in the future. So in that regard, that might become the real talk of neuroscientists in a few years, if it hasn't already.

Paul    00:13:51    You're looking for an answer; what's the question?

Ko    00:13:54    That's a very good question. Exactly. So the question that I think a lot of people are interested in is, how do we solve certain tasks? At least that's the way I look at framing questions. I'm interested in neuroscience because I'm interested in a behavior, and why am I interested in that behavior particularly? Maybe because if that behavior goes missing, I'll be in deep trouble. So that's kind of my way of getting into this space: okay, there's a behavior; what does it mean to do a behavior, and how do you actually scientifically study it? So we operationalize that behavior with some task, and we measure it. And then the understanding, or the question, is: how does the brain solve that problem, or give rise to that behavior? And then we start by building models of that behavior. And depending on what type of answers we're looking for, whether we're looking at how different neurons come together and produce that behavior, or how different brain areas are participating in that behavior, we try to build specific units or parts of that model and look at them carefully. So at least that's how I formulate the question. But the bigger question is: okay, there's a big behavior, and how are we actually solving it? You know?

Paul    00:15:11    Well, so you can correct me if I'm wrong here, but the story as I see it, from Jim's lab and from the convolutional neural network work, is that you're trying to solve core object recognition. And it started off with a feedforward neural network that was built through many years. And then the deep learning world came on the scene, and you guys realized that these networks accounted well for, predicted, the brain activity, and kind of went on from there. But things have developed. So the reason why I asked what the question was is because, you know, it's interesting; it's almost like an isolated system, right? So you have this convolutional neural network, and its layers are modeled after the hierarchical areas of the ventral visual processing stream in the brain. And the goal is to understand vision, right? And I don't know what that means. Do you feel like... where are we in understanding vision?

Ko    00:16:18    Yeah. I think there are a lot of questions in those sentences. So let me maybe explain a little bit about what I think understanding means. One definition of understanding that I have in my head is that it is basically coming up with a falsifiable model of something. If I understand something, I can basically have a model that is falsifiable, with which I can make predictions, and you can tell me, oh, you're wrong. And there could be different levels of this understanding. So, for example, I understand how my coffee machine works, because I can predict which button to press, and the coffee is going to come out. It's a concrete prediction. You can test me: you can tell me, go turn on the coffee machine.

Ko    00:17:04    If I go and press the wrong button, you can see I don't have any understanding of how this machine works. But if I press the right button and the coffee starts coming out... and then, if the machine breaks down, there's a different level of understanding: I might need to fix it. You might ask me which part of the machine to fix and how it works, and there's a more detailed level of understanding required. So in the same way, I feel like understanding vision would require multiple levels. And one of them is at the behavioral level: I predict the behavior. So that's where we started. But all of this relies on models, concrete computational models. At least that's my current opinion of what understanding for me might mean: you have concrete computational models that make explicit predictions about how a system is going to work, or perform.

Ko    00:17:56    And then you get to test it, and that's sort of the understanding, and that's moving forward. Now, the problem, I think, is that if we define understanding this way, then we also have to have common goals of what we are trying to understand. What is that behavior? And my current view of the field is that we actually don't have common goals like that. We're kind of all doing our own things. And so I think it's important to have certain specific goals: this is what we are trying to predict; these are the behaviors of the system, these are the neural data, that we are trying to predict. And then come together and figure out what are the best models that can do that. And some of this we are currently trying to do with this website and platform called Brain-Score, trying to have an integrative approach to all kinds of data and all kinds of models and things like that.

Paul    00:18:57    So how's Brain-Score going? Are a lot of people using it?

Ko    00:19:01    Yeah. I think the user base of Brain-Score is definitely increasing. And we are having a conference now; well, we submitted it at Cosyne, and we are potentially going to have a competition. I think it's going to feel a little bit more like an ImageNet competition or something like that. My personal opinion is that maybe someone can look at Brain-Score and say it's too early for someone to start making these models, or scoring them, and being so concrete about it, right? But I think it has to be done. That's kind of my goal. And if you ask me where understanding of vision is, to me, pointing to some kind of platform like Brain-Score is a concrete answer that I can give. That's my way of quantifying it.

Paul    00:19:47    Yeah. So it's a benchmark. But, you know, on the other hand, benchmarks have gotten some flak because, like you were talking about, we don't know whether that's the right benchmark, right? Whether it's the right question. So it is concrete, but I guess we're progressing and asking better questions. Would you agree with that?

Ko    00:20:08    Yeah, absolutely. And I think there is no set of three or four benchmarks that will define our understanding. So I think the goal is to have more and more benchmarks, and hopefully we will see that, because it's the same brain that is giving rise to all that data. So if you are actually modeling that particular brain, then we should be converging to a very small space of models, eventually; at least that's the dream. Of course, there could be multiple different benchmarks and different ways people are probing the system. But I think the value-add of Brain-Score is that if we can get all those experimentalists and modelers on board, then they can provide those data as targets for current systems. Instead of saying, oh, you know, your network is never going to predict that: okay, that's fine. I mean, the networks are falsified under all possible benchmarks, so it's not a big sentence to say. But how is the network failing? How do we know it is failing? And what could be the additions that you make to the models that improve them? To actually have a good quantitative, tangible grasp on those questions, I think you need a platform like Brain-Score to actually be there.
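
To make the benchmarking idea concrete: the core of a neural benchmark like the ones Ko describes is fitting a linear map from a model's activations to recorded neural responses and scoring the predictions on held-out images. Here is a minimal sketch of that logic, with synthetic data standing in for the model activations and the recordings; it is not the actual Brain-Score API (which uses, for example, PLS regression and ceiling-normalized scores), just the shape of the computation.

```python
# Sketch of "neural predictivity" scoring (synthetic data, illustrative only).
# Fit a regularized linear map from model activations to neural responses,
# then correlate predicted and actual responses on held-out images.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_features, n_neurons = 500, 256, 80

activations = rng.normal(size=(n_images, n_features))   # model layer activations per image
true_map = 0.1 * rng.normal(size=(n_features, n_neurons))
responses = activations @ true_map + rng.normal(size=(n_images, n_neurons))  # noisy "recordings"

X_tr, X_te, y_tr, y_te = train_test_split(activations, responses,
                                          test_size=0.25, random_state=0)
pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)

# per-neuron Pearson r on held-out images, summarized across the population
r = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(n_neurons)]
print(f"median neural predictivity r = {np.median(r):.2f}")
```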

Paul    00:21:21    Let me ask you about falsification, because you've talked about how that's one of the useful parts of the modeling push: that the models are falsifiable. But then you have models like the feedforward convolutional neural network that predict, what is it, like 50% of the neural variance, somewhere around there, right? How does one falsify a model like that?

Ko    00:21:42    Yes. So in the sense of falsification, those models are all falsified anyway, but then the question is, how do you build the next best model? When I think of that, I feel like numbers like that make me figure out that if I build a better model, it should at least be better than the current numbers that are coming out of the feedforward neural networks or something like that. So the question is: do you dismiss the entire space of models, or family of models, as completely useless, or do you say, that's a good start, let's build upon that and start adding elements to build the next best model? I'm mostly motivated by that idea. And thanks to machine learning and AI for actually building these real models, and not just toy models.

Ko    00:22:32    And so now that we have these models, let's capitalize on this momentum and get going and build the next best models. Although I'm talking about models in this way, my personal life is mostly spent doing experiments, trying to poke holes in those modeling frameworks or models. So I'm actually very happy that those are all falsified, because I feel like that's my job, to falsify them. But the other part of my job that I feel is important is not only to falsify them, but also to get some data that is at the same scale and in the same spirit that would help build the next best model. It's not good to just shit on them; you also provide some material for them to work on and become a little bit better.

Paul    00:23:19    So you're doing a lot of experimentation. What's faster, modeling or experiments?

Ko    00:23:26    Experiments. Yeah. I mean, I think building a better model is way more difficult than doing an experiment. I'll debate anybody about that. So again, it depends on which field you're in and what purposes you're building this model for. Of course, modeling is way faster than any behavioral experiment or any neural experiment. But if we are trying to build a model of the brain, it's like the engineering thing we were discussing: okay, I have a problem, how is the brain working, but the solutions cannot be just anything; they're constrained by this biological system. So there's a specific solution that we're trying to look at, and aligning the models with that is very hard. I mean, I can build a model that might solve action perception or action prediction better than the current system, but that might not align with the brain.

Ko    00:24:21    When I said the modeling is slower, I meant that bit: having models that are more aligned with the brain. Because, you know, in 2012 AlexNet came up, and now we don't even talk about AlexNet in terms of computer vision. I think no serious computer vision scientist would say, AlexNet is the model that I start with. But it came to neuroscience, and it's still here; we are still using AlexNet. Things come to neuroscience and they stay for a longer time, because it's just very difficult to falsify, or even discriminate among, these models. And for us there are some deeper questions in here as well. Because when we say we have a model of primate vision, what do we actually mean?

Ko    00:25:05    Do we have a model of a specific human, or a specific monkey? Or are we modeling the shared variance across humans or monkeys? Or are we developing a model of all the possibilities, like a superset of vision? So how well should a model of object recognition even predict the behavior of one subject, or some neurons that I'm recording from in a monkey brain? I think we need to think carefully about those questions. Because, yeah, sure, the model might predict one neuron in a monkey's brain at 50% explained variance, but then how well does a neuron in one human brain predict another human's IT neurons, or something like that? So I think quantifying, and setting up the ceilings based on what we actually are modeling, whether we're modeling individual humans, or individual monkeys, or, you know, the shared monkey population...

Ko    00:25:58    I think those questions are important. And then maybe we are done with predicting core object recognition feedforward responses, because, you know, one monkey predicts another monkey at 50%, and there's no way you can improve beyond that, something like that. So to me, because of these kinds of questions, and of course I'm sitting here realizing that this is basically empirically challenging, it's actually the experiments that have to provide these answers, and we're limited by technology and how well we can probe the system. So that's why I think modeling is slower.
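
The ceiling idea Ko is gesturing at can be made concrete in a few lines. One common recipe, and this is only a toy sketch with synthetic data, not his analysis, is split-half consistency with a Spearman-Brown correction: how well one half of the trials (or one animal) predicts the other sets the bar a model can reasonably be asked to reach.

```python
# Toy noise-ceiling estimate (synthetic data, illustrative only):
# split trials in half, correlate the halves, and apply the
# Spearman-Brown correction to estimate the reliability of the full data.
import numpy as np

rng = np.random.default_rng(1)
n_images, n_trials = 300, 20
signal = rng.normal(size=n_images)                  # true image-driven response of one "neuron"
trials = signal[:, None] + rng.normal(size=(n_images, n_trials))  # trial-to-trial noise

half1 = trials[:, ::2].mean(axis=1)                 # mean of even-numbered trials
half2 = trials[:, 1::2].mean(axis=1)                # mean of odd-numbered trials
r_half = np.corrcoef(half1, half2)[0, 1]
ceiling = 2 * r_half / (1 + r_half)                 # Spearman-Brown correction

print(f"split-half r = {r_half:.2f}, estimated ceiling = {ceiling:.2f}")
# A model that predicts held-out responses at this level has explained
# everything that is reliable in the measurement; pushing past it is noise.
```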

Paul    00:26:31    Yeah. Okay. Well, you said ceilings; the way you've talked about it, they're fuzzy ceilings in that respect. All right, so I'm slowly coming around to the fact that modeling takes a long time. I didn't do any deep learning modeling; I did a kind of psychological model. It was very simple, right? And it took a long time. But experiments, I had to go in every day, and it was, you know, years to publish a single paper.

Ko    00:27:02    I see where you're coming from, and that has been my experience as well. I mean, it takes a long time to train a monkey, to implant the arrays, and to get the data. And maybe the array doesn't get implanted well; then you have to implant again. There are multiple problems that can come up. But I just feel like, at the end of the day, you have some data, and if you have designed your experiments properly, especially in neuroscience, which I think is still in the dark ages, it's novel data, a target for a model to predict. And in that way it's faster, because I can build a model in one minute: just put two convolutional layers together and call it a model. But is that really useful, or is that really taking the field forward? I mean, maybe I answered it too fast, about which one is slower.

Ko    00:27:59    I might have to think about it. But I'm trying to tell you a little bit about why I think modeling is actually going to be slower, especially modeling of the brain.

Paul    00:28:07    There's physical time, but then there's also heartache time, so maybe those are two orthogonal things, right? So the other question would be: where do you experience more heartache and obstacles? And do you think modeling would be the answer to that?

Ko    00:28:23    Again, it depends on your experience. If I'm running a monkey, after I have brought a monkey to the lab and done an experiment, I have zero energy to do anything else in the day. So it's like I'm done for the day. And in that way, yes, it's a lot more taxing, at least in my experience. I can't tell how bad it is for a modeling person trying to come up with, you know, giant models.

Ko    00:28:54    I feel like most of the problems I usually face are the libraries not loading, or the version not being correct. But at the end of the day, once the model is training... I don't know. At the end of the day, I feel a modeler is going to be more disappointed, because the models don't really predict much more than the previous model. Whereas a neuroscience experiment, if it's designed properly to begin with, I think is always going to give more insight. A biased opinion, of course.

Paul    00:29:24    Yeah, we're all biased, as we know. All right, Ko. So again, correct me if I'm wrong, but the way that I see it, there's this core object recognition story, and at the core of it is a feedforward convolutional neural network. And you guys in Jim's lab have done a lot to explain neural data. So that's kind of the basis, the way that I see it. And then from there you've done a lot of other work: you've started adding bells and whistles like recurrence, and you've synthesized images to control which neurons are going to be driven by a particular image. So you're making the models more complicated. And I've heard you argue that what we need is more complicated models. Whereas, from a classic philosophy-of-science perspective, what we like are simple models, right? Because part of the problem with these deep learning models is that we don't exactly know how they're doing what they're doing, and if you use a complicated model to explain a complicated organ like the brain, there's pushback on how much that actually buys us in terms of understanding. But you argue that, no, we actually need them more complicated. Why is that?

Ko    00:30:49    Yeah. I think it depends on how you define complication. The reason why I might say that we need more complicated models is because the models are not really predicting what we set out to predict. So making them simpler, I don't think that's going to be the answer, because the brain is complicated. Anything that is a simulation of the brain will look complicated in some sense. In another sense, it will not look complicated, because if you have correspondences and alignments with the brain, you can point to a part of the model and say, oh, that's V4, and you can point to V4 in the brain. So in that way it might become less complicated over the course of it. It's just the definition of what complication is, what interpretability is, and what understanding is.

Ko    00:31:32    And because there is no objective definition of those things, these kinds of conversations usually lead nowhere. I'm trying to think of an example. When I was doing my PhD, we had models of the motion aftereffect. And if I spoke to anyone at VSS or SfN or Cosyne about these models, everybody would say, oh, these are completely understandable, interpretable, simple models that we have intuitions about. Which is: you show a random motion pattern, and you have these motion detectors; they're all firing, and they're all firing equally, so there is no bias. After that, if you show a stimulus that is moving upward, the upward-preferring neurons will do something.

Ko    00:32:21    And it's going to be some response that is higher compared to the rest of the group. If you keep showing upward motion for a long time, those are the neurons that are going to fire and get fatigued. And then, when you show the random pattern again, you'll see everything else is firing higher, and the upward motion detectors are firing slightly lower. So overall you will have a bias towards saying, okay, maybe the motion is going downward, something like that. And this can be modeled, and people have modeled this. And those models, compared to artificial neural networks now, might be considered simpler, more intuitive, more understandable models, less complicated. Now I'm thinking: let's go to 5,000 BC. People are speaking Tamil or Sanskrit or Greek or some other language.

Ko    00:33:07    I'm time traveling back then, trying to explain the motion adaptation model to them. They'll be like, go away. What are you talking about? I don't understand anything. So all these models are not real models of the brain. And I feel like the same thing is happening now with artificial neural nets. But remember, the motion model that I just mentioned was predicting this adaptation phenomenon, this behavior. That was the goal of this modeling effort, and it had some relevance to how people have looked at the brain and neurons. But if I tell this in 5,000 BC, people will be like, I don't know, this is not mapping onto our worldview. And I think the same thing might happen right now with convolutional neural networks and some neuroscientists and things like that.
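
For readers who want the "simple, interpretable" model Ko describes in a runnable form, here is a toy version of the motion-adaptation account (hypothetical tuning curves and numbers, not his actual PhD model): prolonged upward motion fatigues the upward-preferring units, so an unbiased stimulus afterward decodes as downward motion.

```python
# Toy motion-aftereffect simulation (all parameters hypothetical):
# direction-tuned units adapt after prolonged upward motion, biasing
# a population-vector decode of an unbiased stimulus toward downward.
import numpy as np

dirs = np.deg2rad(np.arange(0, 360, 30))          # 12 preferred directions

def population_response(stim_dir, gain):
    # cosine tuning, half-rectified, scaled by per-unit adaptation gain
    return gain * np.maximum(np.cos(dirs - stim_dir), 0.0)

def decode(resp):
    # population vector: preferred-direction vectors weighted by response
    vec = resp @ np.column_stack([np.cos(dirs), np.sin(dirs)])
    return np.rad2deg(np.arctan2(vec[1], vec[0]))

gain = np.ones_like(dirs)
up = np.deg2rad(90)
gain *= 1.0 - 0.4 * np.maximum(np.cos(dirs - up), 0.0)  # fatigue upward-preferring units

# an unbiased stimulus: equal motion energy in all directions
resp = sum(population_response(d, gain) for d in dirs)
print(f"decoded direction after upward adaptation: {decode(resp):.0f} deg")  # ~ -90 (downward)
```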

Ko    00:33:50    It's like: okay, this is too complicated; I cannot fit this into my low-dimensional kind of behavioral space of how these high-dimensional areas are functioning, or responding. So I don't take that complaint seriously, because I think with more familiarity with these terms and models, that complaint is just going to go away, as the models become more and more powerful in predicting different behaviors. And we will see, for example, the use of these models in real-world applications. And I think that kind of fear, of this being too complicated a system, is just going to go away. And those for whom it won't go away will just probably have to live with it.

Paul    00:34:32    Okay.  

Ko    00:34:34    But I think one of the reasons why people like simpler models is that it allows them to think through, if the model gets stuck, what to do to improve it. And that, to me, is the real value of having a simpler, more interpretable model. And there is a question of efficiency. If you can have a complicated model kind of self-correct, improve itself, and that's kind of a future goal, that might just be a more efficient way of dealing with this problem than humans coming up with their own intuitions of what a better model is. And, as we were discussing with my engineering background, that might be something I'm more prone to accepting because of that background, because I just feel like there's a question, there's a solution, and these are just tools to get to the solution. It doesn't matter if I intuitively understand it or not; as long as it's aligned with the brain data and things like that, it's fine.

Paul    00:35:35    So, I actually got even more excited to talk to you because, after we had set up this episode, someone in my course asked... In my course I talk a lot about, I use, Jim's work and your work to talk about convolutional neural networks and how they relate to the ventral visual stream. And then someone in the course asked, what about the dorsal stream? Because I talk about the two visual streams. And this goes back to the question of what it means to understand vision. So the question was: why aren't there models for the dorsal stream as well? Why is it all ventral stream? And I know that you are starting to incorporate it, and you have some background with the dorsal stream as well. And maybe we should talk about what the dorsal stream is, just to bring everyone up to speed. But are you just starting to incorporate other brain areas now? What is your... yeah.

Ko    00:36:29    Well, the first thing is that maybe, if that student is interested in doing a PhD or a postdoc, send them my way, because that's the kind of question I was also asking: what is the dorsal stream doing? Because I had spent five, six years studying the dorsal stream, which is just dorsal of the ventral stream in terms of anatomical location in the brain, plus...

Paul    00:36:51    Should I say what the dorsal stream is, what it classically is? Or do you want to say it? I'm happy to as well. Yeah. So classically, there are two visual streams. Visual information hits V1 and then kind of branches off into a ventral stream, which is what the massive amount of neuro-AI and core object recognition work is about, where information gets processed over hierarchical areas, through V2, V4, IT, until suddenly we have neurons that respond to whole objects. But the dorsal stream is classically the "where," or "how," stream, which is much more related to motion and spatial aspects, and to our actions. And that's where I spent my career, more or less: in the dorsal stream. So, I don't know, did I explain that okay?

Ko    00:37:49    Absolutely. I'm usually very careful now about assigning some behavioral function to areas. I mostly start talking about anatomical locations, and, who knows, you might find that the dorsal stream is just a big part of core object recognition, right?

Paul    00:38:06    Well, yeah. I mean, the thing that has been, I guess, always known, but not paid so much attention to, is that there's a lot of crosstalk between the dorsal and the ventral stream. But we've kind of studied them in isolation, right? As individual, separate things.

Ko    00:38:24    Yeah. I mean, I see that as an opportunity to really take these sorts of studies forward and try to incorporate looking at the dorsal stream as well. One point I wanted to make is that there are folks who are beginning to build models of the dorsal stream in the same way the ventral stream modeling has gone. I recently saw a paper from Chris Pack's group, and sorry if I'm forgetting other authors; I think Blake Richards was part of it, and Patrick Mineault. I think it's on bioRxiv at least. And there's work from Bryan Tripp's group trying to model the system. Of course, the dorsal stream has a lot of prior modeling work that is not similar to the convolutional neural network stuff. But people are beginning to build these models, and there are different objectives they're proposing as a sort of normative framework for how the dorsal stream gets trained up. I think those are nice hypotheses, and we'll see whether the data actually supports those models. For me, trying to get into this area, those are really nice works, because they give me some baseline ideas, or baseline models, to start testing and probing when I start designing my experiments. I think those models will really help me make a good experimental design.

Paul    00:39:48    But are you building... I actually don't know what kind of model, because it wouldn't be just the same; you wouldn't just use a convolutional neural network to model the dorsal stream, right? And so are you building models yourself also, or are you going to incorporate...

Ko    00:40:04    I have not personally built any models right now. I've just been testing some of the models, so I started testing some of the models that were mostly used for action perception or action recognition. They have these temporal filters; they're still convolutional, it's just that there are more dimensions to the convolution, like a time dimension. So I think those are good starting points, because they're easy to build, maybe because they can use the same kind of training procedure. But I think at some point we have to be okay with going a little lower in terms of prediction, because we need to move from a static domain to a dynamic domain. And my usual experience has been that whenever you make this jump, all these models start to not perform as well, to not predict the neural responses as well.
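
The "add a time dimension to the convolution" move is easy to see in code. Here is a minimal PyTorch sketch (shapes and parameters are illustrative, not from any specific model Ko tested): a 3D convolution slides its filters over frames as well as pixels, so they can act as spatiotemporal, motion-like detectors.

```python
# Sketch of extending convolution with a time dimension, as in
# action-recognition models: Conv3d over (time, height, width).
import torch
import torch.nn as nn

# batch of 2 clips: 3 color channels, 16 frames, 64x64 pixels
clips = torch.randn(2, 3, 16, 64, 64)

# a 2D conv sees single frames; a 3D conv also slides over frames,
# so its filters can detect spatiotemporal (e.g., motion) patterns
spatiotemporal = nn.Conv3d(in_channels=3, out_channels=32,
                           kernel_size=(5, 7, 7),   # (time, height, width)
                           stride=(1, 2, 2), padding=(2, 3, 3))

features = spatiotemporal(clips)
print(features.shape)  # torch.Size([2, 32, 16, 32, 32])
```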

Ko    00:40:51    And so I think, to me, that might be one of the reasons why maybe some people are building these models and they're not really coming out: because they don't really predict as well. So, maybe backing up a little bit: my main interest in this dorsal-ventral interaction question started when I was mostly showing static images to the monkeys and recording their responses in IT. And these are objects that are either natural photographs or some kind of synthesized images. And I started thinking about my previous work in the dorsal stream, which was about motion: there are dots moving and gratings moving. But if I think about the real world, I never see dots moving or gratings moving in the real world; there are objects, and they're moving.

Ko    00:41:39    And if I want my current research to have any real-world relevance, I just felt like, you know, it's a dynamic world: I'm moving my eyes, I'm moving myself, and the objects are moving. And if I think of these questions, these behaviors, the dorsal stream pops up in any literature search that I do: self-motion, motion of objects, motion of something in my visual field. But then I was wondering: IT has this nice representation of what the object is, and if the object starts to move, does all of it fall apart? What happens? So just out of curiosity, I started recording from these neurons when the objects were actually moving. This work has not been published, but the preliminary result is that IT kind of can predict where the object is headed, where it is moving.

Ko    00:42:38    It's not just that. We know from previous studies from Jim's lab that from looking at IT representations you can tell where an object is located; this was from Hong and Dan Yamins in 2016: you can tell in a static image. So there's one trivial solution: okay, if you can tell where the object is located at different time bins, you can maybe combine that information to tell where the object is heading. What I started finding is that it's not only that. You can take a snapshot, maybe 200 milliseconds after you have started this movie, look at a small, 10-millisecond time bin, and tell where the object is going. So there's a predictive signal of where the objects are headed. Then I started thinking: maybe this is coming from the dorsal stream? But again, these are ways of thinking that I've kind of discarded in the last few years.
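
The analysis Ko sketches, decoding heading from one short time bin of the population response, is simple to picture in code. Here is a toy version with synthetic "neurons" (none of his actual data): a linear decoder trained on a single bin of population activity, scored with cross-validation against chance.

```python
# Toy snapshot-decoding sketch (synthetic data, illustrative only):
# linearly decode motion direction from one time bin of population activity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_neurons = 400, 120
heading = rng.integers(0, 4, size=n_trials)       # 4 possible motion directions

# population response in one short bin: direction-dependent signal + noise
weights = rng.normal(size=(4, n_neurons))
responses = weights[heading] + rng.normal(scale=2.0, size=(n_trials, n_neurons))

decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, responses, heading, cv=5).mean()
print(f"single-bin decoding accuracy: {acc:.2f} (chance = 0.25)")
```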

Ko    00:43:30    So I feel like the way to think about this is: can a vanilla ventral stream model explain these neural responses already, with the dorsal system not involved at all? Or maybe they will fail, and then the dorsal stream models are actually necessary to account for these neural responses, and also for the behaviors that I can test based on these kinds of stimuli. So that has been my approach. And the quick update on the ventral stream results is that these models are not really predictive of these kinds of responses at all. To some degree, that gives me hope. Yeah.

Paul    00:44:03    It's always good to have hope when you're starting your career. Not that you're starting your career, but it's your new start. But you said you've discarded thinking about it in that way, in terms of different brain areas. Is that because you've discarded assigning roles to individual brain areas, or...

Ko    00:44:23    Yeah, sure. Absolutely. I think that whole way of thinking is primitive; it's not going to lead anywhere. The brain is doing what it does to make us go through the day, and all areas are coming together in some form or other. So I don't want to come up with an answer like, the dorsal stream is doing blah, blah. It's just part of a system that is trying to solve a behavior. And the answer is going to be: here is a model that has elements in it that correspond to neurons in the dorsal stream, and together they solve a behavior. Now, if I really want to satisfy someone who asks, what is the dorsal stream doing, you can start doing perturbation experiments in the model, or in the brain, and see what happens to the behavior.

Ko    00:45:11    If I take out, you know, part of the dorsal stream, or part of this and that. But then I'm mostly worried about what the answer is going to be. Like: oh, it takes a 10% hit for video A versus video C. I feel like those are the kinds of answers that are really going to come out. I might spin this off as, oh, but this is about function X, or it's something about predictive coding; I can give the answer in that form. But at the end of the day, it's just going to be a big lookup table: you perturb this part of the dorsal stream, you get X hit on this particular behavior, on this particular video. That's why I feel like my answers need to be in the modeling kind of framework. Yeah.

Paul    00:45:58    Words. We're limited by our language. It turns out this very special thing that we have, language, is also very limiting in some respects, I suppose.

Ko    00:46:07    But I think if the models can relate back to the language, then some of the problem, or the tension, might be relieved a little bit. For example, and this is maybe slightly off topic from the dorsal-ventral discussion, but if you look at a model of the ventral stream, you can look at Brain-Score, and they say, okay, ResNet-101 or something has some numbers associated with it, some scores. I can see why people have a problem with that model, and why people say it is not interpretable: because there are parts of the model where you just don't know what they are, how they map to the brain. I can name some parts of it, but there are a thousand different things in between that I have no clue about.

Ko    00:46:48    And maybe the model is not performing because of those computations that are happening in those layers; how do I relate this back to the brain? So I feel like that is a real problem. And I think it is in our interest to start coming up with commitments for different parts of the model, and then falsifying them based on those commitments. Interpretable models, to me, should be like... so what is the interpretable thing in neuroscience? A paper: the abstract of a paper should be at least intelligible to anybody. So if a model has components that can talk to each part of the abstract, you know, you have a task, you have a neuron, you say something, and if you can basically map your abstract onto parts of the model, and the model onto parts of the abstract, clearly, I think that just gives the model interpretability. And I think that level of crosstalk and language should exist, and that language is something I'm trying to develop myself, even when I'm thinking about modeling and experiments.

Paul    00:47:52    Well, I mean, after all that, about how we shouldn’t assign roles to individual brain areas, uh, you are doing some inactivation, uh, experiments, right? So what’s going on, what’s going on in there? Why are you inactivating individual brain areas?  

Ko    00:48:07    Yeah. So there are a couple of studies that I've done recently. One has already been published, which involved inactivating ventrolateral PFC and looking at core object recognition behavior, and also looking at representations in IT while the monkey was doing that task. And the goal was basically to test whether these feedback loops that exist between these areas are actually playing a role in the specific behavior that we are studying, because the current models are incomplete and they're not predicting enough. So it kind of makes sense that maybe there are other areas, and other connections, that are important. So that is not to say, PFC does X, right? It's like...

Paul    00:48:54    It does everything apparently. Yeah.  

Ko    00:48:57    I'm sure it does a lot of things. It's just that for me to actually ground this problem, it was more like: what kind of signals go missing in IT when I inactivate PFC, and what kind of deficits do I see in behavior? And the data, again, as I was saying, is not like, oh, you cannot identify objects in an occluded scene or something; it's not an answer like that. It's mostly: here's a big dataset. That's not satisfactory to many people. Here is the giant dataset; you clearly see there's an average effect, PFC versus no PFC, okay, I've shown you this. And there is a prediction that comes out of a model: this model is a feedforward model.

Ko    00:49:48    It might not be doing X, Y, Z on certain images. And voila, those are the images where these effects are much more concentrated. So there's a story there: okay, this is clearly part of a system that is not the feedforward system, that is maybe going beyond the current feedforward system. But at the end of the day, I think the next step is to build a model that has a unit or module that is called, you know, vlPFC, and perturbing that should produce the same kind of deficits. And this is where I think it's a very hard thing to do. It's actually easier for me to perturb PFC and get this data and say, okay, this area is involved. But then building this model, I think, is going to be really difficult.
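
The image-by-image logic in that story, deficits concentrating on the images a feedforward model already finds hard, is worth spelling out. Below is a toy sketch (all numbers synthetic, not his actual analysis or data, just the shape of the argument): correlate the per-image behavioral deficit under inactivation with a model's per-image error.

```python
# Toy per-image deficit analysis (synthetic data, illustrative only):
# do inactivation-induced behavioral deficits track a feedforward
# model's per-image difficulty?
import numpy as np

rng = np.random.default_rng(3)
n_images = 200

model_error = rng.uniform(0, 1, n_images)           # feedforward model's per-image error
acc_control = 0.95 - 0.2 * model_error + rng.normal(scale=0.03, size=n_images)
# hypothetical inactivation: extra accuracy hit, largest on model-hard images
acc_inactivated = acc_control - 0.15 * model_error + rng.normal(scale=0.03, size=n_images)

deficit = acc_control - acc_inactivated             # per-image behavioral deficit
r = np.corrcoef(deficit, model_error)[0, 1]
print(f"correlation of deficit with model difficulty: r = {r:.2f}")
```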

Ko    00:50:31    And there are limitations to perturbation data, and this might be relevant to the conversation about perturbation experiments. Even after this perturbation experiment, I think actually recording in that specific area, with the same kind of task and the same kind of stimuli, might be more constraining for the next generation of models. And that is exactly what I'm doing currently. But at the same time I was thinking about what kinds of perturbation experiments might have more benefit for the kinds of models that we have right now. And that led me to developing, well, we say developing a lot, but basically testing, these chemogenetic strategies, where you inject a virus in a brain area. In my case, I also implanted a Utah array on top of it. So we injected DREADDs in V4, which were supposed to silence, or down-regulate, the activity in V4, and then we implanted a Utah array.

Paul    00:51:29    Sorry, can you say what DREADDs are? Because I don't think we've mentioned them on the podcast before. What are DREADDs? And then I also want to ask you: so you injected, and then in a separate surgery you implanted...

Ko    00:51:43    No, it was done in the same exact surgery.

Paul    00:51:48    Okay. Sorry, sorry to interrupt you. Yeah,  

Ko    00:51:51    No problem. So the basic idea is that you inject a virus that ends up manifesting as a receptor in a neuron, a receptor you can activate or deactivate by various means. It's the same idea as optogenetics. With optogenetics, to activate that particular receptor you need to shine light on that neuron, on that area; with chemogenetics, you inject a drug into the system instead. There are pros and cons to these different approaches. For example, with optogenetics you're limited in where you might want to inject, because light delivery is tricky: you're mostly restricted to the surface of the brain.

Ko    00:52:49    Deeper structures might be very difficult to target at scale; maybe you can target one or two neurons. With chemogenetics, you can basically inject the virus anywhere you want in the brain, and it gets activated through an injection you do into the bloodstream, which activates all the receptors that have been produced. But there are temporal limitations. Optogenetics can go very fast, quick on and off, but DREADDs are more like muscimol in that the effect stays on for some time. How long? I don't know exactly, but from my estimates it's on for maybe a couple of hours, on a time course very similar to muscimol.

Ko    00:53:44    And what I have been doing is this: with these arrays, you can show the same images over and over again after you've injected the activator drug, and you can see the time course of neurons responding lower or higher. Then you can have behavior on top of it. The monkey is also behaving in different blocks, so you can see some deficits come up, and then the deficits go away by the end of the day or so. So I'm at least thinking about how to take this and make it useful for models. I can say that V4 is involved in object recognition, and...

Ko    00:54:28    ...I don't know, not too many people will be interested to listen to that. But suppose Brain-Score has a thousand models that all have a 0.5 correlation with V4 activity, and now I give you some V4 inactivation data, and 900 of them fall off because they cannot predict the pattern of deficits that V4 inactivation produces. That might be an important thing to learn. But as you see, here you need a model that has a brain-tissue mapping of V4: where are you injecting the virus in the model versus in the actual brain? So there are parts of this problem that are still more complicated. But I think the chemogenetic strategy, at least for areas like V4, where you know where you're injecting and these are mostly retinotopic areas...

Ko    00:55:22    ...gives you some level of correspondence with the models. And then you have neural data on top of that. So you can just say: I don't care about your assumptions, just fit to the neural data. You have the V4 neural data with and without inactivation; you have your model with and without inactivation. Fit to all the data you've got, and then predict what happens to IT, or predict what happens to behavior, in the model. That's how you validate the model. I think that's a stronger way of using these perturbation experiments, because it's not uncommon to see experiments where someone says, I perturbed this area and nothing happened, and someone else says, no, no, you didn't do it right, and so on.

Ko    00:56:05    If the answer is always just a yes or a no, it will stay there. It has to be about falsification of competing models, and then maybe some data will be more useful than other data. The other upshot of having something like this: imagine you have monkeys doing these tasks in their home cages. We have a lot of monkeys that are trained up, and they do these tasks all day in their home cages, whenever they want, because they have a tablet. You can pair this up with that system, and you just need one person to go and inject something in a monkey, and then you have days where you can run this with some part of their brain chemogenetically inactivated. You can even multiplex with the viruses: you can target inhibitory neurons, et cetera, and you can have different viruses injected in different parts of the brain, each with its own corresponding activator drug. So I think a lot of interesting data sets can come out of this approach, which should bear on the modeling questions.
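
A toy sketch may make the falsification logic above concrete: score candidate models by how well they predict the per-image pattern of behavioral deficits after inactivation, and discard the ones that fail. Everything here is hypothetical (stand-in data, made-up candidate models, an invented threshold); it illustrates the idea, not Brain-Score's actual API.

    import numpy as np

    rng = np.random.default_rng(0)
    n_images = 200

    # Stand-in for the measured per-image behavioral deficit:
    # accuracy(control) - accuracy(V4 inactivated).
    observed_deficit = rng.normal(loc=0.10, scale=0.05, size=n_images)

    def make_toy_prediction(fidelity):
        """Toy candidate model output: predicted per-image deficits that
        track the observed pattern with some fidelity (0 = pure noise)."""
        noise = rng.normal(scale=0.05, size=n_images)
        return fidelity * observed_deficit + noise

    candidates = {f"model_{f}": make_toy_prediction(f) for f in (0.0, 0.5, 1.0)}

    # Score each candidate by how well it predicts the deficit pattern,
    # then keep only the survivors above an (arbitrary) threshold.
    scores = {name: np.corrcoef(pred, observed_deficit)[0, 1]
              for name, pred in candidates.items()}
    survivors = [name for name, r in scores.items() if r >= 0.5]

    print(scores)
    print("models surviving the inactivation benchmark:", survivors)

The point of the sketch is the filtering step: a benchmark built from perturbation data rejects whole families of models at once, rather than settling a single yes-or-no claim about an area.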

Paul    00:57:10    What I want to know is the vision you have for your own lab: how much of it is going to be this kind of work, and how much is going to be modeling, and so on?

Ko    00:57:24    A lot of it is going to be this kind of work, just pushing the boundaries of experimental neuroscience. But the modeling is going to be the backbone of the lab. The computational part matters so much that no answers can come out of the lab if there is no model attached to them. So I will be collaborating with others, and I'll have people in the lab building these models as well and testing them. But I don't think I will be happy at the end of my career if I did not improve a model of the system after doing all these different experiments. So it's going to be a mix. And I should probably mention this: honestly, I'm not really interested in building the best model of core object recognition or dynamic visual perception or visual cognition just for the sake of building that model and understanding how the brain works.

Ko    00:58:21    I don't quite motivate myself that way. And it's kind of interesting, because for training purposes these were the most concrete fields and the most concrete labs, so I thought, this is where I should get trained. But I wake up every day thinking that maybe my research is going to help someone's life. And I know this sounds like, oh wow, what a great person you are. But really, let me tell a small story; please cut it out if it's not relevant. I've been working in visual neuroscience, and people back home in India know that I work in visual neuroscience.

Paul    00:59:00    What do you mean, people know? Like, India knows?

Ko    00:59:03    Like my family, my family. Sorry, people: one billion people. No, no, like five people, not one billion people.

Paul    00:59:11    Oh, that's more than what I've got. So there you go.

Ko    00:59:17    Yeah. So among those five people, or maybe ten Indian families, say fifty people, there are some who have some idea of what I might be doing, and it's completely wrong. I had this encounter with someone whose kid, unfortunately, had been diagnosed as being on the autism spectrum. I was meeting them, and they asked me, so, what are you working on these days? And I said, I'm working on visual cognition, saying stuff like, how do we reason, things like that. And this person turns to the kid and tells him, you know, your elder brother is one day going to work out the solution. The kid is very young and can't understand anything of what they're saying, but they're basically telling him that I'm going to come up with a solution that will cure him.

Ko    01:00:16    Right. And I just felt like I was failing. I could not find any connection between what I do and what it translates to. That was a pivotal point, where I started thinking: I need to find real connections between what I'm doing and how it really impacts or translates. Not just the first paragraph of a grant saying, you know, I'm working on dyslexia, this is relevant to schizophrenia, et cetera, but really trying to find them. So at least some part of my future research is going to be trying to find out how having these concrete models with brain maps can be beneficial for diagnosis and, potentially, treatment strategies in some of these neurological disorders. I've started working a little bit towards these goals, and I'm very excited, because I think there are real benefits. And you were mentioning the neural control studies. Those are the kinds of studies that really give me hope that there is a way to contribute to this.

Paul    01:01:30    That's kind of a magical thing. So that wasn't your motivator for a long part of your career, but it started from a place of guilt, and guilt is a great motivator, and it's developed into a real motivation for you. I never had that. I don't care about helping people, and so I always felt bad writing schizophrenia in a grant, for example. Right.

Ko    01:01:59    Yeah. I mean, it's a little bit philosophical. I don't even know if I care about helping people in some selfless way. Maybe I'm thinking I'm trying to help people but I'm really just trying to help myself, thinking, well, what if I have Alzheimer's or something in my old age? But currently, at least, it does give me some level of satisfaction to think there is potentially some link between my research and help for someone.

Paul    01:02:29    I mean, it is interesting to think how, through our work, through your work, through people's work, your interests change as you develop and as you ask different questions and answer different questions. It's just kind of a magical thing. So thanks for telling that story.

Ko    01:02:48    Yeah, that definitely impacted me a lot. But I think these are related issues. You were asking about understanding and progress, about understanding vision and visual cognition. The moment we start to measure our understanding, with Brain-Score or something like it, the answers about clinical translation become more concrete too. So I think they're very related. It just took me a while, and maybe I'm still working on it, to figure out where exactly the most relevant parts are. My interactions with folks doing autism research have really helped. For example, I've been in touch with Ralph Adolphs at Caltech, and we're collaborating on a project. Those discussions, and reading the papers: I think they have a lot to contribute to what I do, and I think our way of thinking about the system has a lot to contribute to that research.

Paul    01:03:55    Interesting. So, you mentioned the image synthesis work a little bit. Can we talk briefly about that? Maybe you can describe what the work is. I talked with Jim about this when he was on the podcast, but we can recap, because it was kind of splashy, right? And I want to hear how you currently think about that work as well.

Ko    01:04:18    Yeah. So this work was done in collaboration with Pouya Bashivan, who's at McGill now; me, Pouya, and Jim did the study together. The basic idea was that we were recording in V4, and we have models of V4 neurons. The question was: using the model, can you come up with stimuli that put the neurons in specific desired states? One of the states we considered was, let's make the neuron fire the most we can. So the model will tell me...

Paul    01:04:53    Yeah. So this is the control aspect of understanding.

Ko    01:04:57    Right: prediction and control, and this is the control part. The models could predict, but maybe they couldn't control, because of the images that were synthesized. There's a separate technique involved, which is how you synthesize the images, and in principle that doesn't need to be attached to the specific model you're using to predict; they can be two separate things. But for us, we were using the same model to come up with the images as well. So we came up with the images and tried to control the neurons. We were targeting V4. Let's make this neuron fire as high as possible: that was one of the goals. The other goal was, let's take a bunch of V4 neurons that share the same receptive field properties and try to set one of them very high and the others very low.

Ko    01:05:47    That's population-level control. So these were the two goals we started with. And then people ask the question you've heard before: what do V4 neurons do? They respond to curvature. What do V1 neurons do? Gabors and orientations. V2 is texture, and IT is faces. Now you come up with these stimuli and you look at them, and I don't know what to call them. Maybe they're something, but we ignored that problem. We just said, let's take these images and see whether the model's prediction is right, because that shows whether, using these models, you can control the neurons to some degree. That was basically the study. We had some success, and we compared our success rates against taking a random sample of natural images, and against the previous ideas about the stimulus space that excites these neurons, like curvature.
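
The synthesis loop Ko describes, using the same differentiable model that predicts neural responses to push the pixels of an image toward a desired population state, can be sketched in a few lines. This is an illustration only, with an untrained stand-in predictor; the actual study used models fit to recorded V4 responses, and its generator and objectives differed in detail.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in differentiable predictor: image -> predicted responses of
    # four model "V4 sites". (The real work used models fit to neural data.)
    predictor = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, 4),
    )
    for p in predictor.parameters():
        p.requires_grad_(False)  # model weights stay fixed; only pixels change

    image = torch.zeros(1, 1, 64, 64, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=0.05)

    target = 0  # population goal: drive site 0 high while suppressing the rest
    for step in range(200):
        optimizer.zero_grad()
        responses = predictor(image.clamp(-1.0, 1.0)).squeeze(0)
        others = torch.cat([responses[:target], responses[target + 1:]])
        loss = -responses[target] + others.abs().mean()
        loss.backward()
        optimizer.step()

    print(predictor(image.clamp(-1.0, 1.0)))

The single-neuron "stretch" goal Ko mentions is the same loop with the suppression term dropped, so the loss is just the negative of the target site's predicted response.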

Paul    01:06:46    I want to hammer this home, because the images that drove the neurons, and you mentioned this, but I just want to reiterate, were terribly unnatural, right? Not something you would see. Well, there are elements in them that you would see in nature, but the majority of them weren't natural.

Ko    01:07:06    I don't know what to even call them: some pixel conglomerations. There were two studies that came out on the same day, and I think the other study's images are even scarier. That one, from Carlos Ponce, Will Xiao, Gabriel Kreiman, Margaret Livingstone, and colleagues, was trying to control IT, to come up with images for IT. Those images look even scarier, but because they have some kind of natural relevance, they look like something out of a horror movie. The V4 images were more texture-like images. And we were also restricting ourselves to black-and-white images and such, so the synthesis was constrained in certain ways that led to those images.

Ko    01:07:49    But as you were saying, yeah, it did get a lot of attention. I think some folks got excited about the wrong thing in the paper, though. The resulting images that drove V4 cannot be the protagonist of the story, and that kind of became the story, because we like to say things like, faces excite IT neurons, or XYZ excites such-and-such area. In that formulation, the images become our new understanding of the system. But it was not about the images. It was about what you can do with this model, because this is the model that tells you the predicted neural response for any given image. We think of this as a stronger test of the model, because there are many models, and they can come up with different images, and then you can test those as well.

Ko    01:08:43    And there's very interesting work from Nikolaus Kriegeskorte's lab on controversial stimuli. Those are the right kinds of approaches, at least to me: you pit these neural networks against each other, synthesize stimuli, and then test them. It's a different kind of control experiment, but at the end it's about model separation and finding the best model. It's not about looking at those images and making up stories about them. The other side of the story, though, is that this should not make someone feel like, oh, this solves core object recognition, this is the model. There are ways of presenting data that can oversell the point. To me it's still a proof-of-concept study: look, if you take this approach versus that approach, our approach is better. That's the way to present the study. But it doesn't mean our approach is the best approach, or that we are done.

Paul    01:09:46    Do you have people suggesting that we're done?

Ko    01:09:50    I don't think people explicitly suggest that we are done, but they might use this as an example of how great CNNs are. And it depends on whom you're talking to. I can also use the same example when somebody says, oh, CNNs have adversarial images, so this is a completely wrong family of models; I can use this example to say, look, you can do some useful stuff with them. But if I argue that you need recurrence and you need to incorporate other areas, someone might go, but you can already control neurons reasonably well, why do you need to incorporate all of that? Well, if you really look into the models, look at their generalization, it's not that good. And again, 'good' is a very arbitrary choice of word.

Paul    01:10:40    But you feel like, in some sense, you're your own worst critic, right? Because you see all the nuts and bolts, and you see what's missing and what needs to happen. So do you feel like people are too complimentary, too impressed, with the current work?

Ko    01:11:03    I think they shouldn't be, but I actually think this is our responsibility, to also expose where the models fail. If you read the two papers together, the neural control paper and the recurrence paper, one paper highlights how you can use these models, and the other highlights the images where humans and monkeys are good and the models are failing, and therefore the ways to improve them. If you take all of these studies together, you get a more balanced perspective. I know as well as anyone that, for a lot of reasons, you need to sell studies in a certain way. But in discussions like this one, and in the discussion sections of papers, we should always be highlighting the confounds and the places where these models can improve. Even for core object recognition, these models fail in very trivial ways, and some people just reading the control paper might think, oh, this is probably already solved.

Ko    01:12:06    Maybe those people don't exist. Maybe this is a thing I've created in my head.

Paul    01:12:09    More guilt. Yeah.

Ko    01:12:12    Absolutely.  

Paul    01:12:14    I know that one of the things you're interested in is visual reasoning, and I don't know if you want to explain why you're interested in it and what it is. But here's one of the ongoing criticisms. Non-human primates are kind of the gold standard in neurophysiology; classically you need an N of two, two monkeys, to publish. Recently a lot of people have been working more and more in rodents and mice, and of course there's always been the disconnect between mouse brain and human brain. One reason people like to study non-human primates is that they're the closest thing we can study that resembles human brains. Do you see limits to studying non-human primates to get at our intelligence? The reason I ask about visual reasoning is that object recognition is a fairly simple thing. I know it's not simple, but, you know, we recognize objects, and now you're starting to ask higher-cognitive, quote-unquote, questions. I'm wondering if you see limits to using non-human primates for that.

Ko    01:13:30    My answer to that question will be based, in some sense, on the kind of data I will be collecting. The way I see this problem: ultimately, at least for myself, and I'm not suggesting everybody take this approach, I'm pretty human-centric in my worldview. My goal is to find out how humans solve a particular problem, so humans are basically the main model I'm interested in. We start from human behavior on different tasks. Ideally we'll have a model, currently maybe some form of convolutional neural network, which has many areas other than the ventral stream, like the dorsal stream and PFC, and which predicts parts of the behavior of humans, maybe at full capacity.

Ko    01:14:19    At least one angle for approaching the monkey research would be: can I get some neural data that might constrain those models, that might improve them? Usually the way people go about it is that they collect some neural data, come up with an inference that can be summarized as a smaller principle, like, have recurrence, or like a smaller model, and then incorporate that idea into the bigger model and ask whether it improves the bigger model. I can do that, and I'm probably going to do a bit of that, basically saying, look, this other area in the monkey brain is associated with this particular behavior, and maybe that is going to improve my development of the models.

Ko    01:15:08    The other thing could be to directly feed the data you're collecting into the model building itself. You're getting a lot of monkey data, so then it's a question of how much data is enough data, and we are getting more and more. I think this is the right time to start putting it into the models. Right now I'm involved in a project where all the data I've collected is being filtered into the training part of the model; the models are being regularized with that data, and those models are becoming better predictors of core object recognition. So that's one way of bringing the monkey neural data and the monkey behavior to this problem. The other way I think about this is that humans and monkeys, as has probably been shown in many ways, share a very similar visual system.

Ko    01:16:01    So even if I just record responses of visual neurons in IT or other areas while showing some of the movies or videos the task is based on, I can provide constraining data for the model: you need to be in this representational space, and then solve the problem. It's a two-part approach, where the neural data constrains the representational space of the model, and on top of that you add a decoding layer that reads out those representations. You can have multiple ways of decoding the task, and then you compare to human behavior. This could sound novel or surprising, but it's exactly what Jim's lab, our lab, has been doing for core object recognition for quite a while: recording in the monkey brain, then comparing the decoding model's output to human behavior.

Ko    01:16:57    Because I was also getting behavioral data from the monkeys, I have now started looking at trial-by-trial and image-by-image correspondences: monkey neurons against human behavior. We had a paper with Rishi Rajalingham looking at monkey neural responses to words and non-words and their correspondence to human behavior on orthographic processing tasks. So there's a way to do this somewhat separated from the behavioral task, if you're asking whether the monkey needs to do the behavior for the data to be relevant, and the same applies to rodents and other species. To me, as I've been saying throughout this discussion, at the end there is a model, and whatever you do, you need to show that it adds to the improvement of the model on something.
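
A minimal sketch of this two-part recipe, with stand-in arrays in place of real recordings and human data: train a simple linear decoder on neural population responses, treat its per-image outputs as "behavior," and correlate them image by image with human performance. All names and numbers here are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_images, n_sites = 300, 50

    neural = rng.normal(size=(n_images, n_sites))  # stand-in population responses
    labels = rng.integers(0, 2, size=n_images)     # stand-in binary object labels

    train, test = slice(0, 200), slice(200, 300)

    # Decoding layer: a linear readout trained on the neural responses.
    decoder = LogisticRegression(max_iter=1000).fit(neural[train], labels[train])

    # Per-image "behavior" of the decoder: the probability it assigns to the
    # correct label for each held-out image.
    proba = decoder.predict_proba(neural[test])
    decoder_behavior = proba[np.arange(100), labels[test]]

    # Stand-in per-image human accuracies on the same held-out images.
    human_behavior = rng.uniform(0.4, 1.0, size=100)

    consistency = np.corrcoef(decoder_behavior, human_behavior)[0, 1]
    print("image-by-image consistency:", round(consistency, 3))

With random stand-in data the consistency hovers around zero; the interesting scientific cases are where real neural decoders do, or conspicuously do not, reproduce the human pattern.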

Ko    01:17:55    And from what we've just been talking about: maybe my goal is not to improve prediction of human behavior to ceiling. Maybe it's this: if I'm predicting the behavior of neurotypical subjects versus people with autism, do I have some traction on that problem? Things like excitation-inhibition imbalances I can create more easily with chemogenetic perturbation in a monkey, and then test what those representational spaces look like, and those could be constraining ideas for building models of people with autism. So there are many, many ways, and I'm seeing all of these ideas, at the risk of sounding like a scatterbrained person. But at the end of the day, these are the things that excite me. I won't be able to solve it all by myself; I'm hoping a lot of similar-minded people will come together and try to tackle this.

Paul    01:18:55    So, Ko: neuro-AI. A lot of your most recent career has been using deep learning models to shed light on brains; that's the arrow from AI to neuroscience. And part of what you're doing is using brain architecture and neuroscience details to improve the models bit by bit, like you were discussing. Do you see neuroscience helping AI? Or does AI not need neuroscience? Can AI just scale up and go to AGI, or what?

Ko    01:19:34    That's an interesting question, and my answer might not be that satisfactory, just because of my lack of knowledge in a lot of these domains. But I think of this problem in different ways. If I think of it as, okay, I'm going to build a calculator, should I constrain myself with brain data? No; it's going to be a terrible calculator for scientific computing. If the goal of an intelligent system is to calculate things fast, then constraining it with neuroscientific ideas and data is a bad idea. Now, maybe we can make a distinction between behavioral data and actual neural data. If I had to prioritize which data might be more informative for building models in AI, I think behavioral data would come before neural data. One example might be Moral Machine kinds of data, from the MIT Media Lab. If we are trying to constrain a system to work like humans, I think human behavioral data will be key to constraining it.

Paul    01:20:47    That's kind of been the success of deep learning, right? The old way in neuroscience was to build a model out of intuition and then compare it to data. The new deep learning approach is to build a model and train it, optimize it for a task, like an animal or organism would perform. So it's all about behavior, and lo and behold, the model predicts neural data well also, right?

Ko    01:21:12    Yeah, definitely. But I was making a slight distinction between overall performance on a behavior and following the pattern of human behavior, the error pattern. ImageNet-trained models are trying to get the labels correct, which is a behavior, but humans might not always get those labels correct, and they might show different patterns. So I was mostly thinking of the error pattern: what kind of decision do we make given some confusing stimulus? Those kinds of data might be more relevant if models are supposed to operate in a human regime, because I'm thinking of a system that might be helping somebody through life, somebody unable to do certain things, where that machine or robot has to interact with the person.

Ko    01:22:00    And then I think it might be important for that system to be constrained by human behavior to some degree. For those purposes, behavioral data is very valuable; at least that's how I think about it. AI in healthcare, for example, might be a setting that is very constrained, and there the neural data might have some bearing. It still has to be shown, but I feel like there might be something there. As I was saying, these ideas about how the brain differs between a neurotypical subject and an atypical subject: it depends on the scale of the data, how we are getting it, and on the relationship of the brain representation to behavior.

Ko    01:22:49    Those kinds of data might help us build better models of the atypical systems, and then find solutions catered to the atypical system. I'm being very abstract here, but I can give a dream example. If you know exactly how a system learns a new task, and you can model that for both atypical and neurotypical populations, you might be able to use the atypical model to come up with a learning sequence that produces neurotypical behavior, even though it's an atypical system. That is definitely within the genre of AI-in-healthcare approaches. So in that direction, the neuro-to-AI links are more clear to me. Generative models might also get a boost if they're regularized with neural data; that's another angle. But what I'm mostly pushing back against is the assumption that it's obvious that brain inspiration or neural data is going to improve AI models. Maybe you can just get behavioral data and that's enough, and you don't need to poke around in the brain.
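
Ko's distinction between overall accuracy and matching the human error pattern, a couple of turns back, is easy to see with toy numbers: two models can have identical accuracy while differing completely in which images they miss. A hypothetical sketch:

    import numpy as np

    rng = np.random.default_rng(2)
    n_images = 500

    # Stand-in per-image human outcomes (True = correct), roughly 80% accurate.
    human_correct = rng.random(n_images) < 0.8

    # Model A errs on exactly the same images as humans do.
    model_a = human_correct.copy()
    # Model B has the same overall accuracy, but its errors land elsewhere.
    model_b = rng.permutation(human_correct)

    for name, model in (("A", model_a), ("B", model_b)):
        accuracy = model.mean()
        agreement = (model == human_correct).mean()  # error-pattern overlap
        print(name, "accuracy:", round(float(accuracy), 3),
              "agreement with human pattern:", round(float(agreement), 3))

Both models score the same on the benchmark, but only model A operates in the human regime in Ko's sense; the per-image agreement is what separates them.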

Paul    01:24:13    Isn't it interesting that these deep learning neural networks are based on 70-, 80-year-old neuroscience? Fundamentally, the idea of a neural network goes back even to the logical units. And you're adding more biological constraints to your models. So it's an interesting...

Ko    01:24:32    That's true. I agree that that's where all of these ideas might have come from, and that's a good reason to keep looking at neuroscience for inspiration for building better models. But if I look at the last ten years, I really don't see a concrete example where someone read a paper in Nature Neuroscience or the Journal of Neuroscience, took that idea, and implemented it in a model.

Paul    01:24:57    Run by  

Ko    01:24:59    The end, they’re like engineering hacks. Like, I mean, yeah, the groups like to use it as PR, which is, I think the reason why, so it’s great for that purpose, but I think in reality at the end, you can have it. I mean, I mean, and that’s, that’s fine to me. Like, even if you have an idea of a dropout and then you figure out how to really like tweak it to make it part of a model that does something that’s great. And I think in that way, it’s really good to have neuroscience as an inspirational kind of umbrella on top of everything. Good, good for my career and I’ll be able to talk to them. But I think, I definitely think there is, there is purpose of, of neuro, I mean, yeah, that would be use of neuroscience for AI, but we need to be careful to not oversell it.  

Ko    01:25:40    Or maybe we should, I don't know. But the other direction is more valuable to me, especially because in neuroscience you're trying to measure data in the brain that is noisy and sample-limited, and then build theories and models around that: what to expect, how to think about high-dimensional spaces, and so on. Once you have a model that performs a very high-level behavior very accurately, that complex system gives us the opportunity to really figure out how to even analyze a complex system. To me that's a huge bonus from these networks, because until now we have been trying to do both things at the same time: build a complex system and figure out how to analyze a complex system. Here are networks that are already built, and you can formulate different theories based on them. They really become the starting points, maybe the base hypotheses, for a lot of these neuroscientific experiments. That's how I've mostly been getting excited about the crosstalk between the two fields.

Paul    01:26:54    We talked about the kind of archaic fallacy of naming a brain region and giving it a role, the modularity of the brain, prefrontal cortex does X, that sort of thing. And we've talked about, well, I guess I mentioned, how language actually limits us in some sense. Do you feel like we understand what intelligence is? Do we have the right notion of what intelligence even is to continue trying to build, quote-unquote, AI?

Ko    01:27:31    I don't really know. I probably don't have a complete understanding of what intelligence is, but I have a fair understanding of what kind of intelligent behavior I would like to build models for. And that's the engineer in me talking, because I know what problem I have defined; I just don't know the solution. Tasks that are slightly above recognizing an object, like trying to figure out what different agents are doing in an environment, or trying to predict what might happen next: these kinds of behaviors are fairly intelligent behaviors, and my goal is to build models and try to figure out how the brain actually solves those problems.

Ko    01:28:18    So in that way, I'm fairly happy with my definitions of intelligence. But then again, I'll get in trouble saying what intelligence is, because people will bring up the typical measures, like IQ scores, which are heavily debated. What I want to say is that we can keep debating what the right score is, what the right way of quantifying intelligence is, but we have to do it in some way if we want any measurable progress. So I have defined it in some way, and I will keep improving the definition and expanding on it. Intelligent behaviors, to me, are not that controversial: anything that I can do that my three-year-old son cannot do almost seems like a definition of being a little bit more intelligent, though he might be learning faster than me. So at this stage, definitions like that maybe exist, but yeah.

Paul    01:29:16    You have a three-year-old?

Ko    01:29:18    I do have a two-year-old.

Paul    01:29:19    A two-year-old. Is that your only child?

Ko    01:29:21    Yeah. He's our only one.

Paul    01:29:23    Oh man. That's kind of a hard patch to be going through while starting a new job and all that, so I feel sorry for you. I mean, it's a wonderful thing, obviously, but it's challenging early on.

Ko    01:29:36    Yeah. Yeah, it is.

Paul    01:29:40    Are you... go ahead, go ahead.

Ko    01:29:45    I must say, I'm happier on average. After taking into consideration everything around the child, overall I'm happier that we have a son. But by the tiniest margin, like a p equal to 0.04.

Paul    01:30:07    I used to draw this pie chart that I would show people: why do you like having kids? And it's like 51% yes, 49% no. All right, maybe I'll cut this, because I sound like a real jerk. Are you hiring in the lab? Are you looking for students? What's the situation?

Ko    01:30:30    Yeah, I'm definitely looking for postdocs and grad students to work together with in my lab. The grad students will basically be recruited through York's graduate program, and the postdoctoral candidates I'll just talk to individually and see where the alignments lie. So yes: if folks are interested in anything we spoke about, and maybe if they've read some of the papers and have interesting directions they want to pursue, I'm definitely interested in talking.

Paul    01:31:09    He's the future of neuro-AI, folks. This has been a lot of fun, Ko. Congratulations again on the job. Gosh, I'm just excited for you. It sounds like you have a lot to pursue, and things are looking up. Not that they were ever looking down, but congrats.

Ko    01:31:30    Thanks, Paul. There have been a lot of promises made; I feel like I'm making a lot of promises, and I hope I'm able to deliver. As long as I can quantify what those promises are, I can tell you in maybe a year where I've been, how much I've delivered.

Paul    01:31:47    So check in in a year.

Ko    01:31:50    We should check in. But yeah, I'm excited. I think this is worth doing, so I'm all excited to get on with it.

Paul    01:31:59    That's been great, Ko. Thank you.

Ko    01:32:00    Thank you so much.


0:00 – Intro
3:49 – Background
13:51 – Where are we in understanding vision?
19:46 – Benchmarks
21:21 – Falsifying models
23:19 – Modeling vs. experiment speed
29:26 – Simple vs complex models
35:34 – Dorsal visual stream and deep learning
44:10 – Modularity and brain area roles
50:58 – Chemogenetic perturbation, DREADDs
57:10 – Future lab vision, clinical applications
1:03:55 – Controlling visual neurons via image synthesis
1:12:14 – Is it enough to study nonhuman animals?
1:18:55 – Neuro/AI intersection
1:26:54 – What is intelligence?

View Full Transcript

Episode Transcript

Speaker 1 00:00:04 I kind of wake up every day to sort of think that maybe my research is going to help someone's life. And I think this is kind of like, oh, well, what a great person you are. But like, I really, I mean, I think I'm going to do like a small story. Maybe this is please, you can cut it out. If it's not relevant, let's go to 5,000 BC trying to explain it. I'm trying time traveling back then trying to explain the motion adaptation model to them. They'll be like, go away. Like, you know, what are you talking about? This is not, I don't understand anything. So all these models are not real models of the brain. Like, I don't know, how is the network failing? How do we know it is failing? And like what could be the additions that you can make to the models that improves it? I think to actually have a good quantitative, tangible grasp on those questions. I think you need a platform like brain score to actually be there. This is the model that tells you that what is going to be the predicted neural response for any given image. I think that's what, where we are in terms of that. We think of this as a stronger test of the model, because there are many models than there can come up with different images. Then you can test those as well. Speaker 0 00:01:18 This is brain inspired. Speaker 2 00:01:31 Hello, good people on Paul attempt her of good, uh, personhood master of none. Today. I bring you Coheed H Carr, who also goes by co master of core visual object recognition. So co has been a post-doc for the past few years in Jim DeCarlos lab. And if you remember, I had Jim DiCarlo on back on episode 75, talking about the approach that his lab takes to figure out our ventral visual processing stream and how we recognize objects. And much of the work that Jim and I actually talked about was done in part by co. Now co is an assistant professor at York university, where he'll be starting his lab this summer. His lab is called the visual intelligence and technological advances lab. And he's part of a group of people who were hired into a fancy new visual neurophysiology center at York that is going to be led by none other than my previous post-doc advisor, Jeff Shaw. Speaker 2 00:02:29 So Cohen, I kind of continue the conversation about using convolutional neural networks to study the ventral visual processing stream. And on this episode, we talk, uh, about that background a little bit, and also CO's ideas for where it's going. So as you may know, what started out, uh, as a forward convolutional neural network has since been extended and expanded and co continues to extend and expand both of the models to account for object recognition and experimental work that will be used in conjunction with the models to help us understand visual object recognition. And that includes adding other brain areas and therefore models to more wholly encompass and explanation of our visual intelligence. So I get CO's thoughts on what's happening, what will happen and how to think about visual intelligence and a lot more topics I linked to his lab and he is hiring as he says at the end. So if you're interested in this kind of research, you should check it out. I link to it in the show notes at brain inspired.co/podcast/ 122. Thank you as always to my Patrion supporters. If you decide you want to support the podcast for just a few bucks a month, you can check that out on the [email protected] as well. All right. Enjoy COHI teach car. Speaker 2 00:03:49 Uh, are you an electrical engineer? Are you a neuroscientist? 
What the heck are you? Speaker 1 00:03:55 Um, yeah, I think I'm an electronics engineer, according to my undergraduate, um, education and training, uh, and then sort of a move slowly, gradually into like biomedical engineering, one step towards neuroscience maybe. And then finally did a PhD in neuroscience. Speaker 2 00:04:12 What was it that got you interested in a neuroscience? Speaker 1 00:04:16 I think like all of, a lot of us, I think those were discussions about consciousness and things like that, that I kind of cringe upon a little bit now, but those were the, those were the introduction to neuroscience. And I think I particularly got influenced by a lot of these very nice storytellers, like, uh, so I was doing my masters at New Jersey Institute of technology, but I was sort of, um, cross registering for courses at Rutgers where Jackie was, was a professor back then and just like listening to him and the way he talks about the brain. I think those kinds of those were sort of the initial hooks to like, oh, I really want to be in this field and be with these people and like talk about the brain with them, things like that, like sort of at a very artificial superficial level. Um, mostly, and I remember going to one talk, uh, from V S Ramachandran at Princeton and it was like, those kinds of things was like, wow, like this is such a, you know, uh, interesting system and I want to work on it. And, and I think that were the initial things that kind of like drew me towards studying the brain Speaker 2 00:05:16 Storytellers, the Speaker 1 00:05:18 Storytellers pretty much. And I think now I'm kind of like thinking that could be something beyond storytelling and like, uh, but, but the storytellers are perfectly fine scientists and they also do a lot of stuff that I kind of do now. So like, there's nothing against storytelling, but I think that component that I sometimes kind of feel like, oh, what is the use of that? I think that's really useful because like to tell a story about your science in a way that sort of attracts young minds, I think is great, Speaker 2 00:05:46 But now, so, so consciousness and storytelling, uh, drew you in, but now you've discarded both of them as a frivolous. Speaker 1 00:05:54 I don't think I have discarded them as freewill as I, I just have, I think my time is spent better doing other things than that. I think, I don't think those are like bad problems to work on or like you useless things. I think there's actually very useful, but I kind of realized that that's not my sort of, you know, a forte, like that's not, my expertise is not doing Speaker 2 00:06:18 What percentage of people do you think who, uh, you know, are, are drawn in, are drawn in because of like the big questions like that. And then, um, you know, I said discard or, uh, you know, whatever I said, but then go on to realize, you know, start asking very specific questions and, and kind of leave those larger things by the wayside. It's a really high percentage, isn't it? Speaker 1 00:06:42 I think so. I think it's a very high percentage, but, but I think also it probably is useful to kind of keep reminding ourselves what the big questions are and like, uh, so I think that simultaneously very important. And it's just that the, Speaker 2 00:06:56 Sorry. No, no, that's fine. 
I was just going to say that I think part of the reason, and I don't really know the whole reason, but I think part of the reason is that, um, those big questions get you in, and then you realize that there are a lot of big questions that are super interesting that aren't those questions. I don't know. Does that seem on point? Speaker 1 00:07:13 I think that's right. And I think there's kind of like very similar to how I feel right now. And I think, I mean, I know that it's sort of like saved multiple times, that it's all about asking the right questions and like, the questions are very important, but what, at least from my perspective, I think I realized that like the answers and what do I consider satisfactory answers to those questions often determine like how you approach your science and things like that. So to me, like, it's just not about the question is also like, what kind of answers am I satisfied with and why am I seeking that answer? I think those are the real drivers of what I actually do in the lab. Yeah. Um, yeah, of course I would like to like, you know, simulate, I dunno, uh, consciousness in an artificial system, but I think that that is going to be a very difficult, um, kind of objective to, you know, go for in a lab and get funding for it. I mean, I'm, I'm, I'm really happy that some people are trying to do that who are more privileged than probably I am. But Speaker 2 00:08:11 Congratulations on the new job, I guess that's not so new now, but where are you? Where are you sitting right now? You're not at York yet. Are you? Speaker 1 00:08:18 No, no. I'm S I'm still at, in Cambridge messages at MIT, my governance. Speaker 2 00:08:24 So when are you headed to New York? Speaker 1 00:08:27 Yeah, I'm starting in July. Oh, nice. Speaker 2 00:08:31 Well, congratulations. Speaker 1 00:08:33 Thank you. Thank you. Yeah. Um, I'm very excited and it was a very interesting hire because all of this happened during the pandemic. Yeah. I'm still supposed to go and see the department to some degree. It's, it's really, it's really a virtual remote, but I'm very happy so far with what I've all the discussions that I've had with colleagues there. And then I'm very to start Speaker 2 00:08:55 Working. You'll be, uh, you'll be near my, um, my postdoc advisor, Jeff Shaw up there. So Speaker 1 00:09:02 Yeah, I'm very much looking forward forward to working, Speaker 2 00:09:05 Um, that I, you know, I've asked him a couple of times he's been pretty busy cause he just moved to New York as well. Tell him that I am still waiting for him to come on the podcast, so. Speaker 1 00:09:14 Okay. Speaker 2 00:09:15 So, um, so you, you, um, your, your most recent work, uh, I was a postdoc in Jim DeCarlo's lab and Jim's spot on the show and you guys are one of the reasons why I asked you about your engineering background is because you guys are quote unquote reverse engineering, the visual system, uh, I guess it all it off with a convolutional neural networks, uh, and the feed forward story of convolutional neural networks. Um, but, and I don't know how you got into a deep learning, but I do know that you were discouraged at one point from, uh, from studying deep learning or using it. Can you tell that story? Speaker 1 00:09:55 Sure. Yeah, so, I mean, it's an old story. It's like probably like now already 10, 11 years old. This was 2008 when I started my masters in biomedical engineering. 
And, um, I think I kinda realized talking to a lot of people back then as an even saying the word or saying something like, oh, I'm working with a computational model and I'm in a neuroscience program. It's sort of like, you're looked down upon as a fake neuroscientist. You're not one of the real people that is doing the real neuroscience, Speaker 2 00:10:26 Please. You're not doing experiments Speaker 1 00:10:27 Because I was, at that time, I was not doing experiments. And I was mostly trying to like, look at things like, for example, like, you know, I was doing the working on auto encoders or neural network models, trained with, backpropagation basically looking at how internals of these networks might match some neurophysiological data that I had or some behavioral data. It came to things that everybody including me is all excited about these days. But like, Speaker 2 00:10:51 But that was before the quote-unquote deep-learning revolution in 2012. Right. So Speaker 1 00:10:57 I think it was still popular back then among certain groups, I guess. But I, I just did not, I mean, I couldn't have predicted that if I had worked on that, maybe there could have been some nice papers or nice, you know, uh, studies, uh, that I could have done. But as I was saying that, like, I kind of got a bit discouraged because like I just started realizing that, oh, this is not the real neuroscience, because I'm not sitting there with a slice of four mice brain patch, clamping. And like looking at neurons, voltage is going up and down on a stupid monitor or something. Like, I kind of feel like, you know, that's, that's the real deal. And I remember I, I prepared a poster for a conference and I was going to present this poster, which is work done with like these artificial neural networks. Speaker 1 00:11:41 And I think I was so, you know, um, I was, I was afraid that I will be ridiculed at that conference in the morning of the day. I can just got out of there. Like, I'm not going to present this, forget about it. I'm going to go back and I'm going to do real neuroscience and look what I'm doing right now. So I have a, unfortunately it's really ridiculous, but it's kind of pathetic, but there's a paper that I wrote back there with all these ideas of like, oh, back probably re reinforcement learning autoencoder student teacher network. And I it's, it's really badly written and it don't don't look at it, but like, I kind of use that as a joke with my friends, like, oh, only if I had, you know, pursued this, you know, like all this work from Dan and Jim like, oh, I was way before that. It was ridiculous. Uh, no, I don't think he would take that paper is a joke. Yeah. Speaker 2 00:12:32 Well, yeah. Well you were talking so real neuroscience. That's interesting because what you described with the mouse, uh, brain slices and patch clamping is exactly how I cut my teeth in neuroscience because I was a real neuroscientist. Right. So you think the definition of what a real neuroscientist is has changed now, so that, uh, people, you know, doing what you do is, um, do you feel, uh, like a valid neuroscientists now? Speaker 1 00:12:57 Yeah. Well, I kind of validated myself by doing monkey physiology and the partner patients. So whenever I'm doing that, whenever I'm leading that life, I feel like as a real neuroscientist, I mean, I still think that like, actually it helps to look at the brain and biological data to get the right perspective about the system. 
So I definitely value that, but I think with time, the importance of computational techniques and analysis techniques are so important now, just, I think as we were discussing, like there's a, there's an answer that you're seeking. And the answer to me is it's going to be in the form of that, uh, those models. And so like if you're not talking that language, it's sort of becomes difficult to communicate, it will become difficult to communicate any neuroscientific finding in the future. So I think in that regard, that might become the real talk of neuroscientists in a few years. If it hasn't been, Speaker 2 00:13:51 You're looking for an answer, what's the question. Speaker 1 00:13:54 That's a very good question. So I think, um, exactly. So the question that I think a lot of people are interested in is that how do we solve certain tasks? Like at least that's the way how I look at, from leading questions. Like I'm interested in neuroscience because I'm interested in a behavior and why I'm interested in that behavior, particularly, maybe because if that behavior goes missing, I'll be in deep trouble. So like, that's kind of my sort of, um, way of getting into this space of like, okay, there's a behavior then what does it mean to do a behavior? And how do you actually scientifically study? If so we measured this behavior, we operationalize that behavior with some task and we measure that. And then the understanding, or the question is that like, how does the brain solve that problem or, or give rise to that behavior. And then we start by building models of that behavior. And depending on what type of answers we're looking for, are we looking at how different neurons come together and produce that behavior or how different brain areas are participating in that behavior? We, we try to like, you know, uh, build specific units or parts of that model and look at them carefully. So at least that's how I formulate the question, but the bigger question is like, okay, there's a big behavior and how are we actually solving it? You know? Speaker 2 00:15:11 Well, so you can correct me if I'm wrong here, but the, yeah, the story is I see it, uh, from Jim's lab and from the convolutional neural network work is that, you know, you're trying to solve object, uh, core object recognition. Um, and you know, it started off with a feedforward neural network, uh, you know, that was built through many years. And then, um, you know, the deep learning world came on the scene and, uh, you guys realized that these networks accounted well for, uh, predicted the brain activity well, and kind of went on from there, but, uh, things have developed. So the reason why I asked what the question was, uh, is because, you know, it's interesting, it's almost like an isolated system, right? So you have this convolutional neural network and it is modeled, the layers are modeled after the ventral visual processing, hierarchical layers in the brain. Um, and you know, the goal is to understand a vision, right. And I don't know what that means. Um, you, do you feel like you guys have a, where are we in, uh, understanding vision? Speaker 1 00:16:18 Yeah. I think that there's a lot of questions in, in that, in those sentences, because like, let me maybe like explain a little bit about what I think of what understanding means maybe like, so, um, I, I think one definition of understanding that I have in my head is that it is basically coming up with a falsifiable model of, of something. 
If I understand something, I can make a model with predictions, and you can tell me, oh, you're wrong, that's a wrong understanding. And there can be different levels of understanding. For example, I understand how my coffee machine works because I can predict which button to press so that the coffee comes out. It's a concrete prediction, and you can test me: go turn on the coffee machine.

Ko    00:17:04    If I press the wrong button, you'll say I don't have any understanding of how this machine works. But say I press the right button and the coffee starts coming out; then, if the machine breaks down, a different level of understanding is needed, because I might have to fix it. You might ask me which part of the machine to fix, and how it works, and a more detailed level of understanding is required. In the same way, understanding vision will require multiple levels. One of them is the behavioral level: I predict the behavior. That's where we started. But all of this relies on concrete computational models. At least that's my current opinion of what understanding might mean for me: you have concrete computational models that make explicit predictions about how the system is going to work or perform.

Ko    00:17:56    And then you get to test them, and that's the understanding, and it keeps moving. Now, the problem, if we define understanding this way, is that we also have to have common goals about what we are trying to understand. What is that behavior? And my current view of the field is that we actually don't have common goals like that; we're all kind of doing our own things. So I think it's important to have certain specific goals: this is what we are trying to predict, these are the behaviors of the system, these are the neural data we are trying to predict; and then come together and work out the best models that can do that. Some of this we are currently trying to do with this website and platform called Brain-Score, trying to take an integrative approach to all kinds of data and all kinds of models.

Paul    00:18:57    So how's Brain-Score going? Are a lot of people using it?

Ko    00:19:01    Yeah, I think the user base of Brain-Score is definitely increasing. We submitted it at Cosyne, and we are potentially going to have a competition; I think it's going to feel a little like an ImageNet competition or something like that. My personal opinion is that someone could look at Brain-Score and say it's too early to start scoring these models and being so concrete about it, right? But I think it has to be done. That's my goal. And if you ask me where the understanding of vision stands, pointing to a platform like Brain-Score is a concrete answer I can give. That's my way of quantifying it.
Paul    00:19:47    Yeah, so it's a benchmark. But on the other hand, benchmarks have gotten some flak because, as you were saying, we don't know whether it's the right benchmark, whether it's the right question. So it is concrete, but I guess we're progressing and asking better questions. Would you agree with that?

Ko    00:20:08    Yeah, absolutely. And there are no three or four benchmarks that will define our understanding. The goal is to have more and more benchmarks, and hopefully we will see, because it's the same brain that is giving rise to all that data, that if you are actually modeling that particular brain, we should eventually converge to a very small space of models. At least that's the dream. Of course, there can be multiple benchmarks and different ways people probe the system. But the value-add of Brain-Score is that if we can get all those experimentalists and modelers on board, they can provide those data as targets for current systems. Instead of saying, oh, your network is never going to predict that... okay, that's fine; the networks are falsified under all possible benchmarks, so that's not a big sentence to say. But how is the network failing? How do we know it is failing? And what additions could you make to the models that improve them? To have a good quantitative, tangible grasp on those questions, I think you need a platform like Brain-Score to be there.

Paul    00:21:21    Let me ask you about falsification, because you've said that one of the useful parts of the modeling push is that the models are falsifiable. But then you have models like the feedforward convolutional neural network that predict, what is it, around 50% of the neural variance, right? How does one falsify a model like that?

Ko    00:21:42    Yes. In the sense of falsification, those models are all falsified anyway. But then the question is: how do you build the next best model? Given numbers like that, if I build a better model, it should at least beat the current numbers coming out of the feedforward neural networks. So the question is whether you dismiss the entire family of models as completely useless, or you say, that's a good start, let's build upon it and start adding elements to build the next best model. I'm mostly motivated by the idea that we now have a good grasp, thanks to machine learning and AI, for building these real models rather than toy models.

Ko    00:22:32    And now that we have these models, let's capitalize on this momentum and build the next best models. Although I talk about models this way, my personal life is mostly spent doing experiments, trying to poke holes in those modeling frameworks. So I'm actually very happy that those models are all falsified, because I feel like that's my job, to falsify them.
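A minimal sketch of the kind of neural-predictivity scoring Ko is describing, in the spirit of Brain-Score: fit a linear map from a model layer's features to recorded responses, and report the held-out correlation per neuron. The function name, array shapes, and the ridge-regression choice here are illustrative assumptions, not the platform's actual API.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def neural_predictivity(model_features, neural_responses, seed=0):
    """model_features: (n_images, n_units) activations from one model layer.
    neural_responses: (n_images, n_neurons) trial-averaged firing rates.
    Returns the median held-out correlation across neurons."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        model_features, neural_responses, test_size=0.25, random_state=seed)
    mapping = Ridge(alpha=1.0).fit(X_tr, y_tr)   # linear "alignment" map
    preds = mapping.predict(X_te)
    r_per_neuron = [pearsonr(preds[:, i], y_te[:, i])[0]
                    for i in range(y_te.shape[1])]
    return float(np.median(r_per_neuron))
```

A score like this is what makes "how is the network failing?" tangible: every candidate model gets the same images and the same neurons, so a drop in the number is a concrete falsification target.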
But the other part of my job that I feel is important is not only to falsify them, but also to produce data at the same scale and in the same spirit that would help build the next best model. It's not good to just shit on them; you also have to provide material for them to work on and become a little bit better.

Paul    00:23:19    So you're doing a lot of experimentation. What's faster, modeling or experiments?

Ko    00:23:26    Experiments. I think building a better model is much more difficult than doing an experiment, and I'll debate anybody about that. It depends on what you're building the model for, of course: for engineering purposes, modeling is way faster than any behavioral or neural experiment. But if we are trying to build a model of the brain, then, as we were discussing about engineering, I have a problem, how is the brain working, but the solutions cannot be just anything; they're constrained by this biological system. There's a specific solution we're looking for, and aligning the models with it is very hard. I can build a model that solves action perception or action prediction better than the current systems, but it might not align with the brain.

Ko    00:24:21    When I said modeling is slower, I meant that bit: getting models that are more aligned with the brain. You know, AlexNet came out in 2012, and now we don't even talk about AlexNet in computer vision; no serious computer vision scientist would say AlexNet is the model they start with. But it came to neuroscience, and it's still here; we are still using AlexNet. Things come to neuroscience and stay for a longer time, because it's very difficult to falsify, or even discriminate among, these models. And there are some deeper questions here as well. When we say we have a model of primate vision, what do we actually mean?

Ko    00:25:05    Do we have a model of a specific human, or a specific monkey? Are we modeling the shared variance across humans or monkeys? Or are we developing a model of all the possibilities, a superset of vision? So how well should a model of object recognition even predict the behavior of one subject, or some neurons that I'm recording from in one monkey's brain? We need to think carefully about those questions, because, sure, the model might predict one neuron in a monkey's brain at 50% explained variance, but then how well does one human's brain predict another human's IT neurons? Quantifying and setting up the ceilings based on what we are actually modeling, whether individual humans, individual monkeys, or the shared variance across monkeys, matters.

Ko    00:25:58    I think those questions are important.
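One way to make the "ceiling" question concrete: before asking a model to predict the data, estimate how well the data predict themselves. A minimal split-half sketch, with invented shapes, assuming repeated presentations of a fixed image set; the Spearman-Brown step extrapolates the half-data correlation to the full data.

```python
import numpy as np
from scipy.stats import pearsonr

def split_half_ceiling(trials, n_splits=100, seed=0):
    """trials: (n_repeats, n_images) responses of one neuron to repeated images.
    Returns an internal-consistency ceiling on image-level tuning."""
    rng = np.random.default_rng(seed)
    n_rep = trials.shape[0]
    estimates = []
    for _ in range(n_splits):
        order = rng.permutation(n_rep)
        half_a = trials[order[: n_rep // 2]].mean(axis=0)
        half_b = trials[order[n_rep // 2:]].mean(axis=0)
        r, _ = pearsonr(half_a, half_b)
        estimates.append(2 * r / (1 + r))   # Spearman-Brown correction
    return float(np.mean(estimates))
```

The same logic applies one level up, which is Ko's point: correlate one monkey's (or one human's) image-by-image pattern with another's, and that inter-subject consistency, not 1.0, is the ceiling against which a model of the "shared" system should be judged.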
And then maybe we are already done with predicting core object recognition feedforward responses, because one monkey predicts another monkey at 50%, and there's no way you can improve beyond that, something like that. Of course, sitting here, I realize this is basically an empirical challenge: it's actually the experiments that have to provide these answers, and we're limited by technology and by how well we can probe the system. So that's why I think modeling is slower.

Paul    00:26:31    Yeah, okay. Well, you said ceilings; the way you've talked about it, it's a fuzzy ceiling, I suppose. They're fuzzy ceilings in that respect. All right, so I'm slowly coming around to the fact that modeling takes a long time. I didn't do any deep learning modeling; I did a kind of psychological model, very simple, and it took a long time. But with experiments, I had to go in every day, and it was years to publish a single paper.

Ko    00:27:02    I see where you're coming from, and that has been my experience as well. It takes a long time to train a monkey, implant the arrays, and get the data, and maybe the array doesn't get implanted well and you have to implant again; multiple problems can come up. But at the end of the day you have some data, and if you have designed your experiment properly, especially in neuroscience, which I think is still in the dark ages, it's novel data, a target for a model to predict. In that way experiments are faster, because I can build a model in one minute: just put two convolutional layers together and call it a model. But is that really useful? Is that taking the field forward? Maybe I answered too fast when I said experiments are faster.

Ko    00:27:59    I might have to think about it, but I'm trying to explain why I think modeling, especially modeling the brain, is actually going to be slower.

Paul    00:28:07    There's physical time, but there's also heartache time; maybe those are two orthogonal things, right? So the other question would be: where do you experience more heartache and obstacles, and do you think modeling would be the answer to that?

Ko    00:28:23    Again, it depends on your experience. If I'm running a monkey, after I've brought the monkey to the lab and done an experiment, I have zero energy to do anything else that day; I'm done. In that way it's a lot more draining, at least in my experience. I can't tell how bad it is for a modeling person trying to come up with giant models. I mean, I...

Paul    00:28:53    I feel like...

Ko    00:28:54    For me it's mostly that the libraries are not loading, or the version is not correct. Those are the problems I usually face.
But at the end of the day, once the model is training... I feel a modeler is going to be more disappointed, because the models don't really predict much more than the previous model. A neuroscience experiment, if it's designed properly to begin with, is always going to give more insight. A biased opinion, I mean.

Paul    00:29:24    Yeah, we're all biased, as we know. All right, Ko. So again, correct me if I'm wrong, but the way I see it, there's this core object recognition story, and at the core of it is a feedforward convolutional neural network, and you guys in Jim's lab have done a lot to explain neural data. That's the basis, the way I see it. And from there you've done a lot of other work: you've started adding bells and whistles like recurrence, and you've synthesized images to predict which neuron is going to be driven by a particular image. So you're making the models more complicated. And I've heard you argue that what we need is more complicated models. Whereas from a classic philosophy of science perspective, what we like are simple models, right? Part of the problem with these deep learning models is that we don't exactly know how they're doing what they're doing, and there's pushback on how much using a complicated model to explain a complicated organ like the brain actually buys us in terms of understanding. But you argue that, no, we actually need them more complicated. Why is that?

Ko    00:30:49    Yeah, I think it depends on how you define complication. The reason I might say we need more complicated models is that the models are not really predicting what we set out to predict. I don't think making them simpler is going to be the answer, because the brain is complicated, so anything that is a simulation of the brain will look complicated in some sense. In another sense it will not look complicated, because if you have correspondences and alignments with the brain, you can point to a part of the model and say, oh, that's V4; that's V4 in the brain. In that way it might become less complicated over time. It comes down to the definitions of complication, interpretability, and understanding.

Ko    00:31:32    And because there are no objective definitions of those things, these conversations usually lead nowhere. Here's an example I keep thinking of. During my PhD, we had models of the motion aftereffect. If I spoke to anyone at VSS or SfN or Cosyne about these models, everybody would say these are completely understandable, interpretable, simple models that we have intuitions about. The story goes: you show a random motion pattern, and all these motion detectors are firing and fighting equally; there is no winner. Then, if you show a stimulus that is moving upward, the upward-preferring neurons do something.
Ko    00:32:21    Their response is going to be higher compared to the rest of the group. If you keep showing upward motion for a long time, those are the neurons that fire and get fatigued. Then, when you show a random pattern again, everything else fires higher and the upward motion detectors fire slightly lower, so overall you're biased toward saying the motion is going downward, something like that. This can be modeled, and people have modeled it. Compared to artificial neural networks, those models might be considered simpler, more intuitive, more understandable, less complicated. But now imagine going to 5,000 BC, where people are speaking Tamil or Sanskrit or Greek or some other language.

Ko    00:33:07    Imagine time-traveling back and trying to explain the motion adaptation model to them. They'd say: go away, what are you talking about, I don't understand anything, so these are not real models of the brain. I feel like the same thing is happening now with artificial neural networks. But remember, the motion model I just mentioned predicted this adaptation phenomenon, this behavior; that was the goal of the modeling effort, and it had some relevance to how people have looked at the brain and neurons. Yet if I told it to people in 5,000 BC, they'd say: I don't know, this does not map onto our worldview. The same thing might be happening right now between convolutional neural networks and some neurophysiologists.

Ko    00:33:50    It's: okay, this is too complicated, I cannot fit how these high-dimensional areas are responding into my low-dimensional behavioral space. I don't take that complaint seriously, because with more familiarity with these terms and models, it's going to go away as the models become more and more powerful at predicting different behaviors, and as we see these models used in real-world applications. That fear of "this is too complicated a system" will fade. And those for whom it won't go away will probably just have to live with it.

Paul    00:34:32    Okay.

Ko    00:34:34    But I think one of the reasons people like simpler models is that when the model gets stuck, a simple model lets them think through what to do to improve it. That, to me, is the real value of a simpler, more interpretable model. And there is a question of efficiency: if you could have a complicated model self-correct and self-improve, which is kind of a future goal, that might just be a more efficient way of dealing with this problem than humans coming up with their own intuitions about what a better model would be.
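The motion-aftereffect model Ko describes above is easy to make concrete, which is part of why it feels interpretable. A toy sketch, with invented tuning and fatigue parameters: direction-tuned units lose gain after prolonged upward stimulation, and a population-vector readout of an ambiguous test pattern then tilts downward.

```python
import numpy as np

n_units = 36
preferred = np.linspace(0, 2 * np.pi, n_units, endpoint=False)

def tuning(direction):
    # von Mises-like tuning: how strongly each unit is driven by this direction
    return np.exp(2.0 * np.cos(direction - preferred))

def decode(population_response):
    # population-vector readout of the represented direction, in degrees
    x = (population_response * np.cos(preferred)).sum()
    y = (population_response * np.sin(preferred)).sum()
    return np.degrees(np.arctan2(y, x))

gain = np.ones(n_units)
# prolonged upward motion (90 degrees) fatigues the units it drives most
adapted_gain = gain * (1 - 0.5 * tuning(np.pi / 2) / tuning(np.pi / 2).max())

# an incoherent test pattern drives every unit equally, scaled by its gain
print(decode(adapted_gain))   # about -90: illusory downward motion
```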
And as we discussed, with my engineering background, that might be something I'm more prone to accepting: there's a question, there's a solution, and these are just tools to get to the solution. It doesn't matter whether I intuitively understand it, as long as it's aligned with the brain data.

Paul    00:35:35    So, I actually got even more excited to talk to you because, after we had set up this episode, someone in my course asked a question. In my course I talk a lot about convolutional neural networks and how they relate to the ventral visual stream, using Jim's work and your work, and someone asked: what about the dorsal stream? Because I talk about the two visual streams. This goes back to the question of what it means to understand vision. The question was, why aren't there models for the dorsal stream as well? Why is it all ventral stream? And I know you're starting to incorporate it, and you have some background with the dorsal stream too. Maybe we should say what the dorsal stream is, to bring everyone up to speed, but are you just starting to incorporate other brain areas now?

Ko    00:36:29    Well, the first thing is that if that student is interested in doing a PhD or a postdoc, send them my way, because that's the kind of question I was also asking: what is the dorsal stream doing? I had spent five or six years studying the dorsal stream, which sits just above the ventral stream in its anatomical location in the brain, plus...

Paul    00:36:51    Shall I say what the dorsal stream classically is? Or do you want to? I'm happy to. So classically, there are two visual streams. Input hits V1 and then branches off into a ventral stream, which is what the massive amount of neuro-AI and core object recognition work is about, where it gets processed over hierarchical areas, through V2 and onward, until we suddenly have neurons that respond to whole objects. The dorsal stream is classically the "where," or "how," stream, which is much more related to motion, to spatial aspects, and to our actions; its activity relates to those. And that's where I spent my career, more or less, in the dorsal stream. Did I explain that okay?

Ko    00:37:49    Absolutely. I'm usually very careful now about assigning a behavioral function to areas; I mostly talk about anatomical locations. Who knows, you might find that the dorsal stream is just a big part of core object recognition, right?

Paul    00:38:06    Well, yeah. The thing that has always been known, but not paid much attention to, is that there's a lot of crosstalk between the dorsal and the ventral stream, yet we've studied them in isolation, as individual, separate things.

Ko    00:38:24    Yeah.
I see that as an opportunity to really take these studies forward and incorporate the dorsal stream as well. One point I want to make is that there are folks beginning to build models of the dorsal stream the same way the ventral stream modeling has gone. I recently saw a paper from Chris Pack's group; sorry if I'm forgetting other authors, but I think Blake Richards was part of it, and Patrick Mineault. I think it's at least on bioRxiv. And there's work from Bryan Tripp's group trying to model the system. Of course, the dorsal stream has a lot of prior modeling work that doesn't resemble the convolutional neural network approach, but people are beginning to build these models, and they're proposing different objectives as a sort of normative framework for how the dorsal stream gets trained up. Those are nice hypotheses, and we'll see whether the data actually support those models. For me, trying to get into this area, those are really nice pieces of work, because they give me baseline models to start testing, and when I design my experiments, those models will really help me make a good experimental design.

Paul    00:39:48    But are you building... I actually don't know what kind of model, because you wouldn't just use a convolutional neural network to model the dorsal stream, right? Are you building models yourself, or are you going to incorporate others'?

Ko    00:40:04    I have not personally built any models right now; I've just been testing some of them. I started testing some of the models mostly used for action perception or action recognition. They have temporal filters; they're still convolutional, just with one more dimension to the convolution, a time dimension. Those are good starting points, because they're easy to build and can use the same kind of training procedure. But at some point we have to be okay with going a bit lower in terms of prediction, because we need to move from the static domain to a dynamic domain, and my usual experience has been that whenever you make that jump, all these models start to not perform as well.

Ko    00:40:51    Not predict the neural responses as well, that is. To me, that might be one reason some people are building these models and they're not really coming out: they don't predict yet. Backing up a little: my main interest in this dorsal-ventral interaction question started when I was mostly showing static images to the monkeys and recording their responses in IT. These are objects that are either natural photographs or some kind of synthesized images.
And I started thinking about my previous work in the dorsal stream, which was about motion: dots moving, gratings moving. But in the real world, I never see dots moving or a grating drifting; there are objects, and they move. If my current research is to have any real-world relevance, well, it's a dynamic world: I'm moving my eyes, I'm moving myself, and the objects are moving. And when I think about these kinds of behaviors, the dorsal stream pops up in any literature search I do: self-motion, motion of objects, motion of something in my visual field. But then I wondered: IT has this nice representation of what the object is, and if the object starts to move, does it all fall apart? What happens?

Ko    00:41:39    So out of curiosity, I just started recording from these neurons while the objects were actually moving. This work has not been published, but the preliminary result is that IT can predict where the object is headed, where it is moving. We know from previous studies from Jim's lab (Hong et al., 2016) that from IT representations you can tell where an object is located in a static image. So there's one trivial solution: if you can tell where the object is located at different time bins, you can combine that information to tell where the object is heading.

Ko    00:42:38    What I started finding is that it's not only that. Maybe 200 milliseconds after the movie starts, you can look at a small, roughly 10-millisecond time bin and tell where the object is going. So there's a predictive signal of where the objects are headed. Then I started thinking, maybe this is coming from the dorsal stream. But again, that's a way of thinking I've discarded in the last few years.

Ko    00:43:30    The way I now think about it is: can a vanilla ventral stream model already explain these neural responses, with the dorsal stream not involved at all? Maybe those models fail, and dorsal stream models are actually necessary to account for these neural responses, and for the behaviors I can test with these kinds of stimuli. That has been my approach. And the quick update on the ventral stream results is that these models are not really predictive of these kinds of responses at all, which to some degree gives me hope.

Paul    00:44:03    It's always good to have hope when you're starting your career. Not that you're starting your career, but it's your new start. But you said you've discarded thinking about it in that way, in terms of different brain areas. Is that because you've discarded assigning roles to individual brain areas?

Ko    00:44:23    Yeah, sure, absolutely. That whole way of thinking, I think, is primitive.
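A sketch of the preliminary analysis Ko described above: take a brief snapshot of IT-like population activity and ask whether a simple decoder can already read out where the object is heading. The data here are synthetic stand-ins for real recordings; only the shape of the analysis is the point.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 100
heading = rng.integers(0, 4, n_trials)        # one of four motion directions

# pretend each neuron carries a weak heading signal within one short time bin
direction_signal = rng.normal(size=(4, n_neurons))
rates = direction_signal[heading] + rng.normal(scale=3.0,
                                               size=(n_trials, n_neurons))

decoder = RidgeClassifier()
accuracy = cross_val_score(decoder, rates, heading, cv=5).mean()
print(f"decoded heading accuracy: {accuracy:.2f} (chance = 0.25)")
```

Running the same decoder on a vanilla ventral-stream model's responses to the same movies is then the comparison Ko cares about: if the neurons' snapshot carries a heading signal the model's does not, the feedforward account is falsified for this data.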
It's not going to lead anywhere. The brain is doing what it does to get us through the day, and all areas are coming together in some form or another. I don't want to end up with an answer like "the dorsal stream does blah"; it's just part of a system that is trying to solve a behavior. The answer is going to be: here is a model that has elements in it corresponding to neurons in the dorsal stream, and together they solve a behavior. Now, if you really want to satisfy someone who asks what the dorsal stream means, you can start doing perturbation experiments, in the model or in the brain, and see what happens to the behavior.

Ko    00:45:11    If I take out part of the dorsal stream, or part of this and that... but then I worry about what my answer is going to be: oh, it takes a 10% hit for video A versus video C. Those are the kinds of answers that are really going to come out. I might spin it as, oh, this is about function X, or it's something about predictive coding; I could give the answer in that form. But at the end of the day it's going to be a big lookup table: you perturb this part of the dorsal stream, you get an X-size hit on this particular behavior for this particular video. That's why I feel my answers need to be given within the modeling framework.

Paul    00:45:58    We're limited by our language. It turns out this very special thing we have, language, is also very limiting in some respects, I suppose.

Ko    00:46:07    But if the models can relate back to the language, some of that tension might be relieved. This is maybe slightly off-topic from the dorsal-ventral discussion, but if you look at a model of the ventral stream on Brain-Score, say ResNet-101 with some scores attached, I can see why people have a problem with that model, and why they say it's not interpretable: there are parts of the model where you just don't know how they map to the brain. I can name some parts, but there are a thousand different things in between that I have no clue about.

Ko    00:46:48    And maybe the model isn't performing well because of the computations happening in those layers; how do I relate that back to the brain? That is a real problem. I think it's in our interest to start making commitments about what different parts of the model correspond to, and then falsifying the models based on those commitments. To me, interpretability should work like this: the abstract of a paper should be understandable to anybody. So if a model has components that can talk to each part of the abstract (you have a task, you have a neuron, you make a claim), and you can map your abstract onto parts of the model and the model onto parts of the abstract, that, I think, gives the model interpretability.
That level of crosstalk should exist, and that's a language I'm trying to develop in myself, even when I'm thinking about modeling and experiments.

Paul    00:47:52    Well, after all that about how we shouldn't assign roles to individual brain areas, you are doing some inactivation experiments, right? So what's going on there? Why are you inactivating individual brain areas?

Ko    00:48:07    Yeah. There are a couple of studies I've done recently. One has already been published: inactivating ventrolateral PFC and looking at core object recognition behavior, and also at representations in IT while the monkey was doing the task. The goal was to test whether the feedback loops that exist between these areas actually play a role in the specific behavior we're studying, because the current models are incomplete and not predicting enough, so it makes sense that other areas and other connections may be important. That is not to say "PFC does X."

Paul    00:48:54    It does everything, apparently.

Ko    00:48:57    I'm sure it does a lot of things. For me, to ground the problem, it was more: what kind of signals go missing in IT when I inactivate PFC, and what kind of deficits do I see in behavior? And the answer, again, is not something like "you can no longer identify objects in an occluded scene." It's mostly: here's a big data set, which is not satisfactory to many people. You clearly see there's an average effect, PFC versus no PFC, okay. And there is a prediction coming out of a model: this model is feedforward, it might not be doing XYZ, and indeed those are the images where the effects are most concentrated.

Ko    00:49:48    So there is a story: this is clearly part of a system that goes beyond the current feedforward system. But at the end of the day, the next step is to build a model that has a unit or module called vlPFC, such that perturbing it produces the same kinds of deficits. And this is the very hard part. It's actually easier for me to perturb PFC, get this data, and say this area is involved; building that model, I think, is going to be really difficult.

Ko    00:50:31    And there are limitations to perturbation data. This might be relevant to the conversation about perturbation experiments: even after this perturbation experiment, actually recording in that specific area, with the same kind of task and the same kind of stimuli, might be more constraining for the next generation of models. And that is exactly what I'm doing currently.
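For concreteness, here is what "build a model with a module committed to a brain area, then perturb it" could look like, as a hedged sketch: scale down one stage of a generic CNN with a forward hook and compare outputs with and without the perturbation. The network, the layer chosen to stand in for the brain area, and the suppression factor are all assumptions, not the lab's published recipe.

```python
import torch
import torchvision.models as models

net = models.resnet18(weights=None).eval()   # generic stand-in model

def suppress(scale):
    # multiplicatively down-regulate a stage's activations ("inactivation")
    def hook(module, inputs, output):
        return output * scale
    return hook

# pretend layer2 is the module committed to the perturbed brain area
handle = net.layer2.register_forward_hook(suppress(0.5))

images = torch.randn(8, 3, 224, 224)         # stand-in image batch
with torch.no_grad():
    out_perturbed = net(images).softmax(-1)
    handle.remove()
    out_intact = net(images).softmax(-1)

# the image-by-image drop is what gets compared to the measured deficits
deficit_pattern = (out_intact - out_perturbed).abs().mean(dim=1)
print(deficit_pattern)
```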
But at the same time, I was thinking about what kind of perturbation experiments might have more benefit for the kind of models we have right now. That led me to, well, we say developing, but basically testing, these chemogenetic strategies, where you inject a virus into a brain area. In my case I also implanted a Utah array on top of it: we injected DREADDs in V4, which are supposed to silence, or down-regulate, the activity in V4, and then we implanted a Utah array.

Paul    00:51:29    Sorry, can you say what DREADDs are? I don't think we've even mentioned them on the podcast before. And I also want to ask: you injected, and then in a separate surgery you implanted?

Ko    00:51:43    No, it was done in the same exact surgery.

Paul    00:51:48    Okay, sorry to interrupt you.

Ko    00:51:51    No problem. So DREADDs are designer receptors exclusively activated by designer drugs. The basic idea is that you inject a virus that ends up manifesting as a receptor in a neuron, which you can then activate or deactivate by various means. It's the same idea as optogenetics: in optogenetics, to activate that particular receptor you need to shine light on that neuron, on that area; with chemogenetics, you inject a drug into the system. There are pros and cons to the different approaches. For example, with optogenetics you're limited in where you might want to inject, because light delivery is tricky; you're mainly restricted to the surface of the brain.

Ko    00:52:49    Deeper structures might be very difficult to target at scale; maybe you can target one or two neurons. With chemogenetics, you can inject the virus anywhere you want in the brain, and it gets activated through an injection into the bloodstream, which activates all the receptors that have been produced. But there are temporal limitations. Opto can go very fast, quick on and off, but the DREADDs are more like muscimol: the effect stays on for some time. How long? From my estimates, it's mostly on for maybe a couple of hours, very similar, at least in its main time course, to muscimol.

Ko    00:53:44    And what I have been doing is this: with these arrays you can show the same images over and over again after you've injected the activator drug, and see the time course of neurons responding lower or higher. Then you can put behavior on top of it: the monkey is also behaving in different blocks, so you can see deficits come up and then go away by the end of the day. So I'm at least thinking about how to take this and make it useful for models.
I can say, okay, V4 is involved in object recognition.

Ko    00:54:28    That alone, not too many people will be interested to hear. But if Brain-Score has a thousand models that all have a 0.5 correlation with V4 activity, and now I give you some V4 inactivation data and 900 of them fall off because they cannot predict the pattern of deficits, that might be just as important to learn; maybe the important problem. But as you see, here you need a model that has a brain-tissue mapping of V4: where are you injecting the virus in the model versus in the actual brain? There are parts of this problem that are still more complicated, but with the chemogenetic strategy, at least for areas like V4, where you're injecting into mostly retinotopic areas, there's some level of correspondence in the models.

Ko    00:55:22    And then you have neural data on top of that. So you can just say: I don't care about your assumptions, fit to the neural data. You have the V4 neural data with and without inactivation; you have your model with and without inactivation. Fit to all the data you've got, and then predict what happens to IT, or what happens to behavior, in the model. That's how you validate the model. I think that's a stronger way of using these perturbation experiments, because it's not uncommon to see experiments where someone says, I perturbed this area and nothing happened, and someone else says, no, no, you didn't do it right, blah blah blah.

Ko    00:56:05    If the answer is always just yes or no, it will stay there. It has to be a falsification of competing models, and then maybe some data will be more useful than others. The other upshot of having something like this: imagine monkeys doing these tasks in their home cages. We have a lot of monkeys that are trained up and do these tasks all day in their home cages whenever they want, because they have a tablet. You can pair that system with this approach: you just need one person to go and inject something in a monkey, and then you have days where you can run the tasks with some part of the brain chemogenetically suppressed. And you can multiplex with the viruses: you can target inhibitory neurons or excitatory neurons, and you can inject different viruses into different parts of the brain, each with its own corresponding activator drug. So there are a lot of interesting data sets that can come out of this approach, which should bear on the modeling questions.

Paul    00:57:10    What I want to know is the vision you have for your own lab: how much of it is going to be this kind of work, and how much modeling, and so on?

Ko    00:57:24    A lot of it is going to be this kind of work, pushing the boundaries of experimental neuroscience. And the modeling is going to be the backbone of the lab.
The computational part means no answers can come out of the lab without a model attached to them. I will be collaborating with others, and I'll have people in the lab building these models as well and testing them. But I don't think I'll be happy at the end of my career if I haven't improved a model of the system, even after doing all these different experiments. So it's going to be a mix. Maybe I should mention this: honestly, I'm not really interested in building the best model of core object recognition, or dynamic visual perception, or visual cognition, just for the sake of building that model and understanding how the brain works.

Ko    00:58:21    I don't quite motivate myself that way. It's kind of interesting, because for training purposes these were the most concrete fields and the most concrete labs where I thought I should get trained. But I wake up every day thinking that maybe my research is going to help someone's life. And that sounds like, oh wow, what a great person you are. But really... let me tell a small story; please cut it out if it's not relevant. People back home in India know that I work in visual neuroscience.

Paul    00:59:00    What do you mean, people know? Like, India knows?

Ko    00:59:03    My family, my family. Sorry, not a billion people; like, five people.

Paul    00:59:11    Oh, that's more than know what I do. So there you go.

Ko    00:59:17    So among those people, maybe ten Indian families or so, some have some idea of what I might be doing, and it's completely wrong. And I had an encounter with one of them. Unfortunately, their kid had been diagnosed on the autism spectrum. I was meeting them, and they asked me, so what are you working on these days? And I said, I'm working on visual cognition, things like how we reason. And this person turns to the kid and tells them, you know, your elder brother is working towards the solution. This kid is very young and can't understand anything of what they're saying, but they're basically telling them that I'm going to come up with a solution that will cure them.

Ko    01:00:16    And it just felt like I was failing. I couldn't find any connection to what this translates to. That was a pivotal point, where I started thinking: I need to find real connections between what I'm doing and how it really impacts or translates. Not just the first paragraph of a grant saying this is relevant to dyslexia, schizophrenia, et cetera, but really trying to find the connections. And I've actually started.
At least some part of my future research is going to be trying to find out how these concrete models with brain maps can be beneficial for diagnosis, and potentially for treatment strategies, in some of these neurological disorders. I've started working a little toward these goals, and I'm very excited about it, because I think there are real benefits. And you mentioned the neural control studies; those are the kinds of studies that give me hope that there is a way to contribute to this.

Paul    01:01:30    That's kind of a magical thing. So that wasn't your motivator for a long part of your career, but, from a place of guilt... and guilt is a great motivator... it's developed into a real motivation for you. I never had that. I don't care about helping people, and so I always felt bad writing schizophrenia into a grant, for example.

Ko    01:01:59    Yeah. It's a little bit philosophical. I don't even know that I care about helping people in some deep sense; maybe in thinking I'm trying to help people, I'm just trying to help myself, thinking, what if I have Alzheimer's in my old age? But currently, at least, it gives me some level of satisfaction to think there is potentially some link from my research that might end up helping someone.

Paul    01:02:29    I mean, it is interesting to think how, through our work, through your work, through people's work, your interests change as you develop, as you ask different questions and answer different questions. It's kind of a magical thing. Thanks for telling that story.

Ko    01:02:48    Yeah, that definitely impacted me a lot. And these are related issues. You were asking about understanding and progress, about understanding vision and visual cognition. The moment we start to measure our understanding, as in Brain-Score or something like it, the answers about clinical translation become more concrete too. They're very related; it just took me a while to figure out, and I'm still working on it, where exactly the most relevant parts are. My interactions with folks doing autism research have really helped. For example, I've been in touch with Ralph Adolphs at Caltech, and we're collaborating on a project. Those discussions, and reading the papers... they have a lot to contribute to what I do, and our way of thinking about the system has a lot to contribute to that research.

Paul    01:03:55    Interesting. So, you mentioned the image synthesis work a little bit. Can we talk briefly about that? Maybe you can describe what the work is. I talked with Jim about this when he was on the podcast, but we can recap, because it was kind of splashy, right? And I want to hear how you currently think about that work as well.

Ko    01:04:18    Yeah.
This work was done in collaboration with Pouya Bashivan, who's at McGill now; Pouya, me, and Jim did the study together. The basic idea was that we were recording in V4, and we have models of V4 neurons. The question was: using the model, can you come up with stimuli that put the neurons into specific desired states? One of the states we considered was, let's make the neuron fire as much as we can. So the model will tell me...

Paul    01:04:53    So this is the control aspect of understanding.

Ko    01:04:57    That's prediction and control, and this is the control part. The models could predict, but maybe they couldn't control, because of how the images were synthesized. There's a separate technique there, how you synthesize the images, and in principle it doesn't have to be attached to the specific model you use to predict; they can be two separate things. But for us, we were using the same model to come up with the images as well. So we synthesized the images and tried to control the neurons. One goal was: okay, V4, let's make this neuron fire as high as possible. The other goal was: take a bunch of V4 neurons that share the same receptive field properties, and try to set one of them very high and the others very low.

Ko    01:05:47    That's population-level control. Those were the two goals we started with. And then there's a question you've heard before: what do V4 neurons do? They respond to curvature. What do V1 neurons do? Gabors, orientation. V2 is texture, IT is faces. Now you come up with these stimuli and look at them, and I don't know what to call them. Maybe they're something. But we ignored that problem. We said: let's just take these images and see whether the model's prediction is right, because that shows that using these models you can control the neurons to some degree. That was basically the study. We had some success, and we compared our success rates against taking a random sample of natural images, and against the previous thinking on the stimulus space that excites these neurons, like curvature stimuli.

Paul    01:06:46    I want to hammer this home, because the images that drove the neurons, and you mentioned this, but I want to reiterate it, were terribly unnatural, right? There are elements you would see in nature, but the majority of them weren't; they were just something...

Ko    01:07:06    I don't know what to even call them; some pixel conglomerations.
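A minimal sketch of the synthesis idea: gradient-ascend on pixels so that one model unit, standing in for a recorded V4 site, fires as high as possible. The untrained network, the layer, the channel index, and the optimizer settings here are illustrative; the actual study used the lab's own image-computable V4 models.

```python
import torch
import torchvision.models as models

net = models.resnet18(weights=None).eval()
captured = {}
net.layer2.register_forward_hook(lambda m, i, o: captured.update(out=o))

image = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    net(image)                                # forward pass fills `captured`
    # "stretch" one channel: maximize its mean activation over space
    loss = -captured["out"][0, 7].mean()
    loss.backward()
    optimizer.step()
    image.data.clamp_(-1.0, 1.0)              # keep pixels in a bounded range
```

Nothing in the loop asks the image to look natural, which is part of why results of this kind come out as the texture-like "pixel conglomerations" Ko describes.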
There were actually two studies that came out on the same day, and I think the other set of images is even scarier. That one, from Carlos Ponce, Margaret Livingstone, Gabriel Kreiman, and colleagues, was trying to come up with images to control IT. Those images look even scarier, maybe because they have some kind of natural relevance; they look like something out of a horror movie. The V4 images were more texture-like, and we were also restricting ourselves to black-and-white images and so on, so the synthesis was constrained in ways that led to those images.

Ko    01:07:49    But as you were saying, it did get a lot of attention. I think some folks got excited about the wrong thing in the paper, though. The resulting images that drove V4 cannot be the protagonist of the story. That kind of became the story, because we like to say "faces excite IT neurons," "X excites area Y," and in that formulation the images become our new understanding of the system. But this was not about the images. It was about what you can do with the model, because this is the model that tells you the predicted neural response for any given image. We think of this as a stronger test of the model. And there are many models; they can come up with different images, and then you can test those as well.

Ko    01:08:43    There's very interesting work from Niko Kriegeskorte's lab on controversial stimuli. Those are the right kinds of approaches, at least to me: you pit these neural networks against each other, synthesize stimuli, and test them. It's a different kind of control experiment, but in the end it's about model separation, finding the best model; it's not about looking at those images and building stories around them. The other side, though, is that this should not make someone feel like, oh, this solves core object recognition, this is the model. There are ways of presenting data that can oversell a point. To me it's still a proof-of-concept study: look, if you take this approach versus that approach, ours does better, something like that. That's the way to present the study. It doesn't mean our approach is the best approach, or that we're done.

Paul    01:09:46    Do you have people suggesting that we're done?

Ko    01:09:50    I don't think people explicitly suggest we're done, but they might use this as an example of how great the CNNs are. And it depends on whom you're talking to. I can use the same example with somebody who says, oh, CNNs have adversarial images, this is a completely wrong family of models, to show that you can do some useful stuff with them. But if I'm arguing that you need recurrence, and you need to incorporate other areas, someone might say: but you can control the neurons reasonably well already, why do you need to incorporate all of that?
So if you really look into the models, look at the generalization of the models, it's not that good. And again, "not that good" is a very arbitrary word usage. Yeah.

Paul    01:10:40    Yeah. But you feel like, in some sense, you're your own worst critic, right? Because you see all of the nuts and bolts, and you see what's missing and what needs to happen. So do you feel like people are too complimentary, too impressed with the current work?

Ko    01:11:03    I think they shouldn't be, but, like everything else... I actually think this is our responsibility, to also expose where the models fall short. If you read the two papers together, the neural control paper and the recurrence paper, one paper is highlighting how you can use these models, and the other paper is highlighting, here are the images that humans and monkeys are good at and the models fail at, so these are the ways to improve them. If you take all of these studies together, then you get a more balanced perspective. And I think my goal... I mean, sometimes, for a lot of reasons, you know better that you need to sell the studies in a certain way, but in these kinds of discussions, or in the discussion sections of papers, we should always be highlighting the confounds, or the potential places to improve these models. Even for core object recognition, these models fail in very trivial ways, and maybe some people just reading the paper might think, oh, this is probably already solved.

Ko    01:12:06    Maybe those people don't exist. Maybe this is a thing that I've created in my head.

Paul    01:12:09    More guilt, more... uh, yeah.

Ko    01:12:12    Absolutely.

Paul    01:12:14    I know that one of the things that you're interested in is visual reasoning, right? I don't know if you want to explain why you're interested in it and what it is, but one of the ongoing criticisms... so, non-human primates are kind of the gold standard in neurophysiology, right, and classically you need an N of two, you need two monkeys to publish. But recently there have been a lot of people working more and more in rodents and mice, and of course there's always been the disconnect between mouse brain and human brain. One of the reasons why people like to study non-human primates is because they're the closest thing we can study that resembles human brains. Do you see limits to studying non-human primates to get at our intelligence? The reason I ask about visual reasoning is because you're starting to ask... so object recognition is a fairly simple thing, right? I know it's not simple, but, you know, we recognize objects. Now you're starting to ask higher-cognitive, quote-unquote, questions, and I'm wondering if you see limits to using non-human primates for that.

Ko    01:13:30    Yeah. My answer to that question would maybe be based on the kind of data that I will be collecting, in some sense. The way I see this problem is that ultimately, at least for myself, and I'm not suggesting that everybody has this approach, I'm pretty human-centric in my worldview.
And I think my goal is to find out how humans solve a particular problem. So they are basically the main model that I'm interested in. We start from human behavior on different tasks, and ideally we'll have a model, which currently is maybe some form of convolutional neural network, that has many areas beyond the ventral stream, like the dorsal stream and PFC, and that will be predicting parts of the behavior of the humans, maybe at full capacity or something.

Ko    01:14:19    And at least one angle of approaching the monkey research would be: can I get some neural data that might be constraining for those models, or might improve those models? Usually the way people go about it is that they collect some neural data, come up with an inference that can be summarized as a smaller kind of principle, like "have recurrence," or as a smaller model, and then they incorporate that idea into the bigger model and ask, does it improve my bigger model? I can do that, and I'm probably going to do a bit of that: basically saying, look, it looks like this other area in the monkey brain is associated with this particular behavior, and maybe that is going to inform my development of the models.

Ko    01:15:08    The other thing could be that you directly feed the data that you're collecting into the model building itself. You're getting a lot of monkey data, and then it's a matter of questions like, how much data is enough data? We are getting more and more data, so I think this is the right time to start putting it into the models. Right now I'm involved in a project where all the data that I've collected is getting filtered into the training part of the model; the models are being regularized with that data, essentially, and those models are becoming better predictors of core object recognition. So that is one way of bringing the monkey neural data, and maybe the monkey behavior, to this problem. The other way I think about this is that humans and monkeys share, and it's probably been proven in many ways that we share, a very similar visual system.

Ko    01:16:01    So even if I just get responses of the visual neurons in IT or other areas while showing some of the movies or videos that the task is based on, I can be providing constraining data for the model: you need to be in this representational space, and then solve the problem. It's a two-part kind of approach, where the neural data constrains the representational space of the model, and then on top of that you add a decoding layer that reads those representations. You can have multiple ways of decoding the task, and then you ask which one, or you can compare them to human behavior. This could sound novel or surprising, but it's exactly the thing that Jim's lab, our lab, has been doing for core object recognition for quite a while, where we were recording in the monkey brain and then comparing the decoding model's output to human behavior.

Ko    01:16:57    And I have now started working on this because I was also getting the behavioral data from the monkeys.
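As a sketch of the two-part recipe Ko describes, a frozen representation plus a trained readout, the decoding step might look like the following. The data here are simulated placeholders; in the real work, the features would be recorded neural responses and the labels would come from the images shown.

```python
# Sketch of the "decoding layer" step: freeze a representation (recorded
# neural responses, or model features standing in for them) and train only
# a readout on top. All names, shapes, and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images, n_sites = 640, 168
X = rng.normal(size=(n_images, n_sites))  # trial-averaged response per image/site
y = rng.integers(0, 8, size=n_images)     # 8-way object labels for those images

decoder = LogisticRegression(max_iter=2000)        # one candidate readout
acc = cross_val_score(decoder, X, y, cv=5).mean()  # held-out decoding accuracy
print(f"cross-validated decoding accuracy: {acc:.2f}")
```

Different readouts (linear, nearest-centroid, a small MLP) can be swapped in, and the question becomes which readout's image-by-image choices best match human behavior, not just which one scores highest.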
I have started looking at trial-by-trial, image-by-image behavioral correspondences between monkey neurons and human behavior; monkey behavior too, but it was basically monkey neurons against human behavior. We had a paper with Rishi Rajalingham looking at monkey neural responses to words and non-words and their correspondence to human behavior on those orthographic processing tasks. So I think there's a way to do this kind of thing separated from a behavioral task, if you're asking whether the monkey needs to do the behavior for the data to be relevant to the task. And I think the same applies to rodents and other species. Ultimately, again, as I keep saying in these discussions, at the end there is a model, and whatever you do, you need to show that it adds to the improvement of the model on something.

Ko    01:17:55    And from what we're just talking about, I can say maybe my goal is not to improve prediction of human behavior to ceiling, but rather: if I'm predicting the behavior of neurotypical subjects versus, say, people with autism, do I have some traction on that problem? Maybe I can create things like excitation-inhibition imbalances more easily with chemogenetic perturbations in a monkey, and then test what those representational spaces look like. Those could be constraining ideas for when you're building models of people with autism. So there are many, many ways, and I'm saying all of these ideas at the risk of sounding like a scatterbrained person, but at the end of the day these are the things that excite me. I won't be able to solve it all by myself. I'm hoping that a lot of people who are similar-minded will come together and try to tackle this.

Paul    01:18:55    So, Ko: neuro-AI. A lot of your recent career has been using deep learning models to shed light on brains; this is the arrow from AI to neuroscience. And part of what you're doing also is using brain architecture and neuroscience details to improve the models bit by bit, like you were discussing. Do you see neuroscience helping AI, or does AI not need neuroscience? Can AI just scale up and go to AGI, or what?

Ko    01:19:34    That's an interesting question, and my answer might not be that satisfactory, just because of my lack of knowledge in a lot of these domains. But I think of this problem in different ways. If I think of it as, okay, I'm going to build a calculator, should I constrain myself with brain data? No, it's going to be a terrible calculator for scientific computing or something. If the goal of an intelligent system is to calculate things fast, then constraining it with neuroscientific ideas and data is a bad idea. Now, maybe we can make a distinction between behavioral data and actual neural data. If I had to prioritize, in my head, which data might be more informative for building AI models, I think behavioral data would come first, before neural data. Some examples might be the Moral Machine kind of data that is part of the MIT Media Lab.
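The image-by-image correspondence mentioned above can be summarized in a single score. Here is a sketch of one such consistency measure, loosely in the spirit of the image-level metrics used in this line of work rather than a reimplementation of any particular paper's analysis; all inputs are simulated.

```python
# Sketch of an image-by-image behavioral consistency score: correlate the
# per-image accuracies of a model and of humans, correcting for the
# reliability of the human data. All inputs here are simulated.
import numpy as np

def image_level_consistency(model_acc, human_half1, human_half2):
    """Per-image accuracy correlation, normalized by the human split-half
    reliability so noisy ground truth doesn't unfairly cap the score."""
    raw = np.corrcoef(model_acc, (human_half1 + human_half2) / 2)[0, 1]
    reliability = np.corrcoef(human_half1, human_half2)[0, 1]
    return raw / np.sqrt(reliability)

rng = np.random.default_rng(1)
difficulty = rng.uniform(0.2, 1.0, size=300)               # latent per-image difficulty
h1 = np.clip(difficulty + rng.normal(0, 0.05, 300), 0, 1)  # human split half 1
h2 = np.clip(difficulty + rng.normal(0, 0.05, 300), 0, 1)  # human split half 2
m  = np.clip(difficulty + rng.normal(0, 0.20, 300), 0, 1)  # model's per-image accuracy

print(f"image-level consistency: {image_level_consistency(m, h1, h2):.2f}")
```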
If we are trying to constrain a system to work like humans, the human behavioral data, I think, will be key to constrain it.

Paul    01:20:47    That's kind of been the success of deep learning, right? Because the old way in neuroscience was to build a model out of intuition and then compare it to data, and the new deep learning approach is to build a model and train it, optimize it, for a task, like an animal or organism would perform. So it's all about behavior, and lo and behold, the model predicts neural data well also, right?

Ko    01:21:12    Yeah, definitely. But I was making a slight distinction between overall performance on a behavior versus following the pattern of human behavior, the error pattern. ImageNet-trained models are trying to get the labels correct, which is a behavior, but humans might not always get those labels correct, and they might show different patterns. So I was mostly thinking of this error pattern: what kind of decision do we make given some confusing stimulus, things like that. Those kinds of data might be more relevant to models if they are to operate in a human regime, because I'm thinking of a system that might be helping somebody go through life who is unable to do certain things themselves; that machine or robot has to interact with the person.

Ko    01:22:00    And I think it might be important for that system to be constrained with human behavior to some degree. For those purposes, behavioral data is very valuable; at least that's how I think about it. AI in healthcare, for example, might be something that is very constrained, and there the neural data might have some bearing. It still has to be shown, but I feel like there might be something there: as I was saying, these ideas of how the brain differs in a neurotypical subject versus an atypical subject. It just depends on the scale of the data, how we are getting it, and the relationship of the brain representation to behavior.

Ko    01:22:49    Those kinds of data might help us build better models of atypical systems, and then come up with solutions that are catered to the atypical system. Now I'm being very abstract, but I can come up with a dream sort of example: if you know exactly how a system learns a new task, and you can characterize that for both atypical and neurotypical populations, you might be able to use the atypical model to come up with a learning sequence that produces neurotypical behavior, even though it's an atypical system. That is definitely within the genre of AI-in-healthcare approaches. So in that way, the neuro-to-AI links are more clear to me. Generative models might also get a boost if they're regularized with neural data; that's another angle. But what I'm mostly worried about is that it's not obvious that having some brain inspiration, or some neural data, is going to improve AI models. That's what I'm pushing back against.
Maybe you can get behavioral data, and that's enough, and you don't need to poke around in the brain.

Paul    01:24:13    Isn't it interesting that these deep learning neural networks are based on 70-, 80-year-old neuroscience? Fundamentally, the idea of a neural network goes back even to the logical units. And you're adding more biological constraints to your models. So it's interesting.

Ko    01:24:32    That's true. That's where all of these ideas might have come from, and that's a good reason to keep looking at neuroscience for inspiration for building better models. But if I look at the last 10 years, I really don't see a concrete example where you read a paper in Nature Neuroscience or the Journal of Neuroscience, took that idea, and implemented it in a model. With dropout being...

Paul    01:24:57    Inspired by?

Ko    01:24:59    In the end, they're engineering hacks. Groups like to use it as PR, and it's great for that purpose, but that's fine to me. Even if you have an idea like dropout and then figure out how to really tweak it to make it part of a model that does something, that's great. In that way, it's really good to have neuroscience as an inspirational umbrella on top of everything. Good for my career, and I'll be able to talk to people. But I definitely think there is a use of neuroscience for AI; we just need to be careful not to oversell it.

Ko    01:25:40    Maybe we should, I don't know. But the other way around makes more sense to me; it's more valuable. Especially because you're trying to measure data in the brain that is noisy and sample-limited, and then build theories and models around that: what to expect, how to think about high-dimensional spaces, blah blah. To me, once you have a model that is doing a very high-level behavior, and doing it accurately, that complex system gives us the opportunity to really figure out how to even analyze a complex system. That's a huge bonus from these networks, because, as you were saying, we have been trying to do both things at the same time: build a complex system and figure out how to analyze the complex system. Here are networks that are already built, and you can formulate different theories on them. To me, that's a huge advantage of having these networks: they really become the starting points, the base hypotheses maybe, for a lot of these neuroscientific experiments. So that's how I have mostly been getting excited about the crosstalk between the two fields.

Paul    01:26:54    We talked about how there's this kind of archaic fallacy of naming a brain region and giving it a role, right? The modularity of the brain: prefrontal cortex does X, that sort of thing. And we've talked about, well, I guess I mentioned, how language actually limits us in some sense. Do you feel like we understand what intelligence is?
Do we have the right notion of what intelligence even is, to continue trying to build, quote-unquote, AI?

Ko    01:27:31    I don't... I mean, I know what we scientists are thinking of. For myself, I probably don't have a complete understanding of what intelligence is, but I have a fair understanding of what kind of intelligent behavior I would like to build models for. So that's where I'm talking like an engineer, maybe, because I know what problem I have defined, even if I don't know the solution. Tasks that are slightly above recognizing an object: trying to figure out what different agents are doing in an environment, or trying to predict what might happen next. These kinds of behaviors, I think, are fairly intelligent behaviors, and my goal is to build models and try to figure out how the brain actually solves those problems.

Ko    01:28:18    So in that way, I'm fairly happy with the definitions of intelligence. But then again, I'll get into trouble saying what intelligence is; people will bring up the typical measures, like IQ scores, and those are heavily debated. What I want to say is that we can keep debating what the right score is, what the right way of quantifying intelligence is, but we have to do it in some way if we want any measurable progress. So I have defined it in some way, and I will keep improving and expanding on the definition. But intelligent behaviors are, to me, not that controversial. Anything that I can do that my three-year-old son cannot do almost seems like a definition of being a little more intelligent, though he might be learning faster than me. At this stage, those are the kinds of definitions that exist.

Paul    01:29:16    You have a three-year-old?

Ko    01:29:18    I have a two-year-old.

Paul    01:29:19    Two-year-old. Is that the only child?

Ko    01:29:21    Yeah, yeah. He's our only one.

Paul    01:29:23    Oh man. That's kind of a hard patch, going through that and starting a new job and all that. So I feel sorry for you. I mean, it's a wonderful thing, obviously, but it's challenging early on.

Ko    01:29:36    Yeah. Yeah, it is.

Paul    01:29:40    Are you... go ahead, go ahead.

Ko    01:29:45    I must say, I'm happier on average, after taking into consideration everything around the child. I think overall I'm happier that we have a son. By the tiniest margin, like a p equal to 0.04.

Paul    01:30:07    I used to draw this pie chart that I would show people: you know, why do you like having kids? And it's like 51% yes, 49% no. All right, maybe I'll cut this, because I sound like a real jerk. Are you hiring in the lab? Are you looking for students? What's the situation?

Ko    01:30:30    Yeah, yeah. I'm definitely looking for postdocs and grad students to work with in my lab. The grad students are basically going to be recruited through York's graduate program. And for the postdoctoral candidates...
I think I'm just going to talk to them individually and see where the alignments lie. So yeah, definitely: if folks are interested in whatever we spoke about, and maybe if they've read some of the papers and there are interesting directions they want to pursue, I'm definitely interested in talking.

Paul    01:31:09    He's the future of neuro-AI, folks. This has been a lot of fun, Ko. Congratulations again on the job, and, gosh, I'm just excited for you. It sounds like you have a lot to pursue, and things are looking up. Not that they were ever looking down, but congrats.

Ko    01:31:30    Thanks, Paul. There have been a lot of promises made; I feel like I'm making a lot of promises, and I hope I'm able to deliver. As long as I can quantify what those promises are, I can tell you in maybe a year where I have been, how much I have delivered.

Paul    01:31:47    So, check in in a year.

Ko    01:31:50    We should check in. But yeah, I'm excited. I think this is worth doing, so I'm all excited to get on with it.

Paul    01:31:59    This has been great, Ko. Thank you.

Speaker 0    01:32:00    Thank you so much.

Paul    01:32:07    Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, email [email protected]. The music you hear is by The New Year. Find [email protected]. Thank you for your support. See you next time.
