BI 187: COSYNE 2024 Neuro-AI Panel

Brain Inspired

April 20, 2024 | 01:03:35

Show Notes

Support the show to get full episodes and join the Discord community.

Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer, and I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple of episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio version, to remove long dead space and such. There's about 30 minutes of just panel, then the panel starts fielding questions from the audience.


Episode Transcript

[00:00:08] Speaker A: This is Brain Inspired. [00:00:21] Speaker B: Hey, everyone, it's Paul. Welcome to Brain Inspired. Recently, I was invited to moderate a panel at the annual Computational and Systems Neuroscience conference, or COSYNE. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer. So it was quite a big panel, and I will let them introduce themselves soon, in the beginning of the discussion. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony's been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple episodes. So I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners, so that was fun. And this is an edited audio version of the panel. I wanted to remove some of the long dead space and things like that, but you can watch the panel on YouTube or in the show notes at braininspired.co/podcast/187. So there's about 30 minutes of just panel, and then the panel starts fielding questions from the audience. Thanks again to the COSYNE folks for inviting me and letting me bring this to you. Also, at COSYNE it was a lot of fun for me to meet and chat with a bunch of you, so thanks for saying hi to me in the poster hallways or in the social gatherings, et cetera. Okay? Enjoy. [00:02:06] Speaker C: I think what we should do is start with our moderator, Paul Middlebrooks, who is running onto the stage. Give him a round of applause. [00:02:21] Speaker B: Thank you for having me here. So the way this will work, I would encourage anyone who wants to ask questions: you can come up and ask questions any time. It's going to be a very loose discussion, and I am going to force the panelists to introduce themselves in under 30 seconds each, please. And if you can, relate maybe your most embarrassing or poorest idea that you've had throughout your career. Tony, I know you have one. You were telling me one earlier. [00:02:58] Speaker C: Wait, what is the assignment? Because in public, I've never actually been wrong. The people in my lab know that. Yeah, it's not possible. I'm an oracle. So what was the assignment exactly? [00:03:14] Speaker B: Your thesis. [00:03:15] Speaker C: My thesis, right. Oh, okay. Yeah, I can tell you. So, all right, my name's... I forget it. I think it's Tony Zador, and I've been interested in the intersection of neuroscience and AI since, actually, I was a graduate student, and my graduate thesis was looking around for the things that were missing in artificial neural networks, things that we could take from biological neural networks and move to artificial neural networks. And the one I went with for my graduate work was dendritic computation. And although I had a lot of fun working on dendritic computation, I will confess that my conclusion was that that was not the key missing ingredient. Although there is obviously much that one could take from dendritic computation, that wasn't the thing that was limiting us back when I was a graduate student. So that could be an example of me being wrong. Way, way over 30 seconds, Tony.
Another way in which I was wrong, because I thought I was, like, in 26 seconds. [00:04:26] Speaker A: So, my name's Eva Dyer. I'm an associate professor at Georgia Tech. I guess in terms of what I've done wrong, yeah, maybe... So in terms of my background, or kind of transition between AI and neuroscience, I actually started out in AI, but at that time it was called machine learning, and now it's kind of taken on a life of its own. So, yeah, coming from machine learning and getting really excited about questions related to natural intelligence, and now I'm kind of coming a little bit around to the idea that we can actually use the brain to inspire AI. So kind of back and forth between the two spaces. [00:05:15] Speaker D: Hi, I'm Kim Stachenfeld. I'm a research scientist at DeepMind, also affiliate faculty at Columbia. I do some neuroscience things, some AI things. This probably isn't the dumbest thing I've ever done, 'cause that's gotta be a really high bar. But I did originally, when I was picking a major... I was really interested in the brain, and I was like, brains are made of chemicals, so I'll study chemical engineering. And I really didn't examine that until, like, senior year, when I realized that most chemical engineers, like, design distillation columns, and the brain is not a distillation column. And then I did a hard pivot to neuroscience. So, you know, it was fun, but it was probably not the shortest path to what I wanted. [00:06:06] Speaker E: Alex Pouget. So you've already heard from me way too much, probably. I got into neuroscience actually from biology, a completely different perspective, slowly drifted into neuroscience, became a failed experimentalist, so I had no choice: if I wanted to stay in the field, I had to become a theoretician, which I did during my grad studies. And I'm part of this generation that really grew up at NeurIPS and got immediately exposed to all the theoretical ideas that emerged in the field. And it's very relevant to the debate today, because that's been a big question that we've been debating for 30 years. Who is contributing to whom? Is it machine learning that's helping neuroscience, or is it in the other direction? [00:06:48] Speaker F: Hi, I'm Jonathan Pillow. I am a professor at Princeton University in computational neuroscience. So my group focuses on statistical models of neural data. I don't actually work on neuro-AI, so I'm not sure what I'm doing on this panel. I actually misread the invitation. I thought it was for a workshop, and I said yes, and it was only yesterday on the plane that I realized... okay, that's it, anyway. So I don't know. I don't work on AI. I am a consumer of AI, and I'm eager for it to get a lot better so that it can do something like answer our emails for us. But I'm excited about the possibility of the deep learning revolution for understanding the brain. So I see a lot of really creative applications of ideas from AI or deep machine learning to untangle the computations that are going on in the brain. So that's my main emphasis. [00:07:35] Speaker B: And... Blaise. [00:07:37] Speaker G: Hi. Blaise Aguera y Arcas. I am a VP at Google Research and the CTO of Technology and Society at Google. I don't know what that means exactly, but I'm trying to sort of figure it out at the moment. I feel like I've made so many mistakes, you know, scientifically and so on, it's really hard to narrow this one down. So I don't know.
But I guess I will just say that, along with many people kind of at the intersections of AI and neuroscience, I thought that there had to be some kind of magical special sauce to intelligence. And I think that that is actually my biggest mistake. But I've seen the light now, except most people haven't. So this is kind of a little bit of a backward one. [00:08:24] Speaker B: Okay, well, some of you already touched on this. One of the things that we're going to discuss, although it's going to be very open and we can go down any road that we want, is the interaction between neuroscience and AI: who influences whom. And I would just love to get your perspectives on this. I know some of your perspectives, and that there's some disagreement between some of you. For example, Tony believes that neuroscience has historically and continues to influence AI. And I mentioned to him earlier that I think that neuroscientists desperately want AI to be influenced by neuroscience. So I'd like to get your perspectives, and I don't know, Jonathan, if you have a particular perspective on this, since you're uninvited now to the group. [00:09:06] Speaker F: I don't have a dog in this fight. I believe Yann LeCun's tweets when he says, you know, the organization of V1, Hubel and Wiesel, inspired him to think about convolutional neural networks. So I think a lot of, you know, the ideas that went into the design of deep learning came from neuroscience, although I'm not part of that movement. So I take people at their word. I'll just say, yeah, I think I personally am excited by Tony's vision that there are still things that AI could learn from neuroscience. I don't really know, though. [00:09:33] Speaker B: So, Kim, you pivoted from chemical engineering to neuroscience, and now you're doing a lot of non-neuroscientific machine learning work. I mean, do you, from your perspective, do you agree with me that neuroscience is sort of a desperate attention seeker in this? [00:09:50] Speaker D: I mean, I might be one, but I think the... So, I mean, I do some stuff that's just, like, pure AI methods stuff. And when I work on that kind of work, since I've trained as a neuroscientist, a lot of the batch of ideas that I draw from are neuroscientific ideas. I have personally found that thinking of both the brain and AI in the same frame is a useful thing to do, that, like, they're just kind of interested in different aspects of the problem and thinking about things in different ways. And so I have found there to be back and forth. That's kind of a weak statement, because, you know, anybody has some background that they use as a batch of inspiration for their ideas. Where I am right now at Google, at DeepMind, there definitely is a mood right now, that's, you know, empirically got some backing, of, like, let's scale it up. We've got some good methods. Let's, like, pause on the science bit for now and just keep making them bigger and see where we plateau. So that's not only not really neuroscience inspired, it's, like, not super science. It's more like engineering right now. Anyway, so I have personally found myself more excited about using AI for studying the brain, because we have this new batch of models. They're super cool. They can do a lot of neat things. They provide really neat tools, and they comprise a new model class that should let us answer different conceptual questions.
So that's where I've kind of felt like there's low-hanging fruit right now, in the environment I'm in. But I've historically felt very strongly about neuroscience-inspired AI, so I don't really feel like I have a super consistent "this fundamentally is or is not, in the long term, going to be part of the process." [00:11:35] Speaker B: You think it's a healthy relationship? Or do we need counseling, as Tony said? [00:11:46] Speaker D: In a sense, no. I think one thing that could be changed is: neuroscientists definitely don't need AI to justify neuroscience. So I think sometimes that is a little bit of a tone, that the reason this is an interesting problem is because AI can't do it, and that doesn't need to be the justification for a neuroscience project. Understanding the brain is its own imperative. So if validation from AI is the main reason to do neuroscience, I would say that's not healthy. And sometimes it's easy to want that relevance or validation. But I don't think most people who study the brain would say that that's their only reason. [00:12:27] Speaker B: Eva, you said that you've only come recently to appreciate that there are brains that could be useful for building better AI. I'm not sure how you phrased it, but that's an interesting perspective for me, because I kind of think the opposite. So how did you come to that perspective? [00:12:43] Speaker A: I mean, I think there's a lot of enthusiasm in, yeah, taking inspiration from the brain to build, you know, more generalizable or robust systems in general. I think it was, like, the scale at which we're going to derive that inspiration. And I think perhaps there has been a lot of, you know, excitement about maybe, like, synaptic learning rules, or understanding how, you know, neural systems could, you know, implement kind of a more biologically plausible learning rule, or how we can look at, like, circuits or kind of more, like, mechanistic implementations in the brain and then port them over to AI systems. And I think, to Kim's point, while some of those tools have actually given us some insights into connections between the two, I think that AI as a field has moved in a very different direction, in terms of using transformers or different architectures that no longer look like neurally inspired units any longer. And so I think it's kind of interesting, because now it seems like the way that AI scientists need neuroscientists is coming more from this study of, like, complex systems. Right? So now we have really complex transformers. They have all kinds of information within them, but we still don't know, like, how are they solving these different tasks? And so I think it's interesting to see now how neuroscientists' perspectives are coming into just, like, probing black-box systems. And so, yeah, maybe neuroscience is helpful for AI, but it might not be at the scale or kind of mechanistic level that we might have thought would be helpful before. [00:14:35] Speaker B: So, at the same time, and I'm not a panelist, so I'm not really in a position to disagree, you know... [00:14:44] Speaker G: Looks like you are. [00:14:45] Speaker B: Ten years ago, the brain was a convolutional neural network. Twenty years ago, it was a Boltzmann machine. Now it's a transformer. [00:14:52] Speaker F: Pardon? [00:14:52] Speaker C: A hundred years ago, it was a watch. [00:14:54] Speaker G: Okay, steam engine. [00:14:56] Speaker C: Or a steam engine.
[00:14:57] Speaker B: So my question is, I mean, in a sense, this is the problem of looking under the lamp post, right, to find your keys where the light is. And every new model that comes out, neuroscientists seem to take on that model and say, oh, maybe the brain is that, or does it that way in certain respects. And I'm wondering if any of you think that neuroscientists are too influenced by what's happening in artificial intelligence. Jonathan, you're nodding your head. [00:15:25] Speaker F: I mean, I do think there's a trend, right? You also see: quantum physics is something we don't understand, so maybe the brain uses quantum physics. So there is a tendency, I think, to grasp for whatever complex technology, the steam engine, the telegraph. You look at the history of brain metaphors, and it often has been that people compare brains to whatever complex technology we don't understand. But I do think there's a fundamental difference, in that we're making quantitative predictions. I mean, people tried, you know, on the tail of Hubel and Wiesel's work, to build models. You know, that was Marr's whole vision program, not to bash on Marr: we're going to figure out how vision works by constructing a series of computations. And that largely failed. And now it's not just a metaphor, in other words, I would say. So the fact that we can make quantitative predictions, that we can discover what neurons in V4, or IT, or in language brain areas are doing using these complex models that were trained on vast amounts of data, I think says something incredibly cool that's different than just comparing it to technology we don't understand. [00:16:19] Speaker G: I feel like I'm much more bullish on this relationship than many of the rest of you. I mean, the moment when they diverged, when neuroscience and computer science diverged, was kind of after McCulloch and Pitts in 1943. So that was when we thought that the brain was a bunch of logic gates, and computers were also a bunch of logic gates, and that was a point of convergence. And then things diverged, and AI sucked for many, many years, because it was not connected closely, or at least the mainstream of AI was not connected closely, with neuroscience, and it only began to make progress again when they reconnected. And I think this is a general theme. Good ideas come at intersections. Both AI people and neuroscientists are testing rigorous hypotheses. Neuroscientists test them by seeing if the brain does something like the theory, and AI people test them by seeing whether something built that way can work. And the two have consistently informed each other at every turn. The transformer is the first architecture that seems, superficially, like it is not informed very specifically by neural architecture. And I'm not even sure, if we look at it in retrospect in a few years, if we're going to find that to be the case. I think that multiplicative interaction, that thing that is at the key of transformers: we're now looking, and I think we're likely to find things in the brain that do actually do that, and that dialogue is likely to continue. I think that the issue is that, whereas AI was in the doghouse for so many years while neuroscience was making real progress, now AI is finally coming into its own and making real progress. And there are status issues between the two fields. So I think it has more to do with that than with the actual intellectual exchanges.
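As an aside for readers following along: the "multiplicative interaction" Blaise mentions is scaled dot-product attention, in which queries and keys interact by multiplication rather than by a simple weighted sum of inputs. Here is a minimal NumPy sketch; all names and sizes are illustrative assumptions, not anyone's production code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)      # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv             # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])      # multiplicative: activity gates activity
    return softmax(scores) @ V                   # weighted mixture of value vectors

rng = np.random.default_rng(0)
T, d = 5, 8                                      # 5 tokens, 8-dimensional embeddings
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)            # (5, 8)
```

The multiplicative step is the `Q @ K.T` line: one activity pattern gates how strongly another is read out, which is the property Blaise suggests we may yet find analogues of in the brain.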
[00:18:11] Speaker C: I completely agree. And just to amplify that: it used to be, when I was in grad school, that the term AI was reserved for symbolic AI. And I think that's what you mean when you say it didn't work. AI, as we now use the term, didn't really start to work until we jettisoned the version of AI that had nothing to do with neuroscience and replaced it with what we call neural networks slash machine learning. Right. [00:18:40] Speaker G: We should go back to calling it cybernetics. Yeah. [00:18:43] Speaker C: So, like, just historically, right? The major advances that gave rise, just to remind those in the audience who don't know the history, the major advances that gave rise to modern neural networks were all inspired by neuroscience. The very notion of a neural network, which still retains the name. The idea that synapses are the locus of plasticity, that those are the free parameters. The convolutional neural networks, which were explicitly inspired by models of how the visual system processes information. Insights, yeah, Hubel and Wiesel insights. The basis of reinforcement learning came from neuroscience broadly construed, psychology, all of that. And so I think what we're noticing now is that, now that AI actually works and is useful, to make it useful is an engineering task. And I don't think people are... at least I'm not seriously proposing that the 20% gain, or even the hundred percent gain, that we're going to see, you know, next week at whatever the next ML conference is, is going to come... what's that? [00:19:57] Speaker A: 20%. [00:19:58] Speaker C: 20%. Okay. That the big gains are going to come from, ah, you know, there was this paper that showed that cAMP, you know, that cyclic AMP was involved in some... ah, that's the key insight we needed. No, it's not going to be one to one. But rather, when we come to the next stumbling block, it will be overcome by people who have thought deeply about how neuroscience solves similar problems and are able to overcome those. And an obvious one, which I'll just put out here, is the energy efficiency of modern AI. I mean, this is the low-hanging fruit, right? This is the clear example of something that biology manages to solve, whatever problems it solves, with max, you know, 20 or 30 watts, whereas it's, how many petawatts to train GPT-4? Seven? So I think that, you know, we don't yet know where the next inspirations are gonna come from. But history tells us that, with the exception of transformers, all the significant advances have come from looking deeply at neuroscience. Yeah. [00:21:15] Speaker E: So I think, of course, I agree that neuroscience has had... [00:21:18] Speaker G: Sorry. [00:21:18] Speaker E: ...that neuroscience had a great influence on AI over the years, and Tony gave you a lot of examples. But if you think about it, all the examples Tony just gave you are 50-year-old, 40-year-old ideas. And that's my problem. I think, yes, originally we had an influence on AI, but today you'd be hard pressed to come up with a modern neuroscience idea that is influencing AI. You don't see too many of those people attending this conference. But the other way around? We do continue to attend the NeurIPS conference, and I could come up with a very long list of contributions that came straight from AI that have completely changed the way we do neuroscience. And I'm sitting next to one guy who's exactly in that tradition, and Kim is representing that perspective as well.
The imbalance is really striking, and it's been striking for 40 years. And as much as I would love to say that we are going to help AI, I don't think we've been very impressive in that respect, whereas in the other direction, it's completely obvious. And I want to make that practical. It's like, if I were a funder and I had $100 million to invest right now, the big question is: should I put it into neuroscience, hoping it's going to lead to the next wave of AI? Or would it be a much better use of the money to give it to AI people to work harder on neuroscience and help us develop theory and analyze the... [00:22:45] Speaker C: I'll just point out that $100 million is enough to train one large language model. So the question is, do you really say that that's where you want to put your money, is, like, training the next high-parameter large language model? [00:22:58] Speaker E: I'm not talking about, like, Google. I'm talking about, like, NIH and all those, the people who fund us, right? It's not Google. In fact, from what I understand, DeepMind is basically shutting down the neuroscience section. I think there's a big sign there. I mean, and Kim is at NYU. [00:23:16] Speaker H: Maybe I do. [00:23:17] Speaker E: But I'll tell you, the word is out in London that you don't want to apply to DeepMind if you're a neuroscientist right now, that this is no longer what they want to pursue. I think there is a big sign there. [00:23:27] Speaker C: Okay. [00:23:28] Speaker E: And OpenAI: I've got four former members of my lab who are there, and they're not doing any neuroscience, are not attending talks in neuroscience. Okay? That's the reality of it. And so I think we have to be honest about this. I mean, it's nice, I like it when Blaise says it will happen, but it's a question. We have to be practical, and that's why I'm talking in terms of dollars. Is that how you want to invest your money tomorrow? As much as I'd like to get money to work in AI, I'm not sure that's the best investment. [00:23:56] Speaker C: If I can just say, in biotech, there are biotech companies whose job it is to take the advances that have been made in basic science labs and convert them into products that are useful for society. When you have a biotech company, you don't typically see a large wing of the biotech company, or for that matter even a pharma company, which could potentially afford it, invested in deep basic research. Because there's a pipeline. There is a certain kind of research that is high risk. It has a long tail of success, in that most of the things that people try fail. A tiny fraction of those turn out to be important, and those are the ones that can potentially be commercialized. And this is basically AI now sort of sorting itself out into the natural scheme of things, where their time horizon is maybe three to five years. And that's engineering. That's not basic science, and there's a longer time horizon of basic ideas. So if the belief is that the current ideas, if scaled up, will allow us to achieve AGI, then indeed there is no reason to look to neuroscience for insights. There's another possibility, which is that, no, there's something missing, but the engineers will figure it out on their own. That's a real possibility.
But the last possibility is that when the modern approaches, the current approaches, start sort of maxing out, they'll realize that there's something missing, and that that something missing will be something that we know nature has solved in some way, because we are the thing that is being targeted. Right. And so that's sort of the hypothesis. [00:25:38] Speaker G: I agree, Tony, and likely it's already in the room. I mean, you were saying, Alex, like, you know, these ideas are 50 years old. I mean, it's true. You know, Hubel and Wiesel is old hat. Also, CNNs were still the state of the art in 2017. So it sometimes takes quite a long time for things to go from the basic research to milking the cow. And the fact that what we're doing right now in industry is attracting a lot of money... I find it very sad to reduce the thing to economics, just as I find it very sad for people who are supposedly doing basic science that all of their grants have to be about, if they're studying neural networks, epilepsy, or, if they're studying anything cellular, Alzheimer's. I mean, it's not just about fixing old people or sick people. It's also about finding out the really basic stuff that's important. I think we should keep our eyes... I don't mean to disparage medical research. It's very important. I'm one myself, so I feel like I'm starting to be able to speak with impunity about it. [00:26:43] Speaker C: But, yeah, you might think differently about Alzheimer's. [00:26:46] Speaker A: Yeah. [00:26:47] Speaker D: So, I mean, just to the point about the, like, interaction with neuroscience at DeepMind: it's definitely true that it's shrinking a lot. It's not entirely dead. And it's pivoting in a couple of ways, which, not just in the interest of, like, mere self-defense here, but more just because it actually does have some meaningful points about the interaction of neuroscience and AI right now. A lot of the neuroscientists at DeepMind just went off and are doing machine learning now. Machine learning that was related to the kinds of things they were studying, usually, but something that's more in the space of something Google cares about a little more right now. So, like, language models, and just taking whatever kind of version they had and doing it that way. The folks who are still doing neuroscience are kind of doing that in a new way. Some are doing really, like, cognitive science or neuroscience on language models, things that are kind of using neuroscience-y attitudes, but, like, not on a neuroscience problem. Some folks, like me and Kevin Miller and Maria Eckstein and Zeb Kurth-Nelson, are doing more things that are, like, using AI for neuroscience applications. It's really not in the original spirit of DeepMind's neuroscience team, which was: we're going to understand the brain so we can build AI. It's really more like, okay, now there's all this cool AI, and people are trying to use it for AI for science. Neuroscience is a science, so it gets to be in that umbrella of, like, AI applications. And there's really kind of two ways this works. One is, like, AI as kind of a tool: find patterns, be a model of the brain. Like, it's a big, complex learning system. It can learn patterns about big, complex systems. The brain and its study have a lot of those properties.
Another big one is that we have a lot of external collaborations now, because a key advantage of being at DeepMind is seeing the machine learning stuff that's working and being like, hey, this would be a neat idea to gain insight on some question in neuroscience. That question is not always best explored in an industry setting, where scale-up isn't really what you need. It's, like, people to think about it for a long time. But that's pretty much how we've been adapting ourselves for what you very accurately describe as this, like, you know, really different relationship. It wasn't exactly your words, but that's my gentler take. [00:29:11] Speaker B: Does anyone have a comment on that before I throw it to the audience? There's a line forming. Questions. All right, audience member number one. [00:29:21] Speaker G: Well, thank you. So, besides the pessimism of some of the panelists, I wanted to ask if you find a common language that we can use both for understanding the brain and improving the engineering of the systems. So we see in neuroscience a lot of dynamical systems, maybe for theory as well, but we don't see it as much for AI. Are we stuck with just thinking of units that integrate information from other units? Is that the commonality, or do we see any other common language that allows us to do the science and the engineering? Well, I think that dynamical systems is actually poised to make a pretty big comeback in AI. I mean, for one thing, we've moved from CNNs and things like this, which are just static functions, to encoder-decoder-type models, which are autoregressive and which do implement a dynamical system. So I think that a lot of this classic dynamical systems, and also even basic ideas from physics of some of the kinds that I was talking about earlier today, are poised to be a new sort of common language for us. [00:30:35] Speaker C: Yeah, just to... I think that's exactly right. And I think that as AI starts to sort of address the kinds of problems that so far it hasn't been very effective at addressing, such as, like, controlling a robot, where we, like, haven't really made that much progress. I mean, some, but not like the dramatic progress we've made in image recognition and language processing. I think that things like dynamical systems and a lot of the shared vocabulary with neuroscience will begin to kick in. So. [00:31:11] Speaker G: Yeah, so I have a question that's kind of inspired by the bee talk from yesterday. So a lot of the neuro-inspired AI is really, you know, coming from a laminar cortex from mammals, you know, and there are all sorts of other smart creatures on this earth that have radically different anatomical platforms, but they're still intelligent. And I guess I was just wondering, you know, kind of writ large, if there are any instances of AI models that have been inspired by non-mammalian brains. [00:31:49] Speaker B: Moving on, then. [00:31:56] Speaker G: Sorry, I'll try to give one example. So, neural cellular automata are inspired by morphogenesis, and, yes, that's not even the brain at all. That's sort of patterning with chemicals and local interactions. And I think NCAs are also a really important frontier, actually, in AI right now.
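A quick aside on neural cellular automata, since they come up only in passing: each cell on a grid repeatedly updates its own state from purely local neighbor information, and global structure emerges, loosely as in morphogenesis. Below is a toy sketch of the mechanics with random, untrained weights; everything here is an illustrative assumption, not a trained NCA.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 16, 16, 4                      # grid height, width, channels per cell

def perceive(grid):
    # Each cell sees its own state plus the average of its 8 neighbors.
    padded = np.pad(grid, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    neigh = sum(padded[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)) / 8.0
    return np.concatenate([grid, neigh], axis=-1)   # (H, W, 2C)

W1 = rng.normal(size=(2 * C, 32)) * 0.1             # the "learned" rule; random here
W2 = rng.normal(size=(32, C)) * 0.1

def step(grid):
    h = np.maximum(perceive(grid) @ W1, 0.0)        # the same tiny MLP at every cell
    return grid + h @ W2                            # residual (incremental) update

grid = np.zeros((H, W, C))
grid[H // 2, W // 2] = 1.0                          # a single seed cell
for _ in range(20):
    grid = step(grid)
print(grid.shape)                                   # (16, 16, 4): activity spread locally
```

In a real NCA, W1 and W2 would be trained so that the pattern grown from the seed matches a target; the point here is just that there is no global controller, only a local rule applied everywhere.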
[00:32:18] Speaker C: Yeah. And I guess one thing I'll add to that is that, with artificial neural networks, the field used to be called machine learning, hence the emphasis on learning. A lot of insects and a lot of invertebrates, actually... I mean, bees are remarkable in how much they learn for an invertebrate, but a lot of invertebrates do remarkably well with relatively minimal amounts of learning. And I think that having lots and lots of neurons is particularly useful if you need them to learn a lot. But a lot of organisms work pretty well out of the box, and in some cases, you know, C. elegans, they work with 302 neurons. And so I think that it is partly because of the style of AI, you know, where the focus is on learning a new task each time, rather than getting really good at one task with a small number of neurons, that you see that kind of a difference. [00:33:19] Speaker E: Actually, your question made me think about this, and I think it's the opposite. Right now, AI is modeling insect intelligence. We are not integrating anything that's specific to mammalian cortex. We don't even have cell types. Right? That's, like, one of the big questions: why so many cell types? We don't have a laminar architecture with, like, all the feedback stuff. We barely even have the feedback between areas. And, in fact, I'm hoping, and I'm going to go along the lines of those guys, that perhaps that's where we're missing something, where neuroscience might be able to contribute. But it's not for lack of thinking about it. There are many, many labs that are desperately trying to come up with good ideas on that, but nothing super convincing is emerging. I'm looking forward to Raj Rao's talk tomorrow, because I know that he's been thinking hard from the control theory perspective, so maybe those ideas are going to emerge. But I think, so far, what we're doing could work just as well for insects or mammals. [00:34:16] Speaker A: I am going to try to phrase this question in a very, like, open manner. I really want to take it broader. And, Tony, I think we talked about this in my thesis proposal. But anyways... [00:34:33] Speaker C: You're just not a plant. [00:34:35] Speaker A: I am interested in neuroscience because I think it's intellectually interesting, and I also find great value in helping people with disabilities or with mental illnesses. Even if, Blaise, you don't. It's okay. But I want to understand why you guys are interested in AI at a neuroscience conference, right? So maybe you find it intellectually interesting, which is also great. Or are you interested in the applications that it can provide, like Jonathan, to neuroscience, or, like, whatever other motivations you have? Yeah. [00:35:15] Speaker C: I mean, my own answer is really summed up by the famous quote by Feynman, something like: that which we cannot build, we do not understand. So, for me, the reason for being interested in AI is that I can BS all I want about my beliefs that this circuit works in this particular way or that particular way, but you don't really know until you've tried to build it. And I will say that in the early history, I guess up to the eighties, vision researchers actually thought that, like, Hubel and Wiesel had it, and we were done. It's feature detectors all the way. And I was sort of at the tail end of watching vision researchers recognize that their very simplistic models of "we just keep building a set of feature detectors, and we have vision" don't actually work. [00:36:11] Speaker E: Right. [00:36:12] Speaker C: And they didn't really...
It took them a generation to recognize that that didn't work, and it took them actually trying to implement their ideas to realize that, well, there's more to it than that. And I think a lot of where we are in neuroscience has that flavor. You know, especially, you know, our inability to build robots that can interact with the world really drives home how poorly we understand the entire system. So, for me, the two sets of questions are really one set of questions. [00:36:46] Speaker A: I was handed the mic. I don't have much to add, given a lot of the discussions that we've been having. I think, for me, coming from the AI side and coming into neuroscience, I saw an opportunity to be able to utilize this immense set of tools to be able to derive new insights, both into disease but also into brain function. And I think in some cases we're working with transformers and some of these models that aren't neurally inspired any longer. But I think it's through really trying to see how far we can go, in terms of reading out from the brain in potentially really exotic or maybe not neurally inspired ways, that we can maybe start to just see what's possible. And I think opening ourselves up to that, without the constraints of biology, and using that to derive insights from brain data could also be a really promising direction. Right? So, once we see, oh, these are the types of systems we need in order to decode or read out from the brain, then maybe we can actually use that as insight and go back to the neuroscience as well. [00:38:06] Speaker D: Yeah, I really agree with that. I mean, I like AI a lot as a field for neuroscience, because it's a very nice frame of reference. It has a lot of cool models that can learn how to do cool things. I guess the question of, like, why neuroscience and AI... I guess that's sort of both a personal "why is this interesting," historically, as well as, like, what do these fields objectively have to say about each other. And, like, you had a Feynman quote, and I have, like, a von Neumann quote. The von Neumann quote is: when we speak of mathematics, we might be speaking of a secondary language built on a primary language of psychology. I just... I think I have always found the brain, or maybe in general this, like, idea of an intelligent system, kind of an intriguing idea, because it scaffolds all other reasoning. And, you know, as someone who had trouble picking a major, it felt like the deepest and most, like, I don't know, fundamental problem to study. At some level, they both have a lot of the same batch of problems. [00:39:10] Speaker C: We're looking for an Einstein quote now. [00:39:14] Speaker F: I was just going to say, I don't actually study AI, but I think I'm interested in information processing in general. So I want to understand: how do we get long time scales? But we can ask that question about AI as well as we could about the brain. So there's been a surge of interest lately, and actually maybe the successor to the transformer, arguably, is going back to linear state space models. So there's been: how do we get long time scales in an artificial system? How do we get long time scales in the brain? How do we do context-dependent processing? So I think there are a lot of these kinds of questions that we want to ask, and we could ask them equally in a complex brain or in an artificial system. So that's my interest. [00:39:47] Speaker B: Next question.
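For readers curious about the linear state-space models Jonathan mentions: they are exactly the dynamical-systems language from the earlier audience question, and the long-time-scale question has a crisp form there. A minimal sketch, assuming only NumPy, with illustrative numbers: modes whose decay rates sit near 1 remember an input for hundreds of steps.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                       # state dimension
decays = np.array([0.5, 0.9, 0.99, 0.999])  # per-mode decay rates, fast to slow
A = np.diag(decays)                         # x[t+1] = A x[t] + B u[t]
B = rng.normal(size=n)
C = rng.normal(size=n)                      # readout y[t] = C x[t]

def run(u):
    x = np.zeros(n)
    ys = []
    for u_t in u:
        x = A @ x + B * u_t                 # linear dynamics, one step per input
        ys.append(C @ x)
    return np.array(ys)

u = np.zeros(500)
u[0] = 1.0                                  # a single impulse at t = 0
y = run(u)
# A mode with decay a has an effective time constant of roughly -1 / ln(a):
print({float(a): round(-1 / np.log(a), 1) for a in decays})
print(y[[1, 10, 100, 400]])                 # the slow modes still remember the impulse
```

The same recurrence can be read as a model of a recurrent circuit or as a sequence model run autoregressively, which is why it works as a shared vocabulary between the two fields.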
[00:39:48] Speaker G: Okay, I'm gonna ask two. Or I'll ask just one, if I only get one. Okay. So I guess this question comes out of the talk that Alex Pouget gave today, which was talking about compositionality and all that, and whether the solution, in the end, was just scaling it up more and more and more, if that's really the solution or not. So I guess my question is: if you compare the resources that academia has in neuroscience, like, we can't throw 100 million at training a neural network. So how do you see that academia or neuroscience can work together? Or do we necessarily have to compete with the research that comes out of Meta, Amazon, Google, who can just throw more and more layers at this? You can't. So, I mean, I think that there are two solutions. One of them is to collaborate, and that is becoming harder, per Kim's point. You do have two Google people on this stage, so obviously there are a bunch of us who care, but we're also in the cow-milking era. But I think that a more interesting answer is that maybe it's not just scale. I think that the relentless pursuit of scale right now is really interesting, in the sense that we're going to see how far that goes and where it taps out. I'm very curious to see how far it can go, and I don't see an obvious ceiling. But I also don't think that that is the way we have solved it. That seems fairly clear. We consume a very limited number of tokens up until age, you know, four, by which point we're linguistically competent, so there must be something. And we use 20 watts, you know, at a stretch. Right? So we're obviously doing things very, very differently. So I don't think that competing with the big AI labs at their own game is the way to go at all. [00:41:55] Speaker B: Before you ask your question: does anyone feel like I do, that neuroscience is paying way too much attention to AI? Do you think? Okay, you want to elaborate? [00:42:04] Speaker A: Everyone's paying a lot of attention to AI. [00:42:07] Speaker B: I think you should be paying attention. [00:42:11] Speaker F: I mean, these things go in phases, right? I mean, obviously there's a pendulum. I think... well, you brought up the pendulum. [00:42:16] Speaker G: The old pendulum. [00:42:18] Speaker C: Yeah. [00:42:18] Speaker F: Every... how many posters did you see? [00:42:19] Speaker B: Okay, so I was at a quote-unquote debate where the two participants were drunkenly going back and forth on these things. This was like four years ago, and the pendulum was brought up, and David Sussillo wasn't slurring his words when he said, and David, I hope you're out there, that, you know, it's a pendulum. And right now, AI is kicking neuroscience's ass. That was four years ago, and I think it's more so now. Like, the pendulum didn't know it could go as far. [00:42:50] Speaker E: There is no pendulum. AI has been influencing neuroscience throughout my career. I started in the late eighties, and the ideas from AI, meaning machine learning, have been influencing us every single year. I actually don't see a trend. I mean, like, I don't see a recent trend. That's what I meant to say. Like, you know, David is kind of among the people who revived the neural network craze in neuroscience. So that's kind of a new trend. But before that, there was this whole period of time where we worked on Bayesian approaches that came straight from machine learning and from NeurIPS.
So it's been a one-way highway. It's been very active for 30 years. So I don't think it's a pendulum at all. And I think it should continue, by the way, and we should keep on, even inject more AI. And this conference was created in part with this idea in mind: to inject a massive amount of machine learning and AI into neuroscience. And I think we've been successful in that. [00:43:46] Speaker A: I do agree with that in general, but I've seen, just even over the past, like, two years or so, as someone, like, in AI that cares about neuroscience... I used to have a lot of AI researchers coming to me and asking me, like, oh, so what is the brain doing? And, like, how does this work? And they were, like, really thinking that neuroscience was going to help them to solve certain problems that they couldn't solve. And I think over the past two years, probably with transformers scaling, like, a lot of that kind of movement away from thinking about, like, new losses or new ways to train networks, and just wanting to scale, I don't see as many of the AI researchers actually coming to neuroscience as much. Or I see, like, a bit of a trend moving in the opposite way, at least just from my perspective and talking to people. But, yeah, I mean, I think in general we've been influencing each other, but I've seen a little bit of a backslide over the past years. [00:44:41] Speaker G: I'm kind of bummed that we're talking about people having this very clear disciplinary allegiance. I mean, most of the people who have made really cool advances in both fields have skills and competencies in both and have published papers that are interdisciplinary. And so I think that we're even having the debate suggests a level of professionalization that's really counter to having new ideas, which always involve hybrids. [00:45:11] Speaker D: Yeah, I mean, to the point about whether we're overdoing it on the AI stuff: I think we'll probably look back and be like, we really tried to put a square peg in a round hole in a lot of cases. But still, because it's new, and it's clearly interesting, and we can use this to explain and try to do a bunch of things we haven't done before, it doesn't seem like an obviously bad idea for people to, like, really try stuff. It seems like a good period to be exploring it. And I don't know... I think we'll probably look back and feel like we overfit, but I'm not sure if, actually, like, we totally have a choice, or maybe it's still optimal to, like, be trying a lot of stuff out in this space. [00:45:53] Speaker C: If I could just add to Blaise's point: for those people who do think it's an interesting thing to be bilingual in neuroscience and AI, Cold Spring Harbor has a program where people spend two years... [00:46:10] Speaker H: So I was asked to try to encourage Jonathan and Alex to inject a little more edginess in. And I had a couple of questions that have been partially asked and answered, and I guess I'm only allowed one question. I will note, just in passing, that the concept of attention probably comes from psychology. And so transformers are probably not devoid of neuroscience, in the Tony and Eric sense, at least, of neuroscience. But I think, Blaise, you just kind of undercut me, because what I was going to say is: I think this debate is ridiculous, and all of you guys have both contradicted each other and yourselves. And if we look at models where science works, it integrates ideas from many things, and it has to use tools from mathematics and statistics to understand systems that are complicated. I will point out that this becomes a problem when people become professionalized in academia. So even in neuroscience groups, even at COSYNE, there's not enough real interaction and collaboration between the theorists and the experimentalists. I'm very experienced in watching theorists come and collect data from some experimentalist and model it. There's no iteration. There was no tested hypothesis from the theory, and they go on to model the next thing. And the next thing.
And if we look at models where science works, it integrates ideas from many things, and it has to use tools from mathematics and statistics to understand systems that are complicated. I will point out that this becomes problem when people become professionalized in academia. So even in neuroscience groups, even at cosine, there's not enough real interactions and collaborations between the theorists and the experimentalists. Very experienced in watching theorists come and collect data from some experimentalist and model it. There's no iteration. There was no tested hypothesis from the theory, and they go on to model the next thing. And the next thing. [00:47:21] Speaker G: Well, the experimentalists were question. [00:47:24] Speaker H: The question is, if. If you're all really doing the same thing and you all really agree that these things are interesting, why do we have this basic problem of integrating these things in even neuroscience, let alone between AI, whatever that is? Because we are the model for intelligence, right? So it's going to be directly based on us. So there's still some kind of fundamental cultural problem that we're not overcoming in the way our data from the brain is collected and used with regards to the theory and vice versa. [00:47:56] Speaker F: I lost the question. [00:47:57] Speaker B: Does anyone remember the question? [00:47:59] Speaker F: Was that a question? I didn't actually hear a question. [00:48:02] Speaker B: Thank you for the comments. [00:48:04] Speaker F: Yeah, thank you, Eric. [00:48:06] Speaker B: You guys are ridiculous. I think that was the sum. Right, go ahead. [00:48:11] Speaker G: Yeah, just wanted to. Short question, just a quick comment. I think with regards to robotics, DCA is going to be a year of robotics. We will see they do a lot of, I think, tasks that would probably surprise neuroscience as well, because scaling a lot of this system that works, it requires lots of engineering efforts. And this AI engineers are right. Cuda kernel. But we don't do this sort of stuff in neuroscience. Maybe if we, I don't know, borrow some principle from engineering, we can build an AI, which is going to be different than the current AI, but we'll probably get there by scaling. I'm just wondering, what do you think about just doing a little bit of engineering in neuroscience to scale this? I don't know. Try scaling and try different hardware. So, yeah. [00:48:57] Speaker A: I mean, I can just say that we, in my lab, are currently trying to invest in these sorts of approaches of, you know, being able to train models on large amounts of neural data. And I think that this has been a major gap in. In the field of neural data analysis in some ways, where we do have to, like, fit a model for each new data set, rather than actually being able to scale and combine them. So I think that, um, yeah, through the convergence of both these fields and knowing what we need to do in neuroscience, as well as how to train these large scale systems, I think we are moving in that direction as a field. [00:49:37] Speaker B: Next question, please. [00:49:39] Speaker G: Hey, interesting discussion so far. Right now, when I look at the kind of phenomena looked at in AI and neuroscience, they seem a little bit different, at least in one way. There's probably more that you're aware of. [00:49:53] Speaker I: And this is doing that. [00:49:54] Speaker G: AI thinks a lot about transfer. How do you transfer knowledge to new tasks, new data, et cetera? 
It seems like less of a focus in neuroscience. I could be wrong, though. Could shifts in focus or paradigms help, would they lead neuroscience to be more informative to AI? Can you think of some? One seems like transfer, for example. That's my question. [00:50:18] Speaker B: Thanks. [00:50:19] Speaker A: Yeah, I think... [00:50:20] Speaker D: I mean, so a lot of times, when people start talking about, like, what's new and exciting in neuroscience, a big thing is scale: that there's, like, all this data. We have tons of neurons. We can get them over really long periods of time. One dimension of scale that seems like it's challenging is getting an animal to perform a challenging behavior, getting it to learn something really complex and rich. That seems like it's a big bottleneck, speaking as a theorist and not an experimentalist, but, like, that's my impression. And I think, like, that's one dimension, if one wanted to go all in on the scale idea, which, like, there are other approaches, and not everyone should do that, but it seems like that's a dimension for scaling up. And if we can get faster learning curricula, that would be nice. [00:51:12] Speaker F: I'll just say I think that's a great comment, and something that's missing in neuroscience. We often train an animal to do one task, and we study how they solve that task, and we don't think very much about... well, there's a very nice paper from Byron Yu's group where they did look at different animals that had different training histories, seeing how they actually solved the same task in different ways. So part of it, maybe, is the necessity of neuroscience experiments: animals, mice, don't live very long, and so we don't typically train them on 14 different tasks in a row. But I think the question of why we don't forget something when we learn a new task has actually inspired a lot of the ML research, the AI research, about that. We typically don't have catastrophic forgetting, where if I learn to ride a bicycle, I forget how to do jumping jacks. But AI does, still, sometimes. [00:51:58] Speaker G: Quick comment on this. So, the huge revolution in AI has come from unsupervised learning, which turns out to include every task. And one thing that I really haven't seen happen very much... there was a big revolution, of course, in neuroscience when we started to be able to do awake, behaving recording, and from large numbers of units. But I haven't seen so much of that unsupervised revolution come to neuroscience. And that seems to me like a big opportunity, in the sense that the amount of observation that one can make grows by orders of magnitude when you're not just collecting your one bit of information, or, like, when did the tongue touch the thing, for every experiment, but just measuring everything and discovering the latent variables in it. So that would be one idea, one thought.
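To make the "discover the latent variables" idea concrete: here is a minimal, purely illustrative sketch in NumPy. It simulates a population of neurons driven by two shared latent signals plus noise, then recovers the latent subspace with SVD, about the simplest unsupervised method there is. Real pipelines use richer models, but the logic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_latents = 1000, 50, 2

t = np.arange(T)
latents = np.stack([np.sin(2 * np.pi * t / 200),      # two slow shared signals
                    np.cos(2 * np.pi * t / 317)], axis=1)
loading = rng.normal(size=(n_latents, n_neurons))     # how each neuron mixes them
activity = latents @ loading + 0.5 * rng.normal(size=(T, n_neurons))  # plus noise

# Unsupervised recovery: top singular vectors of the centered recordings.
X = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
recovered = X @ Vt[:n_latents].T                      # (T, 2) estimated latents

# The recovered components span the true latent subspace (up to rotation):
corr = np.corrcoef(recovered.T, latents.T)[:n_latents, n_latents:]
print(np.round(np.abs(corr), 2))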
[00:52:51] Speaker B: Go ahead. [00:52:51] Speaker I: Okay, so, yeah, you guys talked a lot today about neuro-AI as trying to bridge ideas from neuroscience to AI or vice versa. But I wanted to ask: what do you think of the perspective of the new generation of neuro-AI as a whole new third sort of type of science, one that's just trying to study information processing machines that are optimizable, more generally speaking? Because even before AI, or before humans evolved, information existed in the air, just emerging from the statistics of the world. And a lot of ideas, like Dr. Tishby's information bottleneck theory or Dr. Karl Friston's free energy principle, I think kind of tackle this more abstract information processing idea. So this is personally what drove me to this field, and I wanted to see this discussion, basically. [00:53:42] Speaker G: Yeah, I love that point. And, I mean, that would have been my answer to the, you know, what drove me to the field as well, or why do I do this. And I didn't mean to say anything bad about medical applications, but, you know, there is a search for fundamental principles here. And, you know, I think that you can either go down the route of, well, biology is not physics, there's not some grand unifying theory, or, you know, Wolfram's new kind of science: like, it's just going to be, you know, simulations that are unexplainable, you know, and they'll be able to make predictions, but we won't be able to say anything about why or how. I don't think that that's the case. I think that there are general principles here, and that understanding them is not going to solve all of the pharmacological details of how to work on brains in the right way to fix them. But it will tell us something about how real brains work and give us engineering principles that let us build things like them. I think we're actually pretty close to some of those, so that's definitely my motivation. [00:54:47] Speaker B: Has AI influenced neuroscience in any negative way? All right, next question. [00:54:55] Speaker G: Well, it's kind of related to that. So I really liked the earlier point about making things practical. And if we're not talking about donations, another way to make things practical is education. So when we're training the next generation of neuroscientists, we can only make them take so many courses. You have to give them some certain framework to work with. And I've noticed, over the eight years that I've been helping teach this course at UPenn, that it used to be I would teach something about the brain, and the students would come up: oh, how can we bring that into these neural networks? How could that happen? More often now, it's: oh, the brain does that, but my neural network doesn't need it. Why does the brain do this? And I have some colleagues who are absolutely horrified that that's been the change, and some who are extremely excited by this and think it should be more so. I guess the broader question is: do we think we need to pivot neuroscience into a much harder emphasis on ML, that we really should take that route? Or is there a more balanced approach? What does that kind of look like, in your opinions? [00:55:43] Speaker D: Yeah, I guess, sort of to that question, and also the one that you raised, I just have to cue off it: that gets to points about biological plausibility. And my background is more as a cognitive neuroscientist, and I haven't focused tons on biological plausibility, but that would probably be what a lot of people would say has been a downside of sticking too much to AI models: it makes it, you know, easy to focus on questions at a level that's maybe not biologically plausible. And you've got to be kind of careful about the abstractions you're drawing, in terms of, like, the brain doing things that models don't do sometimes. Maybe the reason is just a biological plausibility one.
Maybe it's the literal biological materials, or maybe it's something about learning in a single lifetime and having to grow a brain out of almost nothing. So I don't totally know. We also don't know what's implicitly going on in all of these models, so it's hard to say exactly what they aren't doing, for sure. [00:56:48] Speaker E: My sense is that we have to inject even more ML into the field through training: neuroscience students should all have to take AI and ML courses nowadays. Unfortunately, that's not part of most curricula that I know of, and that's the future of the field. If you look at physics, any physicist who gets into the field, even one who wants to be an experimentalist, is going to absorb an enormous amount of theory before finishing their studies. In neuroscience, it's quite remarkable that you can get into the field right now knowing very little about the theories that are dominating it. I think that's just a reflection of the fact that we're a very young science that came from biology, where there wasn't a sense that we really needed theory. But I think there is a really severe need for more theory in our field, and this conference was very much created with that in mind; it's trying to contribute to that and push in that direction. [00:57:44] Speaker B: We're short on time. I'm going to take one more question. Sorry for the three of you, but you can come up after, I'm sure. Last question. [00:57:52] Speaker G: A lot of pressure. [00:57:55] Speaker B: Time's up. [00:57:56] Speaker F: Next question. [00:57:59] Speaker G: I think it's really interesting that while we are talking about this mutualistic relationship between neuroscience and AI, there is an almost parallel mutualistic relationship between AI and cognitive science. Alex talked a little bit about Josh's work, but I haven't seen too much of it here, and it's obviously COSYNE and not CCN, so maybe that's why. But I'm curious what you all think about this sort of tripartite relationship, and how we, as computational and systems neuroscientists, can think about what they're doing and maybe gain some inspiration and benefit from their work as well. [00:58:33] Speaker E: I'm really glad you brought up this point and that we're going to end on it. I've always been a big believer that we need neuroscience, cognitive science, and machine learning to crack the brain, and I've always been frustrated that there's not enough cognitive science at this conference. That frustration eventually led to the creation of CCN: a lot of people were frustrated, so they created their own conference. I don't think that's good for the field. I think it's keeping us separated when, in fact, we should all be arguing about this here. That's what I was trying to do by bringing Josh's work here. Tim Behrens and I argue about this all the time, and he keeps telling me, you guys need more cog neuro. And I want to end by saying that, actually, if I had to put money on who is going to influence AI, I think cognitive neuroscientists and cognitive scientists might have a better shot in the next couple of decades than the neuroscientists. [00:59:27] Speaker G: I don't know. I mean, on the one hand, psychology experiments and things like that really are useful, and even sociology experiments, for that matter, at the more quantitative end.
But a lot of cognitive science is infected with Chomsky bullshit, and Chomsky has done a lot to hold the field back and to inject repeated false beliefs, holding a lot of things back for years and years. So as for holding it up as a field, I don't know. I would want to do a little sifting to separate the wheat from the chaff. [01:00:14] Speaker E: Blaise and I had this argument recently about Chomsky. The reality is, we should not forget that before Chomsky there was behaviorism, which was incredibly limited, and those guys completely changed neuroscience and cognitive science. I would agree that they went too far with the whole linguistics thing, the way Chomsky did, but I definitely think those contributions were enormous. And, again going back to Josh, though I'm not going to be able to give too many names here on the fly, there are many people in the field right now who have gone past that, who understand neuroscience, who are at the interface and coming up with absolutely brilliant ideas that we're not injecting into our work here, and that we should. But it's true, the Pinkers and Chomskys of this world, and the Fodors of this world, who I quoted in my talk, yeah, those, I think, we can leave behind. [01:01:04] Speaker B: I've been strong-armed into asking you one final question, and that is: what will we be arguing about in 2044, in 20 years? Thanks, everyone, for coming. Oh, sorry. Does anyone want to hazard an answer? [01:01:20] Speaker D: I'm going to be the first to say I have literally no idea. I haven't been in this field as long as a lot of people in this room, but my limited experience has been characterized largely by incredibly rapid change. I don't think I would have predicted where we are now two years ago, let alone five. So I really don't know. That's scary, because a lot of research is trying to make long-horizon bets, and now is a tricky time to do that. [01:01:57] Speaker G: Well, I know something we're definitely going to be debating in 2044, which is the question of moral patienthood for AIs. In other words, are they people? That's definitely going to be a debate. [01:02:09] Speaker B: In 2044, for humans as well, perhaps, yeah. And now a message from our sponsor. I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email [email protected]. You're hearing music by The New Year. Find [email protected]. Thank you for your support. See you next time.
