[00:00:03] The question I'm going to answer is: what is currently the most important disagreement or challenge in neuroscience and/or AI? And what do you think the right answer or direction is?
[00:00:15] The first thing I need to say is congratulations. What an amazing achievement. 100 episodes of your podcast.
[00:00:20] Thank you for creating it and sharing it with the community, and thank you for letting me be a small part of it. It's really been a pleasure, and congratulations on your 100th broadcast.
[00:00:39] This is Brain Inspired.
[00:00:52] My name is Joe Marino. I'm a PhD student at Caltech in the Computation and Neural Systems department, and I work on bringing ideas from predictive coding and neuroscience into machine learning. On your podcast, you've really brought together a diverse set of perspectives covering neuroscience and all the computational aspects associated with it. I really feel it's broadened my perspectives in terms of all the different views that are out there and trying to make sense of this really complex field. You've covered a huge variety of topics on the podcast, everything from cognitive architectures, things like complexity and emergence, to dynamics, deep networks, evolution, and you even get into more philosophical topics like consciousness. It's much more than any academic department could offer in terms of just a huge diversity of perspectives. And these perspectives often come back to Marr's levels, which is a recurring theme on your podcast. And there's almost this meta-scientific aspect in which we're asking what is even the right way to study neuroscience, what does it mean to understand a system, and what can these models really tell us about the brain? And I really find this larger perspective useful in trying to make sense of the philosophy of science and what it is we're doing here. What are we doing here? Well, right now we're celebrating. I have the best listeners and supporters in the universe. Hang on, let me try that again: in the universe.
[00:02:18] Thanks, Joe. And thanks to the other listeners who shared audio of their thoughts about how the guests and topics on Brain Inspired have affected their own thinking, which I'll continue to share as we go along. I'm Paul. Speaking of diverse perspectives, this second installment of our little 100th episode celebration does not disappoint in that department. You'll hear many previous guests answer, in diverse ways, the question: what is currently the most important disagreement or challenge in neuroscience and/or AI? And what do you think the right answer or direction is? So once again, I randomized the responses, because the variety and the diversity of responses outweighed the consensus.
[00:03:04] And check the show notes for links to each person that you hear. Now I'll share my own little response to the question, from my own limited but ever expanding perspective. I think a major challenge for both AI and neuroscience is understanding the character and the limitations of human and otherwise natural intelligence. I see it as important because, for one thing, I think it'll clarify what kind of AI we want to build. For example, human-level or human-like AI: that makes no sense to me, neither as a concept nor as a goal. We as humans are obviously aware of how flawed we are, how many disorders we have, how deceptive and manipulative we can be to serve our own selfish and trivial interests and desires, and so on. And in the space of possible minds, in the space of possible intelligences, replicating human intelligence just seems short-sighted, and maybe even an exercise that highlights our own limited imagination.
[00:04:18] I also think understanding our own intelligence will help clarify the relation or distinction between what we currently consider intelligence and all the other processes that are related to life and biology that we don't necessarily think of as intelligent, and that the current computational approach to brain function clearly doesn't value, including the nature and explanation of subjective experience.
[00:04:45] The processes that are important for life, which have been painstakingly shaped over evolution, are inextricably entwined with whatever functions our brains manifest. And we don't know yet how to think about the role of biological processes with respect to what we mean by intelligence, what's important and what isn't important for building our conception of intelligence. Is the computational view of our brains enough, or does it abstract out too much other stuff that for various reasons may be important for function and for intelligence and for experience?
[00:05:21] We're at this odd time right now with our metaphor for the brain, and the current metaphor is one of computation, like Turing computation, a certain kind of information processing. And that feels more right than all of the other metaphors that came before it, which relied on whatever happened to be the advanced technology of the day, like water pumps, like the telegraph, and on and on. But I say it's an odd time right now, or at least in my opinion it is, because we all know that those previous metaphors failed. So we know we're sitting in this field littered with dead metaphors, they're lying all around us, and we're just sitting next to this latest one that's still alive, clinging to the computational brain metaphor. And it may be right, or it may be just more right than any other metaphor that we've used thus far. It may still be missing things. I think the jury is still out on that question.
[00:06:19] So anyway, that's my little spiel for now. As you'll hear, there are plenty more takes on this question, and ask me in a week, right, and I probably will have changed my mind. However, for this episode, here is the important message that I took and that I think you should take moving forward. The message is this: there is no shortage of exciting and interesting and deep challenges to work on in the broad field of understanding and building intelligence. There are more now than there ever have been. So your possibilities are vast in terms of what paths are open for you. We all essentially have endless opportunity to explore what interests us. And that is awesome for curious and motivated minds like yours. Enjoy.
[00:07:10] So I'm Rodrigo Quian Quiroga, neuroscientist at the University of Leicester. I think the big challenge is what Minsky said, like, ages ago, and it's general intelligence. And actually I think there are two challenges, which may be linked to each other. One is general intelligence and the other is consciousness. So basically, how can we make a machine as intelligent as a human being? And I think we're still very far from that. And a perhaps related question is how can we make a machine conscious of its own existence? And I think these are questions that we have been avoiding for quite some time, or that scientists in artificial intelligence have been avoiding for quite some time, and maybe some people are now starting to think about it more and more seriously. I think this is the really, really big challenge. I mean, to have machines learn in very few trials, not with millions of examples and practicing and practicing, but just learning one-shot as we can do; to transfer knowledge; to develop common sense; and to be able to do analogies and inferences. And these are all things that Minsky said some time ago. And this we can put together as general intelligence. And maybe this development of general intelligence, of common sense, somehow is linked to having machines, I mean, developing self-awareness. Mazviita Chirimuuta. Hi Paul, it's nice to be back on your podcast.
[00:08:39] I think the most important challenge in neuroscience at the moment, and I think it's not just at the moment, I think this is the perennial challenge, is actually finding a way to deal with the complexity of the brain in such a way that you have models that are simple enough to allow people, especially the scientists themselves, to understand the phenomena that they're dealing with, but not so simple that they grossly oversimplify the brain. So it's finding the sweet spot, the balance between the complexity that is there in the systems and comprehensible models and theories.
[00:09:15] Chris Eliasmith. Let me move on to answer two other questions on your list, questions number two and three. I'm grouping these two together because I think they share an answer. Recall that the questions are, number two, what is the most important disagreement in the field and what is the right direction? And three, do you think scaling with more parameters and generic architectures is going to lead to human-level intelligence? One important disagreement in the field is whether we need to worry about cognitive-level processing and representation or not. I think Gary Marcus and Yann LeCun had something of a Twitter war over this one, actually. Essentially, models like GPT-3 assume we don't: if we have enough data, we can learn everything we need to from that data, including what we might have thought of as conceptual organization, cognitive manipulation of concepts, how to usefully model the world and all the relations in the world, and all this kind of stuff, all that higher-level cognitive stuff at a more abstract level. You can actually think of this as a version of the question of whether we should allow ourselves to impose structure on the networks and representations in our models. This is a version of the same question, because including concept-like representations is exactly an example of imposing structure on the network and representations. I'm actually a big believer in the importance of this approach for several reasons. One, it's resulted in huge successes in the past. Just look at convolutional neural networks. This is the workhorse of deep AI, and those networks adopt the structure we find in the biological visual system. Another reason is that nature's had billions of years to determine the right network structures and representations that we start with when we're born. So the amount of computation that evolution represents is kind of unfathomable. So we're going to have to jump the queue, if you will, in some way. And I'm guessing that the insights we get from neuroscience and psychology and good old ingenuity are the kinds of things that are going to let us not pay that huge computational cost.
[00:11:07] And finally, I come back again to the Spaun model. There's really no way we could have started with a blank slate or a really deep generic network and just trained that network in order to give us a model that does what Spaun does. Of course, because Spaun is a neural network, we can actually backprop through the final version after we've built it, and we can optimize. But to get it to that original functionality using just backprop and a lot of data just isn't a viable option, for several reasons. One is that models like GPT-3 cost about $12 million to train their 175 billion parameters. If we assume that we can just scale by parameter count, Spaun would still cost $1.4 million to train. And frankly, my lab doesn't have that kind of budget.
[00:11:50] Second, this approach assumes that there is a dataset that covers the 12 different tasks that Spaun can do. Unlike the terabytes of data on natural language, getting that kind of data for intelligence tests and copy drawing and so on just isn't possible.
[00:12:05] And third, we haven't seen much evidence that we can train a single model that does a lot of different tasks, like vision, motor control, reinforcement learning, intelligence tests, and so on, with this one big data set. This kind of heterogeneity of task is really foreign to most neural network models.
[00:12:25] So ultimately, I think that my arguments speak to a need for integrating methods, not that one's better than the other. I love deep learning. My lab uses it all the time. But we also use concept-like representations, and we impose the structure of the mammalian brain on lots of our models. In some ways, the approach I'm suggesting here is what I argued for in my book How to Build a Brain: it's an integrated, hybrid approach to building systems that can achieve biological cognition.
[00:12:51] My name is Jim DiCarlo. I think one of the biggest disagreements and challenges right now is that there's a sense, in neuroscience at least, that models that are going to provide understanding have to be somehow elegant and simple.
[00:13:06] And I think there's a need to accept that complicated models are at least a bridge to simple understanding, but may also be the form of understanding that we're going to need to learn to live with and to appreciate how great it can be.
[00:13:24] This is Paul Cisek from the University of Montreal.
[00:13:27] Well, I think the big debate is really the same one that's been going on for decades, and that's just the basic ideas about the functional architecture of the brain, particularly the role of representations.
[00:13:40] So there's many theories that see representations as sort of the central elements of how everything works. Not just cognition, but sensation, motor control, everything. And in that tradition, you get models where the goal is to answer questions of how one representation is transformed into another. That kind of has an implicit assumption that someday you're going to just kind of connect up all these modules and voila, you're going to have a working model of the brain.
[00:14:09] But on the other side of the debate, there are theories that reject that centrality of representations and focus instead on process models, where it's all about the dynamics of interaction between the organism and the environment, etc. And I'm much more in favor of that latter view, except that I don't reject the idea of representations, as many people in that camp do. So I do think representations are very useful. I just think we need to emphasize the mechanisms.
[00:14:42] So essentially, I think if you think about representations in the context of a particular mechanism, then they're useful, they're performing some function. But if they're just at the border, the input or the output of a mechanism, then it's really like passing the buck. And I don't think we're just going to be able to connect up different models and have something that really works. I think they're really only meaningful in the context of some kind of a process model.
[00:15:12] So I think that debate goes on, though, because I think a lot of people have become a bit polarized.
[00:15:19] It's either all about representations or they're absolutely not acceptable. And this debate has been going on, really since the early days of psychological science, and I think it still goes on. And I think maybe in AI, most people just assume the traditional view. But I think that's a big challenge because both views, I think, have a lot to offer each other.
[00:15:46] I think one way to move forward is to take the mechanisms more seriously rather than the representations. I think we have to.
[00:15:54] You know, it's not that the mechanisms should be defined in terms of how they take an input representation, like an image, and create an output, some kind of a model of the world. I think instead we should think of the actual processes, the sort of behavioral capacities.
[00:16:12] And then in the context of specific behavioral capacities, only then do we want to talk about what sort of interim patterns of activity or representations might be useful. And they're not going to be the traditional type. So I think if we continue defining the problem in terms of these little modules that are sort of like input-output modules, I don't think we're really going to get a deep understanding of how things work together.
[00:16:41] I'm Nathaniel Daw at Princeton University, Princeton Neuroscience Institute.
[00:16:47] I think the big challenge in neuroscience, and it has been for a while, is how to deal with large amounts of data. And it's not just how to deal with large amounts of data in the sense of statistics, but how to ask a question to which the answer is "record 1,000 neurons", or how to form a hypothesis about how the brain works that you can test by recording 1,000 neurons.
[00:17:21] And I think part of the challenge is that, faced with all this data, it's very easy to do descriptive things. And a lot of the fancy trends in neuroscience are just ways of describing large amounts of data, like manifolds or graph theory and networks.
[00:17:42] They're just ways of writing down facts about ensembles of measurements and simplifying them to some extent. But they don't really answer questions. It's hard to even know what the question is to which the answer is all this data.
[00:17:58] My name is Jessica Hamrick. So what is currently the most important disagreement or challenge in neuroscience or AI, and what do you think the right answer or direction is?
[00:18:08] For me, the biggest challenge in AI, and I think it's also a question that we haven't really answered in neuroscience or cognitive science, is how do we do abstractions? What does that look like? I think we're pretty good at training systems that are able to learn really good low-level or fine-grained control. And we can also train systems that are able to solve really, really challenging reasoning problems given the right level of abstraction. But we don't really know how to connect those two together. We don't know how to go from raw sensory data to the right level of abstraction. And it doesn't seem like that sort of thing has really emerged from training these big end-to-end systems. No matter how much data we seem to throw at them, they still don't quite connect up between these low levels and these high levels of abstraction. So I think that understanding how we get things like symbols out of perceptual data is really a major, major challenge.
[00:19:07] I'm Russ Poldrack. The question I'm going to answer is what is currently the most important disagreement or challenge in neuroscience and or AI, and what do you think the right answer or direction is? So I think one really interesting challenge is figuring out how to bring together two very different ways of thinking about brains. One that I'll call computational, tries to understand the specific computations being performed by particular neural circuits or networks. The other, which I'll call network neuroscience, for lack of a better term, thinks about brains in terms of large scale dynamical systems and network modeling. And so there seems to be a fundamental disconnect between these two ways of thinking about the brain. The computational view thinks about the individual computations being done by particular areas, where the network view sort of treats different areas as kind of fungible, other than kind of being part of this bigger network and maybe being part of particular paths through the network. So each of these seems critically important to understanding how brains work. But because they come from very different approaches, the computational view really comes more from kind of computer science ideas and kind of computational modeling, whereas the network neuroscience approach really comes more from kind of physics and dynamical systems.
[00:20:21] It's a challenge to bring those together. Now, there are certainly people working at those crossroads. David Sussillo is somebody who comes to mind as an example. But I think there's a ton of challenging work to be done to try to understand how brain dynamics and network structure give rise to cognitive computations, and how that might help us understand how to build intelligent systems that are as robust and as low power-consuming as human brains are. Pieter Roelfsema. The one thing that I think is a really important development in neuroscience is the ability to record from many neurons at the same time and also to interact with them, stimulate them, so that we can now create interfaces with the brains, first of animals, maybe later of humans, that will give rise to new functionality.
[00:21:13] So in my own work, I'm really interested in creating an interface with the visual cortex to restore a rudimentary form of vision for blind people. But this technology will also be helpful for the creation of much better brain computer interfaces.
[00:21:29] And I think we're now in a time where all these developments that we need for brain computer interfaces are coming together, thinking of new technologies, brain chips that allow for fast communication without wires, but also the methods to decode and encode brain activity based on artificial intelligence. So all those things are now coming together and this is a very exciting time to work in this field.
[00:21:59] Yeah, yeah. So we created an interface in monkeys with 1,000 electrodes, and we demonstrated that it's possible to impose meaningful patterns. So we trained the animals to recognize letters, and we can just put the letters in their brain and they recognize them.
[00:22:20] My name is Konrad Kording. I'm a neuroscientist at the University of Pennsylvania. What is currently the most important disagreement or challenge in neuroscience and AI? Let me answer it for neuroscience. The biggest and most consequential disagreement, in my view, is the role of reductionism. How should we understand the brain? There is this old reductionist idea, which is that the right way to understand the brain is to understand the brain areas, understand what each cell does within them, or what the population does within them, and basically produce a reductionist model of thinking. There's an alternative view that might hold that that first, reductionist view cannot possibly work.
[00:23:09] Why would people say that? They would say that basically there are so many parameters, the space of potential models is so big, that no amount of measurement can ever get us there.
[00:23:21] If the reductionist dream can work, then that is arguably the right way of doing neuroscience.
[00:23:29] Alternatively, you can say there are fallback options. If we believe it doesn't work, we might want to understand how the brain learns, assuming that what it learns from a complicated world is too complicated.
[00:23:43] There's yet another possibility which is just normative models where we could say what we are looking for is basically just a model that says in which way could the brain be optimal?
[00:23:58] And I think this discussion carries over into artificial intelligence.
[00:24:06] How can we understand an artificial intelligence system? There's a lot of emphasis on people basically applying methods like those they use in neuroscience to artificial intelligence, machine learning, deep learning methods.
[00:24:21] But it's possible that we cannot understand it, in which case we might rather want to talk about, say, the learning algorithms that get there. Now, to put this whole discussion a little bit into context, there's no doubt that for a lot of the medical questions, the reductionist paradigm is the right way to go. At best, we can hope to get at some mechanisms, mechanisms that we can interfere with. The question is, is that endeavor of finding cures for diseases actually all that tightly coupled to understanding how we think?
[00:24:56] It might not be all that coupled.
[00:24:59] It might be possible that we want to drive towards mechanistic explanations for the sake of curing diseases, and that at the same time we want to drive towards non-mechanistic explanations when we want to understand human thoughts.
[00:25:16] This is Matt Smith from Carnegie Mellon University.
[00:25:19] What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? I don't know if this is the most important disagreement, but one of the things I always read with interest is when people talk about the idea of the singularity in AI, that is, strong AI or artificial general intelligence, where the AI has capacity that's close to that of a human. This is of course fodder for sci-fi books and sort of fun to think about.
[00:25:53] But I guess I've never talked to a neuroscientist who really thinks that we're genuinely that close to a singularity. It's usually popular science people or computer science people, and I think it bears on a fundamental difference between how people think about the brain in terms of AI versus neuroscience. So in terms of what do I think the right answer direction is? I would say if we can't give up that idea, we're probably holding ourselves back from progress a bit. That is to say AI can do all sorts of wonderful, cool things.
[00:26:33] If we really think that we're actually getting close to some singularity, I think we're underestimating the scope of the problem.
[00:26:42] Right. I don't think it's really actually genuinely that close that we will, in a computer, replicate something that has general artificial intelligence. Instead, I think we should be focusing on the things that we are making great progress on, and that's using AI to solve practical problems, which, of course, has been wonderful in many ways, and also trying to use AI as a tool to understand the brain. And when we did our interview, we talked a little bit about this. There's certain examples, reinforcement learning or object recognition, where people have made some really neat progress in comparing how AI solves problems to how the brain seems to solve problems. And that's been, I think, really mutually beneficial. So I guess for me, one of the biggest, as an outsider who's not working directly on developing new AI tools, one of the disagreements that I think is critical is giving up on some imagined view of what AI will do and instead focusing on the really great things we're doing right now with AI and neuroscience.
[00:27:55] Hello, this is Rafal Bogacz, and I would like to discuss the question concerning important challenges on the interface of neuroscience and AI. There is currently a big gap in the number of training iterations required to train humans and artificial neural networks. So I feel it would be very fruitful to investigate how biological neural networks achieve such high sample efficiency. Developing biologically plausible models that can learn as efficiently as humans could also help to investigate deep learning in the brain. A lot of great work has been done so far comparing neural activity observed in the brain with activity in neural networks. However, all this work has employed already-trained neural networks. Once rapidly learning models are developed, one would be able to compare the activity in the brain and in the models during learning, with the same tasks given to both humans and neural networks. This would allow comparing the error signals in the models with neural activity. Such a comparison would be very useful for distinguishing between different models of deep learning in the brain, which make different predictions about how the errors are represented in brain activity.
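As a concrete, heavily simplified picture of the kind of comparison Bogacz describes, here is a minimal predictive-coding-style sketch in which prediction errors are carried by explicit error units; in such a model it is these error signals that one could, in principle, line up against neural activity recorded during learning. The layer sizes, learning rates, and random "stimulus" are purely illustrative assumptions, not any published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: a latent layer predicting a "sensory" layer through
# generative weights W. Prediction errors live in dedicated error units.
n_sensory, n_latent = 20, 5
W = rng.normal(scale=0.3, size=(n_sensory, n_latent))

def present_stimulus(y, W, n_steps=50, lr_x=0.1, lr_w=0.01):
    """Infer latent causes of y, then do one local Hebbian-style weight update.

    Returns the updated weights and the final error-unit activity; it is this
    model error signal one could compare with recorded neural activity.
    """
    x = np.zeros(W.shape[1])          # latent estimates, reset per stimulus
    for _ in range(n_steps):
        e = y - W @ x                 # activity of prediction-error units
        x = x + lr_x * (W.T @ e)      # latents updated by the errors they cause
    W = W + lr_w * np.outer(e, x)     # local, error-times-activity learning
    return W, e

y = rng.normal(size=n_sensory)        # a fake, fixed "stimulus"
for trial in range(100):
    W, errors = present_stimulus(y, W)
print("mean |prediction error| after learning:", np.abs(errors).mean())
```

In a rapidly learning model of this sort, one could track the error-unit trajectory across trials and ask whether any recorded population shows a matching profile, which is the comparison Bogacz is proposing.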
[00:29:16] This is John Krakauer.
[00:29:19] I think that's easy. I mean, I think the biggest challenge, and you can see it all the time in conversations across the specialties, is common sense, semantics, and meaning. I think that's the biggest challenge. And either attempts are made to explain it away.
[00:29:42] Right? So the enactivists and the embodied people and the dynamicists and the evolutionary arguments and affordances. In other words, there's a whole way to try and go against this intentionality version of representation, where it's about something and you have to think about semantics and meaning. I think that is by far the biggest puzzle.
[00:30:14] It's at the core of what people colloquially call thinking. And either one admits it and says it's not clear how we're going to get there and that it's integral to thinking, or one finds some way to deny it's an issue and explain it away. Either one of those stances nevertheless simply highlights the fact that it's a huge challenge right now. Marcel van Gerven. I'm chair of the AI department at the Donders Institute. One of the points which I think is really important, and that has been on your show quite a few times as well, a very big challenge, is to truly understand synaptic plasticity mechanisms. Right? So really understanding how the brain learns. I feel that there are a lot of recent advances there that move us in the right direction, and also in my own department we have been working on this. So at the most recent NeurIPS conference we had a presentation of our work which basically shows that activity-based learning, related to target propagation, can actually be made stable for very deep networks. That's a good development. It means we don't need gradients, the propagation of gradients; we can just focus on the propagation of target activations. So that could be one piece of the puzzle. But I think a lot of new stuff is coming up. So I think that's highly exciting both for neuroscience but also for AI.
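To give a concrete picture of what "propagating target activations instead of gradients" can mean, here is a minimal sketch in the spirit of target propagation. Everything in it, the two-layer linear network, the sizes, the learning rate, and the damped approximate inverse used to back-project targets, is an illustrative assumption, not the specific model van Gerven describes; the point is only that each layer is trained locally toward a target activation, with no backpropagated gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer linear network trained by propagating target activations
# instead of gradients. Sizes, scales, and the damped inverse are assumptions.
n_in, n_hid, n_out = 8, 6, 3
W1 = rng.normal(scale=0.3, size=(n_hid, n_in))
W2 = rng.normal(scale=0.3, size=(n_out, n_hid))

x = rng.normal(size=n_in)               # one training input
y = rng.normal(scale=0.3, size=n_out)   # its desired output (the target)

lr, damping = 0.02, 0.1
for step in range(300):
    h = W1 @ x                          # forward pass: activations only
    y_hat = W2 @ h
    # Back-project the output target into a hidden-layer target using a
    # damped approximate inverse of the layer above.
    back = W2.T @ np.linalg.solve(W2 @ W2.T + damping * np.eye(n_out), y - y_hat)
    h_target = h + back
    # Each layer makes a purely local update toward its own target;
    # no gradient signal is propagated through the network.
    W2 = W2 + lr * np.outer(y - y_hat, h)
    W1 = W1 + lr * np.outer(h_target - h, x)

print("remaining output error:", np.linalg.norm(y - W2 @ (W1 @ x)))
```

Nothing here addresses stability for very deep, nonlinear networks, which is exactly the part van Gerven says their NeurIPS work contributes.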
[00:31:48] Hi Paul, this is Gyuri. What is currently the most important disagreement or challenge in neuroscience and AI? AI and robotics often claim that they are biology- or brain-inspired, as is obvious from the use of the word intelligence. But in my view, they are inspired by the wrong brain model, which one can call a blank slate or tabula rasa model.
[00:32:12] Let me explain. Historically, research on the brain has been working its way in from the outside world, hoping that such systematic exploration will take us someday to the middle and on through the middle to our actions. The assumption is that the brain, or more precisely the mind, is initially a blank slate, filled up gradually with experience in an outside in manner.
[00:32:38] Thus the initially empty mind becomes more complex with experience.
[00:32:43] AI, and particularly its connectionist vein, adopted this tabula rasa model by training computational models to symbolize or represent input patterns. This prevailing view is perhaps most explicitly expressed by Alan Turing, the great pioneer of mind modeling. Let me quote Turing.
[00:33:05] Presumably the child brain is something like a notebook, as one buys it from the stationer's.
[00:33:15] As a result, AI platforms built this way do indeed become more complex with extensive training.
[00:33:21] So much so that at some point, new learning induces catastrophic interference and the machine forgets everything. This doesn't happen to my brain.
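To make the catastrophic-interference point concrete, here is a deliberately stripped-down sketch: a single set of weights trained with plain gradient descent on one task and then on a second task, with no replay or regularization, loses its solution to the first task. The linear "network", sizes, and learning rate are illustrative assumptions, a caricature rather than a realistic model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of catastrophic interference: one set of weights trained
# sequentially on two tasks, with no replay or regularization.
n_in = 20
X_a, X_b = rng.normal(size=(100, n_in)), rng.normal(size=(100, n_in))
w_a, w_b = rng.normal(size=n_in), rng.normal(size=n_in)  # two "teacher" tasks
y_a, y_b = X_a @ w_a, X_b @ w_b

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, epochs=300, lr=0.1):
    for _ in range(epochs):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)  # full-batch gradient step
    return w

w = np.zeros(n_in)
w = train(w, X_a, y_a)
print("after task A: loss on A =", round(mse(w, X_a, y_a), 4))
w = train(w, X_b, y_b)  # keep training on task B only
print("after task B: loss on A =", round(mse(w, X_a, y_a), 2),
      "| loss on B =", round(mse(w, X_b, y_b), 4))
```

Running it, the loss on task A, which had been driven close to zero, becomes large again once training on task B finishes, which is the forgetting pattern being pointed at here.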
[00:33:33] An alternative, brain-centric view, the one I am promoting, is that self-organized brain networks induce a vast repertoire of preformed neural patterns while interacting with the world. Some of these initially nonsensical patterns acquire behavioral significance, what you can call meaning. Thus it is an inside-out model: experience is primarily a process of matching between pre-existing neural patterns in here and events outside there in the world. The brain's primary preoccupation is to maintain its own dynamics. May I speculate that, using this correct brain model, entirely different machines can be built, based on a fusion of AI and robotics, just like the brain.
[00:34:25] In the human versus machine comparison, we often find ourselves simultaneously saddened and enthusiastic when we learn that a new AI machine outperformed a human being.
[00:34:40] However, that robot or supercomputer was built by hundreds to thousands of interactive human brains.
[00:34:49] No wonder such a machine can beat a single Homo sapiens.
[00:34:53] We often mix up the comparison between human versus machine knowledge and the knowledge of humankind versus knowledge of machine kind.
[00:35:04] Our knowledge is rich not only because of the performance of the human brain, but also because there are 7 billion brains on this planet and a good fraction of them can communicate effectively, facilitated by the Internet. The Internet is the new agora for exchanging externalized knowledge on a global scale. We humans are designing and studying AI rather than the other way around.
[00:35:32] Is anyone concerned that this relationship will flip? Come on.
[00:35:37] Thomas Naselaris, Department of Neuroscience, University of Minnesota. What is currently the most important disagreement or challenge in neuroscience and AI? What do you think the right answer or direction is?
[00:35:50] This is maybe not a controversy, but it's a tension, and I think that tension is very important. A tension that's very important is noise.
[00:36:02] So the brain is extremely noisy in a way that many AI systems are rarely designed to be. For example, the mammalian visual system generates highly structured activity in the absence of visual stimulation.
[00:36:18] And we don't really know why that is and what it means. And I think that determining if this kind of brain noise is just a biological inevitability that is best avoided or engineered out of AI systems, or an indication of computational or hardware principles that AI should embrace and invest in, is a big and important challenge.
[00:36:45] It marks, I think, an important difference between biological and AI systems, and we need to get to the bottom of that.
[00:36:54] I'm Steve Grossberg. What is currently the most important disagreement or challenge in neuroscience and/or AI?
[00:37:03] And what do you think the right answer or direction is?
[00:37:08] Well, I don't feel qualified to identify the most important disagreement or challenge in either field. However, I can provide some personal perspectives.
[00:37:21] Let's start with neuroscience in general. I believe that neuroscience is developing wonderfully well, at least technically.
[00:37:33] The new methods for probing individual neurons and small networks of neurons, such as optogenetic methods, are very impressive indeed. On the other hand, there often seems to be a lack of understanding, by the young investigators who use these methods, of the functional meaning of their own data for behavior.
[00:37:57] And the methods themselves often provide no link between brain mechanisms and psychological functions, or said more simply, of how our brains make our minds. But without such a link, we can't really understand how our brains work and what our brain mechanisms are for.
[00:38:20] Such a link cannot be made by current experimental methods, because the emergent properties that characterize psychological functions arise from interactions between many thousands or even millions of neurons.
[00:38:38] Only an appropriate neural model about how our brains make our minds can explicitly derive such emergent properties from model neural networks.
[00:38:52] For the past 63 years, I and my colleagues have been steadily discovering increasingly comprehensive neural models of how our brains make our minds.
[00:39:04] These discoveries have provided principled explanations of a very wide range of mental capabilities in both normal individuals and clinical mental disorders. To this end, I and my colleagues have by now published over 550 articles in multiple prestigious neuroscience and cognitive neuroscience journals, among others. Moreover, many of these contributions are highly cited. For example, for people who follow these kinds of statistics, I have around 78,000 citations on Google Scholar and an h-index of 128.
[00:39:47] I also served as founding editor in chief of the main Neural Networks journal from 1987 to 2010, as well as serving on the editorial boards of 30 other journals. In 1988, I founded the International Neural Network Society so that interdisciplinary scientists could come together to discuss their results at least once a year.
[00:40:13] And in addition to presenting our work at neural network modeling conferences, I and my colleagues have for over 40 years, frequently presented our latest results at mainstream neuroscience conferences.
[00:40:27] But despite all of our efforts, many neuroscience investigators seem uninterested in the fact, for example, that I may have predicted their latest experimental results 20 years ago and explain the underlying mechanisms that gave rise to that prediction.
[00:40:46] Or they may not even realize that their new result is just a variation of experimental results that were published years ago. They often don't understand the connections between experiments because they often don't understand the functional meaning of their own data.
[00:41:05] Partly this may be due to the old laboratory mentality of thinking that the only data worth knowing are the data collected by members of your own lab or the labs of your immediate friends, not the full range of relevant interdisciplinary data from all labs, past and present.
[00:41:26] So, in summary, I believe that the Society for Neuroscience and other neuroscience organizations are currently failing to provide enough infrastructure and training opportunities to develop the kind of fluent interaction between theory and experiment that's essential for the future health of the field. Although things are admittedly much better than they were, let's say, 30 years ago in that regard.
[00:41:55] So let me now make some comments about AI.
[00:41:59] AI, just like neuroscience, is a huge field that I won't try to comment on in its full breadth. I will focus my comments on the connection between AI and neural networks research.
[00:42:14] First, this kind of connection is a good thing, if only because our most successful examples of intelligence are human brains, and that is what biological neural networks try to understand.
[00:42:29] And this connection between AI and neural networks did not always exist. In fact, Marvin Minsky, who was one of the leading pioneers of AI, at first shunned neural networks, if only because his own initial research on neural networks failed to make significant progress. Marvin considered himself one of the smartest people around, so if he failed, it followed that no one else should waste their time trying.
[00:42:59] Instead, Marvin turned to the computer as a metaphor for intelligence because he couldn't understand the brain.
[00:43:07] Many years later he did get interested in neural networks and the brain, but by then he couldn't catch up with all the amazing and rapid progress that had been done in the intervening years.
[00:43:21] I really can't resist telling you the following anecdote. After Marvin got interested in neural networks again, I got him invited to give a lecture at the main International Neural Networks Conference. At that time, of course, he didn't prepare his lecture as usual, so most of it consisted of extemporaneous jokes. But he did say one thing that had the whole audience roaring with laughter. He said that the biggest mistake he made in his earlier life was to underestimate Grossberg. And everyone thought that was a total riot.
[00:44:05] Currently, deep learning has become very popular in AI.
[00:44:10] Deep learning uses the backpropagation algorithm to learn, so many people call deep learning backprop on steroids.
[00:44:21] In 1988 I published an oft-cited article that lists 17 fundamental problems of backpropagation, and showed that adaptive resonance theory, or ART, which I'd introduced in 1976, had solved them all. These were not minor problems. For example, both backpropagation and deep learning are untrustworthy because they're unexplainable.
[00:44:51] They're unreliable because they can experience catastrophic forgetting. As a result, no life or death decision can justify using them like a financial or a medical decision.
[00:45:05] Moreover, it's perfectly clear that deep learning is not how our brains work.
[00:45:11] And that was 32 years ago.
[00:45:14] Yet today, deep learning is all the craze, and adaptive resonance theory is unknown to many people in AI.
[00:45:22] Despite the fact that I and my colleagues have developed adaptive resonance theory into the most advanced cognitive and neural theory of how our brains autonomously learn to attend, recognize and predict objects and events in a changing world.
[00:45:43] That is, adaptive resonance theory is one of several of our models that are helping to achieve autonomous adaptive intelligence, which I believe will rapidly become the most important computational paradigm of this century.
[00:46:01] Moreover, since its inception, adaptive resonance theory, or ART, has been used in hundreds of large-scale applications in engineering and technology.
[00:46:13] Because ART has a unique combination of learning and prediction properties that alternative algorithms don't have.
[00:46:22] ART is, moreover, not just another model. It is, in a fundamental sense, unique.
[00:46:30] I say this because already in 1980, I was able to derive ART from a thought experiment about a universal problem: how do you autonomously correct predictive errors in a changing world using only local computations?
[00:46:51] So ART is, for better or worse, here to stay.
[00:46:55] AI thus has many of the same problems as neuroscience, of cliques and insufficient training to inform its practitioners about what is known.
[00:47:07] Hyperspecialization of this kind seems to be an epidemic in science and technology.
[00:47:16] Let's take a quick little break and then we'll get back to the responses.
[00:47:23] Well, Brain Inspired has inspired me to think more about brains and continue to think about a career in neuroscience. I feel like, since listening to the podcast, lots of the people who I've been looking up to have become more human in my mind. They're more approachable, they're more like me, which makes me really, really want to go into this field. And I have found more and more motivation to think about the different aspects of neuroscience and AI and how they meet. So if before I thought it was only computational neuroscience, now I can see there's systems neuroscience and theoretical neuroscience and a few other angles one can look at.
[00:48:13] So thank you Paul for doing such an amazing job.
[00:48:17] It's the only podcast for which I go on two-hour bike rides, just because I want to pay full attention to your podcast. Thank you. David Poeppel, and I work at NYU and at the Max Planck Institute. So everybody assumes that we have in neuroscience and psychology some story about memory, but the fact of the matter is we don't understand how memory works at all. We have no story. And I think it's one of the biggest parts of the bankruptcy of neuroscience that we don't understand how information is stored in a mechanistic, neurobiological sense. We have metaphorical things that we say about synaptic patterns, but that's not an explanation, and it's not a good account of the phenomena that actually constitute what memory is: information being carried forward. And so that is, I think, going to hold us back until we have a major breakthrough on what it means to store information, for reals.
[00:49:24] It's Patrick Mayo. What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? I'm not really sure. The first thing off the top of my head is sort of an undercurrent of disagreement between single-neuron and population recordings, where single-neuron recordings were the conventional approach in systems neuroscience for a long time, and now there are a lot of interesting results using population recordings.
[00:49:56] What do you think is the right answer?
[00:49:58] Yeah, again, this is going to be a pretty basic thing, but I think the right answer depends on the question that you're asking. If we want a fairly cursory, but I think satisfying, understanding of the brain and behavior, I think population recordings are going to be hugely useful for that. That being said, you of course will need information about individual neurons. And so of course, single-neuron recordings will not go anywhere anytime soon, as far as I know. So my name is Stefan Leinen. So, of course there are a number of challenges that we're facing right now in neuroscience and AI. It's an exciting time, but one that I'd like to really pick out is a dualism, or a debate between two sides, which has been around for decades in AI. And that's the debate around the connectionist approach versus the symbolic approach. And I think versus is the correct term here, because usually you're in one camp or the other. These are two very distinct approaches to building AI systems. The symbolic approach is the traditional way of designing AI systems, where we think about how a mind would work or how an intelligent being would solve a problem and then just go ahead and build it, usually based on some kind of logical foundation or using programming. Whereas the connectionist approach, of course, is more based on machine learning and often leads to a black-box solution, which is not the case in a symbolic approach. And I think what we've seen in the past decades is a number of AI winters, but we've also seen this swinging movement back and forth, where sometimes the symbolic approaches are in vogue and sometimes the connectionist approaches. And in the past decade this connectionist swing has had quite a run, so you don't hear that much about the symbolic approaches at the moment. And I think one of the reasons why connectionism is so widespread and so impactful and so promising is because it just scales great. We have an abundance of data, we have cheap processing and memory storage, as we all know, and it's a very scalable approach because it works in parallel and it doesn't require a team of programmers to build something, because it can self-learn. And I think the major challenge that we're facing is something close to what in physics would be called the grand unification of quantum theory and relativity theory. We have this in AI as well, I think. And what's interesting at this point in time: it used to be that the symbolic approach was in the lead, and then it was up to the connectionists to explain how they would sort of attach their system to the symbolic way of reasoning. So you have a logically reasoning robot that also, unfortunately, required a camera and some wheels to move around, so then we used connectionist black boxes just behind the camera. And so it was up to the connectionists to sort of translate the real world onto this symbolic system, the logical system. And it didn't work.
[00:53:18] And now we're in a different mode, where the connectionists are actually in the lead. And so we're approaching this problem from a completely different angle. Now it's up to the symbolic people, the symbolic groups, to show how you can express their theories, how you can use logic and causal reasoning and design, but map it onto a connectionist system and make it scalable and make it learnable. I think we're up to this challenge. I think we're actually right there. I think this will be the decade where we approach this challenge, and it will lead to awesome stuff. So many of the benefits that we get from the symbolic approach, one being able to peek inside how a system works, or manipulate it, play around with it because it's designed, or even imposing norms and ethics and sort of legal constraints within the system. I think it's going to be possible if we are able to meet that challenge and express symbolic approaches in AI as connectionist systems.
[00:54:24] This is David Krakauer. Perhaps my second favorite, or most challenging, question is what I, or any of us, consider the most important disagreement or challenge in neuroscience and/or AI. And I think it is really the challenge between neuroscience and AI. They don't face the same challenge; they are fundamentally opposite approaches to the same challenge, the challenge being how do we understand what is meant by intelligence, and how do we implement it? And neuroscience is staunchly reductive. It by and large adheres to the position that an understanding of brain requires an understanding of neurons and neural circuits and neural modules.
[00:55:25] Whereas AI, at least in its current preferred incarnation as machine learning, pays scant regard to any of that. It's algorithmic, it's a statistical model that could be implemented in any kind of material, where intelligence, etc., is understood in terms of simple mathematical and statistical concepts like classification or clustering.
[00:55:53] So it's rather like the final question on mind and brain.
[00:55:57] I think what we're seeing now in the early 21st century is the school of brain, which is neuroscience and to some extent cognitive science, and the school of mind, which is AI and machine learning.
[00:56:11] And it's not clear, I think, how those two will be reconciled. And the architecture of neural networks, to the extent that they resemble in some analogical fashion the brain, I think is spurious. And so that shouldn't mislead us into thinking that the two approaches have a deep structural correspondence. And the interesting thing, I think, about machine learning and AI is the fact that it's being deployed to solve problems that the human brain does not solve very well, like, for example, being highly effective at playing combinatorial games and potentially in the future, solving mathematical, even natural, scientific problems.
[00:56:58] So it could be that solving this question will force us, in some sense, to reconcile what machine learning algorithms are doing in a more general language, as, for example, we've seen in attempts to apply statistical physics or the information bottleneck to deep learning, or as John Hopfield did in understanding content-addressable memory in the collective dynamics of neurons.
[00:57:33] So maybe there'll be some weird rapprochement achieved by the mathematics that are used to understand each of those systems in their own right. And some smart person will say, wait a minute, you're both speaking the same dialect. And perhaps that will allow us to see how these things are connected. But currently I don't see that.
[00:57:58] And perhaps the great utilitarian edge that machine learning has will lead to that approach, at least in terms of most smart people's efforts becoming dominant.
[00:58:13] This is Wolfgang Maass from the Graz University of Technology in Austria. I think many researchers now realize this trivial fact: that the brain has been shaped by evolution, it has not been designed by a theoretician of any type. And I think we are missing concepts and methods for really understanding such systems, complex systems that are shaped by evolution, by some heuristic optimization process.
[00:58:43] And this is, I think, something where we're just standing at the beginning. It's somewhat related to reverse engineering artificial neural networks, and also to making artificial networks interpretable. But those are still artificial networks that have been optimized by one algorithm, not by the zillions of algorithmic hacks which evolution was likely to use.
[00:59:12] Uri Hasson from Princeton University. I think AI is an amazing engineering feat. It is really changing the way people interact with computers.
[00:59:24] The main challenge is to understand: is it relevant to the human brain? Is it a completely different way to do stuff, or is the brain using similar tricks to act? Do biological neural networks and artificial neural networks belong to the same family of models, using similar tricks, or is this a completely out-of-domain achievement that has nothing to do with the way the brain is working?
[00:59:50] This is Steve Potter from the Georgia Tech Laboratory for Neuroengineering. What's currently the most important disagreement in neuroscience and/or AI, and what do you think the right answer is? Well, there are a lot of different disagreements. I wouldn't say this is the most important, but an important disagreement in neuroscience is that there are those who think consciousness is a hard problem or something special, versus others who think it's just physiology. I am in the latter camp. It's not any harder than other neuroscience problems, which of course are very hard.
[01:00:27] But we need to stop thinking in binary terms. Consciousness is not really an all or none thing or a unitary thing. I think of it as a large set of adaptations that organisms evolve to help them respond appropriately to their environment and whatever situation they find themselves in.
[01:00:47] Some organisms may have more of these adaptations than others.
[01:00:52] Okay, to tackle consciousness, I think we need to keep breaking it down into its different capabilities and study the circuits that are active when those capabilities are being used or being implemented. This is basically following the plan that Christof Koch and Francis Crick laid down when they studied visual awareness; that is, they were looking for the neural correlates of consciousness. As far as the feeling of what happens, or qualia, as people call it: we need to start by studying all the circuits that lead to any feelings at all. Where do feelings come from? Those are just more circuits that we can study, the neural correlates of feelings. Some of them are tuned to take internal brain states as their inputs.
[01:01:41] So there needs to be sort of a field of study of neuro-interoception, interoception being, you know, sensing signals from within your body. In this case, we're talking about sensing signals from within your brain or your nervous system itself.
[01:01:59] Talia Konkle. What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is?
[01:02:11] I don't know the answer to this one. I can think of two possible things. One is, maybe, I think it's a subtle one, I don't know if it's a voiced disagreement, but maybe it is: is there actually really such a thing as artificial general intelligence? I think there's a sense that...
[01:02:37] So hard to talk to no one.
[01:02:40] I kind of lost my train of thought there, because then I got distracted by my other answer, which was, oh, because I just attended this panel, and I thought one of the discussions that went back and forth was kind of interesting, which was: how do you make progress in this field? And there were sort of two sides that were expressed in this debate. On one side it was, we need to have principles that guide our models, because there are just too many possible parameters and architectures and ways of connecting things, so we need to really distill the principles, distill the phenomena, and build models that manifest those either directly or as emergent properties, and it really should be sort of principle-driven progress. Versus the other side, which was like, well, all the progress of the past from trying to design in principles has, you know, hit a wall, and maybe we shouldn't try and design those principles. We should just build systems that can learn the solution, and then we could discover the principles based on whatever solution got learned. And those seem to be really taking off right now. But how far will they go? Will those hit a wall before we need principles again? Maybe this is an artificial dichotomy, and it seems like we probably need both. But I thought that was kind of an interesting challenge for how you approach progress in this field. And not surprisingly, I think, I know, I would chart a middle ground. I like principles, but I'm not afraid to let some learning happen and then study the artifact that got built to understand what principles are hidden inside it.
[01:04:30] Matt Botvinick. What is currently the most important challenge in neuroscience and AI, and what do you think the right answer or direction is?
[01:04:40] I want to address a challenge that is limiting our ability to bring important questions from AI to neuroscience.
[01:04:53] And the problem is that we don't have a crystal-clear understanding at this point in AI of what we cannot do or what we don't understand.
[01:05:05] It's not that we're at a point where we can do anything and where we understand everything. It's that we're using a set of techniques whose properties and potentialities we're not completely clear on. We're learning about them. And because they depend on learning, it's not even clear when we succeed in AI exactly what the mechanisms are that are responsible for the success and whether those are really general enough to support what we want to do next.
[01:05:37] So clearly there are many mysteries in cognitive computation that we can bring from AI and knock on the door of neuroscience and say, can you help us figure this out? There are many issues related to credit assignment, the binding problem, and so forth. But in general, when colleagues of mine in neuroscience ask me, in my AI researcher role, hey, what questions are coming out of AI that neuroscience might be able to help with? What insight could we gain in neuroscience that would really help AI? It's a hard question to answer, because we're at a point in the development of AI technology where we're not 100% clear on what we do not know and what we need to figure out. It's an intriguing and challenging moment in that regard.
[01:06:35] This is Brad Love from UCL. What is currently the most important disagreement or challenge in neuroscience and or AI? And what do you think the right answer or direction is? Behind the hype and the subsequent backlash against deep learning related approaches, I think there's a real conflict that runs deeper. It's a battle that's been raging since at least the 1950s. I saw it firsthand with the second-wave connectionism versus symbol systems debates. It's a battle I've been on both sides of, and it somewhat aligns with the scruffies versus the neats. So, you know, you get these criticisms from the symbol camp that often don't have a lot of content, but you get nifty phrases like, building a taller ladder won't get you to the moon.
[01:07:23] But unfortunately, these criticisms usually aren't accompanied by a concrete, viable alternative. And if they had one, they'd probably just do it themselves instead of criticizing others who are making progress. On the other hand, if you go to more of the connectionist camp, I think they're prone to optimistic and somewhat magical thinking, like, if we just make the model bigger, it'll work better, which I find has proven true about 0% of the time.
[01:07:54] So, stepping back, I think there is something deeper going on, like a real disagreement. It's really a debate about what we as people should accept as understanding, or as acceptable explanations. That's really what the fight's about. To make it concrete in terms of deep learning: you can't look at every weight and understand it, just like you can't look at every neuron in the brain. So maybe we have to move toward an understanding that involves the architecture selected, the learning rules, the kinds of training sets you're using, maybe very gross properties of the solutions. You can't really get at a nice, clean understanding, but you could still have a handle on some broader dials that you can turn and on what led to the solution. So I'm starting to believe these kinds of models, which build these really complex nonlinear embedding spaces, might be picking up on something interesting, including systematic understandings of domains and relational information. So, the kinds of things more symbol-oriented people care about, people who care about things like compositionality.
[01:09:09] But of course, in these models those spaces are not going to be like Lisp; you're not just going to be able to see how everything binds together. It's going to be completely driven by contextual aspects, and we're never really going to understand these spaces in the same way. Maybe that's just what our explanations are going to be like in the future, and I'm starting to think that. So I'm not talking about replacing science with engineering, but maybe the challenge is building a science that is appropriate for the scale of problems we all say we want to solve.
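One way to make the "broader dials" idea concrete is to probe a trained network at the level of its embedding geometry rather than its individual weights. The sketch below is hypothetical and not from the episode (nor Love's own method): a small scikit-learn MLP is trained on a toy pairwise-comparison task, and we then ask whether the hidden layer's geometry lines the items up along their underlying order, i.e., whether relational structure is hidden inside the learned artifact. The task, the network size, and all names are illustrative assumptions.

import numpy as np
from itertools import permutations
from scipy.stats import spearmanr
from sklearn.neural_network import MLPClassifier

# Toy domain: 8 items with a hidden ordering; the network only ever sees
# one-hot pairs (a, b) and a label saying whether a outranks b.
n_items = 8
true_rank = np.arange(n_items)

X, y = [], []
for a, b in permutations(range(n_items), 2):
    x = np.zeros(2 * n_items)
    x[a], x[n_items + b] = 1.0, 1.0
    X.append(x)
    y.append(int(true_rank[a] > true_rank[b]))
X, y = np.array(X), np.array(y)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
clf.fit(X, y)
print("pairwise accuracy:", clf.score(X, y))

# Probe the artifact, not the weights: embed each item on its own and look
# at the geometry of the hidden (ReLU) layer, read off public attributes.
def hidden(x):
    return np.maximum(0.0, x @ clf.coefs_[0] + clf.intercepts_[0])

items = np.zeros((n_items, 2 * n_items))
items[np.arange(n_items), np.arange(n_items)] = 1.0
reps = np.array([hidden(x) for x in items])

# Project onto the first principal axis and compare with the true order.
centered = reps - reps.mean(axis=0)
pc1 = centered @ np.linalg.svd(centered)[2][0]
rho, _ = spearmanr(pc1, true_rank)
print("rank correlation with true order:", abs(rho))  # sign of an axis is arbitrary

If the network has learned a magnitude-like code, the correlation should be high even though no single weight contains the ordering; that is the sense in which relational structure might be read off the artifact's gross geometry rather than its parts.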
[01:09:45] This is John Brennan at the University of Michigan. What is currently the most important disagreement or challenge in neuroscience and AI, and what do you think the right answer or direction to go is? The thing that comes to mind for me here is going to be very familiar to you and very familiar to your audience. It's about the role of data versus theory in our science.
[01:10:09] AI and neuroscience, and especially my little subfield of linguistics and language understanding, have been dominated by big data, and for good reason. The revolution in the data that's available and in the computational power of our systems for analyzing and working with that data has really been mind-blowing. But I don't think they are the path that leads to understanding and explanation of how the mind works. I think there are at least two reasons for this. The first is that the system that, in my case, creates language, that allows us to understand and produce language, can create an infinite number of possible utterances. We can always generate new sentences, and we do so every day. Language is a generative system. But the data that we have, no matter how big the data set is, is finite. It's not infinite. The system can generate an infinitude, but the data don't and cannot reflect that infinitude. And because of that mismatch, the parameters of that system are always going to be underdetermined by a finite set of data. Now, our AI engineers aren't the only people out there trying to figure out what the system is that generates human language. Our children are doing the same, including my 2-year-old, who's running around the dining table right now.
[01:11:32] So how do they do it? Well, they don't do it from scratch. They don't do it just with big data. Children have a set of biases, or maybe even you might consider them rules built in that come for free to tell them, well, how to take this input that they're receiving, this unstructured input, this massive data, and how to extract from it regularities and rules. They have a bias to look for certain kinds of rules and to ignore other kinds of rules. They have a bias to attend to certain aspects of the data and to pay less attention to other aspects of the data. The exact nature of these biases and these rules is a huge matter of debate in the cognitive sciences and developmental sciences. But the biases are there.
[01:12:15] And I think that's good evidence that if we want to understand this system from an artificial intelligence perspective, we're going to need to understand those biases as well. So that's one point about the role of theory. The second point is more specific to the way in which the systems developed in AI use and rely on big data. And here I want to appeal to an analogy. I suspect this analogy is going to be way too facile, way too simplistic, to lead to great insight, but it's what I came up with this morning. The analogy is about baseball. Consider an AI system designed to understand, or better put, to predict the next action in a baseball game. That's the system you're building, and the data that you feed this system is massive. We have comprehensive recordings of baseball games going back decades, covering thousands of games over many seasons. And we have many different, highly redundant variants of that data: radio announcers describing what's happening in the game, video recordings of the game, and baseball statistics capturing certain abstractions about the properties or dynamics of a particular game. So we have tons and tons of rich data about baseball games. Okay, so feed that data into the biggest architecture you like that can handle this sort of dynamic, spatiotemporal data. And the goal, again, is to see if you can get it to predict what's going to happen next in the game.
[01:14:04] You can have it predict who's going to win, or maybe something more specific, like what's going to happen in a particular play, a particular inning, a particular at-bat. It doesn't matter. The thought experiment is this: suppose you look at the outcome of that training exercise, and let's say the system does quite well at predicting what's going to happen next in a baseball game. Can you then go into that system and extract some systematic principles that help you understand and explain how a game like baseball works? I suspect not. Let's think about the following principle. One basic principle for how baseball works is gravity, right? The probability of a home run, the probability of a certain kind of catch, the way the ball flies through the air, is governed by this underlying physical principle, gravitational law. And in the thought experiment, at least on my view, it's hard for me to imagine I could go into that neural network that's learned to predict the next baseball move and find the law of gravity in there, or perhaps the law of equal and opposite reactions, which captures in important detail exactly what happens when the ball and the bat meet. That law is not present in a way that can be understood in the neural network that has been trained to predict what's happening in the baseball game. And we could ask the same question about other aspects of the game. We could see whether the neural network can tell us things about the rules of baseball, or about the way the money paid to different teams affects their probability of success. And I bet there are going to be some wins. If you look around, there will be some areas where the structure of the network will reveal and convey certain important principles of the system you're studying. But inasmuch as there will be some of those wins, there are going to be many, many losses, many areas where there are crucial underlying principles that you don't get insight into from looking at the structure of the network. Okay, so again, I'm not sure if that's a helpful example, but it's the one that came to mind. And the takeaway message for me is that the way we are approaching the merger of AI and neuroscience is not grounded in a goal of understanding. It's grounded in a goal of prediction. What is going to happen next? Can we predict what's going to happen next? But that goal is not the same as understanding; it's not the same as explanation. And so from systems that can predict, maybe even quite well, what's happening next, we are not guaranteed a readout, not guaranteed some insight, into why that system works the way it does. This is, in a sense, a restatement of the familiar problem of the black-box nature of these AI systems. So I guess this goes to the question of, well, what is our goal in merging neuroscience and AI? One way to put that goal is that we want to build human-like intelligence, human-like artificial intelligence. But even that is not specific enough, for me at least. What does it mean for something to be human-like? Is it human-like if it is able to predict what a human would do with high accuracy?
Or is it human-like if it carries out that prediction, or carries out that next action, in a way that is isomorphic to, or a reasonable approximation of, the way a human does it? A system whose goal is prediction is not guaranteed to yield a system whose properties, or whose ways of acting, are the kind a human intelligence system has. And I'm a scientist at heart. I want to understand language. AI, for me, is a very useful tool, and neuroscience is a useful tool, for understanding human language. And if I want to understand human language, then prediction is not going to be enough. I need to use tools that also help me with explanation, and theory is necessary for that explanation. The data themselves do not yield explanation, even if they can yield high prediction. So I come down on the side of theory. I'd like explanation, and the data aren't going to give that to me.
[01:18:33] That suits my particular goals. But maybe a bigger lesson here, or a lesson I've learned as I've been doing this for a while now, is that people's goals are different. The things I find most interesting and most compelling are not, of course, going to be shared by a great many other people. Just because I want an explanation of how some very granular aspect of human language works, like exactly why verb phrases operate across languages the way they do, well, that is sort of meaningless to someone who wants to build a system that translates between human languages with a high degree of accuracy and that's going to assist lots of people. So I guess this is a big caveat on the point about theory versus data I made a moment ago: it depends on your goals. For my goals, I need theory, but your goals may vary.
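The baseball thought experiment above can be made concrete with a small, hypothetical sketch (not from the episode): a generic next-state predictor is fit to simulated fly-ball trajectories, predicts well, and yet the gravitational constant never appears as a readable parameter; the closest we can get is to re-derive it from the model's behavior using physics we already know. The simulation, the scikit-learn regressor, and all names are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
g, dt = 9.81, 0.1  # the ground-truth constant the learner is never told

# Simulate fly balls: state = (height, vertical velocity), one step per dt.
def simulate(n_steps=20):
    h, v = 1.0, rng.uniform(10.0, 30.0)
    states, nexts = [], []
    for _ in range(n_steps):
        h2, v2 = h + v * dt, v - g * dt
        states.append([h, v])
        nexts.append([h2, v2])
        h, v = max(h2, 0.0), v2
    return states, nexts

X, Y = [], []
for _ in range(200):
    s, nx = simulate()
    X += s
    Y += nx
X, Y = np.array(X), np.array(Y)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(X, Y)
print("next-state prediction R^2:", model.score(X, Y))

# The model predicts the "next move" well, yet none of its weights is gravity.
# The closest we can get is to re-derive g behaviourally: ask the model how
# velocity changes in one step, then fit the physics to its answers ourselves.
probe = np.array([[5.0, v] for v in np.linspace(-5.0, 25.0, 31)])
dv = model.predict(probe)[:, 1] - probe[:, 1]
print("implied g from behaviour:", -dv.mean() / dt)

The point of the sketch is not the numbers; it is that any law of gravity recovered here comes from querying the trained system with a theory already in hand, not from reading the principle off the network itself.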
[01:19:30] Grace Lindsay, Gatsby Computational Neuroscience Unit at University College London. What is currently the most important disagreement or challenge in neuroscience or AI, and what do you think the right answer or direction is?
[01:19:45] I think if we're talking about kind of existing conversations that are being had in the field, I think an important one is a push towards studying more naturalistic behavior in neuroscience. So, of course, right now, and historically in a lot of neuroscience, people study very reduced laboratory settings where if they're using an animal performing a task at all, the task is very simple and not very natural to the animal's behavior as it would behave in the wild.
[01:20:18] And there's a push to start using more naturalistic behaviors, or to have more labs use more naturalistic behaviors, ones the animals do innately or that match their ecological niche more closely. One reason to do this is so that we're really studying the brain in its evolutionary context, and also to make sure that we are actually challenging the animals in a way that is native to them. So it may even be easier to train them, or to get them to do something interesting, compared with the difficult things we have to do to get them to do our weird lab tasks. I think that's a conversation that's been had for the past few years at least; it's been brought to the fore through a number of articles and discussions. And I think it's important to go in that direction, and important for our models to go in that direction as well, because modeling a very simple task that really isn't challenging for us to build a model for is not necessarily going to provide many insights, or at least you have to be careful in how you do it to ensure that it actually does. Basically, if you're using a simple task, there are a lot of different ways to solve it. So you can build a model that can solve it, but does that have any actual relationship to how the brain solves it? Is the brain you're studying solving it in some weird way because it's an unnatural task for the animal? It gets complicated. So I think the right direction for both experimental neuroscience and computational neuroscience is to use more challenging and naturalistic tasks. And of course, that's where the advances in AI come in, because we can build models that can do more complicated tasks, at least by the standards of neuroscience. What is being done on the side of AI is very complicated and much more naturalistic than the tasks studied in neuroscience. Of course, people in AI probably want to keep pushing that boundary themselves as well.

Andrew Saxe here. What is currently the most important disagreement or challenge in neuroscience, and what do you think the right answer or direction is? I'm going to modify this slightly and say what I think is perhaps the least important disagreement, or maybe that's a little uncharitable. I often find discussions of compositionality, or of the current abilities of AI, to be talking past each other, in the sense that some people will say current feedforward networks or GPT-3 can't do X, right? GPT-3 is not conscious, therefore we're going to need something really radically new. And on the other side of these debates, you often have someone saying, yeah, well, they're not conscious, but look, if we do this new training regime, we can get a little bit closer. And so this is really just a matter of emphasis and time scale. One set of people seems to be saying that at the moment you can't do this, so you're going to have to change something. The other set of people is saying, yes, while it's true that at the moment we can't do something, once you change something, maybe you take a step forward in this new direction. And so I just think they actually aren't so far apart when it comes right down to it.
[01:23:49] Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, email paul at braininspired.co. The music you hear is by thenewyear. Thank you for your support. See you next time.
[01:24:23] The stare of a boundless blank page, let me into the snow that covers up the path that takes me where I let go.