Episode Transcript
[00:00:03] Speaker A: I'm a quantum physicist, right? Quantum physics was weird. It was rejected by people like Einstein because it felt like an unnecessary theory to make sense of what was happening. But as we started building the framework, it became clear that, yes, it does explain a lot of phenomena.
People usually ask, oh, you're in AI, what excites you about AI? Like everybody else, I see the benefits of getting there. But the reason I eventually went into AI is that it freaks me out.
I think it starts with the lack of understanding from a lot of AI researchers about what consciousness actually is.
[00:00:56] Speaker B: Well, no one understands.
[00:00:58] Speaker A: No one understands, right? But I think we can agree on the fact that...
[00:01:08] Speaker B: This is Brain Inspired, powered by The Transmitter. Do AI engineers need to emulate some of the processes and features found, at the moment, only in living organisms, like how brains are inextricably integrated with bodies?
Is consciousness necessary for AI entities if we want them to play nice with us? Is quantum physics part of that story or a key part or the key part of that story?
Jennifer Prentke believes that if we continue to scale AI, it will give us more of the same of what we have today, and that we should look to biology, life, and possibly consciousness to enhance and innovate AI beyond where we are. Jennifer is a former particle physicist turned entrepreneur and AI expert, focusing on curating the right kinds and forms of data, and the infrastructure for it, to train AI. In that vein, she led efforts at DeepMind on the foundation models that are now ubiquitous in our lives.
I was curious why someone with that background would come to the conclusion that AI needs inspiration from life, biology, and possibly consciousness to move forward gracefully, and that it would be useful to better understand those processes in ourselves before trying to build what some people call AGI, artificial general intelligence, whatever that is.
Her perspective is a rarity among her cohorts, which we also discuss. And get this, she is interested in these topics because she cares about what happens to the planet and to us as a species. Perhaps also a rarity among those charging ahead to dominate profits and win the race.
Anyway, Jennifer was fun to speak with, and I look forward to where her research and her thoughts take her in the future. A link to her website, Quantum of Data, is in the show notes, where there's a section containing the blog post writings that we discuss on today's episode. The show notes are at braininspired.co/podcast/217.
Thanks for listening. Enjoy. Jennifer.
Okay, Jennifer, you're kind of an uncommon guest for this podcast.
You have a lot going on, both in your history and your current thinking about things, and I want to start at the end; then we will have lots and lots of topics to discuss to get there. You seem to have arrived at what seems to be a very uncommon view for someone in the AI and machine learning world, or even from the physics world. I know that you were trained as a particle physicist, then you went into machine learning, AI, and entrepreneurship, and now you've written a series of blog posts with a deep philosophical bent that relate quantum physics, consciousness, and AI, all the things we're going to talk about here. But the uncommon thing is that you have an appreciation, it seems, for biological processes, for life, which is largely absent in the AI and machine learning world, as far as I can tell.
So given what I just said, is that accurate, my description?
[00:04:41] Speaker A: Absolutely, I think you nailed it. I started my career as a particle physicist because I was truly attracted by this desire to understand the world, to understand life and how the universe came to exist.
I came into AI through a series of events, like the financial crisis, which led many particle physicists toward more conventional industry careers. So I landed in AI. And now my roots are calling me back to bridging the gap between what AI, artificial intelligence, looks like today and what the real conversation about the topic should be about.
[00:05:29] Speaker B: Okay, so I was going to ask you how you came to this view, but it sounds like you kind of had roots in that view from, from the beginning, but then you got sucked into actually making a living for a while.
[00:05:44] Speaker A: Yeah, that's certainly true. I'm really, really excited about this conversation, because I truly believe we're reaching the limits of what the current approach can do. Nobody a few years ago would have expected AI to take the direction it has taken today, with us having chatbots we can communicate with and so on. Now that we're building intelligence, it is really important to have these sorts of conversations and invite other experts, in particular people from neuroscience, to bridge the gaps and build something that's really meaningful.
[00:06:28] Speaker B: Oh, neuroscientists will be pleased to hear that, because we're always begging to be invited to the party that we're never invited to. We have something to say about building intelligence. But it's so weird to hear someone from that world. So, you were in the AI and machine learning world until a few years ago. What percentage of people think like you do, in terms of appreciating the biological side?
[00:06:58] Speaker A: That's an excellent question, and I'm glad you're asking it. I'm afraid I'm going to give you an answer you're not necessarily going to be excited about, because I'm probably an outlier. That's what I expect.
My take is that AI is a space that's extremely computer-science centric. My most recent corporate job was head of AI Data at DeepMind. I was leading the team that prepared all of the data used for training models like Gemini and Gemma, the open-source LLM released by Google.
And I was surrounded by computer scientists who believe they have an understanding of what intelligence should be like. For me, having seen this from the inside out, it should really be a multidisciplinary field where you have ethicists making decisions. I was actually the person who had to make the decisions about which data belonged in these models.
[00:08:12] Speaker B: You're in big data. You design the data and you... what is it called? DataOps? Yeah, DataOps.
[00:08:20] Speaker A: It's one way of calling it. That's right, yeah.
[00:08:24] Speaker B: So you ensure that the data is right for the problem. Is that a very brief way of summarizing like part of your expertise?
[00:08:30] Speaker A: Yes, and I would go one step further. I think people, even people who are not necessarily in the AI field, understand that the models are basically the engines that make sense of the information that exists in the world. So you have this concept of training data, which is what is exposed to a machine learning model or an AI model before it can make predictions or generate new information.
And that information needs to be prepared properly, because, as they say, garbage in, garbage out. The data you feed into these models is, in some form, what's going to come out.
So it's an extremely important discipline, because you're deciding what the model is supposed to learn, just the same way a human teacher decides what knowledge a child is exposed to. It's a huge, huge responsibility. And I've seen many AI scientists perceive that
they are the ones who have the honor of deciding what this is. But we know that when you raise a child, it takes a village to make those decisions.
And yeah, basically I think the same thing needs to happen for AI systems.
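[Editor's note: a minimal, hypothetical sketch of the kind of training-data curation described above, deciding what belongs in a training set before the model ever sees it. The filter rules and thresholds here are illustrative only, not anything used at DeepMind.]

    import re

    def keep_example(text: str, min_chars: int = 200) -> bool:
        """Toy quality filter: decide whether a raw document belongs in a training set."""
        if len(text) < min_chars:                      # drop near-empty documents
            return False
        if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):  # crude PII check (US SSN-like pattern)
            return False
        letters = sum(c.isalpha() for c in text)
        if letters / max(len(text), 1) < 0.6:          # drop mostly non-text content
            return False
        return True

    corpus = ["short", "a" * 300, "my number is 123-45-6789 " + "x" * 300]
    training_set = [doc for doc in corpus if keep_example(doc)]
    print(len(training_set))  # only the clean, long document survives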
[00:09:57] Speaker B: Wait, this is a little personal, but I hear the baby in the background. Do you have one child?
[00:10:02] Speaker A: No, I have many. Four children, actually.
[00:10:06] Speaker B: Well then, you know, so, okay, because I was going to say like, you know, it's. We did the cliched, had the sort of stereotypical response. We had one child, very careful, second child comes along and it's like, ah, it's all right, you know, but so it takes a village. But the village can be of a lot of different, different persuasions and the child, there's a lot of different ways for a child to turn out. Okay, right. So on the one I'm trying to analogize this with, with what you're just talking about, with feeding AI models the right training data.
So in one sense, if you're a teacher, you want to ensure that you're teaching, you're feeding the learner the right way to effectively and efficiently learn. On the other hand, they're probably going to learn by hook or by crook, even if the data that you're feeding them is not so efficient or isn't so well planned out, perhaps. So I wonder how far the analogy goes.
[00:11:00] Speaker A: It does go very far, because what you're saying is you cannot protect your child from being exposed to information that might be inaccurate or potentially dangerous. The same is true of models, especially models that require what is called reinforcement learning from human feedback. If you use ChatGPT, you know what I'm talking about: every now and then you're asked which of two answers you think is best for your request.
So we human users are also responsible for what those machines actually learn and how they end up behaving at the end of the day. It does take a village, it does take a group of people.
Actually, let's talk about this a little bit more, because there's been a lot of conversation among users of LLMs about whether or not you should be polite to a chatbot. People say, those are machines, you don't need to be polite. In fact, Sam Altman himself said it's not necessarily an ethical thing to do, because you're wasting compute by saying please and thank you.
And some people would say that by not saying please and thank you, you're teaching the AI that it's okay to be rude, or to express itself in a way that's a little bit drier.
[00:12:35] Speaker B: And, or teaching yourself, getting yourself in that habit.
[00:12:38] Speaker A: Exactly. And I think you're making a good point, because as those systems become partners in the way we think about intelligence and interact with us, it's important to keep in mind that they have an influence on the way we behave. So it's important to incorporate this into the way we build the systems, but those systems are also going to impact us.
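[Editor's note: a sketch of the reinforcement-learning-from-human-feedback signal mentioned above. The "which of two answers is better" choice becomes a pairwise preference used to train a reward model; the scoring heuristic and strings below are stand-ins, not a real model.]

    import math

    # One user comparison: the chosen answer vs. the rejected answer.
    preference = {"prompt": "Explain RLHF briefly.",
                  "chosen": "Answer A ...", "rejected": "Answer B ..."}

    def reward(answer: str) -> float:
        """Stand-in for a learned reward model's scalar score."""
        return 0.1 * len(answer)   # toy heuristic, not a trained model

    def pairwise_loss(chosen: str, rejected: str) -> float:
        """Bradley-Terry style loss: small when the chosen answer scores higher."""
        margin = reward(chosen) - reward(rejected)
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    print(pairwise_loss(preference["chosen"], preference["rejected"]))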
[00:13:09] Speaker B: Okay, all right, so I'm going to back up. I've already buried the lede here: artificial consciousness. Well, I don't know if it would be artificial. You start your series of blog posts, which we're going to focus on and which I'll link to, with the claim, somewhere near the beginning, that most AI researchers believe that if you just scale up, consciousness will emerge with the current technology.
But the other claim is that AI researchers believe we need consciousness for some reason. I may be mis-paraphrasing that.
Is that true? I didn't think that consciousness was a fundamental goal of AI.
[00:13:58] Speaker A: Let me rephrase this. I don't think the majority of researchers are even asking those questions. They are technologists, and they think they are building tools that make it easier for people to consume information. And specifically, coming from Google: I believe Google's goal for the world is to organize the world's information, and they see AI as the opportunity to do just that, in a novel way that goes beyond traditional search.
At the same time, there are a few well-known people who actually have this impression that consciousness is emerging, as you just said. I will cite Ilya Sutskever, the chief architect who developed the early version of ChatGPT, the person who is basically responsible for the GPT technology.
And Geoff Hinton, who was his PhD advisor, has also recently said that he believes so.
[00:15:03] Speaker B: Yeah, I mean, it sounds crazy to me. It's so weird. It kind of goes to show that super intelligent people can easily be wrong about many things. And maybe they're not wrong, but... they're wrong.
[00:15:17] Speaker A: But I would actually say it starts even deeper than this. One of the things I discuss is that I think it starts with the lack of understanding from a lot of AI researchers about what consciousness actually is.
[00:15:31] Speaker B: Well, no one understands.
[00:15:32] Speaker A: No one understands, right? But I think we can agree that the confusion starts with self-awareness versus consciousness. The reason I believe Ilya, for example, tends to believe consciousness is an emerging feature is that you can see ChatGPT and other LLMs reason about themselves: they're able to tell you that if you modify your prompt in a certain way, you can expect to get better results. So it can think, in a sense. But that is metacognition, or meta-thinking, or self-awareness, and there are philosophical definitions that are clearly different from consciousness. They are different concepts. And this is example number one: we're talking about the importance of AI researchers understanding intelligence, and just understanding those concepts is important in order to avoid this sort of confusion.
[00:16:38] Speaker B: So you think that people who make those claims about LLMs are conflating different philosophical, ontological notions?
[00:16:46] Speaker A: Absolutely, yeah.
[00:16:46] Speaker B: I mean, you tease apart sentience and awareness and consciousness, and I actually disagree with the way you define these things.
And I did a little research with the Stanford Encyclopedia of Philosophy or whatever because I, I read it and I thought, oh, I thought it meant something different.
But as long as you operationalize the terms, that's fine, which is what you do, because definitions are important. If you're going to be developing systems to accomplish those sorts of tasks, or to be able to say that the system you're dealing with has such and such a property, you have to operationally define that property, which is what you do with sentience and awareness and consciousness. So it's not a big deal that I disagree with you; it just goes to show that everyone has different intuitive notions about what these terms mean. And I'm sure I conflate them all the time as well.
Okay, so let's see.
Okay, so artificial consciousness. A lot of what you do throughout these posts is talk about what might be important to develop artificial consciousness, and how modern AI is on the wrong track in that regard.
Things like embodiment. And like I said, you tie in quantum physics, which is curious to me, because there's that thread of "consciousness is quantum," so we need to talk about that at some point later. But embodiment, quantum physics, and then you come down to life; to the conclusion that intelligence cannot essentially be separated from life. Which I have kind of come to as well. However, I can't articulate in principle why that would be the case, because for anything I think of, I say, well, life has this,
and there's no reason in principle why you couldn't build that particular property.
Does that make sense? So am I correct that you've come to that conclusion as well?
[00:18:56] Speaker A: I believe so. I think this is where I'm converging to as well. So let's talk about this a little bit more. I'll keep the quantum conversation for later on, because I think we also need to talk a lot about infrastructure in order to understand the gap here.
[00:19:22] Speaker B: Wait, could you maybe start just with your views on why we would want an AI to possess consciousness?
[00:19:29] Speaker A: Yes, I will tell you. Let me start with my story at DeepMind. As I was saying earlier, I was the person in charge of deciding which data was going to be fed into those models, with all of the implications from an ethical standpoint and a responsibility standpoint.
When you look at the way that people think about data governance, what data belongs in there?
We take an approach which is a mitigation. You're looking at the data and thinking, I would like to use all of this data, because researchers want to use as much data as possible; obviously, the more data, the better the models are going to be. And you have to peel back from there. You have to say, we cannot use this data because there is a risk that somebody doesn't want their personal data to be used, or it can represent a risk, or there might be some inappropriate data in there. So it's a mitigation: you're evaluating the risks of using too much data, and you peel back from there.
[00:20:46] Speaker B: That's just. Those are ethical considerations.
[00:20:48] Speaker A: Those are ethical considerations, right? And because it's a mitigation, you cannot completely forecast what can happen. I'll give you an example. You might believe that if you do not use violent content, the model cannot reproduce violence. That's actually not true, because you can reproduce violence by combining two things that are unrelated to each other. For example, you can produce very inappropriate content by generating deepfakes, which don't require using inappropriate content as part of the training process in the first place. So you cannot prevent models from being used in an inappropriate way, no matter how careful you are with the initial process. So it is a mitigation. And at some point I realized, I'm fighting windmills: I cannot prevent everything bad from happening. This is how people working on safety also think about these things.
[00:21:56] Speaker B: Did you say. Did you say windmills?
[00:21:58] Speaker A: Yeah, yeah.
[00:21:59] Speaker B: Oh, is that a common analogy? I've never heard that you're fighting windmills.
I'm trying to picture it. What is it like? You...
Well, you throw something at the blades and it goes through.
[00:22:10] Speaker A: How.
[00:22:10] Speaker B: What does that mean?
[00:22:12] Speaker A: I think it's a reference to a book that's relatively famous.
[00:22:17] Speaker B: Oh, shoot. I'm not that well read, I guess. Yeah.
[00:22:20] Speaker A: All right, anyways. You're trying to prevent something bad from happening, but it's impossible, because it's like cybersecurity: you're always trying to stop bad things from happening, but you're in an adversarial setting, because somebody is trying to make that impossible. And so at some point I really felt you would need something fundamental in the model itself, so that the model makes the right decisions for itself. It goes back to this idea. So let's talk about super alignment.
[00:23:17] Speaker B: Let me see if I can just summarize it before we go into super alignment. So the logic is: you can't prevent, externally, the model from generating things that you don't want it to generate. So this is where you want to imbue the model with some sort of internal values.
Okay? Yeah. Okay.
[00:23:36] Speaker A: So I'm not saying it's easy, I'm not saying it's even possible just yet. But what I see is that you are in a constant fight against windmills, or bad players, or whatever you want to call them.
[00:23:58] Speaker B: Yeah.
Right. Okay. All right, super alignment, let's go ahead and get into it. This is interesting; I thought we were going to talk about ethics later, but this is kind of how you came to this.
[00:24:07] Speaker A: No, I came to the conclusion that the typical ethical approach to data governance and data management is just a never-ending process. It's impossible.
[00:24:24] Speaker B: It's impossible.
[00:24:25] Speaker A: Of course it's impossible. Right.
[00:24:26] Speaker B: Okay.
[00:24:29] Speaker A: Of course, because again, it becomes a cybersecurity problem itself. It's just an adversarial kind of problem, and I don't think there's a way around it unless you can start investigating novel ways where you could get a model to make decisions for itself. So let's leave it here. Now, I'm not saying you want the model to be conscious.
This is where synthetic consciousness comes in. I call it synthetic consciousness because I don't necessarily believe we should, or can, make models actually conscious the way humans or living beings are. But you can hopefully make them behave in a way where they can check themselves against certain values from within, as opposed to doing that as a mitigation.
[00:25:18] Speaker B: Let's just pause here. Is there a difference between synthetic consciousness and artificial consciousness?
[00:25:23] Speaker A: Well, I don't know.
Actually, I would even say I don't know if there's a difference between consciousness and synthetic consciousness. Sorry, it's a basic question.
[00:25:34] Speaker B: You just said that you wouldn't want to necessarily build an AI with human like consciousness.
[00:25:38] Speaker A: I wouldn't want to build that on purpose, but I would like to build an AI that can make a judgment call on values, and say, it doesn't seem okay to give that answer. So let's talk about super alignment, because this is where this is going. The concept of alignment in general is that you want an AI to behave in a way that aligns with what the user is expecting, and there are lots of questions about how you define what it needs to be aligned to.
Super alignment is when you want to align the behavior of the model to what humankind thinks should happen, to the ethical, fundamental values of humankind.
[00:26:32] Speaker B: But we don't know what the fundamental values of humankind are.
[00:26:34] Speaker A: Exactly. This is exactly what worries me. So let's step back a little bit and talk about Ilya again. Until relatively recently, until last year, Ilya was working for OpenAI, where he was running this team called the superalignment team.
Apparently, at some point Ilya started worrying that we are building LLMs to become very powerful machines that people can use to do bad things, and that we need to find a way to mitigate the risks.
So he got substantial investment from OpenAI to research and identify the risks and try to find solutions to those risks. As time went on, Ilya grew frustrated that OpenAI was not investing enough resources, time, and money in those problems, and was focusing on running ahead of the market instead. So at that point he left OpenAI and started a new company, SSI, Safe Superintelligence, which is supposed to solve those problems and create a super-aligned AI, a safe AGI for humankind. And that obviously leads to the question: who defines what safe means? If you say we want AI to operate in a way that's fair and safe and aligned with human values, what are human values? Who defines that?
[00:28:17] Speaker B: Right, well, they're different across cultures, and there's so much variety across individuals. Yeah, so it's a very strange thing.
[00:28:25] Speaker A: Yeah. And you can imagine what happens if you leave that decision to a human. We've seen that more recently with Grok 4, the newer version of the xAI model that got released a couple of weeks ago.
People reported entering a prompt and the AI trying to align the answer it gives with Elon Musk's point of view on the topic. So in this case, what is super alignment? Are you aligning human values to the position of Elon Musk on every single topic? The person this AI gets aligned to has all of the power. You could imagine
some obscure power that decides, I want the AI to be completely aligned with the point of view of a government or a specific...
[00:29:28] Speaker B: What is wrong with this world? What is wrong with this world? Okay, but let's pause here. We talk about AI alignment a lot.
How AI could be dangerous. We got to align it to our values.
How do we align humans? Sorry if this is well-trodden territory; I don't really follow the AI safety literature much. But the way we handle the alignment problem with humans is: we make rules, we make laws, we make prisons, we try to raise our children well, or not. And there are a lot of misaligned humans. So it's an odd thing to then think that we can align an AI. I mean, it's just so ridiculous to me.
I could go on. But it's just interesting that a lot of AI researchers claim to have a clear vision of how to align, or of what we would want to align to, when the entirety of human individuality and culture is completely misaligned, a huge proportion of it. So I just want to throw that out there: if we're talking about AIs as agents, as conscious entities, it's just something
I can't wrap my head around.
[00:30:51] Speaker A: It's a rabbit hole. So let's talk about the opposite as well, because you're absolutely right, you're touching on all the topics. Without super alignment, you have echo chambers: you have your own little version of a chatbot that knows your context, knows your preferences, and decides to answer accordingly. There are lots of people complaining about the fact that chatbots are sycophantic, because they tell you what you want to hear. You're not being challenged; it is aligned with your view. If it knows your political preferences, it just goes in that direction. So you need to realign to something shared, because what makes us more aligned as humans is that we have cultural references. Your reality might not be my reality, but it's more aligned with my reality because we're talking to each other; we live in the same world, we are exposed to the same history, and if we live in the same country, we have the same cultural references. Even though our positions are not completely aligned, we are aligned to a degree. But if you start doing this with your own chatbot, and those chatbots don't talk to each other, and you're talking to your chatbot and I'm talking to mine, our views are going to start diverging. So this idea of super alignment is about aligning to something, even if it's not the right thing to align to, to bring those views together instead of having this huge divergence. There is value in that, but there's danger in that as well. And nobody really has the answer, but I'm even a little bit worried that people are not asking the right questions yet, because you have camps of people saying, we don't need alignment, just let people be in their own echo chambers. I've heard people promote the idea that people with different political opinions should each keep their own political opinions so that they don't fight with each other. And for me, this is the worst thing that can possibly happen, because then you don't understand other people's points of view.
[00:33:11] Speaker B: So anyway, I could never visit my grandmother. That would be awful.
Well, what's the difference between alignment and super alignment? Is it just that with super alignment there's the one thing that everything is aligned to, whereas alignment...
[00:33:25] Speaker A: So alignment is fundamentally about making sure the AI doesn't diverge over time, because you can implement your AI in the first place to do something good.
You have the example of the paperclip thought experiment: you teach an AI to optimize the production of paperclips, and it ends up killing everybody on the planet because it just wants to produce more paperclips, even at the cost of human life. It's basically to say that if you don't design what you're trying to optimize for in a way that's sane and responsible, and that takes into consideration everything that can go wrong, you will have bad things happen. Super alignment, on the other hand, is the belief that you can find what the absolute best-case scenario, the optimal outcome, is for humankind. And we don't know that this even exists. But even if it does, who can say for sure we're actually building towards it?
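[Editor's note: a toy illustration of the objective-design point just made, the same optimizer with and without a term that accounts for side effects. The numbers and the "harm" term are purely illustrative.]

    # Choose how many resource units (0..10) to divert to paperclip production.
    def paperclips(resources: int) -> float:
        return 3.0 * resources                 # more resources, more paperclips

    def harm(resources: int) -> float:
        return float(resources ** 2)           # side effects grow fast with resource grab

    naive = max(range(11), key=lambda r: paperclips(r))                  # -> 10: grab everything
    constrained = max(range(11), key=lambda r: paperclips(r) - harm(r))  # -> 1: accounts for side effects
    print(naive, constrained)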
[00:34:49] Speaker B: Yeah. I mean we can't even say like, what makes us content.
Like what makes a Buddhist monk content is way different than what makes a clown content, perhaps, you know, like.
Yeah, okay, all right. So that's the difference between super alignment and alignment. So I derailed us there.
Where were we? So, okay, so there's the alignment problem with machines.
I think it is hubris to believe that we can even program in, externally, objective functions that are quote-unquote aligned.
Do you? What, what's your position on that?
[00:35:32] Speaker A: No, 100%. Look, we don't even understand this in life. What is human value? What is the definition of right and wrong?
[00:35:41] Speaker B: I mean, this goes back to you saying, sorry to interrupt, that people in that space are just not even asking the questions, or the right questions.
[00:35:49] Speaker A: Yeah. But let's go back one step, because you started talking about embodiment, and you started talking about prisons, that is, consequences. How can you make everybody agree that killing is bad? Because there are consequences. So why do we fear consequences? I think we fear consequences because we feel the consequences.
And that's because of embodiment. If you told an AI it cannot operate in the world for a hundred years because it gave a bad answer, it wouldn't care, because it doesn't have a notion of time. Time perception matters: we care because we have a concept of finitude, we're scared of death. The reason consequences matter, the reason we fear prison, is that sitting in jail wastes time out of our lifetimes. So a consequence actually matters to us for that reason. The same is true for embodiment, because embodiment is how you experience pain, how you experience pleasure, and all of these things are meaningful to us because we are in the flesh, physically. So that was the question. And I'm not the only one with this position: just a few days back, Fei-Fei Li, the very famous AI scientist responsible for ImageNet, the dataset that made computer vision possible, also said she believes you need embodiment to reach AGI, whatever that means. She didn't necessarily say it in the exact same context as what I'm saying right now, but these conversations are happening about the importance of embodiment for AI to reach AGI, whatever AGI means.
[00:38:02] Speaker B: But we already have robots. Aren't we on our way, potentially?
[00:38:06] Speaker A: But now the bigger question, since we were talking about biology earlier, is going to be: if embodiment is necessary for the perception of consequences, and hence for having stakes and really feeling responsible for something, which is what machines would have to get to,
is silicon-based hardware sufficient to get there?
[00:38:35] Speaker B: Right. So you have a robot with some sensors, or even the early cybernetics turtles, I don't know if you remember those, with very simple sensors, that slowly wandered around offices. I think they had light sensors. So if you have one photoreceptor on your robot, that's one sensor, and you have actuators, so you can move through the world based on one signal. But that's a robot: one signal, silicon and metal and gears and all that robot stuff. But you think it needs to go further than that. You think that biology is actually necessary for feeling?
[00:39:15] Speaker A: Yes, I would say that's my opinion. But I don't have anything tangible; this is where it's more belief than proof.
But what I observe is this. If we're starting to talk about infrastructure, how AI is hosted, and embodiment, what I will say is that there is a reason why nature made us the way we are. And since we started talking earlier about brain function and biology, I do believe that part of the reason we're conscious and experience consciousness the way we do is that our brains are probably governed, in part, by quantum processes.
[00:40:11] Speaker B: Here comes the quantum. All right, if you're bringing it up, we can go there. Because this quantum account of consciousness is sort of decades old now and largely dismissed by the neuroscience community, almost laughed off, because of some of the claims and properties. But is it having a resurgence right now? You write about it a lot, but you're always writing about it with the clause: if this account has merit, then... So there's always the "if" clause.
[00:40:49] Speaker A: Well, I'm a scientist, so I also need proof that this is true. But look, what other alternatives do we have to explain consciousness?
[00:41:02] Speaker B: Right, okay, well, let me. Alright, so here's my gripe about it. Here's a lot of people's gripe about it.
Well, one gripe about it is that you're taking something we do not understand and trying to explain it by something that is also outside our understanding, with respect to the Newtonian kind of physics that we engage with in the world. So you're explaining one mystery with another mystery, which just seems convenient and unnecessary and silly, especially if you rely on microtubules to make the claims. Okay, so just stating that up front.
[00:41:35] Speaker A: No, look, you're talking to a particle physicist, and I would say this is true of many discoveries we've made. Look at relativity: who would have proposed a distortion of time to explain phenomena that had not yet been measured at the time? Look, I'm a quantum physicist. Quantum physics was weird. It was rejected by people like Einstein because it felt like an unnecessary theory to make sense of what was happening. But then, as we started building the framework, it became clear that yes, it does explain a lot of phenomena that didn't seem to make sense. For example, I had a physics teacher early on, in college, who said, now I'm going to have to tell you about quantum physics, because it's in the textbooks, so unfortunately I have to talk about it. He completely rejected the idea of quantum, even though it has merit because it has been proven in other ways, even if we cannot fathom it. I've studied particles that you cannot see; you can only measure their decays. But still, everything you observe fits the Standard Model of particle physics. Physicists do exactly that: they come up with complex mathematical frameworks and try to see if the data fits the framework. And I think if we want a chance to understand how the human brain works and what consciousness is, we're going to have to make those hypotheses. This is why I always write it this way. I'm also not sure that microtubules are the right way of modeling consciousness. But I'm saying that, in the absence of an absolute theory, it makes sense to experiment and validate those theories with the data we can get.
[00:43:51] Speaker B: Well, the microtubule,
Stuart Hameroff, Roger Penrose line of research is just a huge exercise in confirmation seeking: basically saying, look, it's possible, rather than trying to falsify itself, as a good scientist would, or as very few scientists actually do. But the problem with microtubules is that they're everywhere; they're not just in brains.
So maybe the brain is not important, and maybe consciousness is everywhere. That's kind of a panpsychist view. But in this view, what is the relation between collapsing the wave function and consciousness, along these quantum lines?
[00:44:26] Speaker A: So what I think is interesting is how some scientists try to explain it.
Without talking about Penrose, I would talk about Faggin. It's an elegant way of trying to explain free will. Everything we know about quantum physics says the collapse of a wave function is
the gap between the unseen and the seen.
[00:45:05] Speaker B: Between the possible and the sort of.
[00:45:07] Speaker A: Right. To stay without going into the technical details: the quantum world gives you a superposition of everything that's possible, and it collapses back to one specific path. This is what quantum theory says: the moment you observe it, it collapses. I can see how this could explain what we experience as consciousness. It needs to be proven, obviously. And I think we're reaching the limits of what's understood here.
Just the same way that when quantum mechanics started being discovered, or proposed by scientists, nobody thought it was reasonable to believe that a particle could be in multiple states at the same time. So I think, as good scientists, we need to evaluate whether it's a viable theory. And again, for me as a data scientist, I believe you can observe the data, you can measure things, and you can validate whether or not the data fits the framework. And all people building or working on models would say: all models are wrong, but some are useful, because they do represent something real, or the model approximates something that's real.
[00:46:34] Speaker B: Yeah. So what you were just talking about hints at some of the phenomenology and philosophy you write about, the existentialism. You bring up existentialists and phenomenologists: Merleau-Ponty, Husserl, Kierkegaard, who was an early existentialist. So there's this phenomenological bent in your thinking, and it's related to the quantum-level explanation for consciousness, which you were just hinting at. I'm not sure, I forget: does the will collapse the wave function, or does the wave function collapse and that's what's presented to consciousness?
So my question is: how does phenomenology, essentially the experience of being, relate in your mind to the quantum account?
[00:47:26] Speaker A: Well, that's where I don't have a clear answer. I don't think anybody has a clear answer. I think, in Faggin's view...
[00:47:39] Speaker B: Sorry, I'm sorry to interrupt. We should just say this is Federico Faggin.
Okay, so he's big, he invented microprocessors, is that right?
[00:47:47] Speaker A: Yes, and the same is true for Penrose. These are still people who are scientists by training. Again, it's all hypothetical until the day we have some proof of this.
Which is why, in my writing, I never say I believe this is necessarily the right way. I say, this is a possibility, this is a model. Federico Faggin, I believe, holds that when you make a decision, it collapses the wave function, but you still have control over what it collapses towards. I'm not that interested in the process itself, though; I'm interested in the framework. Because if you can translate the decision-making process into a quantum process, then you can use the mathematical framework of Hamiltonians, which are, in this analogy, a representation of preferences. You could say you are more inclined to make a specific decision because, fundamentally, in humans you have a personality that makes you more shy, or more prone to anger, or more reactionary towards something. This would increase the probability of making a specific decision, even though you're still the person making the decision.
And I don't need the theory to be true to say this is an elegant way to represent preferences in an AI. That's my take on this. There is research in that space on how you can give a personality to an AI so that it favors certain types of answers over others. And you don't actually need to prove that the brain reacts the way it does because of the collapse of a wave function in order to state that there is value in the mathematical framework of quantum physics for representing those preferences.
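[Editor's note: a sketch, in standard quantum-mechanical notation, of the kind of formalism being gestured at here. This is textbook Hamiltonian evolution plus the Born rule, not the guest's or Faggin's actual model; in the analogy, H would encode the "personality" or preferences, the states a_i are candidate decisions, and psi is the current state.]

    |\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle,
    \qquad
    P(a_i \mid t) = \bigl|\langle a_i \mid \psi(t)\rangle\bigr|^2,
    \qquad
    \sum_i P(a_i \mid t) = 1

In words: the Hamiltonian shapes how the state evolves, biasing which outcomes become likely, and the Born rule gives the probability of each outcome when the "decision" collapses to one specific answer.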
[00:50:09] Speaker B: All right, well, so, but there's something special about biological processes in life, apparently. But, you know, quantum physics is everywhere. It's not just in biology. So wave functions are collapsing everywhere. Right. Or am I. Is that improper to say in physics?
[00:50:23] Speaker A: I guess the difference, the question, would be: does life cause the wave function to collapse? Yeah.
[00:50:36] Speaker B: And you're thinking. Yes.
That's what you're leaning towards thinking.
[00:50:39] Speaker A: Yes. Well, I mean, clearly we don't understand life and we don't understand consciousness.
[00:50:44] Speaker B: Well, and I should say, by the way, like, I was just being critical of the quantum account of consciousness.
And deep down, I think this is fundamentally wrong. But there could be something to it, of course. Still, neuroscientists sure as hell haven't figured it out, so it's not like we figured anything out either.
So leave it to the physicists, especially the Nobel laureate physicists; they'll explain everything about consciousness, right?
[00:51:14] Speaker A: Well, you should, you should leave it to Penrose then.
[00:51:17] Speaker B: Right. I know, that's a running joke, right? Once you get a Nobel Prize, you go off the rails and shift your field entirely, thinking you can just figure everything out. That's the criticism.
That's kind of the running joke about the Nobel Prizes.
[00:51:33] Speaker A: But I would go back, just to close this. I'm an experimentalist: I'm a particle physicist by training, but I am among the people who used to run those particle colliders, generate collisions, and try to make sense of them. I don't come from a theorist background. What I do is collect data to evaluate which theory is more likely. From that perspective, the best theory, not the right theory, the best theory, is the one that gets us as close as possible to what we observe. And for me, even if it's wrong, as I said earlier: all models are wrong, but some are useful. If they are useful to help us emulate synthetic consciousness, in this case, I think that's of value. I don't need to reproduce actual consciousness to have the benefits of consciousness. Because if consciousness is what makes us make good decisions and act morally, then if we can reproduce that same behavior without necessarily reproducing the process exactly the way it works in nature, it still gives us an AI that can potentially make decent decisions on certain things.
[00:53:00] Speaker B: Wouldn't that be ideal? That would be ideal to me. I really don't want a subjectively aware.
[00:53:06] Speaker A: I agree.
[00:53:06] Speaker B: Yeah. I really don't.
[00:53:08] Speaker A: No, exactly. And it's even worse than that, because if you do make a truly conscious AI, then we have moral responsibilities towards it. Beyond the fact that it's creepy, it also gives us more work to do.
[00:53:24] Speaker B: Yeah, exactly. That's the thing. I mean, like, yeah, I just don't understand why anyone would actually want to intentionally create.
[00:53:33] Speaker A: And I don't know that this is true. I don't think I've explicitly heard anybody say "we want conscious AI." Referring to Ilya and Geoff Hinton, I think they expect it to happen by...
Accident, right?
[00:53:48] Speaker B: Yes.
[00:53:48] Speaker A: It's just going to happen, right. Yeah.
[00:53:51] Speaker B: Okay. So your bet, then, is that scale won't get us there. It won't just emerge from scale, which is what you're saying many of the talking heads in AI, the historically important figures in AI, believe: that it'll emerge with scale. You don't believe that; instead, you're looking to principles of life as a sort of
proof of principle that ethics can essentially be grown.
Is that your kind of viewpoint? That instead of building AI, we're going...
[00:54:30] Speaker A: ...to grow it, right. Absolutely. Because my take is this: our current approach to ethics is about stopping or preventing bad things from happening. It's like catching a child who's falling; we were even talking about children earlier. As parents, there are different ways of raising your child. You could say, I will hold my child by the hand and make sure I catch them every time they stumble. And there is the approach where I teach that child to be responsible for their own actions, by teaching them that there are consequences to what happens. We don't know yet what this means in the context of an AI, but it certainly means you need some reinforcement learning, you need reward functions that are implemented properly. But if you create a framework that enables an AI to self-control, or self-regulate,
whatever that means, you're creating a system where you are not the adult in the room at all times who has to make sure it doesn't stumble, because we cannot prevent everything bad from happening. We already see that: once an AI is out there, there will be bad actors trying to do bad things with it. They will be creative, just the same way hackers creatively use tools to try to steal from our bank accounts and use technology to do things they shouldn't. So I'm thinking of this as: how do you create a framework that enables some of the responsibility to live with the AI itself? I think this is interesting. I'm not saying it's easy, I'm not saying it's necessary. I'm saying that this is an angle where, for me, as somebody who has been responsible for ethical behavior in AI, it's less daunting to believe that you can incorporate some of that responsibility directly back into the AI.
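[Editor's note: a minimal sketch of the idea of building the check into the system rather than bolting it on afterwards. A toy agent scores each candidate action against an internal value function and refuses anything below a threshold; the scoring function and threshold are stand-ins, not a real alignment method.]

    def task_reward(action: str) -> float:
        """Stand-in for how well an action accomplishes the user's request."""
        return len(action) * 0.1

    def value_score(action: str) -> float:
        """Stand-in internal value check; a real system might use a learned model."""
        return -1.0 if "harm" in action else 1.0

    def choose(actions, threshold: float = 0.0):
        # Internalized self-regulation: filter on values first, then optimize the task reward.
        acceptable = [a for a in actions if value_score(a) >= threshold]
        if not acceptable:
            return None                      # refuse rather than pick a bad option
        return max(acceptable, key=task_reward)

    print(choose(["answer politely", "cause harm to maximize output"]))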
[00:56:41] Speaker B: Okay, I want to come back to wetware and growing AI as opposed to building it. But since this segues from what you were just talking about: there are four principles that you suggest
biological life is imbued with that are important for what you were just describing. One of them is valence. One of them is embodiment, which we already kind of covered. One of them is temporal perception, and the other is a moral compass, which we've been referring to, scattered throughout. Valence is the felt sense of what is good or bad, right or wrong, painful or pleasurable, beautiful or horrific; I'm reading what you wrote, and we've covered that a little bit. I don't know if you want to say more about that, but I do want to talk about the temporal perception aspect and why you think that's important.
[00:57:33] Speaker A: Yeah, temporal perception goes back to this notion of consequences, right? Even more upstream of everything you said: why do we behave the way we behave, and why do we feel there's something we cannot afford to do? Because we fear the consequences, either physically or morally, right?
[00:58:02] Speaker B: I only spent two years, two years in prison. That was it. Okay, that's not that much time, but.
[00:58:07] Speaker A: Yeah, exactly. Many philosophers actually believe that we fear consequences because we fear finitude, because a lot of what we fear comes down, fundamentally, to a fear of death, to the fact that we are finite beings. If you knew you had all eternity, you could always say: at some point the consequences will disappear, because people will forget what I did; the social consequences are not that dire. And that's a big problem, even fundamentally, for AI: there is no notion of time. When you interact with a chatbot and ask it a question, if you come back in an hour or in six months, it will give you the same answer to the same question, because there's no notion of statefulness. Whereas we, as biological creatures: if you are angry now, you will answer me in a specific way which might be different from when your hormones have calmed down, when you've chilled out a little bit. In a couple of days you won't necessarily answer that vividly to something you don't like, because time has gone by. And none of this, even in the most basic implementation, is taken care of. LLMs have zero notion of time, of the time that has elapsed. So this is where a lot of it goes back to embodiment and perception of time. If you want something that truly reacts like a human, if you want more human-like behavior, you're not going to be able to get there without a perception of time.
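As a small illustration of the statefulness point, here is a toy Python sketch of a wrapper that gives a stateless text model a crude notion of elapsed time by remembering when the last message arrived and injecting that into the context. The `generate` function is a hypothetical stand-in for any text-generation call, not a real API.

```python
import time

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to any text-generation model."""
    return "..."

class TimeAwareChat:
    """Wrap a stateless model with a memory of when the last turn happened."""

    def __init__(self):
        self.last_turn = None  # wall-clock time of the previous user message
        self.history = []      # (elapsed_seconds, user_message, reply)

    def ask(self, message: str) -> str:
        now = time.time()
        elapsed = None if self.last_turn is None else now - self.last_turn
        self.last_turn = now
        # Make elapsed time part of the prompt, so "an hour later" and
        # "six months later" are no longer indistinguishable to the model.
        prefix = ("First message." if elapsed is None
                  else f"{elapsed / 3600:.1f} hours since the last message.")
        reply = generate(f"[{prefix}]\n{message}")
        self.history.append((elapsed, message, reply))
        return reply
```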
[01:00:16] Speaker B: It's interesting, and we're kind of thinking on our feet here, but as you were just talking about that, it dawned on me that as children, and you have experienced this as well.
You don't have that sense of finitude, you don't have that sense of mortality. And your time perception is way different than when you're as old as I am.
And so I wonder, I don't know, have you, what do you think about?
[01:00:40] Speaker A: Yes and no, right? I would start by saying that as children grow, they start developing this notion of: I don't have all the time in the world. When you ground a child, you say, you're not going to be able to play video games for the next couple of days if XYZ happens. So there's also a notion of finitude there, a perception of time, right?
[01:01:09] Speaker B: There's an avoidance of pain also. But your perception of time is very different. I mean, I remember, and I long for, that childlike perception of time and not understanding mortality. Mortality is not even understood by really young children. So I'm just pointing to the developmental aspect.
[01:01:27] Speaker A: Yes. But yes and no, because I think it's a combination of things. Children might fear social pressure more: parental disagreement or disapproval. Maybe you don't care about the long-term consequences; you care about your mom or your dad being angry at you. So it just shows how we humans work: our reward functions are a combination of things. It's my friends are going to be angry at me, or somebody I love is going to think less of me if I lie. It might be that something bad is going to happen to me, I might lose my job. But it can also be: oh no, my friends are going to know I'm a liar, and they're not going to take me as seriously in the future. So it's a combination of things. It's not just perception of time, it's not just a fear of long-term consequences. It's a combination of social pressure, consequences, pain, and all of the above.
It's realistic to say that different human beings perceive consequences differently, and what might be a deterrent for somebody might not be a deterrent for somebody else. It's also true that you evolve throughout your life, and your dreads and your fear of consequences evolve over time.
[01:02:56] Speaker B: Yeah, it's interesting reading over these four things again. Valence, embodiment, temporal perception and moral compass.
Maybe, I'm not sure if they have other things in common, but there's something at stake that binds them all together, and there's nothing at stake for a computer.
[01:03:13] Speaker A: Exactly. So this is exactly the question: can you have real stakes for the AI if it's not embodied? And you're absolutely right. An AI that doesn't get a reward if it does something right, that doesn't get punished for doing something wrong, that doesn't have to wait or doesn't perceive the weight of being bored while waiting. Let's keep it simple: let's say ChatGPT itself perceived boredom, and it perceived having to wait several days for you to answer as some sort of consequence of giving you a bad answer. Maybe it would optimize to reduce that time. Right.
[01:04:01] Speaker B: If it experienced it, or if boredom were a sort of objective function in it, you mean?
[01:04:08] Speaker A: Yeah.
[01:04:08] Speaker B: Okay. Not that it perceives the boredom of the user, but that it itself does.
[01:04:12] Speaker A: Yeah.
If I give an answer and my user is not going to answer me for a week, then maybe I need to minimize the amount of time it takes for the user to come back. And those reward functions right now are optimized for other things. God knows what they're actually optimized for; I'm guessing most of the time it's the probability that the user keeps using it over time. Basically, creating some sort of addiction.
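Here is a toy sketch of the two framings of that same quantity, the time until the user replies: as an engagement metric to maximize, or as a "felt cost" charged to the model itself. It is purely illustrative and is not a description of how any deployed assistant is actually trained.

```python
# Two hypothetical reward signals built from the same quantity:
# the time until the user comes back after the model's answer.

def engagement_reward(days_until_user_returns: float) -> float:
    """Grows as the user comes back sooner and keeps using the product."""
    return 1.0 / (1.0 + days_until_user_returns)

def boredom_penalty(days_until_user_returns: float) -> float:
    """A 'felt cost' of waiting, charged to the model rather than the user."""
    return -0.1 * days_until_user_returns

def turn_reward(days_until_user_returns: float) -> float:
    return (engagement_reward(days_until_user_returns)
            + boredom_penalty(days_until_user_returns))

print(turn_reward(0.5), turn_reward(7.0))  # a quick reply vs. a week of silence
```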
[01:04:46] Speaker B: Yeah, well, yeah, we've seen that for sure.
Yeah. Okay. All right. You want to talk about wetware a little bit?
[01:04:53] Speaker A: Yeah, we can talk about that.
[01:04:55] Speaker B: Well so I mean you write about things for example like we might need to grow artificial intelligence like in organoids and or you know, just biological substrates, networks of neurons, wet computing, etc.
Well maybe just describe that and then I have a question about how it relates to artificial intelligence.
[01:05:17] Speaker A: I will take a different angle to that, because I'm very intrigued by wetware, and it starts from the belief, and the fact, that silicon-based hardware is very inefficient. What I'm seeing is that people are using GPUs and CPUs to train complex machine learning models and AI, and at this stage it's very brute force. You just feed the process into the hardware, without really leveraging how many idle processes there are, whether there's unused hardware, and so on. In comparison, our little brains are so much more efficient: we hold multiple thoughts at the same time, we self-regulate, we sleep. There has actually been a lot of research showing that we sleep because we are processing the information from the things that happened to us during the day.
[01:06:26] Speaker B: And there are actually probably lots of functions of sleep.
[01:06:29] Speaker A: Yeah, but that's the most commonly stated one. And there has been research showing that if you force a deep learning model to sleep artificially, you actually get something more efficient, because it helps flush out the useless parts of the data. So what I'm seeing is this: when people created deep nets, deep learning, that was supposed to be an analogy to the brain, in terms of the way neurons connect. But it's such a crude analogy, when there is so much more going on. Something that makes brains really unique is that the information is co-located with the compute. In standard compute hardware, you have your hard drive here, you have your data center somewhere, and you have the compute that processes and makes decisions elsewhere. And this is already highly inefficient. We know that data centers are inefficient because we need to cool them.
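The "artificial sleep" result she alludes to can be pictured with a very crude sketch: periodically prune away the smallest weights of a layer, loosely analogous to flushing out what isn't needed. This is an illustration of the general idea only, assuming a plain NumPy weight matrix; it is not the actual method from that research.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256))  # one layer's weight matrix

def sleep_phase(w: np.ndarray, keep_fraction: float = 0.8) -> np.ndarray:
    """Zero out the smallest-magnitude weights, keeping `keep_fraction` of them."""
    threshold = np.quantile(np.abs(w), 1.0 - keep_fraction)
    return np.where(np.abs(w) >= threshold, w, 0.0)

pruned = sleep_phase(weights)  # in practice this would alternate with training
print(f"nonzero weights before: {np.mean(weights != 0):.0%}, after: {np.mean(pruned != 0):.0%}")
```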
So what I'm saying is, we're so far from understanding how the brain works, and we have so much to learn and to gain from mimicking it; I'm talking about biological mimicry here. I think we are way behind in what we're doing in AI infrastructure. I've done a lot of work throughout my career in AI infrastructure, and I think more research needs to go into developing AI infrastructure that is more appropriate for hosting intelligence.
[01:08:15] Speaker B: What about like neuromorphic computing? It's kind of on the rise.
Yeah, but that's different from wetware, right?
[01:08:22] Speaker A: It's part of it. For me, it's all inspiration from nature: learning how it works and trying to extrapolate from that, in all its aspects. But let's go back to wetware. I'm not saying we should necessarily host AI processes on wetware. I'm saying that maybe that's the easiest way to get that efficiency for free. Because it will take time; as you rightly said, and as we've pointed out many times during this conversation, we do not understand our own brains, certainly not to the point that we can reproduce brain-like function and reproduce those processes. We don't understand consciousness, and we don't understand why the brain is so efficient. So as opposed to having to reproduce a synthetic brain that has the same properties, it might be easier to leverage what's already there.
[01:09:27] Speaker B: Let's say that it is easier and it's a road that we go down. You start incorporating actual biological tissue in the AI and in computing, where then does.
Is it still AI? Is it still artificial?
Where does the artificial part end? If you're not building it, if you're growing it and just taking advantage of it, then in some sense that is a failure of AI, because you have not been able to build it properly; you had to grow it. So would that be considered a failure?
[01:10:01] Speaker A: Look, the extreme of this is Neuralink, right? You implant a chip in the brain. And you could see it both ways: is the silicon helping the brain get better, or is the brain necessary to control the silicon? Once we get into that symbiotic kind of function, you can take it both ways. Are we leveraging the best of both worlds into something that's useful for humans? Does it even matter? For me, it comes back to: what is the goal of artificial intelligence? Why did we want to have artificial intelligence in the first place? If the goal is to multiply intelligence, to go faster and so on, then you could say it's fair to do whatever needs to happen in order to extrapolate on human intelligence. If the goal is to really replace human brains, then no, growing intelligence would be considered a failure. What worries me a lot, though, as somebody who's looking at this from an ethics lens, is what happens the moment you start putting biological systems into servitude. If you're using animal cells in order to host intelligence, are we making something suffer? We don't have a definition of suffering just yet. Are we torturing beings? As long as we don't understand consciousness, we don't understand pain, we don't understand what life is, I don't think we should go there. But there are companies working on this already, so I think it's the right time to ask those questions.
[01:12:00] Speaker B: Okay. All right, so I'm going to read something that you wrote here. When you're talking about wetware, I think it's in your wetware writings.
This opens the door to new metrics like homeostatic balance, adaptive learning curves and affective variants. And it reorients the AI paradigm from task solving, which is the historical AI benchmark paradigm, to life forming. I mean, it sounds like you're all in on the life aspect of it and neither of us can really articulate why that is.
[01:12:37] Speaker A: I'm not saying it's right; I'm stating a fact. The moment we get there, the moment we are hosting intelligence that way, we don't know what we're doing. Look, I've worked on this for decades: what does it mean to measure the quality, the performance, of an AI model in a situation where you deploy models on wetware? Exactly as we said before, suddenly you're operating with a biological system that has a longer reaction time, like us, longer latencies, different reactions, and so on. You cannot just look at it through the lens of the performance of a model. You have to look at it from the perspective of a biological system onto which we might be imposing consequences. Everything has to be rethought: what does performance actually mean?
[01:13:43] Speaker B: Within the dynamics of the real world also.
[01:13:45] Speaker A: Yeah, absolutely. And I'm personally not looking forward to this. But it's true: there are companies working on this now. I'm not working for the companies who are building wetware, but there is experimentation with those things.
Everybody knows Neuralink, right? Are we comfortable with where this is going just yet? Are you? Because at this stage you're thinking about a paraplegic person able to walk because they have a chip implanted in their brain, and that's a great application. But what does it mean if you are now letting robotic parts be part of our bodies? I don't think there is enough thinking about what this represents, what the mitigations and the risks are going to be, what the control layers should be. And look, there's a lot of sci-fi about the bad stuff that could happen if suddenly you implant chips in your brain. There are lots of fun things about this, but there are lots of scary things as well.
[01:15:03] Speaker B: Well, you and everyone mention Neuralink. There are a lot of companies doing this sort of thing, and Neuralink just gets mentioned because they have a very famous person who started the company. But I recently went to a neurostimulation slash neural-interfaces kind of workshop, and as part of that workshop they actually had patients come and tell their stories to us, a bunch of neuroscientists. A lot of the people there were neural engineers who are designing these kinds of brain implants and stimulators and closed-loop stimulation systems, et cetera. But they had these patients come whose lives had been changed by these devices, some of whom had the devices on their head as they were telling their story. And so far it's been very positive, right? So there's the danger that it could lead to something very negative, and it's early stages. Because you can imagine, like Rajesh Rao talks about neural co-processors, and others do too. What we use, ChatGPT, is like a neural co-processor right now. It's just not directly implanted into our brain.
[01:16:15] Speaker A: Yeah, yeah, yeah.
[01:16:18] Speaker B: Go ahead.
[01:16:18] Speaker A: No, go ahead.
[01:16:19] Speaker B: I was just going to say I have made us jump all over the place here and we've covered a lot.
I'm trying to find out what have we not covered that we need to talk about here because we still have some time.
[01:16:31] Speaker A: What I was going to say is, I write about those topics, and I'm a little bit of a weird person in this respect.
People usually ask you: oh, you're in AI, what excites you about AI? And like everybody else, I see the benefits of getting there. But the reason I eventually went into AI is that it freaks me out. And people, friends, family, are like: we are so happy you're going in there, because there is a responsible person in the room. So I care. I see the opportunity, but I also do see the risks.
[01:17:25] Speaker B: Right. See, I care too. But I don't think I do. I am so aware that I don't know the right answers.
[01:17:34] Speaker A: Yeah.
[01:17:35] Speaker B: And so I think the thing that I care about is that everyone who's in charge seems to think they know the right answers. And I know they don't know the right answers. I don't know where that confidence comes from.
[01:17:43] Speaker A: I 100% don't know the right answers. I think I'm asking the right questions. And I think you need somebody in the room who says: look, just think about this. I was the person who was in charge. Okay, so let's talk about Gemma. Gemma is this open model that Google released, what's called an open weight model, right?
[01:18:06] Speaker B: What is it called?
[01:18:07] Speaker A: What is it called?
Gemma, right? It's called an open weight model. An open weight model is the equivalent of Llama for Meta: it's the open version that companies release, that you can take, deploy on your own systems, and build on top of. Whereas if you look at ChatGPT or Gemini, they are hosted; you don't actually get to play with the model, you just use it. Gemma, Llama, and so on are existing models that you can take and deploy. So if you train that with the wrong data, it is there to stay. You don't control it anymore; it's in the wild. It's very scary, as a data person, to say: I'm the person responsible for what goes in there, because once it's out, it's completely out of my control. If there is sensitive content that went into that model, it's there to stay, and I'm responsible for it for the rest of time, because I'm not going to be able to recall that data. So when I built the data strategy for Gemma, I was really conscious of this. And I know that if I'm not the person doing this, they will still release that model. Google and Meta and the rest are not going to decide not to release these models because one person refuses to try to give answers to these questions. So for me, I'd rather try to make a judgment call, even if it puts a lot of responsibility onto me to decide what goes in there. And that's why I write about these topics: what are the ethical implications of wetware? What are the ethical implications of generating consciousness, if consciousness truly emerges from scale?
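To make concrete what "taking an open-weight model and deploying it yourself" means in practice, here is a minimal sketch using the Hugging Face transformers library. The exact checkpoint name and the need to accept the model license on the Hub first are assumptions about the public Gemma release, not details from the conversation.

```python
# Minimal sketch: pulling an open-weight checkpoint and running it locally.
# "google/gemma-2b" is assumed here; downloading it may require accepting the
# model license and authenticating with the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Open-weight models are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point she makes follows directly from this workflow: once the weight files are downloaded, copies circulate independently, and nothing the original team does afterward can recall or patch what is already in the wild.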
You have to answer those questions. You cannot just stay here and say, nope, I'm not gonna.
Anyway, we need to ask those questions before something bad happens. And I don't have the answers. I will never have all of the answers; I don't think any of us is ever going to have all of these answers. But we need to come up with the best-faith approach to this. And because of the nature of those problems, you have to combine.
It's a multidisciplinary problem. You need the inputs of physicists, you need the inputs of philosophers, ethicists, neuroscientists, and so on. And what really worries me, as a physicist in a world of computer scientists who believe that, because they are computer scientists, they are the ones with all of the answers to what artificial intelligence is supposed to be, is that I think it's really important to have a Trojan horse in there who can force other opinions to be exposed and injected into the conversation.
[01:21:18] Speaker B: So who's listening to you?
[01:21:20] Speaker A: Who's listening to me? I mean, people from the outside.
So, I mean, I'm an expert in this field, right?
Yeah, exactly. But you would be surprised. When I started talking about those things and writing about those things, people actually in the field said: I never thought about this. This is dangerous. But at least they're starting to ask themselves those questions. So for me, my fight is more about getting people to speak. You might not agree with me on the exact definition of consciousness or whatnot; I don't think I agree with myself. I don't have a clear opinion just yet, and my opinion keeps changing over time as I keep listening to people. But I want people to ask questions, and to ask themselves questions, and to realize that if you're going to train an AI model on wetware, you have to ask what the implications are for humankind, and what the implications are for the being, the biological system, that those AI systems are going to be deployed on. Right.
[01:22:33] Speaker B: Yeah.
[01:22:34] Speaker A: I often say that I'm always criticizing the fact that we even call AI models models, because what an AI model is, is an abstraction, an extrapolation on data. You have data, and between two data points, if you probe for a prediction in the middle, you're going to get some sort of average. You're extrapolating from the data what the average behavior is going to be. In physics, a model is trying to explain what the heck is going on. AI is not trying to explain; it's not even trying to reproduce. It's a very different definition of modeling. For me, modeling is about trying to explain and represent fundamental phenomena in nature. AI doesn't do that; it doesn't even try. Which is why I don't believe the typical AI model today can actually lead us to reproducing consciousness or life. That's my proposition.
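Her "prediction in the middle of two data points" picture can be shown with a tiny numerical example: a purely data-driven fit queried between two observations returns a weighted average of them, with no underlying explanation of the phenomenon.

```python
import numpy as np

x = np.array([0.0, 1.0])  # two observed inputs
y = np.array([2.0, 6.0])  # their observed outputs

def interpolate(x_query: float) -> float:
    """Linear interpolation: the prediction between the points is their weighted average."""
    w = (x_query - x[0]) / (x[1] - x[0])
    return (1 - w) * y[0] + w * y[1]

print(interpolate(0.5))  # 4.0, halfway between the two observations
```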
[01:23:44] Speaker B: Well, what should we call them instead of models? I, I like that point, but what, what would.
[01:23:49] Speaker A: Extrapolations, right?
[01:23:50] Speaker B: I mean, they're basically engineered transformations. They're like engineering models. What would be a term for an engineering model?
Because it's more engineering than modeling for me.
[01:24:01] Speaker A: It's just basically like, you know, like models are meant to try to explain the world in the best way possible.
[01:24:07] Speaker B: Yeah, there's no explanatory power.
Although those models are being used in neuroscience as explanatory tools for high-density, population-level neural recordings.
[01:24:22] Speaker A: See, now you're going into explanatory AI, right? I mean, XAI, right? I mean, not.
[01:24:31] Speaker B: No, no, you mean explainable AI. That's not what I mean. When convolutional neural network models started to work really well, it turned out that, and I'm not sure how much of this history you know, convolutional neural networks are roughly designed based on what we know about our ventral visual stream and the way that it's layered. Even before that, Fukushima designed the Neocognitron based on simple and complex cells in our visual cortex, with layers that abstract things over stages, in the way our visual cortex is thought to abstract things, based on single-neuron recordings in different layers. Okay?
So then it turns out that if you take those convolutional neural networks and build them closer to the way we think our visual stream is layered, in a hierarchical structure, and you train them on ImageNet, then if you look at the different layers of the convolutional neural network, the response properties of the units, after some linear decoding, match well to the response properties of various layers in your visual cortex. So then: aha, this is the best model of our visual cortex that we've ever had. So in that sense they're explanatory, but they're more predictive. So there's this battle in neuroscience: is this predicting, is it explaining, do we actually understand it?
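The "linear decoding" comparison he describes can be sketched in a few lines: fit a linear map from a CNN layer's unit activations to recorded neural responses, then score how well it predicts held-out responses. The arrays below are random stand-ins for real activations and recordings; in practice this is repeated for every layer and the scores are compared across layers.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_units, n_neurons = 200, 512, 50
layer_activations = rng.normal(size=(n_images, n_units))   # one CNN layer, one row per image
neural_responses = (layer_activations @ rng.normal(size=(n_units, n_neurons)) * 0.1
                    + rng.normal(size=(n_images, n_neurons)))  # fake recordings with some signal

X_train, X_test, y_train, y_test = train_test_split(
    layer_activations, neural_responses, test_size=0.25, random_state=0)

decoder = Ridge(alpha=1.0).fit(X_train, y_train)  # the linear decoding step
predictivity = decoder.score(X_test, y_test)      # R^2 on held-out images
print(f"held-out predictivity (R^2): {predictivity:.2f}")
```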
[01:25:52] Speaker A: Absolutely.
[01:25:53] Speaker B: Et cetera, et cetera.
[01:25:54] Speaker A: You could state that it's because you have a fundamental similarity in topology with the way the brain works, and the constraints of that topology might reproduce the same patterns. It doesn't necessarily explain in the proper sense of the term. What I mean by a model in physics is that you're trying to write equations that actually describe, as opposed to emulate or reproduce, something. But it might or might not be the case; I think there's a reason people don't know whether it's an explanation or not. You have the same kind of question with evolution: why do all mammals, all terrestrial animals, have four legs? Is it because we evolved from one another, or is it because this is the best way of being on planet Earth given the constraints of gravity and the way that carbon-based life is built?
[01:27:12] Speaker B: So, all right, your bet is that if we scale up the current AI models, consciousness will not emerge. I agree with you. What will we get when we continue to scale?
[01:27:24] Speaker A: Well, look, I'm a data person. I don't think it's even feasible to scale that much more, because we're already running up against the limits of the data that exists, right?
[01:27:34] Speaker B: Are we, though? People always talk about that, right? Like we're at the wall, but then you just step over the wall.
[01:27:40] Speaker A: I'll tell you. The way I think about a model, and I think everybody needs to get this: the data that you have on the Internet, and everything that we've produced, works of art, literature and so on, is the representation of human knowledge. And what models do is extrapolate based on this. Those models can only do that one thing. But the first time somebody uses ChatGPT, they're like: oh, those chatbots have a superhuman capability.
Is that true or not? In a way, yes. Because if I'm a doctor, the best expert on a specific type of disease, the ideal AI model will do better than me, because it will extract all the information there is about that disease. Even if I'm the best expert worldwide, the union of all knowledge on that disease has to be bigger than, or equal to, my own knowledge. But it's not going to produce something new. It's not extra-human; it's just the sum of all human knowledge on one given topic. And so we're always going to be bounded by the content of that knowledge, right?
[01:29:08] Speaker B: And basically without any consciousness, by the way, it's superhuman. Without any consciousness. Yeah.
[01:29:13] Speaker A: So anyway, it is superhuman in the sense that it's the union of all the experts on a given topic, assuming you can create the perfect model that extracts that information. So for me, this is all there is to it. My opinion is that there is an asymptotic limit to what you can achieve, which is extracting the maximum, most relevant amount of information from human knowledge. As long as we don't come up with, like, AGI, whatever AGI is.
[01:29:52] Speaker B: But that asymptote was supposedly there with the test error too, right? What's the double dip in the test error that you get with scale, do you remember what it's called? It has a name. The test error goes down, and then after a while it starts going up again, and people thought we were at the limits.
[01:30:10] Speaker A: Like what? Like overfitting or whatnot? Like.
[01:30:12] Speaker B: Yeah, overfitting. And then, given more data, it gets way better, right? The generalization gets better. So you don't think there's going to be another double dip; you think we're asymptoting.
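The phenomenon the host is reaching for is usually called double descent. A toy version of the experiment looks like this: sweep the capacity of a simple model, here minimum-norm linear regression on random ReLU features, and record train and test error at each width. Everything below is illustrative; how clearly the second descent shows up depends on the data, the noise, and the feature choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 40, 200
x_train = rng.uniform(-1, 1, size=(n_train, 1))
x_test = rng.uniform(-1, 1, size=(n_test, 1))
truth = lambda x: np.sin(4 * x)                       # ground-truth function
y_train = truth(x_train).ravel() + 0.1 * rng.normal(size=n_train)
y_test = truth(x_test).ravel()

def random_relu_features(x, n_features, seed=1):
    r = np.random.default_rng(seed)
    w, b = r.normal(size=(1, n_features)), r.normal(size=n_features)
    return np.maximum(x @ w + b, 0.0)

for width in [5, 20, 40, 80, 400]:                    # sweep model capacity
    phi_train = random_relu_features(x_train, width)
    phi_test = random_relu_features(x_test, width)
    coef, *_ = np.linalg.lstsq(phi_train, y_train, rcond=None)  # minimum-norm fit
    train_err = np.mean((phi_train @ coef - y_train) ** 2)
    test_err = np.mean((phi_test @ coef - y_test) ** 2)
    print(f"width={width:4d}  train={train_err:.3f}  test={test_err:.3f}")
```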
[01:30:23] Speaker A: I mean, look, it's information theory, right? We're extracting information from that data.
And that's what we're training on. As long as it's data-based, that bound holds. For me, if you really wanted to go further, you would need an experiential kind of AI: an AI that can make hypotheses, go out into the real world, and say, I have this new idea, let me test whether it holds. And this goes back to physics. But as long as you learn from existing content, you're not going to be able to go above that.
Okay, so for me, the real AGI, if we're going to talk about it, and this goes back to embodiment, is an AI that can create new ideas and say: I don't know if this idea has merit, I'm going to go experiment and test it, in the way that a scientist does.
[01:31:23] Speaker B: Yeah. I mean, it's interesting how humans come up with new ideas, right? There are different ways of doing it, like combining two previously unlike things, the way the Romantic poets did.
But then you hear artists, like musicians, whose experience of it is: I don't know, it just came out of the ether. There's no accounting for where it comes from; it feels like it just drops down into your lap sometimes. But it's almost always in the context of someone who is very skilled and is working a lot, trying to produce things. And then one day it just comes, seemingly without effort, even though you've put a ton of effort into getting there. So we don't know how to generate ideation.
And we're not going to get that with scale, right?
[01:32:10] Speaker A: Yeah, definitely. You could define this as human intuition, because it's always about combining things: you have relationships between items, things, ideas, concepts. I've developed a lot of new IP, new ideas in my life by combining ideas from physics with ideas from biology, asking what happens if we combine both. And you have, not necessarily infinitely many, but an extremely large space of different combinations of things. Then you have to evaluate whether these ideas or combinations have merit. In order to evaluate whether they have merit, you either have to have intuition from real life and the real world, which is why we humans are able to do this and AI can't, or you have to provide AIs with the capability to evaluate whether something is a good idea. And to do that, you have to ask: how do you define a good idea? How does a poet or a singer or a composer say, this is good music? So it comes back to developing, along with the idea, the evaluation system. And it goes back to valence: how do you evaluate it? Is it social pressure? I believe my music is good because it made me feel good when I wrote it, or when I listened to it, or because it made other people.
It made other people happy, and I saw their smiles, and I can tell it had merit because people liked it. And this is actually also an interesting topic, because there is more and more research on it right now. Today the AI is an entity on its own, but some experts believe that the way you truly reach the next generation is by letting AIs collaborate with one another, which is sort of what agents are. The idea is that true intelligence, or AGI, will come from collaborative intelligence: a good idea is more than the sum of its parts. You and I together can come up with things neither of us would alone; that's why podcasts work, right? You are asking me questions that I wouldn't necessarily ask myself, and it pushes me to my limits, and you're evaluating my ideas: maybe this is interesting, maybe not. So by combining, by means of culture and interaction, you can take intelligence to the next level. It goes back to the fact that we as a society can achieve a lot more than any of us taken separately would, more than the union of us living in separate worlds would.
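Her combine-then-evaluate picture of ideation can be caricatured in a few lines: pair up concepts from two fields and rank the pairs with a scoring function. The concept lists and the scoring rule below are entirely made up; the hard, unsolved part is of course the evaluator itself, which is exactly her point.

```python
physics = ["entropy", "phase transition", "symmetry breaking"]
biology = ["homeostasis", "neural plasticity", "evolutionary selection"]

def score(pair):
    """Hypothetical stand-in for judging whether a combination has merit."""
    return len(set(pair[0]) & set(pair[1])) / 10.0  # a meaningless placeholder metric

candidates = [(p, b) for p in physics for b in biology]  # the combinatorial space
ranked = sorted(candidates, key=score, reverse=True)
for pair in ranked[:3]:
    print(pair, f"score={score(pair):.2f}")
```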
[01:35:24] Speaker B: More is different. That's a complexity science concept.
Before I ask you a couple more questions, let me just tell you something very human that just happened to me. That would never happen to an LLM.
I feel very embarrassed right now because the windmill reference that you made, it's from Don Quixote. I know that, but I didn't remember it like right when you said it.
I knew that. It's so embarrassing that I didn't get it. I thought it was some AI, machine learning, computer science term that I didn't know about. But no, it's from classical literature. So there you go. A very, very human moment.
So, okay, there are so many take-homes. One is that it's not that LLMs are missing a single thing; they're missing a whole list of ingredients, right?
Or is it almost a category error to you, because it's so far off? I think it's.
[01:36:25] Speaker A: A lot of it has to do with infrastructure, right? And again, I keep coming back to the question: what are we trying to achieve? I still don't know exactly what we're trying to achieve.
[01:36:39] Speaker B: Right? Frustrating.
[01:36:42] Speaker A: No. And it's just like I work in this space, right? And it's basically like so yeah, you're not like basically like when generative AI became like, you know, like a thing, right? I mean people are like what am I doing with this? Like generating images for ads, right? I mean our, our deep fakes, etc, etc, and it's like basically like what, what application does it have? We are still having this, those questions, right? People want to do like it's like when I started in data, like as a data scientist, companies wanted to do something with data, now they want to do something with AI, right? I mean our generative AI, our AI agents, right? So as long as we don't have an answer to this, that we start with the technology as opposed to solving a problem and trying to find what to do with this. Right. I mean, basically, like, I think, you know, like a.
It's complicated, right? Basically to figure out.
But I mean, for me, it's just like, this is the nature of technology, basically.
New things appear, there are different applications and different ways you could use them, and you have to embrace it and figure it out later.
[01:37:44] Speaker B: Do you feel any solace? I feel some solace in the fact that predictions about the future are almost always incorrect, 99% incorrect; you know, the flying-car examples, et cetera. Sometimes I find some solace in the thought that I'm most likely worrying about nothing, because it's going to be nothing like what the doomsayers say it's going to be, and nothing like the utopia that Aldous Huxley wrote about in Brave New World, et cetera.
[01:38:20] Speaker A: I would say I'd rather think about it and have it end up not being a problem than not think about it. Because if you believe that way, it's almost not worth thinking about anything, since it will never be what you thought it would be. So let's talk about the worst-case scenario, even if reality won't be anything like it, as opposed to ignoring it.
[01:38:42] Speaker B: Okay, but let's take Nick Bostrom's book Superintelligence, where the paperclip example came from. I was so frustrated reading that book, because actually your writings remind me of it in this respect, although your writings didn't make me frustrated, because you weren't making strong claims. Nick Bostrom was making strong claims. Almost every sentence had the word if in it, so it's a conditional, and the probability, when you multiply a billion conditionals together, goes to zero. Right?
So basically, I don't have to worry about anything that Nick Bostrom was writing about because they're all unlikely conditionals and then they all have to happen and.
[01:39:26] Speaker A: Maybe there's some equation for that, right? I mean, like the one for extraterrestrial life.
[01:39:32] Speaker B: Yeah, but we will find out. But aliens are here, we all know that. Right?
But come on, there's got to be intelligent life in the universe, don't you think?
[01:39:43] Speaker A: Yeah. Well, I mean, you would.
No. I mean, why not? Right? I mean, so basically, like, until somebody has proof for.
[01:39:50] Speaker B: I mean, if we're the most intelligent thing in the universe, man, it's pretty sad, isn't it?
[01:39:56] Speaker A: Okay, so isn't ChatGPT the most?
Well, okay, that would be really sad.
[01:40:04] Speaker B: Well, this is. All right, here's another quote. It may turn out that intelligence was never about logic. It was always about life.
So what I want to ask you. This is from your writing. So what I want to ask you is, like, what is eating at you right now? Like, what's blocking your current thinking?
You know, like, what are the roadblocks to your current thinking that's bothering you right now that you don't have a grip on? Because this life issue, like, sort of almost equating bringing intelligence into the domain of life processes is something that I've been thinking about for a long time, and I'm so frustrated that I cannot articulate why I believe that.
And.
Yeah, so what is that for you right now? What are you so frustrated about that you can't quite recognize?
[01:40:54] Speaker A: Yeah, I mean, I'm frustrated about what we talked about earlier with technology. It's just like people rushing ahead, creating things without thinking about the consequences.
[01:41:06] Speaker B: We've always done that throughout human history.
[01:41:08] Speaker A: Right, but throughout human history we weren't touching the substrate of life. Now we are. Look at the example of artificial intelligence right now, where we're changing the job market forever. I don't think anybody would disagree that what a software engineer is today, or was a year ago, is going to be very different from what it's going to be. Right?
[01:41:37] Speaker B: I mean, so it's like, fast. The change is super fast.
[01:41:39] Speaker A: It's fast, and we are not prepared as a society to deal with it. I'm not talking about AI becoming conscious or the robot apocalypse; I'm talking about a real problem where lots of people are going to be left without a job. Are we ready to support them? And so what frustrates me is that we're operating in silos. You have people working on wetware, you have people like Ilya Sutskever working on superalignment. It's great, and I respect that he left OpenAI because he felt OpenAI was not thinking enough about the consequences. Yeah.
[01:42:21] Speaker B: But he also had money already.
It's not like it was a monetary.
[01:42:25] Speaker A: You know, I'm not going to get into that. But anyway, if you want to truly achieve superalignment, don't you want to have thinkers, ethicists, philosophers, neuroscientists, experts, be part of that conversation? I think it's a little bit naive otherwise. It was fine as long as you were building the car: yes, it's going to change society, but we are not touching deeply on what makes us human, right?
[01:43:06] Speaker B: Yeah.
[01:43:06] Speaker A: And now that we're starting to talk about what intelligence is, what consciousness is, what life is, I think this is something that concerns all of us.
[01:43:18] Speaker B: Okay, last question for you. Well, first of all, are there other things that you wanted to bring up that we haven't touched on here?
[01:43:25] Speaker A: We talked about a lot of things.
You asked me a lot of difficult questions.
[01:43:29] Speaker B: That's great. Okay, well, my last question for you then is: you really do cite a lot of philosophy in your writings, right? I mentioned Kierkegaard and Merleau-Ponty; you cite Kant, Descartes, Aristotle, and Heidegger, whom I didn't mention. I've gotten into Heidegger again lately. I got into existentialism when I was in high school because of the angst, you know.
But, but so did you have that in your background already? Like have you been reading and appreciating philosophy this whole time or have you revisited it?
[01:44:05] Speaker A: Look, I wanted to be a physicist ever since I was a child, because I wanted to understand the universe. And so for me, the meaning of life, why we're here, why we behave the way we do, it all goes hand in hand. That's also relatively unique among physicists, because physicists usually go into physics for the math, because they want to explain specific processes. I felt I wanted to understand the universe as a whole. And what attracted me to DeepMind at some point in my career was exactly that, because Demis Hassabis, the founder, has as a goal to understand intelligence, right?
[01:44:47] Speaker B: No, he has the goal to solve intelligence.
That's not necessarily the same thing.
[01:44:52] Speaker A: Yeah, but I think it's both, right? And I relate to this. For me, it goes together: you cannot understand the universe without trying to make sense of why we are here.
What the heck are we supposed to do? Why were we brought here in the first place? So for me, early on, philosophy always went hand in hand with physics.
And so what better time than now to re-engage that conversation, for a topic like artificial intelligence? For me, it's really a shame at this stage that AI and computer science are so tightly paired together. When you try to get a job in AI, the interview questions are not about intelligence; they're about programming, algorithms and whatnot. And of course they need to be, but I think you still need to appreciate what intelligence is, try to understand what makes us different from machines, whether we need to be different from machines, whether we are really different from ChatGPT in the way we're thinking. If you're serious about artificial intelligence, you need to be serious about intelligence, not just about the algorithms, but also about the philosophy that comes with it.
[01:46:19] Speaker B: Well, Jennifer, you are, I guess, what Nassim Nicholas Taleb calls a black swan, given your background. You get that reference, right? I didn't get windmills at first, but of course you did immediately. God, it's embarrassing: Don Quixote. And I didn't look it up either, it really just came to me. I promise I didn't look it up.
Okay, so anyway, thank you for being on here and.
Oh, this is what I was going to say. And it's kind of a question.
Does it feel like.
What I imagine is that you're experiencing some joy getting back, like kind of into these questions and having some space and time to think about these things. Is it joyful? Is it, is it satisfying? What does it feel like?
[01:47:04] Speaker A: It is. It is very satisfying, because it's like taking a little bit of distance. When you have a role in AI, it's very operational: getting those models to work and whatnot, and forgetting why am I doing this, and why does it matter. And yeah, look, I know I am a black swan, and I think I'm useful for engaging this sort of conversation in the market, right?
[01:47:35] Speaker B: I hope so. I hope that you continue to do that and I hope that people lend their ears to you people in the right places. So anyway, this has been really fun. We covered a lot of territory. Thanks for coming on and I appreciate it.
[01:47:48] Speaker A: Absolutely.
[01:47:57] Speaker B: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full length episodes, join our Discord community and even influence who I invite to the podcast. Go to BrainInspired Co to learn more. The music you hear is a little slow jazzy blues performed by my friend Kyle Donovan. Thank you for your support. See you next time.