[00:00:00] Scott Aaronson: So, you know, these are some of the philosophical enormities that I think, once you have AI that can perform at human level or simulate humans, are going to be activated.
[00:00:22] Al Scott: The Rational View is a weekly series hosted by me, Dr. Alan Scott, providing a rational evidence based perspective on important societal issues.
[00:00:34] Soapbox Media: Produced by Soapbox Media.
[00:00:38] Al Scott: Hello and welcome to another episode of The Rational View. I’m your host, Dr. Al. In this episode, I’m continuing my investigation into the so-called hard problem of consciousness.
[00:00:49] I’ve spoken to several people who believe that consciousness arose in single-celled organisms and is somehow integrated at higher levels through electrical synchronization or [00:01:00] intercellular molecular transport into some sort of a unified experience. Hindus and Buddhists that I’ve talked to believe that there’s a universal consciousness of which we all partake in some way, and this is actually similar in some ways to Sir Roger Penrose’s theory of consciousness called orchestrated objective reduction, where microtubule organelles in the brain’s neurons have evolved to concentrate this diffuse universal consciousness present in the collapse of quantum superpositions.
[00:01:28] Some of these folks believe that the randomness at the heart of quantum mechanics is necessary for free will and volition. Others, like Bertrand Russell, believe that we act in accordance with our will even if our actions have past causes and the future is predetermined. Today, I’m honored to be interviewing an expert who pushes the limits of human knowledge in terms of our understanding of the implications of quantum computing in regards to artificial intelligence.
[00:01:53] If you like what you’re hearing, please press like on your podcast app and share it with your friends. Love to see you on our Facebook group, The [00:02:00] Rational View.
[00:02:02] Scott Aaronson is David J. Bruton Centennial Professor of Computer Science at the University of Texas at Austin, and was previously at MIT. He received his bachelor’s from Cornell University and his PhD from UC Berkeley.
[00:02:17] Aaronson’s research in theoretical computer science has focused mainly on the capabilities and limits of quantum computers. His first book, Quantum Computing Since Democritus, was published in 2013 by Cambridge University Press. He’s received the National Science Foundation’s Alan T. Waterman Award, the United States PECASE Award, the Tomassoni Chisesi Prize in Physics, and the ACM Prize in Computing.
[00:02:44] And he’s a fellow of the ACM. Dr. Aaronson, welcome to The Rational View.
[00:02:49] Scott Aaronson: Thanks. It’s great to be here.
[00:02:52] Al Scott: So I’ve been exploring theories of consciousness and how it arises, and it’s such a diverse field. A significant fraction of the [00:03:00] people I’ve spoken with have said that computers cannot be conscious, that we need continuous processes.
[00:03:04] What’s your position on the sentient possibilities of artificial intelligence?
[00:03:10] Scott Aaronson: Okay, this is the kind of thing you want to take a deep breath before, right? Yes. So, I will confess that I do not know which kinds of physical systems can and cannot be associated with consciousness. I regard that as not merely a confusing question, but in a certain sense the confusing question, one of the most confusing questions that human beings have ever asked. I mean, it has all sorts of obvious moral and ethical implications.
[00:03:43] At what point does a fetus become conscious? What about a coma patient? What about various animals? What about AIs of, you know, possibly the not-so-distant future? Now, what would be the argument that a [00:04:00] continuous process is necessary for consciousness?
[00:04:02] Like, what is so special about something continuous as opposed to a digital process?
[00:04:11] Al Scott: It’s not clear to me what the argument is. I interviewed Dr. Arthur Reber, who’s a philosopher who believes that cells are sentient, and he holds that there’s a significant difference between a simulation of a cell and a cell. The continuous real thing is qualitatively different than a simulation of the same thing. Now, I know this kind of falls into the question of artificial realities, you know, and questions about is nature computable and can we simulate the world.
[00:04:43] Scott Aaronson: Yeah. So, you know, I’m not sure that this distinction people want to make between a real thing and a simulation of the thing is really going to answer for us the questions that we want answered, right? Because there were skeptics of A.I. who [00:05:00] would make this point for a long time: that a simulation of a hurricane doesn’t make anyone wet. Right. To which one might reply, well, what about a simulated person inside of that simulation, right? They would certainly react within the simulation as if they were getting wet, right? So at what point does the magic pixie dust of reality get imbued into something, you know? When does Pinocchio become a real boy?
[00:05:29] Right? But I really loved the way that Russell and Norvig approached this issue in their famous textbook on artificial intelligence, right? Where they say, well, it might be true that a simulated hurricane will not make you wet. But imagine someone looked at a calculator and said, obviously it’s not really multiplying the numbers, it’s only simulating multiplying them. Right. This would just be nonsense. We would have trouble even parsing it, because [00:06:00] it seems obvious that a simulation of multiplication is multiplication, right? Or you could say that the “is a simulation of” operator just acts identically on multiplication, right?
[00:06:15] So then the question is, well, what is consciousness? Is it more like a hurricane or more like multiplication? And lacking a theory of what consciousness is, or even less than that, lacking any agreed-upon criteria for identifying what is or isn’t conscious,
[00:06:37] You know, how on earth are we supposed to be able to answer that sort of question?
[00:06:41] Al Scott: It’s very difficult to have a discussion about something that you can’t define. It’s qualia. This is the word that’s been invented to make this a hard problem. But you can’t distinguish a universe with qualia from a universe without qualia, as far as I can tell.
[00:06:56] Scott Aaronson: Except, presumably, you wouldn’t be in the latter, because [00:07:00] no one would be experiencing anything there. But yeah. You could say that the entire way that science has made progress since Galileo and Newton has been by setting this kind of question aside. Right, by saying: we are going to write down a mathematical model of the physical world, and we’re going to be satisfied if we can predict the appearance of color, you know, if we can predict, based on atomic spectra or whatever, what is going to appear red or blue, how fast something is going to go. Right? We are not going to obsess over, well, okay, but is my red the same as your red? And what is the true experience of seeing red? Right? And for those questions, one could argue whether there has [00:08:00] been any progress since Democritus, who very recognizably talked about these sorts of questions 2,400 years ago. Okay.
[00:08:09] But at least, if you put those sorts of questions to the side, then you can make progress. And so then the question is, well, the progress that you make, sort of not explicitly talking about qualia or consciousness or whatever, is it ever going to make contact with the hard problem of consciousness? Or is that question just in some completely different magisterium, serenely untouched by all of the ordinary progress in science?
[00:08:38] Al Scott: Some people would argue that it is an unanswerable question. I think many physicists and philosophers have said that, you know, you cannot answer it.
[00:08:47] Yeah. And I’m not of that mind yet. I’m somewhat agnostic, as perhaps you are. I think, you know, let’s keep pushing forward on where we can push, and we’ll look under the light for [00:09:00] consciousness until we can spread the light a little further.
[00:09:04] Scott Aaronson: Yeah. Well, I mean, like with every famous unsolved math problem, like the P versus NP problem, which is the one that I know the most about, right, you will get people saying, well, why do you assume that it has an answer? What if it’s just independent of set theory? Right? And the answer to that question is always: well, ever since Gödel, we indeed can’t rule out such possibilities for any question that hasn’t already been answered. You know, with a few exceptions, like whether White has a win in chess; we know that that question has an answer even if we can’t prove it. Right. But other questions might just be undecidable, let’s say, from the currently accepted axioms of set theory.
[00:09:51] Okay. But then you have to push back and ask, well, could we prove that? You know, if you could at least prove that the question is [00:10:00] undecidable, then that would give you maybe a meta kind of resolution. Right. And that was done for a few questions, most famously the continuum hypothesis and the axiom of choice in the 1960s.
[00:10:16] But for most of the questions that we care about in math, even if they’re independent, we don’t know how to prove independence. Right. And so I would say the same thing with the hard problem of consciousness. Even if you could just convincingly show that it has no solution, then that itself would constitute progress of a sort. But I don’t know how to do that either. Mm.
[00:10:38] Al Scott: Could we just step back a second? Many of us, including myself, are probably not familiar with this.
[00:10:43] Scott Aaronson: The continuum hypothesis? Okay, well, the continuum hypothesis was maybe one of the most famous unsolved problems of mathematics, and in particular set theory. It was posed by Cantor in the late 19th century.
[00:10:59] So, [00:11:00] Cantor very famously discovered that there are different levels of infinity, right? So there’s the infinity of the natural numbers, you know, 1, 2, 3, 4, and so forth. And you can argue that that is equivalent in a sense to the infinity of even numbers, in the sense that you could put the two in one-to-one correspondence with each other, right? Like, you could marry off one with two, and two with four, and three with six, and so forth, so that every natural number would be matched with exactly one even number. Okay. So you could say, even though it seems like there are only half as many even numbers as there are natural numbers, those are the same degree of infinity, because the two sets can be placed into one-to-one correspondence, and similarly with the rational numbers and so on.
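The pairing described here can be sketched in a few lines of Python; the function names are just illustrative. Each natural number n is married off with the even number 2n, and the map is invertible, which is exactly what a one-to-one correspondence means.

```python
# Cantor-style pairing: every natural number n is matched with
# exactly one even number 2n, and vice versa, so the two sets
# have the same cardinality even though one "looks" half as big.

def to_even(n: int) -> int:
    return 2 * n

def from_even(e: int) -> int:
    return e // 2

# Check the pairing on a finite prefix: it is invertible, so no
# natural number or even number is ever left unmatched.
naturals = range(1, 11)
evens = [to_even(n) for n in naturals]
print(evens)  # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
assert all(from_even(to_even(n)) == n for n in naturals)
```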
[00:11:52] But then Cantor made one of the most shocking discoveries in the history of math, which is that if you look at the infinity [00:12:00] of real numbers, you know, the points on a line, that is a greater infinity, okay? That cannot be placed in one-to-one correspondence with the infinity of natural numbers. No matter how you try to do it, there will always be real numbers left over. Okay? And this was proved via what is called Cantor’s diagonal proof, which then inspired most of the subsequent work in mathematical logic, from Bertrand Russell and Whitehead’s Principia, to Gödel’s incompleteness theorem, to Alan Turing’s work on computability.
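A finite toy version of the diagonal proof can be run in Python. Given any list of binary sequences (standing in for real numbers in binary), flipping the diagonal produces a sequence that differs from the k-th entry in its k-th digit, so it cannot appear anywhere in the list; the real argument applies this to infinite lists.

```python
# Finite illustration of Cantor's diagonal argument: the escape
# sequence differs from row k at position k, so it is not in the list.

def diagonal_escape(rows):
    return [1 - rows[k][k] for k in range(len(rows))]

rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
d = diagonal_escape(rows)
print(d)  # [1, 0, 1, 1]
assert all(d[k] != rows[k][k] for k in range(len(rows)))
```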
[00:12:36] But, you know, Cantor made these enormous discoveries, and then he posed the next question, which he wasn’t able to answer. There’s the infinity of natural numbers, which is called aleph-zero. That’s the smallest infinity.
[00:12:50] And then there’s the infinity of real numbers, which is called the continuum, or it’s also called two to the aleph-zero power. Okay. And that is [00:13:00] provably a greater infinity. Okay? He then asked: are there any infinities in between the two? Is there any set which is larger than the natural numbers but smaller than the real numbers in its cardinality, any intermediate infinity? And he could not prove or disprove that there was one. And supposedly, you know, he worked on that for decades and it drove him insane. He literally died in a mental institution. Okay?
[00:13:30] Al Scott: But it’s hurting my brain right now.
[00:13:32] Scott Aaronson: All right. But then David Hilbert in 1900, when he announced his 23 greatest math problems for the 20th century, the continuum hypothesis was number one.
[00:13:45] Okay. That was his first problem. And people didn’t know what to do with it until finally, in the 1930s, Gödel, and this was just a few years after he proved his incompleteness theorem, Gödel managed to show [00:14:00] something about the continuum hypothesis, which, by the way, is the statement that there are no intermediate infinities.
[00:14:06] That is, that the real numbers are the next bigger infinity after the natural numbers. He proved that that was consistent with the currently accepted axioms of set theory. Okay, so in other words, if set theory itself is consistent, and, you know, by Gödel’s incompleteness theorem, set theory can’t prove its own consistency, but we can assume that it is consistent, right? If you assume that it’s consistent, and then you just add in the assumption of the continuum hypothesis, that the continuum is the next infinity after aleph-zero, then you will not create any contradiction. That will not create any inconsistency.
[00:14:52] Okay. But then, in the early 1960s, another guy, Paul Cohen, proved that if you add in [00:15:00] the negation of the continuum hypothesis, that is, if you assume that there are intermediate infinities, then that also will not create any inconsistency.
[00:15:09] Okay. So, in other words, either answer is consistent with the axioms of set theory. So on the basis of the axioms of set theory that were accepted through the 20th century, the continuum hypothesis is provably unsolvable. If you want to solve it, then you have to introduce new axioms, and you have to convince everyone that your axioms really are correct or true axioms, and people continue to argue about it to this day.
[00:15:37] Okay. So we now know in math that a question that people cared about can have that sort of an answer. Right. And of course you might wonder whether the hard problem of consciousness is similarly just sort of independent from, you know, all the reasoning that we are able to do as creatures in the empirical world. Right? But [00:16:00] if so, I wouldn’t say that we know that either.
[00:16:05] Al Scott: Now that’s very interesting. A lot of that’s over my head, but very, very cool. I know you’ve spoken with Roger Penrose, or spoken at an event with Roger Penrose, discussing his theory, and I’ve read his book Shadows of the Mind, where he argues, you pronounce it Gödel? Gödel, yeah. So he argues that Gödel’s theorem shows that our thinking is not computable. The process of consciousness is not computable. We are thinking beyond computational, beyond the capabilities of computational systems.
[00:16:41] What’s your response to that position?
[00:16:43] Scott Aaronson: Yeah, I don’t think Penrose is right about that, to be honest. And I’m not staking out a weird position here. I would say that almost every mathematician and computer scientist, everyone who knows Gödel’s theorem and has looked into this, has [00:17:00] said no, that this argument just doesn’t work. But I’m happy to spell out for you the usual reasons that would be given as to why Penrose’s argument just doesn’t do what he wants it to do. The issue is that if I’m observing someone, if I’m looking at a mathematician and saying, wow, they’re doing something really impressive, they really understand what’s going on, and they can not only use these axioms, but they could propose new axioms.
[00:17:33] But the truth is, I don’t know what’s going on inside of their head. Right. I just see their behavior. I see that they publish these papers. I see that they answer these follow-up questions and so on. And now, for any behavior that I see them doing, including even inventing new axioms, inventing new math problems, inventing radical new mathematical insights, I could imagine an AI that would do the [00:18:00] same thing, right? I could certainly imagine programming a computer to have that same behavior. You may have played around with GPT-3, which is maybe the strongest AI in the world right now. Right. It’s a text engine, and it can often give you a reasonable argument, similar to what a high school student would give you, maybe. Although if you ask it to justify something false, it will just as happily do that, right? It will just sort of run with any premise that you give it. So you could say, clearly this is not yet at the point where it is going to challenge Penrose’s thesis. Right. Although it’s much more impressive than I expected any AI to be by this point in history.
[00:18:45] But now imagine, instead of GPT-3, we have GPT-10, or GPT-20, right? And imagine that you could interact with it. You could give it a mathematical Turing test, as it were. Right. [00:19:00] And it will just act indistinguishably from the most brilliant human mathematician that you know, right?
[00:19:07] And so then you would say, is set theory consistent, right? It would say, well, from within the axioms of set theory, you can’t prove its own consistency. But yes, I believe that it is consistent, right? Because I have some intuition for it. And now the truth is, a human mathematician wouldn’t have been able to tell you anything better. That’s just the same as a human mathematician would’ve been able to say, and you could imagine some machine learning procedure that would learn to say the same thing when asked the same question.
[00:19:42] Okay. And so now, in order to say why that doesn’t count, Penrose is forced to retreat to some kind of internal criteria. Okay. So he’s forced to say, well, yeah, the AI might say that it believes that the axioms of set theory are [00:20:00] consistent, or that they have a model or whatever, but it doesn’t really mean that, right? It’s just parroting what was in its code, whereas when I say that, I really mean it. I can just see all the sets sort of laid out there in my mind’s eye; I can see that set theory is consistent.
[00:20:20] Right? And, okay, there are a couple of problems with that. One of them is that, in the past, humans have thought that they could just see intuitively that some axioms were consistent, and have been wrong about it, right? Maybe the most famous example was Frege, right?
[00:20:38] Who wrote this whole treatise on the foundations of arithmetic in the late 1800s. And then, famously, the entire program was killed by Bertrand Russell. Right. When Russell asked, well, what if we define the set of all sets that do not contain themselves as members? Right. Which was a thing that you were allowed to do in [00:21:00] Frege’s system. Right. And then you could ask the question, well, does that set contain itself as a member? Right. And if it does, then it doesn’t, but if it doesn’t, then it does. Right. And that killed the entire system that Frege had labored on for a decade or more. Okay. And Frege, to his credit, immediately admitted that.
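Python sets can’t contain themselves, but the self-referential structure of Russell’s paradox can be mimicked with a function standing in for the set, a toy sketch only. Here `russell(f)` plays the role of “f is not a member of itself”; asking whether `russell` satisfies its own predicate loops forever, mirroring “if it does, then it doesn’t, and if it doesn’t, then it does.”

```python
# Toy rendering of Russell's paradox, with membership replaced by
# function application: russell(f) means "f does not apply to itself".
import sys

def russell(f):
    return not f(f)

# "Does the Russell set contain itself?" has no consistent answer;
# here that shows up as unbounded recursion rather than a truth value.
sys.setrecursionlimit(100)
try:
    russell(russell)
except RecursionError:
    print("paradox: the question has no consistent answer")
```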
[00:21:19] But the truth is that the greatest human mathematicians on earth at the time missed this, right? They missed that there was this inconsistency in the system, until Bertrand Russell noticed it. Okay? And there are many more contemporary examples, right? Where people conjecture that some axiom is consistent, and it turns out not to be. Okay. So our experience is that humans do not have some kind of supernatural insight. There are people, of course, who are more brilliant than others at math. For just about any of us, there will be people who will maybe seem magical to us, because they will be [00:22:00] so much more brilliant than us. But then those people will also make errors, right? Those people will have other people who they look up to as more brilliant than them. Okay? So now, if you want to say, well, okay, I can just see that a model of Peano arithmetic exists, or a model of set theory exists, I can just see it intuitively, I mean, it becomes an unfalsifiable claim.
[00:22:30] It becomes the same sort of argument as if someone had said, well, an AI might say that it enjoys the beauty of a poem, or it enjoys strawberries with cream or whatever, but it’s really just saying it. Right. It doesn’t really have qualia. Right. But in that case, I would say, well, then why not just go back to the original argument that people have been having for thousands of years? Why even bring Gödel’s theorem into it? Right? Why not just talk about [00:23:00] strawberries and cream?
[00:23:02] Al Scott: My intuition is only good as far as it can be backed up by calculations, because otherwise you could be wrong.
[00:23:11] Scott Aaronson: Yeah. Yeah. I mean, look, one could imagine a world where Penrose, or some version of Penrose, was able to make a case, right?
[00:23:20] That would be really like a sword in a stone, right? Like, humans would be King Arthur, right? They could pull the sword out of the stone. They could just do this demonstrable task that an AI just provably cannot do, or provably would need astronomically greater time to do.
[00:23:38] Right. So, you could imagine, for example, if humans were able to reliably solve some kind of really, really hard computational problem. In computer science we talk a lot about NP-complete problems, right? Where you might have an exponential space of possible solutions to [00:24:00] search through, and you can recognize a good solution if you find one, right? But there are just so many possibilities to check. A huge fraction of what we want to do in computer science, from training a machine learning model, to optimization, to breaking cryptographic codes, or mining Bitcoin, can be reduced to these kinds of NP problems, as they’re called. Problems where there are exponentially many possible solutions, but you can verify a good solution if someone shows it to you.
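The search-versus-verify asymmetry behind NP can be sketched in Python, using subset sum as a stand-in NP-complete problem (the problem choice and function names here are just illustrative): verifying a claimed solution is a quick check, while the naive search ranges over all 2^n subsets.

```python
# NP in miniature: verification is fast, naive search is exponential.
from itertools import combinations

def verify(nums, subset, target):
    # Polynomial-time check of a claimed solution.
    return sum(subset) == target and all(x in nums for x in subset)

def brute_force(nums, target):
    # Exponential-time search: in the worst case, all 2^n subsets.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
sol = brute_force(nums, 9)
print(sol)
assert verify(nums, sol, 9)
```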
[00:24:31] And now imagine that humans were able to reliably solve those problems, and imagine that we could prove that a computer needed an exponential amount of time to solve them, right? Then that would be a sword-in-the-stone test, right? That would be the sort of thing that I imagine Penrose could then seize upon, right? And say, ah, you see, this is the task that separates humans from at least any currently [00:25:00] constructed computers, at least any computers that are based on the Turing model. Right?
[00:25:06] But we don’t know of any such example, right? In fact, humans cannot reliably solve these NP-complete problems. And for most examples, like finding the prime factors of, let’s say, a huge number, or finding the shortest route that visits a whole bunch of cities, while we don’t know of a provably efficient method even on computers, I would much rather give that problem to my computer than try to do it myself, right? Or try to do it with pen and paper. So it’s very hard to point to a clear separating example of that kind.
[00:25:42] Al Scott: So, in your opinion then, is there any evidence that would distinguish classical computing from quantum computing in the, in the realm of human cognition? Or, you know, can, can we even tell whether we’re using quantum mechanics or is there any magic there?
[00:25:58] Scott Aaronson: Oh, well, okay. I [00:26:00] mean, of course, this is what most of my work has been about for 20-plus years: quantum computing, and in what ways it can outperform classical computing.
[00:26:09] So to tell you in one sentence: yes, we now believe that there are clear domains in which quantum computing can exponentially outperform classical computing, and as far as we know, none of those domains have anything to do with human cognition.
[00:26:25] Al Scott: Okay. That’s fair.
[00:26:26] Scott Aaronson: Okay. So, the examples where quantum computers can get huge speedups, we think, over classical computers. Well, there are a few of them. Okay. The original one, and maybe still today the most important one practically, is just simulating quantum mechanics itself, right?
[00:26:46] Like, if you want to know the rate of some chemical reaction, or you want to know the behavior of some new material, right? A quantum computer of the sort that many companies are now racing to build could be an [00:27:00] incredibly useful tool of discovery for those sorts of things.
[00:27:04] It could potentially help in designing new drugs, designing new photovoltaics, high-temperature superconductors, right? Which are all quantum mechanical problems at the core, right? So maybe it’s not surprising that a computer that itself is quantum mechanical could help with that. That was Richard Feynman’s original idea when he introduced quantum computing in the early eighties. But the discovery that really got everyone excited about quantum computing, that sort of put it on the map, was Peter Shor’s discovery in the mid-1990s that a quantum computer could also quickly find the prime factors of huge composite numbers, right?
[00:27:44] So, like, if I give you a composite number that is, let’s say, N digits long, and I asked you to find its prime factors, if you tried to do that by just testing one divisor after another in a brute-force way, that would [00:28:00] take time that would be exponential in N. Okay. So, if N is in the thousands, that could easily be longer than the age of the universe.
[00:28:07] Now, we do know factoring methods that are better than that, but they’re still pretty slow. The best ones that we know use time that’s like exponential in the cube root of N, okay, or something like that.
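The brute-force approach just described is easy to write down, which is a toy sketch, not a serious factoring method. For an N-digit number, the loop below can run up to about the square root of 10^N times, which is still exponential in the digit count N; that scaling is the whole point.

```python
# Naive trial division: for an N-digit input, up to ~10^(N/2)
# candidate divisors may be tested, exponential in the digit count.

def trial_division(n: int) -> list[int]:
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:    # peel off each prime factor fully
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_division(15))              # [3, 5]
print(trial_division(3 * 7 * 13 * 13)) # [3, 7, 13, 13]
```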
[00:28:19] And so, because factoring is believed to be a hard problem, we use it as the basis of much of the encryption that currently protects the internet. Okay? So anytime you order something from Amazon, or you send your credit card number online or to your bank, or you visit any website with https in the URL and see the little padlock icon on your web browser, right, your information is being protected by one of a few cryptographic codes, which are based on things like the hardness of factoring numbers. Okay, the believed hardness.[00:29:00]
[00:29:00] And what Shor showed is that if you could build a scalable quantum computer, then that would no longer be true. Okay? So he gave an algorithm for a quantum computer, which didn’t exist at the time, and still doesn’t exist at the level that we would need it. Okay? But he gave an algorithm that would factor an N-digit number using only about N squared operations, something like that.
[00:29:25] Okay. So he showed that if you could build a quantum computer with thousands or millions of what we call qubits, which are quantum mechanical bits, bits that can be in superpositions of the zero state and the one state, and if it worked according to textbook quantum mechanics, like the theory says it should, then you could use it to factor numbers quickly, and thereby break most of the encryption that currently protects the internet.
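The number-theoretic skeleton of Shor’s algorithm can be sketched classically: factoring N reduces to finding the period r of f(x) = a^x mod N, and the quantum computer’s job is to find that period quickly. The sketch below finds the period by exponentially slow brute force, just to show the reduction on a tiny example; it is not the quantum algorithm itself, and the function names are illustrative.

```python
# Classical skeleton of Shor's algorithm: reduce factoring to
# period finding, then recover factors via greatest common divisors.
from math import gcd

def find_period(a: int, n: int) -> int:
    # Smallest r > 0 with a^r = 1 (mod n). This is the step a
    # quantum computer performs in polynomial time; here it's brute force.
    r, v = 1, a % n
    while v != 1:
        v = (v * a) % n
        r += 1
    return r

def shor_classical_sketch(n: int, a: int) -> tuple[int, int]:
    assert gcd(a, n) == 1
    r = find_period(a, n)
    assert r % 2 == 0, "need an even period; retry with a different a"
    x = pow(a, r // 2, n)
    return gcd(x - 1, n), gcd(x + 1, n)

# a = 7 has period 4 mod 15, giving the factors 3 and 5.
print(shor_classical_sketch(15, 7))  # (3, 5)
```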
[00:29:51] Okay. So that was a tremendously exciting discovery, and that’s really what started quantum computing as a field, and started [00:30:00] the quest to actually build practical quantum computers. Since then, people have discovered other applications for quantum computers, including Grover’s search algorithm, which applies any time you have a list of N possible solutions to some problem, which could be an NP-complete problem, or a search or optimization problem, and you know how to recognize a solution if you find it. Classically, if you knew nothing else, you’d just be doing pure trial and error, and it might take about N steps until you find the solution, right? Maybe N over two steps on average, if you’re just guessing them randomly. Okay.
[00:30:42] What Grover showed was that a quantum computer could find the solution with only about the square root of N steps. Okay, so compared to Shor's algorithm, this had an enormously greater range of applications. Right? What Shor did was extremely specific to [00:31:00] factoring numbers and a few other very, very special problems in number theory and group theory, where we really had to take advantage of the structure of those problems. It was not just a simple matter of trying all the different possible divisors in superposition or something like that. If it were that simple, you wouldn't have needed Shor to think of it. Right? So Shor's algorithm was this exponential speedup, but for these very specialized problems. Grover's algorithm is for an enormous range of practical problems, but the speedup is not exponential. The speedup is only by this square root. Okay. So the square root of, like, two to the thousandth power is two to the five hundredth power. Right? That's still pretty big. So Grover's algorithm would give you something, but it would not move problems from the exponential-time family to the linear or quadratic-time family. Okay. It wouldn't [00:32:00] do that much.
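The classical side of that comparison is easy to simulate. Below is a toy sketch, with the problem size and trial count chosen arbitrarily for illustration: random guessing over N items takes about N/2 queries on average, while Grover's algorithm would need only about (pi/4) times the square root of N oracle calls:

```python
import math
import random

random.seed(1)  # fixed seed so the run is reproducible

def classical_queries(n: int, marked: int) -> int:
    """Guess items uniformly at random, without replacement, until the marked one is found."""
    order = list(range(n))
    random.shuffle(order)
    for queries, guess in enumerate(order, start=1):
        if guess == marked:
            return queries

N = 1 << 14  # 16,384 possible solutions
trials = 200
avg = sum(classical_queries(N, random.randrange(N)) for _ in range(trials)) / trials
grover = math.ceil((math.pi / 4) * math.sqrt(N))  # ~ optimal Grover oracle-call count
print(f"classical average: ~{avg:.0f} queries; Grover: ~{grover} queries")
```

The classical average comes out near N/2, around 8,000 queries here, versus about a hundred for Grover: a quadratic, not exponential, gap.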
[00:32:01] So what we learned early on was that even quantum computers would have limitations. Right. There would still be problems that would be hard even for them. But there would also be some problems that are exponentially hard classically, but that allow these remarkable quantum mechanical speedups.
[00:32:20] And now you could ask: well, where exactly is the line between the two? For which kinds of problems can we expect quantum speedups? And that is an enormous question, one that we've been working on for the past 30 years. The answer to that question is not a sentence; it is an entire field of study, just like classical algorithms is an entire field of study. Okay.
[00:32:43] But now, your question was about, you know, what does any of this have to do with human consciousness? And I would say the answer is far from clear. Right. I mean, for a couple of reasons, right?
[00:32:56] One of them is that, you know, I would say there is [00:33:00] no physical evidence that any animal brain, including the human brain, is operating as a quantum computer, right? What we see when we look at the brain is neurons. There are billions upon billions of neurons connected by synapses, and each neuron is itself quite complicated, but it seems complicated in basically classical ways. You know? And in particular, whether a neuron fires or does not fire, that seems like a classical event, right? A neuron does not go into a superposition of firing and not firing. Right. And in fact, if it tried to do that, then that superposition would not survive for more than the tiniest fraction of a second. Okay. And the reason is that the brain is a very hot, wet environment, right? It is no place for a qubit. You [00:34:00] wouldn't expect a quantum system to be able to maintain its state for any appreciable amount of time without, in effect, being measured by its environment, leaking into its environment, becoming entangled with its environment. These are all different ways to say it. Okay?
[00:34:15] You know, now, when people are trying to build quantum computers today, they are doing things like taking superconducting coils, Josephson junctions they're called, putting them on a chip, and then cooling that chip in a dilution refrigerator to like a hundredth of a degree above absolute zero, right?
[00:34:34] Al Scott: And that’s just to keep it in this quantum superposition.
[00:34:38] Scott Aaronson: Right. Right. In order to maintain the quantum superposition of the qubits, and even then, that’s not keeping the qubits alive for as long as we want them to stay alive. Right. That’s just what a commercially available dilution fridge will give you. Okay. So you know, you’re entering a regime which seems very far removed from biology.
[00:34:58] Okay. But then the other [00:35:00] thing is that, even supposing that somehow the brain were able to do quantum computation, it's not clear how much good that would do. Like, suppose that you had a superpower of simulating molecular dynamics, and of breaking the https encryption that protects the internet, right? How much survival value do you think that would've had for your ancestors on the African savannah? Right?
[00:35:26] Al Scott: The fact that we haven’t been able to find, you know, a lot of applications for these quantum computers… like, when they were first posited, it was like this magical thing that’s going to solve all of our problems, it’ll superimpose everything, it’ll come to a solution…
[00:35:40] Scott Aaronson: Right. Well, you have to distinguish how it got written about in the press from what the experts understood. Right. And a lot of what I’ve been doing on my blog, frankly, for the past 16 years has precisely been trying to bridge that gap. It’s been saying things like: no, we don’t think a quantum computer will just [00:36:00] solve NP-complete problems by trying all the answers in parallel. Just saying things that are obvious to everyone who works in this field, right? They’re not even controversial. But then when you say them to the public, or to business people, or to investors, then it’s news. It’s news to them because they were led by a lot of irresponsible hype to expect something totally different.
[00:36:23] Al Scott: You feel these are real limitations and not just the fact that we haven’t come up with the right algorithms. You think that there is no real magic here?
[00:36:30] Scott Aaronson: Well, that’s a, that’s a hard question because Yeah. Because, you know, the truth is that like, you know, we don’t even really understand the limitations of classical algorithms yet, right? So, you know, this famous P versus NP problem, that’s exactly what it’s about.
[00:36:45] So, could it be that for all of these NP problems, meaning all the problems where you could quickly recognize a solution if you were shown one, there is actually a fast algorithm to find the solution, even on a conventional computer? Right? That might [00:37:00] sound fanciful, and yet no one has ruled it out, right? That is the P versus NP problem, which I would say is, you know, a math problem that is sort of defining for the 21st century, at least as much as the continuum hypothesis was for the 20th century.
[00:37:16] Al Scott: And that’s, right, basically the traveling salesman problem: the salesman has a set of cities to visit and has to do it via the shortest route, and the more cities you add, the larger the space of solutions you’d have to search to solve the problem.
[00:37:31] Scott Aaronson: Exactly. The traveling salesman problem is one of the most famous examples of what are called the NP-complete problems, which are the problems in NP which sort of capture the entire difficulty of the class NP. If you can solve any one of them, then P would equal NP, meaning that you could efficiently solve all NP problems. Factoring is not believed to be NP-complete. That’s why Shor was able to give a quantum algorithm to solve it, right, by taking advantage of very [00:38:00] special properties that it had.
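To make the brute-force baseline concrete, here is a tiny illustrative instance (the four-city distance matrix is invented for the example): exhaustive search tries all (n-1)! tours, which is why it collapses long before n gets large.

```python
import itertools

def tsp_brute_force(dist):
    """Exact traveling salesman by trying all (n-1)! tours from city 0; only feasible for tiny n."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Symmetric distances between 4 made-up cities
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp_brute_force(dist))  # -> (18, (0, 1, 3, 2, 0))
# Tours examined grow as (n-1)!: 10 cities -> 362,880; 20 cities -> about 1.2e17
```

Ten cities is trivial; twenty is already out of reach for brute force, which is exactly the kind of blow-up the NP-completeness discussion is about.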
[00:38:01] Okay. So for the NP-complete problems, we don’t know whether there is a fast quantum algorithm, and we don’t even know whether there is a fast classical algorithm. Okay. We know that quantum computers could give you the speedup of Grover’s algorithm, which is this modest sort of square-root speedup. We don’t know whether they could generally do better than that. Okay. So these are all enormous unknowns.
[00:38:24] Okay. But I like to say that if we were physicists instead of mathematicians and computer scientists, we would’ve just declared P not equal to NP to be a law of nature. We would’ve just said: this is an observed fact about the world, and we’re going to assume that it’s true until some revolution comes along and tells us otherwise. Right. You know, the entire software industry for half a century or more has been [00:39:00] looking for faster ways to get approximate solutions to combinatorial problems, and no one has ever found anything that would bridge this divide. Whereas if indeed it’s not possible, then we understand a lot about why we haven’t been able to prove that yet. Right. There are just immense difficulties in sort of proving a negative, you know, proving that there is no fast algorithm to do something.
[00:39:18] Okay. But you’re right. I talked about this sort of black box world where all you know how to do is guess a solution and, you know, check whether it’s valid or not. Like within that world, we know that Grover’s algorithm is the best that even a quantum computer could do.
[00:39:33] But in the real world, you always know more about your problem than that. Right? Like, if I give you a traveling salesman problem, you’re able to do more than just guess a route and then see, is it short enough or not? Right? You can build part of a route and then backtrack because it doesn’t look like it’s right. You can have a route that is not quite right, but then you can make little local improvements. Right. You can play all kinds of [00:40:00] tricks. And so, are there any tricks that will let either a classical or a quantum computer solve those sorts of problems much faster than we currently know how?
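One such trick can be sketched in a few lines. This is the classic 2-opt local improvement heuristic for the traveling salesman problem, shown on a made-up four-city instance; it improves a bad starting tour, though in general local search can get stuck at a local optimum:

```python
def tour_length(tour, dist):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def two_opt(tour, dist):
    """Keep reversing tour segments while any reversal shortens the tour (2-opt)."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                # Net length change from reversing the segment tour[i..j]
                delta = (dist[tour[i - 1]][tour[j]] + dist[tour[i]][tour[j + 1]]
                         - dist[tour[i - 1]][tour[i]] - dist[tour[j]][tour[j + 1]])
                if delta < 0:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

# Same style of made-up symmetric distance matrix as before
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
start = [0, 2, 1, 3, 0]        # a poor tour of length 29
better = two_opt(start, dist)  # local improvements reach the length-18 tour here
print(better, tour_length(better, dist))
```

This is exactly the kind of structure-exploiting shortcut that the pure black-box model, where Grover's square root is provably the best possible, does not capture.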
[00:40:09] You’re right, these are enormous problems. These are what sort of keep us employed as classical and quantum computer scientists.
[00:40:19] Al Scott: So assuming then that consciousness and our minds and everything that we’re doing follow the known physical laws. There’s no evidence that the physical laws are being broken, no evidence that there’s a soul behind this, or magic, or something doing this. So assuming there is no magic, and assuming that we know the rules pretty well and just haven’t been able to prove it: how does this then affect your moral judgment on AI? When do we have to start treating our AIs as beings?
[00:40:51] Scott Aaronson: Yeah. You know, that’s an enormous question. I mean, I would say that my first intuition when I try to think about this [00:41:00] is very close to Alan Turing’s, as expressed in his famous 1950 paper, Computing Machinery and Intelligence. Right. This is the paper that proposed the Turing test, right, that basically said: Look, if there is a being who you can interact with from behind a screen, or, we would say today, in an online chat, right? And if this being is totally indistinguishable to you from a human, by any test that you can devise, so that you would swear up and down that it is a human, then why aren’t you morally obligated to regard it as if it were a human? Right? And, you know, people often treat this as a metaphysical argument, or they get obsessed with the details of how this imitation game would actually run. Right. But I feel like at its core, it’s really a moral argument. Right.
[00:41:56] It’s like, you know, if the being on the other side of the [00:42:00] screen turns out to be made out of silicon rather than out of carbon, why is that any more relevant than, let’s say, the color of a person’s skin? Why is it more relevant than whether they’re male or female? Right? And of course, Turing faced enormous discrimination because of being gay. Right. And if you read his 1950 paper, you can see places where he’s alluding to that.
[00:42:26] A large part of the moral struggle of humanity over the millennia has been to sort of widen the circles of empathy. To say that, you know, just because something looks different, if it is going to act like a sentient being, then we are morally obligated to treat it as a sentient being, right?
[00:42:44] So I feel like that intuition has an enormous amount of purchase on me. Okay. But having said that, there would be at least one very important difference between an AI and, at least, any currently [00:43:00] existing human, which does seem morally relevant. Okay? And the difference is simply this: with the AI, you could make a backup copy, right? You could copy the code somewhere else, and then you could say: Well, okay, now is it murder if I delete a copy, if I can just restore it from backup? Or is it only murder if I delete the very last copy? Right? You know, you could say, I’m going to rerun this AI from the same initial conditions, right?
[00:43:27] So suppose that you simulated torturing an AI. That sounds like a really bad thing to do. But suppose that you then rewound it, you deleted that entire interaction and just reset it to its initial condition. Now, the bad thing you did to the AI, the torture or whatever, did it ever happen? Right. There’s no record of it anymore.
[00:43:52] Al Scott: We don’t understand consciousness enough to make these judgements.
[00:43:56] Scott Aaronson: No. I mean, right. And yet all of these sorts of questions seem [00:44:00] morally relevant now. Right?
[00:44:01] Or, let’s say the way that my computer works, in order to have some redundancy, is that it runs every computation three times and takes the majority vote. The computer on the space shuttle, I think, did something like that. Now, have three consciousnesses therefore been brought into existence? Or is it only one? How do I decide when a second running copy of this AI counts as a second consciousness, a second observer? Right?
[00:44:29] Al Scott: Well, we have the same problem in our brains. We have bilateral brains, and the two hemispheres can operate as independent consciousnesses if you cut the connections between them.
[00:44:36] Scott Aaronson: Yeah, right. If we have to worry about, you know, trolley problems, like people do in ethics, you know, should I sacrifice one person in order to save ten other people? Well, now I need to know how many copies of this AI have actually been brought into existence by this.
[00:44:54] One can even ask stranger questions. There was a revolution in cryptography 15 years [00:45:00] ago, when it was discovered how to do something called fully homomorphic encryption. Okay. And what this means is that we now understand how to do arbitrary computations on encrypted data without ever decrypting it. Okay. Right now it’s pretty slow in practice, but eventually this could have all sorts of applications in cloud computing. Like, you want Amazon AWS to do a huge computation for you, but you’re very worried that they’re going to snoop on it. Right? We now know how to give them encrypted data that they can compute on and be none the wiser about what they did for you. Okay?
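Fully homomorphic schemes are too involved for a short example, but the core idea, doing arithmetic directly on ciphertexts, can be shown with a toy partially homomorphic system. Textbook RSA (with deliberately tiny, insecure parameters here) is multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts.

```python
# Toy, insecure textbook RSA with tiny parameters, for illustration only.
p, q = 61, 53
n = p * q        # modulus 3233
e = 17           # public exponent
d = 2753         # private exponent: e*d = 1 (mod lcm(p-1, q-1))

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
c_prod = (enc(a) * enc(b)) % n  # work only with ciphertexts, never the plaintexts
print(dec(c_prod))              # prints 42, i.e. a * b
```

A fully homomorphic scheme extends this from a single operation to arbitrary circuits, which is what makes the encrypted-brain-simulation thought experiment below even statable.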
[00:45:37] But now, here’s a thought experiment that was due to, actually, a student of mine, Andy Drucker. Suppose that we took a simulation of a person, so a consciousness, right? And we ran it in a homomorphically encrypted form. So we ran it in a form where all of the information about the state of the brain [00:46:00] is encrypted, and as we do the simulation, it all remains encrypted. And let’s say that the decryption key is in another galaxy or something, right? It’s just not available. So now we can ask: has a consciousness been brought into existence by this totally encrypted computation that is completely meaningless to everyone in this galaxy? Right? Does it only become conscious if we go and retrieve the decryption key from the other galaxy?
[00:46:29] So, you know, these are some of the philosophical enormities that I think, once you have AI that can perform at human level or simulate humans, are going to be activated.
[00:46:43] Al Scott: These sorts of questions bring up… I don’t know if you’re familiar with Daniel Dennett’s approach to consciousness, that it’s not a thing, it’s a post hoc rationalization. And that viewpoint almost seems to be [00:47:00] necessary when you start getting into these problems.
[00:47:03] Scott Aaronson: Yeah. Well, you know, I think the challenge now becomes: if you want to say that there is something special about the brain that is going to differentiate it from all of these computer simulations that you can play all of these weird games with, right, then the burden is on the believer in the brain’s specialness. They’re the one who has to articulate what is different about the brain. Right. Why isn’t it just a meat computer? Right. I think the closest thing to a potential answer that I have been able to think of, and I’m not sure whether I believe it, but this is the kind of thing that a principled answer might look like, is to say: Well, maybe it is just physically impossible to make a good enough copy of someone’s brain. Right? Maybe, if you wanted to make a good enough copy of you, let’s say, good enough to [00:48:00] instantiate a second copy of you, or duplicate your identity or whatever, then you would actually have to scan your brain all the way down to the molecular and atomic level.
[00:48:12] Okay? But in some sense you can’t do that, because of what’s called the no-cloning theorem in quantum mechanics, right? What’ll be more familiar to most people is probably the uncertainty principle: if you try to measure the exact quantum state of all of the molecules in the brain, you’ll necessarily change those things.
You might say: Well, in order to make a good enough copy of your brain, maybe you would necessarily have to destroy it in the process. And if so, there is a kind of privacy or uniqueness to an individual human identity that an AI program would not have, because you could always just freely copy the AI program, right?
[00:48:55] So, ironically, this would not be a positive thing that would [00:49:00] distinguish humanity, like a task that we can do and the AI can’t. It would be a negative: something that cannot be done with us, but that can be done with the AI.
[00:49:12] Al Scott: You’re gonna have to bring the evidence for that.
[00:49:13] Scott Aaronson: Yeah. Right. But now, is that actually true? It’s also plausible that, yes, it would be technologically very challenging to scan the state of someone’s brain and make a duplicate of it, but you would only have to be looking at the classical level. You would just need some nanobots that would scan, let’s say, the strength of every synapse and the connectivity pattern of the neurons, and that would be good enough. Right. And I think probably most people who think about this would incline toward that latter position. But I would say, if I were really going to believe in ineffable human specialness, then this is the kind of question that I would be looking at. Like, can [00:50:00] you actually physically copy the state of someone’s brain without destroying them in the process, right? This is a question where, unlike the hard problem of consciousness, it seems like we could actually learn something. Progress in neuroscience and chemistry and physics could actually tell us more about this. And it does seem somehow relevant to what we wanted to know.
[00:50:22] Al Scott: I’ve talked to some people that are doing experiments on quantum information processing and, you know, superpositions, and I’ve talked to people that are looking at polarized electron spins in fruit flies. There’s a lot of really cool stuff going on right now, and I’m really excited to follow along and see where it goes.
[00:50:41] Scott Aaronson: Yeah. I would say that there is a lot of evidence that quantum effects are important in biology in various contexts. I mean, green plant photosynthesis relies on quantum tunneling effects. Bird navigation apparently relies on [00:51:00] some quantum entanglement effect in some molecule in the bird’s inner ear. European robins in particular, right?
[00:51:04] So people have found all of these cool things, right? But they’re all kind of at the molecular level, which is what you would expect. At the molecular level, kind of everything is quantum mechanical, so why wouldn’t natural selection find ways to exploit that? It looks like indeed it does.
[00:51:22] Al Scott: And whether or not that’s important for consciousness or not is also a good question.
[00:51:26] Scott Aaronson: Exactly. But then is any of that actually relevant to consciousness? That is a much, much harder question.
[00:51:32] Al Scott: So we’re getting to the end of our time slot. I really appreciate you coming on and chatting with me about this stuff. Really mind bending. I love the perspective and the clear articulation of what we don’t know and what we do know. And that, I think that’s very helpful.
[00:51:45] And so for coming on, I’m gonna send you a rational view t-shirt. Really appreciate it.
[00:51:49] One last question for you. I ask a lot of my contributors, what kind of science fiction are you interested in?
[00:51:56] Scott Aaronson: Oh gosh. Well, you know, I loved Asimov as a [00:52:00] kid. Yeah, I read all that I could get my hands on. And these days, I sometimes feel like studying science has kind of ruined science fiction for me to some degree. As soon as I see them say something that I know is just absurdly wrong, or that no scientist in that situation would ever say, then it’s hard for me to stay inside the story, you know? But I love any kind of science fiction that plays things for laughs, and I was a huge fan of Futurama, the science fiction cartoon show. I love that. And I also enjoy just fiction about science and scientists, when it’s funny and well written. Like The Big Bang Theory, for example, I confess that I enjoyed.
[00:52:42] Al Scott: That was fun. A good depiction of how scientists at that level actually interact personally, and it really got into that. It was fun. Yeah.
[00:52:51] So thanks so much for coming on. Appreciate chatting with you.
[00:52:54] Scott Aaronson: Yeah. Well, thank you. Thank you. I enjoyed talking.[00:53:00]
[00:53:01] Al Scott: If you’d like to follow up with more in depth discussions, please come find us on Facebook @ The Rational View and join our discussion group. If you like what you’re hearing, please consider visiting my patron page. @patron.podbean.com/therationalview. Thanks for listening.