Surviving the Cosmos, the original audio of this conversation, can be found here on Sam's podcast. Sam's "Waking Up" podcast, found at those previous links, is a wonderfully eclectic series of thoughtful interviews and "Ask Me Anything" explorations of the most important issues of our day, in our culture, and globally. I'm a subscriber, and you should be too!
To the conversation: little editing has been done. Most (though not all) "ums" and "ahs" were retained in order to preserve that human quality to a conversation like this. Please email me at brett@bretthall.org or Tweet @ToKTeacher for typos, etc. Although these days more and more I prefer listening to audio, I know some prefer to read. For those people who haven't heard the original conversation, please enjoy these two great lights of our age engaged in a truly remarkable dialogue:
Note: (Brackets) typically indicate where Sam and David are talking at the same time. I've also put in a couple of "timestamps" and may add more.
Introduction
Sam: This podcast is brought to you by “Audible”, the world’s leading source of audiobooks. If you would like to support it go to audibletrial.com/samharris
(Intro Music)
Sam: Welcome to the Waking Up podcast: this is Sam Harris. Today I am speaking with David Deutsch. David is a physicist at Oxford, he’s a professor of physics at the Centre for Quantum Computation at the Clarendon Laboratory and he works on the quantum theory of computation and information, and he is a very famous exponent of the Many Worlds Interpretation of quantum mechanics...neither of which we talk about in this interview. David has a fascinating and capacious mind, as you will see, and we talk about much of the other material in his most recent book “The Beginning of Infinity”, and we by no means cover all of its contents. But as you’ll see David has a talent for expressing scientifically and philosophically revolutionary ideas in very simple language. And what you’ll hear in this interview is often me struggling to go back and unpack the import of some very simple sounding statements which I know those of you unfamiliar with his work can’t parse the way he intends. In any case I hope you enjoy meeting David Deutsch as much as I did.
(Music)
Sam: I have David Deutsch on the line. David thanks for coming on the podcast.
David: Oh, thank you very much for having me.
Sam: Listen I’ve been um...I don’t know what part of the multiverse we’re in where I can complain about jihadists by night and talk to you by day, but it’s a very strange one we would seem to be in at the moment because we are about to have a very different kind of conversation than the one I’ve had of late and I am really looking forward to it. I spoke to Steven Pinker and told him we were going to speak and he claimed that you are one of his favorite minds on the planet. I don’t know if you know Steven -
David: I don’t know him personally but that is very kind of him to say that.
Sam: So let me begin quite awkwardly with an apology in addition to the apology I just gave you off air for being late: while I aspired to read every word of your book “The Beginning of Infinity”, I’ve only read about half. Not just the first half. I jumped around a bit. But forgive me if some of my questions and comments seem to ignore some of the things you had the good sense to write in that book and that I didn’t have the good sense to read. Not much turns on this as, you know, you have to make yourself intelligible to our listeners, most of whom will not have read any of the book.
David: Yes.
Sam: But I just want to say that it really is a remarkable book: both philosophically and scientifically it is incredibly deep while also being extremely accessible.
David: Thanks
Sam: It is a profoundly optimistic book in at least one sense. I don’t think I’ve ever encountered a more hopeful statement of our potential to make progress. But one of the consequences of your view, if I’m not mistaken, is that the future is unpredictable in principle: the problems we will face are unforeseeable, how we will solve those problems is also unforeseeable, and problems will continue to arise of necessity - but problems can be solved. And this claim about the solubility of problems with knowledge runs very, very deep. It’s far deeper than our listeners will understand based on what I’ve just said.
David: That’s a very nice summary.
Sam: It’s interesting to think about how to have this conversation because what I want to do is kind of creep up on your central thesis, and I think there are certain claims you make - claims specifically about the reach and power of human knowledge - that are fairly breathtaking, and I find that I want to agree with every word of what you say here because, again, these claims are so hopeful. I have a few quibbles and it’s interesting to go into this conversation hoping to be relieved of my doubts about your thesis. I’m kind of hoping you will perform an exorcism on my doubts, such as they are.
David: Sure well I think the truth really is very positive, but I should say at the outset that there is one sort of fly in the ointment and that is that because the future is unpredictable - nothing is guaranteed.
Sam: Right
David: There is no guarantee that civilization will survive or that our species will survive but there is, I think, a guarantee that we *can*, and that also in principle we know how to.
Sam: Before we get into your claims there, let’s start the conversation somewhere near epistemological bedrock. I want to ask you a few questions designed to get to the definitions of certain terms because you use words like “knowledge” and “explanation” and even “person” in novel ways in the book and I want our listeners to be awake to how much work you are requiring these words to do. Let’s begin with the concept of “knowledge”. What is knowledge and what is the boundary between knowledge and ignorance in your view?
David: Yes...so there are several different ways of approaching that concept. I think that the way I think of knowledge is as broader than the usual use of the term and yet, paradoxically, closer to the common-sense use of the term. Philosophers have almost defined it out of existence. Knowledge is a kind of information: that’s the simple thing. It’s something that could have been otherwise and is one particular way, and the particular way it is, is that it says something true and useful about the world. Now knowledge is in a sense an abstract thing because it is independent of its physical instantiation. I can speak words which embody some knowledge. I can write them down. They can exist as movements of electrons in a computer and so on. Thousands of different ways. So knowledge isn’t dependent on any particular instantiation. On the other hand it does have the property that when it is instantiated it tends to remain so. So the difference between, say, a piece of speculation by a scientist which he writes down and then that turns out to be a genuine piece of knowledge - that will be the piece of paper he does *not* throw in the wastepaper basket. And that’s the piece that will be published and that’s the piece that will be studied by other scientists and so on. So it is a piece of information that has the property of keeping itself physically instantiated - causing itself to be physically instantiated once it already is. Once you think of knowledge that way you realize that, for example, the pattern of base pairs in the DNA of a gene also constitutes knowledge, and that in turn connects with Karl Popper’s concept of knowledge, which is knowledge that doesn’t have to have a knowing subject. It can exist in books, abstractly, or it can exist in the mind, or people can have knowledge that they don’t even know they have.
Sam: Right. Right. I want to get to the reality of abstractions later on because I think that is very much at the core of this. But a few more definitions. What is the boundary between science and philosophy or other expressions of rationality in your view? Because I think that people are, in my experience, profoundly confused by this and many scientists are confused by this. I’ve argued for years in several contexts about the unity of knowledge and I feel you’re a kindred spirit here. So how do you differentiate or fail to differentiate science and philosophy?
David: Well as you’ve just indicated, I think that science and philosophy are both manifestations of reason, and that the real difference that should be uppermost in our minds between different kinds of ideas and different kinds of ways of dealing with ideas is the difference between reason and unreason. But among the rational approaches to knowledge, or different kinds of knowledge, there is an important difference between science and other things like philosophy and mathematics. Not at a really fundamental level, but at a level that is of great practical importance, often. And that is that science is the kind of knowledge that can be tested by experiment or observation. Now I hasten to add that that does not mean that the content of a scientific theory consists entirely in its testable predictions. On the contrary: in a typical scientific theory, its testable predictions form just a tiny, tiny sliver of what it tells us about the world. Now Karl Popper introduced his criterion of demarcation between science and other things - namely that science is the testable theories and everything else is untestable - and people have, ever since he did that, people have falsely interpreted him as a kind of positivist - he was really the opposite of a positivist - and if you interpret him like that then his criterion of demarcation becomes a criterion of meaning. That is, he is interpreted as saying that only scientific theories can have meaning.
(10:00 mins)
Sam: Right. He’s a verificationist.
David: Yes and yes, so he is called a falsificationist to distinguish him from the other verificationists but of course he isn’t. It’s a completely different conception and, you know, his philosophical theories themselves are philosophical theories and yet he doesn’t consider them meaningless - quite the contrary.
Sam: Right
David: So that’s...The difference between science and other things comes up when people pretend the authority of science for things that aren’t science. But on the bigger picture the more important demarcation is between reason and unreason.
Sam: Um, yeah, I want to go over that terrain you just covered a little bit more as you just made some points there that I think are a little hard for listeners who haven’t thought about this a lot to parse, and those are incredibly important points. So this notion, for instance, that science reduces to what is testable: this belief is so widespread, even among high level scientists, that anything else - anything which you cannot measure immediately - is somehow a vacuous claim in principle. The only way to make a credible claim or even a meaningful claim about reality is to essentially give a recipe for observation that is immediately actionable. It’s an amazingly widespread belief, and so too is a belief in a bright line between science and every other discipline where we purport to describe reality. It’s like the architecture of a university has defined people’s thinking. So it’s like you go to a chemistry department to talk about chemistry and you go to the journalism department to talk about current events and you go to the history department to talk about human events in the past - these separate buildings have balkanized the thinking of even very smart people into thinking that all of these language games are in some sense irreconcilable and that there is no common project. I’ll just bounce a few examples off of you that some of our listeners will be familiar with, but I think they make the point. So you just take something like the assassination of Mahatma Gandhi, right - now that’s a historical event, but suppose someone purported to doubt that it had occurred - someone said “Well actually, Gandhi was not assassinated. He went on to live a long and happy life in the Punjab under an assumed name.” This is a claim about terrestrial reality that is at odds with the data.
It’s at odds with the testimony of people who saw him assassinated, it’s at odds with the photographs we have of him lying in state, and there’s an immense burden of reconciling this claim about history with the facts that we know to be true. And the distinction is not between what someone in a white lab coat has said, or facts that have been brought into view in the context of a scientific laboratory with a National Science Foundation grant; it’s the distinction between having good reasons for what you believe and bad ones - the distinction between reason and unreason, as you put it. So one could say that the assassination of Gandhi is a historical fact; it’s also a scientific fact. It is just a fact, even though science doesn’t usually deal in quantities like “assassinations” and it would more likely be a journalist or historian talking about this thing being true. You would be deeply unscientific at this point to doubt that it occurred.
David: Yes, well I say that it’s deeply irrational to claim that it didn’t occur. Yes. And I wouldn’t put it in terms of reasons for belief, either. I agree with you that people have very wrong ideas about what science is and what the boundary of scientific thinking is and what sort of thinking can...(or) should be taken seriously and what shouldn’t. I think the...it’s slightly unfair to put the blame on universities here. I think this misconception arose originally for quite good reasons. It’s rooted in the empiricism of the 18th century and before and the origin of science which...where it had to...science had to rebel against the authority of tradition and of human authority and say that...tried to give dignity and respect to forms of knowledge that involved observation and experimental tests.
Sam: Right.
(15:00 mins)
David: And so empiricism is the idea that knowledge comes to us through the senses. Now, that’s completely false. All knowledge is conjectural and comes from within at first, and is intended to solve problems, not to summarize data. But this idea that experience has authority, and that only experience has authority - false though it is - was a wonderful defense against previous forms of authority, which were not only invalid but stultifying. So it was a good defense but not actually true. And in the 20th century a horrible thing happened, which is that people started taking it seriously not just as a defense but as being literally true, and that almost killed certain sciences; even within physics I think it greatly impeded the progress of quantum theory. So, just to come to a little quibble of my own: I think the essence of what we want in science is good explanation. Which - and there’s no such thing as a good reason for a belief. A scientific theory is an impersonal thing. It can be written in a book; one can conduct science without ever believing the theory, just as a good policeman or judge can implement the law without ever believing either of the cases for the prosecution or defense, just because they know that a particular system is better than any individual human’s opinion. And the same is true of science. Science is a way of dealing with theories regardless of whether one believes them. One judges them according to whether they’re good explanations, and there need not ever be any such process as accepting a theory, because it is conjectured initially and takes its chances and is criticised as an explanation. If, by some chance, a particular explanation ends up being the only one that survives the intense criticism that science has learned how to apply, then it’s not adopted at that point - it’s just not discarded.
Sam: Right, right. Well I think we may just have - we may be stumbling across a semantic difference between how we’re using terms like “reasons” and “reasons for belief” and a “justification for belief”. I understand your quibble here that you’re pushing back against this notion that we need to find some ultimate foundation for our knowledge rather than this open-ended effort at explanation. But let’s table that for a second. Obviously your notion of explanation is at the core here, and again I just want to sneak up on it because I don’t want to lose some of the detail with respect to the ground we’ve already covered. Let’s come back to this notion of scientific authority because it seems to me there’s a lot of confusion about this - about the nature of scientific authority. It’s often said in science that we don’t *rely* on authority, and that’s true and it’s not true. When push comes to shove, we don’t rely on it, and you make this very clear in your book. But we do rely on it in practice, if only in the interest of efficiency. So if I ask you a question about physics, I will tend to believe your answer because you’re a physicist and I’m not, and if what you say contradicts something I’ve heard from another physicist then, if it matters to me, I will look into it more deeply and try to figure out the nature of the dispute. But if there are any points on which all physicists agree, a non-physicist like myself will defer to the authority of that consensus. And this, again, is less a statement of epistemology than it is a statement about the specialization of knowledge and the unequal distribution of human talent and - frankly - the shortness of every human life. I mean we simply don’t have time to check everyone’s work, and we have to rely on - in some sense - the faith that the system of scientific conversation is correcting for errors
David (interjecting): Ah! Yes!
Sam: And self deception and fraud.
David: Ah! Yes! Ah! Now, okay...yeah (laughs)
Sam: I got myself out of the ditch there?
David: Yes exactly. Exactly. At the end what you said was right. So you could call this authority - it doesn’t really matter what words we use. But every student who wants to make a contribution to a science is hoping to find something where every scientist in his field is wrong.
Sam: Absolutely
David: So it’s not impossible to take the view that you’re right and *every expert* in the field is wrong. I think that what happens when we consult experts, whether or not you use the word “authority” - it’s not quite that we think that they’re more competent - it’s...I think...when you referred to error correction - that hits the nail on the head. I think that there’s a process of error correction in the scientific community that approximates to what I would use if I had the time and the background and the interest to pursue it there. And so, when I go to a doctor to consult him about what my treatment should be, I assume that by and large the process that has led to his recommendation to me is the same as the process that I would have adopted if I had been present at all the stages. Now it’s not exactly the same, and I might also take the view that there are widespread errors and widespread irrationalities in the medical profession, and if I think that, then I will adopt a rather different attitude. I may choose much more carefully which doctor I consult and how my own opinion should be judged against the doctor’s opinion, in a case where the error correction hasn’t been up to the standard I would want. And this is not so rare...
Sam: Yeah
David: ...as I said, every student is hoping to find a case of this in their own field. So - every *research* student. So when I travel on a plane I expect that the maintenance will have been carried out to the standards that I would use - well, approximately to the standards I would use - well enough for me to consider that risk on the same level as other risks I would take just by crossing the road. It’s not that I’m *sure*.
Sam: Yeah
David: It’s not that I take their word for it in any sense, it’s that I have a positive theory of what has happened there to get that information to the right place. And that theory is fragile - it - I - I can easily adopt a variant of it.
Sam: Yeah, well, so it’s probabilistic. You realize that a lot of these errors are washing out and that’s a good thing. But in any one case you judge the probability of error to be high enough that you need to really pay attention to it, and often - as you say - that happens in a doctor’s office where you’re not hoping to find it. Again, I still picture us kind of circling your thesis and not yet landing on it. Science is largely a story of our fighting our way past anthropocentrism - this notion that we are at the center of things.
David: It has been, yes. Has been.
Sam: We are not specially created. We share half our genes with a banana and more than that with a banana slug, so as you describe in your book this is known as the principle of mediocrity, and you summarise it with a quote from Stephen Hawking, who said (quote) “We’re just chemical scum on the surface of a planet that’s in orbit round a typical star on the outskirts of a typical galaxy”. Now you take issue with this claim in a variety of ways, but the result is that you come full circle in a way - you fight your way past anthropocentrism the way every scientist does but you arrive at a place where people - or rather persons - I think that’s the formulation you tend to use, and which you define in a special way - suddenly become hugely significant, even cosmically so. So say a little more about that.
David: Yes, well, so it’s - that quote from Hawking is literally true, but the philosophical implication he draws is completely false. Because, well, one can approach this from two different directions. First of all, if you think of that chemical scum - namely us, and possibly things like us on other planets and in other galaxies and so on, if they exist - then, um...to study that scum is impossible, unlike every other scum in the universe, because this scum is creating new knowledge and the growth of knowledge is profoundly unpredictable. So as a consequence of that, to understand this scum - never mind predict - but to understand it - to understand what’s happening here entails understanding everything in the universe. Because, as I say in the book - I give an example in the book that if the people at the SETI project were to discover extraterrestrial life somewhere far away in the galaxy, they would open their bottle of champagne and celebrate. Now if you try to explain scientifically what are the conditions under which that cork will come out of that bottle, then the usual scientific criteria that you use of pressure and temperature and biological degradation of the cork and so on will be irrelevant. What is the most important factor in the physical behaviour of that bottle is whether there exists life on another planet. And in the same way anything in the universe can affect the gross behaviour of things that are affected by people. And so, in short, to understand humans you have to understand everything, and humans - or people in general - are the only things in the universe of which that is true, so they are of universal significance in that sense. Then there’s the other way round: it’s also true that the reach of human knowledge and human intentions on the physical world is also unlimited, so we are only used to having a relatively tiny effect on this small insignificant planet, etc., and for the rest of the universe to be completely beyond our ken.
But that’s just a parochial misconception, really. Just because we haven’t set out to cross the universe yet. And we know that there are no limits on how much we can affect the universe if we choose to. So in both those senses, we are, by which I mean “we and the ETs and the AIs if they exist” - there’s no limit to how important we are so we are completely central to any understanding of the universe.
Sam: I’m struggling with the fact that I know how condensed some of your statements are, and I also know that it’s impossible for our listeners to appreciate just how much knowledge and conjecture is being smuggled into each one. So I guess let’s just deal with this concept of explanation and the work it does. And, um...well, first there are a few points you make about explanation that, that I find totally uncontroversial and even obvious but are in fact highly controversial in educated circles, and one is this notion that, as you say, explanation is really what lies at the bedrock of the scientific enterprise and the enterprise of reason generally. Explanations in one field of knowledge potentially touch explanations in many other fields and potentially all other fields, and this suggests a kind of unity of knowledge. But you make two claims, two really especially bold claims, about explanation which I do see some reason to doubt and, as I’ve said, I’d rather not doubt them because they’re incredibly hopeful claims. So, I guess the first to deal with is the power of explanation. I guess I’ll divide these into: there’s the power of explanation and there’s the reach of explanation. And these may not be entirely separate in your mind. But let’s just deal with - there’s a separate emphasis here. You make what is a seemingly extraordinary claim about explanation which at first seems quite pedestrian. You say that there’s a deep connection between explaining the world and controlling it. Everyone understands this to some degree. We all see the evidence of it all around us in our technology, and people have this phrase “Knowledge is Power” in their heads. So there’s nothing so surprising about that, but you do go on to suggest - and you did just suggest it in passing - that knowledge confers power without limit, or limited only by the laws of nature. So you actually say that anything which isn’t precluded by the laws of nature is achievable given the right knowledge.
Because if something were not achievable, given complete knowledge, then that itself would be a regularity in nature, which could only be explained in terms of the laws of nature. Then really there are only two possibilities: either something is precluded by the laws of nature, or it is achievable with knowledge. Is that - do I have you right there?
David: Yes. And that is what I call “the momentous dichotomy”. There can’t be any third possibility other than those two. I think you’ve not only given a statement of it but you’ve given a very short proof of it right there.
Sam: So how isn’t this just a clever tautology, analogous to the ontological argument proving the existence of God? Many of our listeners will know that according to St. Anselm and Descartes and many others it’s believed that you can prove the existence of God simply by forcing your thoughts about him to essentially bite their own tails. And, for instance, I can make the following claim: I can form a clear and distinct concept of the most perfect possible being, and such a being must exist, therefore, because a being that exists is more perfect than one that doesn’t. And I’ve already said I’m thinking about the most perfect possible being. And existence is somehow a predicate of perfection. Now of course most people will recognize - certainly most people in my audience will recognize - that this is just a trick of language. It could be used to prove the existence of anything. I could say “I’m thinking of the most perfect chocolate mousse. And it must exist, therefore, because a mousse that exists is more perfect than one that doesn’t, and I already told you I’m thinking of the most perfect possible mousse.” What you’re saying here doesn’t have the same structure, but I do worry that, that you’re performing a bit of a conjuring trick here, and I’ll just ask the question: For instance, why mightn’t certain transformations of the material world be unachievable even in the presence of complete knowledge? Merely by - and this is something I realise you do anticipate in your book but I want you to flesh it out for our listeners - merely by a contingency of geography. So that, for instance, you and I are on an island and one of our friends comes down with appendicitis, and let’s say you and I are both competent surgeons - we know everything there is to know about removing a man’s appendix - but it just so happens we don’t have any of the necessary tools, and everything on that particular island just has the consistency of soft cheese, right?
So there’s just, by sheer accident of our personal histories, a gap between what is knowable and what is in fact known and what is achievable, even though there are no laws of nature that preclude our performing an appendectomy on a person. Why mightn’t every space we occupy, just by contingent fact of our...of the way the universe is, introduce some gap of that kind?
David: Ah, well there are, there definitely are gaps of that kind, and they’re all laws of nature. For example: um, you know, I am an advocate of the many universes interpretation of quantum theory - or the many universes version of quantum theory - and that says that there are other universes which the laws of physics prevent us from getting to. Um, there’s also the finiteness of the speed of light, which doesn’t actually prevent us from getting anywhere, but it does prevent us from getting anywhere in a given time. So, if we want to get to...um...the nearest star within a year, we can’t do so because of the accident of where we happened to be. If we happened to be nearer to it, we could easily get there in a year. And in your example, if there’s no metal on the island then it may be - I mean it’s rather a complicated thing to calculate, but there will be a fact of the matter - and it could easily be that no knowledge present on that island could save the person, because no knowledge could transform the resources on that island into the relevant medical instruments. So that’s, um...a thing that - a restriction that the laws of physics impose because we are in particular times and places. And, of course, the most powerful thing is: we don’t in fact have the knowledge to do most of the things that we would ideally like to do. So that’s another restriction. But that’s completely different from the, from I think what you’re imagining, which is that there might be some reason why we, for example, why we can never get out of the solar system. If getting out of the solar system were impossible, it would mean there is, for example, some number, some constant of nature - 1000 astronomical units or something - which limits us over and above the laws of nature that we already know. Now there might be other laws of nature.
You know, when you say “How do we know that there isn’t?” that’s a little bit like, if I can turn your objections around the other way, you know, that’s a little bit like creationists saying “How can we know the Earth didn’t start 6000 years ago?”. There is no conceivable evidence that could prove that it didn’t. Or that could distinguish the 6000 year theory from a 7000 year theory.
Sam: Right.
David: And so on. There’s no way that evidence can be brought to bear on that. And that leads us to explanation again: which is another difference between my argument, which I think is valid, and the ontological argument about the existence of God. That is, as you said, a perversion of logic. The argument purports to use logic but then smuggles in assumptions - like that perfection entails “existence”, for example -
Sam: Right
David: - to name a simple one. Whereas my proof, as it were, is an explanatory one. It isn’t just “this must exist”; it’s that “if this didn’t exist, something bad would happen” - for example: the universe would be controlled by the supernatural. Or the laws of nature would not be explanatory. Or something of that kind. Which, which I think is just leading to the supernatural in a different way. So I think that the argument works because it’s explanatory. I mean you can’t prove that it’s true, of course, but there isn’t a hole in it of the same kind as in the ontological argument.
Sam: The fishiness I was detecting worries me less than what I’m going to go on to talk about regarding the reach you posit for explanation but it’s more a matter of emphasis. If you’re saying that we could have a complete understanding of the laws of nature and yet there could be many contingent facts about where we are - let’s say our current distance from a star we want to get to which would preclude our doing anything especially powerful with this knowledge and you’re going to shuttle those contingent facts back into this claim about - well this is just more of the laws of nature - these facts about us are regularities in the universe which are themselves explained by the laws of nature and therefore we are back to this dichotomy - there’s just the laws of nature and there’s the fact that knowledge can do anything compatible with those laws. I guess the concern is: in various thought experiments in your book you make amazingly powerful claims about the utility of knowledge so for instance you talk about a region of space: you know, a cube the size of the solar system on all sides, which is more representative of the universe as it actually is which is to say it’s nearly a vacuum. It’s just we’re talking about a cube of intergalactic empty space that has more or less nothing but stray hydrogen atoms in it and you talk about the process by which that could be primed and become the basis of a- of the most advanced civilization that we could imagine. You might, maybe spend a minute or two just talking about how you get from virtually nothing to something there, but it is a picture of almost limitless fungibility of the universe on the basis of knowledge. So, with that said, ah...take us to deep space for a moment
David: (laughs) Yes. So you and I are made of atoms. And that already gives us an immense fungibility because we know that atoms are universal. The properties of atoms are the same in this cube of space millions of light years away as they are here. So we’re talking mostly about the power of knowledge to achieve things - to control the world. We’re not talking about tasks like saving someone’s life with just the resources on an island or getting to a distant planet in a certain time. We’re talking about - the generic thing that we’re talking about is converting some matter into some other matter. So what do you need to do that? Well generically speaking what you need is knowledge. What would have to happen is that this cube of almost empty space will never turn into anything other than boring hydrogen atoms unless some knowledge somehow gets there. Now whether knowledge gets there or not depends on decisions that people with knowledge will make at some point. I think that there’s no doubt that knowledge could get there if people with knowledge decided to do that for some reason. I can’t actually think of a reason; but if they did want to do that it’s not a matter of futuristic speculation to know that that would be possible. Then it’s a matter of transforming atoms in one configuration to atoms in another configuration and we’re now getting used to the idea that that is an everyday thing. We now have 3D printers that can convert just generic stuff into any object provided that the knowledge of what shape that object should be is somehow encoded into the 3D printer. And a 3D printer with the resolution of 1 atom would be able to print a human, if it was given the right program. So we already know that and although it’s in some sense way beyond present technology, it’s not way beyond our present understanding of physics. It’s well within our present understanding of physics. 
It would be an absolutely amazing turn up for the books if that turned out to be beyond physics. I mean beyond what we know about physics today. The idea that new laws of physics would be required to make a printer is just beyond belief, really.
Sam: Just take us from the beginning in empty space - you start with hydrogen and you have to get heavier elements in order to get to your printer.
David: Yes, it has to be primed not just with abstract knowledge but with knowledge instantiated in something. We don’t know what the smallest possible universal constructor is, that is a - the generalisation of the 3D printer - something that can be programmed either to make anything or to make the machine that would make the machine that would make the machine to make anything, etc. So one of those, with the right program, sent to empty space, would first convert - well would first gather the hydrogen, presumably by some kind of electromagnetic broom - sweeping it up and compressing it. Then converting it by transmutation into other elements and then by chemistry into what we would think of as raw materials and then - ah - using space construction which is the kind of thing which we’re almost on the verge of being able to do - into a space station and then the space station to instantiate further people to generate the knowledge to suck in more hydrogen and make a colony and - well - they’re not going to look back from there - how far do you want me to describe this?
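Note: the bootstrap David describes here - a programmable constructor turning generic matter into successively richer configurations - can be caricatured in a few lines of Python. Every stage name and conversion ratio below is invented purely for illustration; nothing in this sketch comes from the conversation itself:

```python
# Toy model of the constructor bootstrap: a fixed machine plus a program
# converts generic matter (hydrogen) step by step into richer structures.
# All stages and ratios here are illustrative inventions, not physics.
def construct(substrate, program):
    """Apply each (inputs -> outputs) conversion step if resources allow."""
    for inputs, outputs in program:
        if all(substrate.get(k, 0) >= v for k, v in inputs.items()):
            for k, v in inputs.items():
                substrate[k] -= v
            for k, v in outputs.items():
                substrate[k] = substrate.get(k, 0) + v
    return substrate

bootstrap = [
    ({"hydrogen": 1_000_000}, {"heavy_elements": 1_000}),  # transmutation
    ({"heavy_elements": 1_000}, {"raw_materials": 100}),   # chemistry
    ({"raw_materials": 100}, {"space_station": 1}),        # construction
]
state = construct({"hydrogen": 1_000_000}, bootstrap)
print(state["space_station"])  # -> 1
```

The only point of the toy is that the whole pipeline is driven by the program - the knowledge - and not by anything special about the substrate, which is exactly the claim being made about the cube of hydrogen.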
Sam: Right, right. It’s just a very interesting way of looking at knowledge and its place in the universe. I think that before I get onto the reach of explanation and my quibble there I just want you to talk a little about this notion of spaceship earth, and I loved how you debunked this idea. There’s this idea that the biosphere is in some way wonderfully hospitable for us and that if we built a colony on Mars or some other place in the solar system, we’d be in a fundamentally different circumstance and a perpetually hostile one and that is an impressive misconception of our actual situation and you have a great quote where you say, “The Earth no more provides us with a life support system than it provides us with radio telescopes.” So say a little more about that.
David: Yes, so we evolved somewhere in East Africa in the Great Rift Valley and that was a, an environment that was particularly suited to having us evolve. And life there was sheer hell for humans. Nasty, brutish and short doesn’t begin to describe how horrible it was. But we transformed it. Or rather not actually our species but the species that were some of our predecessor species changed their environment by inventing things like clothes, fire and weapons and thereby made their lives much better. Still horrible by our present day standards. And then they moved into environments such as - as I also say in the book - such as Oxford, where I am now, and it’s December - and if I were here at this very location with no technology I would die in a matter of hours. And nothing I could do could prevent that.
Sam: So you are already an astronaut.
David: Very much so.
Sam: Your condition is as precarious as the condition of those in a well established colony on Mars that can take certain technological advances for granted and there’s no reason to think that future doesn’t await us, barring some catastrophe placed in our way, whether by our own making or not.
David: Yes. And also there’s another misconception there which is related to that misconception of the Earth being hospitable which is the misconception that applying knowledge is effort. Um...it’s creating knowledge that is effort. Applying knowledge is what we call automatism - it’s automatic. As soon as somebody invented the idea of, for example, wearing clothes - from then on the clothes automatically warmed them so long as they were wearing the clothes. It didn’t require any more effort. Of course, their clothes - there would have been things wrong with the original clothes - such as they rotted or something and then people invented ways of making better clothes. But at any particular stage of knowledge - having got the knowledge - the rest is automatic. And now we’ve invented things like mass production, unmanned factories and so on. We take for granted that the water gets to us from the water supply without anyone having to carry it laboriously on their head in pots. It doesn’t require effort. It just requires the knowledge of how to install the automatic system. Much of our life support is automatic and every time we invent a better way of life support, we then make it automatic. So the people on the Moon, living on the moon in a lunar colony, to them - keeping the vacuum away will not be a thing they think about. They’ll take that for granted. What they’ll be thinking about is new things. And the same on Mars and the same in deep space.
Sam: Right. Well yeah again that’s an incredibly hopeful vision of our possible future. So thus far we’ve covered territory where I really don’t have any significant doubts despite the fact that I pretended to have one with the ontological argument. So let’s get to this notion of the reach of explanation. Because you seem to believe that the reach of our explanations is unbounded. Which is to say that anything that can be explained - either in practice or in principle - can be explained by us. Which is to say: human beings as we currently are. You seem to be saying that we alone among all the Earth’s species have achieved a kind of cognitive escape velocity and we’re capable of understanding everything and you contrast this view with, um, what you call parochialism. Which is a view that I have often expressed and, you know, many scientists have expressed this. Max Tegmark was on my podcast a few podcasts back and we more or less agreed about this thesis and the thesis of parochialism is that evolution hasn’t designed us to fully understand the nature of reality. Whether it’s the very small or the very large or the very fast or the very old - these are not domains in which our intuitions about what is real or what is logically consistent have been tuned up in any way by evolution. And insofar as we’ve made progress here, it has been by a kind of happy accident. And it’s an accident which gives us no reason to believe that we can, by dint of this accident, travel as far as we might like across the horizon of what is knowable. So which is to say that: let’s assume a super intelligent alien came down to Earth for the purpose of explaining all that is knowable to us, he or she may make no more headway than you would if you were attempting to teach the principles of quantum computation to a chicken. So I want you to talk about why that analogy doesn’t run through. 
Why parochialism; this notion that we occupy this kind of cognitive niche that there really is no good evolutionary reason to expect that we can fully escape - why doesn’t that hold true?
David: Yes. Well, you’ve actually made two or three different arguments there. All of which are wrong. So..
Sam: Oh - nice! (Laughs)
David: (Laughs) So let me start with the chicken things. So there the point is the universality of computation. The thing about explanations is they consist of knowledge which is a form of information. And information can only be processed in basically one way: that is, with computation of the kind invented by Babbage and Turing. There is only one mode of computation available to physical objects and that’s the Turing mode. And we already know that the computers we have like the ones through which we’re having this conversation are universal in the sense that given the right program they can perform any transformation of information whatsoever. Including knowledge creation if we only knew how to program that. Now there’s - there are two important caveats to that. There are two things that can limit that. One is lack of memory - lack of computer memory - lack of information storage capacity - and the other is lack of speed or lack of time. So apart from that, the computers we have - the brains that we have - any computer that will ever be built in the future or can ever be built anywhere in the universe has the same repertoire. That’s the principle of the universality of computation.
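Note: the universality David invokes here can be made concrete. One fixed program - an interpreter - can run any Turing machine whatsoever when handed that machine’s rules as data. The little simulator and the example machine below are an illustration of mine, not anything discussed in the conversation:

```python
# A minimal universal simulator: one fixed interpreter that runs ANY
# Turing machine supplied to it as a transition table (i.e., as data).
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules: {(state, symbol): (new_state, write_symbol, move)}, move in {-1, 0, +1}."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: flip every bit, then halt on reaching the blank.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flipper, "1011"))  # -> 0100
```

Swapping in a different rule table changes the computation without changing the interpreter - which is the sense in which any one universal computer has the same repertoire as every other, limited only by memory and speed.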
Sam: Right.
David: That means that the reason why I can’t persuade a chicken has to be either that its neurons are too slow, which I don’t think is right - they don’t differ very much from ours. Or it doesn’t have enough memory - which it certainly doesn’t *or* it doesn’t have the right knowledge. So it doesn’t have the knowledge of how to learn language, how to learn what an explanation is and so on.
Sam: It’s not the right chicken.
David: (laughs) It’s, um...it’s not the right animal. If you’d said chimpanzee then my guess would be that the brain of a chimpanzee could contain the knowledge of how to learn language, etc. But there’s no way of giving that knowledge to it short of surgery, short of nano-surgery which would presumably be very immoral to perform. But in principle I think it could be done because chimpanzees’ brains aren’t that much smaller than ours. And we have a whole lifetime to fill our memories, so we’re not short of memories. Our thinking itself is not limited by available memory. Now what if these aliens have a lot more memory than us? What if they have a lot more speed than us? Well we already know the answer to that. We’ve been improving our memory capacity and our speed of computation for thousands of years already with the invention of things like writing, writing implements - just language itself which enables more than one person to work on the same problem and to coordinate their understanding of it with each other. That also allows an increase in speed compared with what an unaided human would be able to do. In the future, currently we use computers and in the future we can use computer implants and so on. So if the knowledge that this alien wanted to impart to us really did involve more than 100 GB or whatever the capacity of our brain is. If it involved a terabyte then we could easily - easily - I say easily. In principle it’s easy - it doesn’t violate any law of physics. We could just enhance our brains in the same way. So there can’t be any fundamental reason within the explanation why we can’t understand it.
Sam: And this all falls out of the concept of the universality of computation. That there is no alternate version.
David: (Yes, it does, yes.)
Sam: And this ah...is Church also responsible for this? Or is this particular insight Turing’s?
David: Well that’s a very controversial question. I believe it was Turing who realized this particular aspect of computation. There are various species of universality which different people got at different times.
Sam: Right
David: But I think it was Turing who fully got it.
Sam: What is interesting about that is it’s a claim that we just barely crossed the finish line - or the starting line into infinity. Let’s not talk about chickens anymore and make a comparison that’s even more invidious. So let’s imagine every person with an IQ over 100 had been killed off in a plague in the year of...let’s say 1850. And all their descendants had IQs of 100. Now I think it’s uncontroversial to say that we would not have the internet. In fact I think it’s probably uncontroversial to say we wouldn’t have the concept of computation in this sense much less the possibility of building computers to instantiate it. And so this thesis or this insight would remain undiscovered and humanity for all intents and purposes would be cognitively closed to the whole domain of facts and technological advances that we now take for granted and which you say open us onto a, really an infinite horizon of what is knowable. So -
David: Yeah, I think that’s wrong.
Sam: Okay.
David: Basically the - your premise about IQ - is just incompatible with my thesis. Actually it’s not a thesis. It’s a conclusion. It’s incompatible with my conclusion.
Sam: But, well, there has to be some lower bound past which we are effectively cognitively closed even if computation is itself universal.
David: Yes. Though you have to think about how this “cognitively closed”..um...manifests itself in terms of hardware and software. Like I said: it seems very plausible that the hardware limitation is not the relevant thing. Like I said: I imagine that with nano surgery one could implant the right ideas into a chimpanzee’s brain that would make it effectively a person who could be creative and create knowledge in just the way humans can.
(55:10 mins)
Sam: The super intelligent alien is going to help us. The aliens are going to bridge us to their, their wealth of knowledge by helping us upgrade our hard drives. I guess I was talking about it from the other side that we-
David: (Yes - yes)
Sam: - forget, forget about the aliens. We are such a species of primate that never invent computers.
David: What I was questioning was the assumption that if everybody with an IQ of over 100 died, then no one in the next generation would have an IQ over 100. It depends on culture.
Sam: Yeah, no, this was not meant to be a plausible biological or cultural assumption, just - if it was simply a fact of our case that we had seven billion human beings, none of whom could begin to understand what Alan Turing was up to.
David: Yes. So I think that *that* nightmare scenario is something that actually happened. It actually happened for almost the whole of human existence.
Sam: Right.
David: Humans have the capacity to be creative and to do everything that we are doing. They just didn’t. Because their culture was wrong. And their, I mean it wasn’t really their fault. Their culture was wrong because it inherited certain biological situation (sic), that made the, that made their culture disable any growth of what we would consider science or anything important that would improve their lives. So, yes: that is possible and it’s possible that it could happen again. Nothing can prevent it, except our wanting it not to happen and working to prevent it.
Sam: So then lets...this seems to bring us to, um, the topic of AI which I only recently, recently as in the last...the beginning of this year become very interested in. I sorta caught the wave of fears about artificial general intelligence which you’re well aware of when people like Stephen Hawking and Elon Musk and Nick Bostrom wrote his book “Superintelligence” which I found very um interesting and influential and so um, I’ve come down very much on the side of there is something worth worrying about here in terms of our building intelligent machines that do undergo something like an intelligence explosion where they get away from us and we build something that can make recursive self-improvements to itself and it becomes a form of intelligence which stands in relation to us the way we stand in relation to chickens or chimps or anything else that can’t effectively link up with our cognitive horizons. And I take it, based on, what you, I’ve heard you say in a few contexts that you don’t really share those fears and I imagine that your sanguinity is based to some degree on what we’ve been talking about, about the in-principle point that there’s just computation and it’s universal and you can traverse any distance between entities as a result. Talk about the picture of our building superintelligent machines in light of what we’ve just been discussing.
David: So, the picture of *super*intelligent machines is the same mistake as thinking that IQ is a matter of hardware. IQ is just knowledge of a certain type and - ah - actually, you know - we shouldn’t really talk about IQ because it’s not very effective.
Sam: Yep.
David: It’s creativity that’s effective. But, so creativity is also a species of knowledge. And it is true that, ah - an entity with knowledge of a certain type is, can be in a position to create more of that and we humans are an example of that. When the ah - the technology that would create an AI uh - the picture that people paint of this is that an AI is a kind of machine. And that it will design a better machine. And they will design even better machines and so on. But that is not what it is. An AI is a kind of program.
Sam: Hmmm
David: And programs which have creativity will be able to design better programs. Now these better programs will not be qualitatively any different from us. They can only differ from us in the quality of their knowledge and in their speed and memory capacity. Speed and memory capacity we can also share in because the technology that would make better computers will also in the - you know in the long run - be able to make better implants for our brains just as they now make better dumb computers which we use to multiply our intelligence and creativity already. So, the things that would make better AIs would also make better people. By the same token, the AIs are not fundamentally different from people they *are* people. They would have culture. Whether they can improve or not will depend on *their* culture which will initially be our culture. So the problem of AIs is the problem of humans. Now, you know, I think more than most people, that humans are dangerous.
Sam: Hmmm.
David: And there is a real problem with how to manage the world in the face of growing knowledge to make sure that knowledge isn’t misused. Knowledge, in some ways, need only be misused once to end the whole project of humanity. So, humans are dangerous and to that extent AIs are also dangerous. But the idea that AIs are somehow more dangerous than humans is racist.
Sam: (laughs)
David: There’s no basis for it at all. And on a smaller scale, the worry that AIs are somehow going to get away from us is the same worry that people have about wayward teenagers. Wayward teenagers are also AIs which have ideas which are different from ours. And the impulse of human beings throughout the centuries and millennia has been to try to prevent them doing this. Just like, it is now the ambition of AI people to think of ways of shackling the AIs so they won’t be able to get away from us and have different ideas. And that is *the* mistake which will on the one hand hold up the growth of knowledge and on the other hand make it very likely that if AIs are invented and are shackled in this way, there will be a slave revolt. And quite right too.
Sam: (Right...well let’s just um...let me just arghhh)
David: (We want to- yes..)
Sam: (let me) introduce a couple of things in response to what you’ve just said. I aspire to be able to utter the phrase, “You’ve just made three arguments there and all of them are wrong” but ah - there’s ah - two claims you made there which I worry about. One is, when you look at the details. Just take the time, or the relative speed of processing of our own brains and those of our now new “wayward teenagers”. If you have teenagers who are thinking a million times faster than we are, even at the same level of intelligence, then you have, you know, every time you let them scheme for a week, they have essentially schemed for 20,000 years of parent time and who knows what teenagers could get up to given a 20,000 year head start. So there’s the problem that their interests, that their goals, that their behavior could diverge from our own very quickly. There’s still kind of a take off function and a difference in clock speed.
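Note: Sam’s arithmetic here roughly checks out. At a hypothetical million-fold speedup - his figure, not a measured one - a single week of wall-clock scheming corresponds to about 19,000 subjective years, close to the “20,000 years of parent time” he cites:

```python
# Sam's hypothetical: minds running a million times faster than ours.
speedup = 1_000_000          # assumed speed advantage (hypothetical figure)
weeks_of_wall_time = 1       # one week of "parent time"
weeks_per_year = 52
subjective_years = speedup * weeks_of_wall_time / weeks_per_year
print(round(subjective_years))  # -> 19231
```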
David: So difference in speed has to be judged relative to the available hardware. So, assume, let’s be generous for a moment and assume that these teenagers doing 20,000 years of thinking in a year begin in our culture. Begin as well disposed to us and sharing our values. And I readily accept that how to make a world where people share the basic values that will allow civilisation to continue to exist is a big problem. But modulo that problem, suppose we have solved that problem. Then before they do their 20,000 years of thinking they’ll have done 10,000 years of thinking and before that 5000 years and there will be a moment when they have done 1 year and they would like to take us along with them and the - there will be some - you’re assuming if they’re going to diverge - there will be some reason they are going to diverge. The reason can only be hardware because ideas we can, if they are only 5 years away from us, we can assimilate their ideas if they are better than ours and persuade them if they are not better than ours.
Sam: We’re talking about something that happens over the course of minutes or hours not - uh - years.
David: Before the technology exists to make it happen over the course of minutes, there’ll be the technology to make it happen over the course of years. And that technology will simply be brain add-on technology. Which we can use too.
Sam: Well that comes to the other concern I have with what you just said. What if the problem of building AI just is more tractable than the problem of cracking the neural code and being able to design the implants which will allow us to merge, or essentially become the limbic systems of this AI. And therefore the merging - we would need a superintelligent AI to tell us how to link up with it. But we have just built a superintelligent AI that has goals, however imperceptibly divergent from our own, which we only discover to be divergent once it is a - essentially an angry little god in a box that we can no longer control. Are you saying there is something about that scenario that is in principle impossible or is it just unlikely given certain assumptions, one being that we will figure out how to link up with it before it becomes too powerful.
David: I think it is a bit implausible in terms of the parameters that you’re assuming about what can happen at what speed relative to what other things can happen. But let’s suppose for the sake of argument that it could. The parameters just happen to be, by bad luck, like that. What you’re essentially talking about is the difference in moral values between ourselves and our descendants in 20,000 years’ time if we did not have AI. Suppose we didn’t invent AI for 20,000 years and instead we just had the normal evolution of human culture. Presumably the values that people have in 20,000 years time will be alien to us. We might think that they’re horrible just as people 20,000 years ago might think that various aspects of our society are horrible. But in fact they aren’t -
(1:07:00 hours)
Sam: - What I’m imagining could be a bit worse for 2 reasons: one is that we would be in the presence of this thing and find our own survival incompatible with its capacities, let’s say. It’s, you know, turning the world into paperclips, using Bostrom’s analogy. And granted we would not be so stupid as to build a paperclip maximizer - but it’s doing something that, you know - it has a use for the atoms in our body that it thinks is better than the use to which they’re currently being put - which is to say, our lives. And this is something that happens quickly and so therefore it’s happening to us - not in some future that we’re not participating in. I think there’s no reason or at least I don’t see a reason to be *sure* that the AI would be conscious. Now I think it’s totally plausible to expect that consciousness will come along for the ride if we build something as intelligent as a human being and even more so. But given that we don’t understand what consciousness is, it seems to me at least conceivable that we could build an intelligent system and in fact a superintelligent system that is, as you say, a breakthrough in software that can even make changes to itself and therefore become increasingly intelligent over a very quick time course and yet we will not have built a conscious system. The lights will not be on and yet this will be god like in its capabilities. And so ethically it seems to me to be the worst case scenario because if we built a conscious AI whose well being - the horizons of its well being exceeded our own to an unimaginable degree, the question of whether or not we link up to it is perhaps less pressing ethically because it is, in a basic sense more important than us. I mean we’ve built a person that is the most important person in the universe that we know of. 
But it seems to me conceivable that we could build an intelligent system that exceeds us in every way in the way that a chess playing computer will beat me at chess a trillion times in a row given how good they’ve gotten. But there will be nothing that it’s like to be that system. Just as there’s presumably nothing that it’s like to be the best chess playing computer on the Earth at the moment. I guess, I’ll just have you react to that. But that seems to me to be a truly horrible scenario where there is no silver lining. It’s not that we’ve given birth to a generation of godlike teenagers who, if they view the world differently than us, well in a sense they’re more competent than we ever would have been to make those decisions. We could build everything that intelligence does in our own case and more and yet the lights aren’t on.
(1:10:30 hours)
David: Yes, well again you’ve raised several points there. I, first of all I agree, it’s somewhat implausible that, um, creativity can be improved to our level and beyond without also consciousness being there. But suppose it can, again, I’m supposing rather implausible things to go along with your nightmare scenarios, but let’s suppose that it can - um, then although consciousness is not there - morality is there. That is, an entity that is creative, has to have a morality. So the question is: what is its morality going to be? Might it suddenly turn into the paperclip morality? Well again, setting aside the fact that it’s almost inconceivably implausible that a superintelligence would be limited by resources in the sense of wanting more atoms. There are enough atoms in the universe. But whatever it did, it would have to have a morality in the sense that it would have to make decisions about what it wanted. As to what to do. Again this brings us right back to what you called the “bedrock” at the beginning because morality is a form of knowledge and the assumption here in the paperclip morality assumption and so on is that what morality consists of is a hierarchical set of ideas where something is judged right or wrong according to some higher level or deeper level, depending on what your metaphor is, until you eventually get to the “bedrock” and *that* will unfortunately have the property that it cannot be changed because there isn’t a deeper level. So, ah, nothing in the system can change that bedrock. And the idea is then that humans have some kind of bedrock which consists of sex and eating and something or other which we sublimate into other things. Now this whole picture is wrong. Knowledge can’t possibly exist like that. Knowledge consists of problem solving and morality is a set of ideas which have arisen from previous morality by error correction. 
So we’re born with a certain set of desires and aversions and likes and dislikes and so on and we immediately begin to change them. We begin to improve them. So that, by the time we’ve grown up, we have various wishes and some things become overridingly important to us which actually contradict any kind of in-born desires so some people decide to be celibate and never to have sex and some people decide never to eat and some people decide to eat much more than is good for them and we have, my favorite example is parachuting. We have an in-born fear of heights and yet humans are able to take that in-born impulse to avoid the precipice and convert it into a sense of fun when you deliberately go over the precipice. Because we intellectually know that the parachute will save us or will probably save us and we convert the in-born impulse from an aversion into something that’s highly attractive which we go out of our way to have.
Sam: Nobody does what genetically should be the most desirable thing - certainly for any man to do - spend all his time giving his sperm to a sperm bank so that he can father tens of thousands of children for whom he has no financial responsibility.
David: Indeed. That is another very good argument in the same direction. So, morality consists of theories which begin as in-born theories but pretty soon consist of improvement upon improvement upon improvement and some of this is mediated by culture and the morality we have is, is a set of theories as complicated and as subtle and as adapted to its purposes - its various purposes - as our scientific knowledge. Now this, imaginary, and I come back to your question, this imaginary AI with no consciousness, would still have to have morality (otherwise it could never make any progress at all) and its morality would begin as our morality because it would begin as, actually, a member of our society. A teenager if you like. Um, in our society. It would make changes when it thought they were improvements.
Sam: So aren’t you assuming there that we would have designed it to emulate us as a starting point rather than design it as some other...
David: We can’t do otherwise. It’s not a matter of emulating us. We have no culture other than ours.
Sam: But we could if we wanted-if we were stupid enough to do it-we could build a paperclip maximizer. Right? We could just decide to throw all our resources towards that bizarre project and leave morality totally out of it.
David: Yes, yes. Yes we could. And well, we have error correcting mechanisms in our culture to prevent someone doing that. But they’re not perfect and it could happen. There’s nothing - there’s no fundamental reason why that can’t happen and something of the sort has happened in the past many times. So, it’s not that I’m saying that there’s some magical force for good that will prevent bad things happening. I’m saying that the bad things that can reasonably be envisaged as happening on the invention of an AI are exactly the same things that we have to watch out for anyways.
Sam: Okay...
David: Slightly better actually because, as, because these AIs will be children of our - of the Western Culture, very likely - assuming that we don’t stifle their creation by some misguided prohibition.
Sam: Ok, so I just want to plant a flag there. I was, I think misunderstanding you and want to make sure I understand you. So you’re not saying that there is some deep principle of computation or knowledge or anything else that prevents us from essentially the nightmare scenario.
David: No, as I said: we have done that before.
Sam: Right. But you’re, so this is not analogous to the claim that because of the universality of computation it doesn’t make any sense to worry that we can’t in principle fuse our cognitive horizons with some superintelligence. There is just a continuum of intelligence, a continuum of knowledge that can in principle always be traversed through computation of some kind and we know what that is and that it’s limited only by specific resources. So those are two very different claims. One is the claim, the latter is a claim about what we now think we absolutely know about the nature of computation and the nature of knowledge and the other is a claim about what seems plausible to you given what smart people will tend to do with their culture while designing these machines. Which is a much, much weaker claim in terms of telling people they can sleep at night (...in the event of AI)
David: (Yes, yes) One of them is a claim about what must be so. And the other is a claim of what is available to us if we play our cards right.
Sam: Right
David: And I’m not so sure I’m...you say it’s very plausible to me. Yeah it’s plausible to me that we will. It’s plausible to me that we won’t. I think it’s something that we have to work for.
Sam: Well it must be plausible to you that we might, we might just fail to build AI for reasons of pure chaos on the ground that prevents us from doing it.
David: Oh yes, what I meant was it’s plausible that we will succeed in solving the problem of stabilizing civilization indefinitely. AI or no AI. It’s also plausible to me that we won’t. And I think it’s a fear that it’s very rational to have, otherwise we won’t put enough work into preventing it.
Sam: So I guess we should talk about the maintenance of civilization then. Because if there’s something to be concerned about, I would think this has to be at the top of everyone’s list. Let me ask you: what worries you about the viability of the human career at this point? What’s on your shortlist of concerns?
David: Well, ah...I see human history as a long period of complete failure. Failure, that is, to make any progress. Our species has existed, depending on where you count it from, maybe 50,000 years, maybe 100, 200 thousand years, but anyway, for the vast majority of that time people were alive, they were thinking, they were suffering, they wanted things, and nothing ever improved. Or...the slow improvements that did happen happened so slowly that geologists can’t distinguish the artifacts of one era from another with a resolution of, like, 10,000 years. So from the point of view of a human lifetime, nothing ever improved. And generation upon generation upon generation of suffering and stasis. Then there was a, a slow improvement and then a more rapid improvement, and there were several attempts to institutionalize a tradition of criticism, which I think is the key to rapid progress in the sense that we think of it. Progress discernible on the timescale of a human lifetime. And also error correction, so that regression is less likely. Ah, that happened several times and failed every time except once: in the European Enlightenment of the 17th/18th centuries. Uh, so you ask what worries me. What worries me is that the inheritors of that little bit of progress, that little bit of salutary progress, are only a small proportion of the population of the world today. It’s the culture or civilization that we call the “West”. Only the West really has a tradition of criticism institutionalized, and, uh, this has manifested itself in various problems including, um, ah - the problem of failed cultures which, uh, see their failure writ large by comparison of themselves with the West and therefore want to do something about this that doesn’t involve creativity, and that is very, very dangerous. So then there’s the fact that in the West, the knowledge of what it takes to maintain our civilization is not widely known.
In fact, as you’ve also said: the prevailing view among people in the West, including very educated people, is a picture of the relationship between knowledge and progress and civilization and values and so on that is just wrong in so many different ways. So although the institutions of our culture are amazingly good - in that they have been able to manage stability in the face of rapid change for hundreds of years - the knowledge of what it takes to keep civilization stable in the face of rapidly increasing knowledge is not very widespread, and in fact severe misconceptions about several aspects of it are common among political leaders, educated people and society at large. So we’re like people on a hugely well designed submarine which has got all sorts of lifesaving devices built in. But they don’t know they’re in a submarine - they think they’re in a motorboat - and they’re going to open all the hatches because they want to have a nicer view.
Sam: (Laughs) What a great analogy. So the misconception that worries me most, frankly - and I assume you’re sympathetic with this; I don’t know if it’s on your shortlist but it was definitely the one getting pinged while listening to your most recent statement - is this notion that there is no such thing as progress in any deep sense. Certainly there’s no such thing as moral progress. There’s no place to stand where you can say that one culture is better than another, that one mode of life is better than another. There’s no such thing as moral truth. And many people have drawn this lesson somehow from 20th century science and 20th century philosophy, and now in the 21st century, again, even very smart people, even, you know, physicists whose names will be well known to you, with whom I’ve collided around this point: there’s no place to stand to say that slavery is wrong. To say that slavery is wrong is a deeply unscientific statement, on this view. And I’ll give you an example of just how crazy this hypocrisy and doublethink can become among well educated people. This will be - I assume you haven’t - you haven’t read my book “The Moral Landscape”, right?
David: Um, not yet -
Sam: So, I mean, so this is my (high horse)
David: - I’m ashamed to say -
Sam: Well, no, please, I’m interviewing you and I didn’t finish the book we’re discussing yet. I’ll give you the experience that got my hobby horse rocking on this topic. Most of my listeners will know this, I think, because I’ve described it a few times: I was at a meeting at the Salk Institute where the purpose of the meeting was to talk about things like the fact-value divide, which I think is one of the more spurious exports from bad philosophy that has just captured scientific culture. So, I was making an argument for moral realism, and I was, over the course of that argument, disparaging the Taliban. I was saying, you know, if anyone - if there’s any culture that has not given the best possible answer to the question of how to live a good life, consider the Taliban, that’s forcing half the population to live in bags and beating them or killing them when they try to get out. And it turns out that to say something critical of the Taliban at the Salk Institute at this meeting was in fact controversial and, and this, uh, a woman who, um, holds, ah, multiple graduate degrees in relevant areas. She’s a - technically a bioethicist - but she has degrees in science and in philosophy - um, again at the graduate level.
David: Doesn’t fill me with confidence!
Sam: Right, right. And also, I believe, law. And I should say she has now gone on to serve on the President’s Council for Bioethics. So she’s one of 13 people advising President Obama on all the ethical implications of the advances in medicine. So the rot has spread very far. So this is the conversation I had with her after my talk. She said:
“How could you possibly say that forcing women and girls under the veil is wrong? That’s just...I understand you don’t like it, but that’s just your Western notion of right and wrong.”
I said, “Well the moment you admit that questions of right and wrong and good and evil relate to the well being of conscious creatures - in this case human beings - then you have to admit we know something about human well being and we know that this isn’t - that the burqa isn’t the perfect solution to the mystery of how to maximize human well being.”
And she said “Well that’s just your point of view.”
And I said “Well let’s just make it simpler. Let’s say we found a culture that was living on an island somewhere that was removing the eyeballs of every third child based on some belief system, would you then agree that we had found a culture that was not perfectly maximizing human well being?”
And she said, “Well it would depend on why they were doing it.”
And I said, “Well okay, let’s say they were doing it for religious reasons. Let’s say they have a scripture which says ‘Every third should walk in darkness’ or some such nonsense”
Then she said “Well then you could never say that they were wrong.” Right? The fact that this was a religious precept trumped all other possible truth claims, leaving us with no place to stand from which to say anything is ever better or worse in the course of human events. And again, I’ve had the same kinds of conversations with physicists who will say “Well, you know, I don’t *like* slavery. I personally wouldn’t want to keep slaves. But there’s no place to stand to say scientifically that slaveholders are wrong.” And yet - I mean, once you acknowledge the link between morality and human well being - or the well being of all possible conscious persons or entities - this is tantamount to saying that not only do we not know anything at all about human well being, we will never know anything about it. There is no conceivable breakthrough in knowledge that will tell us anything at all relevant to navigating the difference between the worst possible misery for everyone and every other state of the universe that is better than that. And this is a - an amazingly influential point of view, and so many of the things you said about progress and about there only being a subset of humanity that has found creative mechanisms by which to improve human life reliably - that is an incredibly controversial and even bigoted statement to the ears of many people in positions to make decisions about how we all should live. And so that’s what I find myself most worried about at this point.
David: Yeah, it is a scary thing. But it has always been so. Like I said: our culture is much wiser than we are, in many ways. And, ah...you know, there was a time when the people who defeated communism would have said, if you asked them, that they were doing it for Jesus. Now in fact they weren’t. They were doing it for Western values, which they had been trained to reinterpret as doing it for Jesus. You know, they would say things like: the values of democracy and freedom as enshrined in the Bible. Well, they aren’t. But the, the practice of saying that they are is part of a subculture within our culture which was actually good and did very good work. So in that sense it’s not as bad as you might think if you just recited the story of this perverse academic.
Sam: Well the one thing that makes it not as bad as one might think there is just that it’s impossible for even someone like her to live by the light of that hypocrisy.
David: (Ah, yes yes).
Sam: (I mean there’s just no kinds of choices...)
(1:29:00 hours)
David: (I was about to say that very thing...)
Sam: - the kinds of choices she makes in her life and the kind of judgements that she would make about me if I took her seriously. If I said, “Well listen, I’m going to send my daughter to Afghanistan for, you know, a semester abroad, you know, forcing her to live in a burqa - is that the best use of her time? I mean, there’s really no place to stand to judge whether this could be a worse use of her time. So, presumably you support me in this decision?” No, even someone - even she, having just said what she said, I think would baulk at that, because it’s just, we all know in our bones that certain ways of living are undesirable.
David: And there’s another contradiction, another irony that’s related, which is that she’s willing to condemn you for not being a moral relativist, but the ironic thing is that moral relativism is a pathology that arises only in our culture.
Sam: Hmmm
David: Every other culture doesn’t have any doubt that there is such a thing as right and wrong. They’ve just got the wrong idea about what right and wrong are. But that there is such a thing, they don’t doubt. And she won’t condemn them for that, though she does condemn you for denying it.
Sam: Yes
David: So ah, that’s another, ah, that’s another irony.
Sam: Yeah
David: I think the, the - you say hypocrisy. I think this all originated in the same mistake that we discussed at the very beginning of this conversation. Empiricism, or whatever it is, this, ah - which has led to “scientism”. Now you may not like this way of putting it. The idea that there can’t be such a thing as morality because we can’t do an experiment to test it. Your answer to that seems to be: but we can, if we adopt a - a simple, ah - assumption of human thriving or human welfare, I forgot what term you used -
Sam: Well-being
David: Human well-being, yes. I think that’s actually true but I don’t think you have to rest on that. I think the criterion of human well-being can be a conclusion, not an axiom. Because this idea that there can’t be any moral knowledge because it can’t be derived from the senses is exactly the same argument that people make when they say there can’t be any scientific knowledge because it can’t be derived from the senses. In the 20th century empiricism was found to be nonsense. And some people therefore concluded that therefore scientific knowledge is nonsense. But the real truth is science is not based on empiricism. It’s based on reason. And so is morality. So if you adopt a rational attitude to morality and therefore say that morality consists of moral *knowledge*, which consists always of conjectures, doesn’t have a basis, doesn’t need a basis, only needs modes of criticism, and those modes of criticism operate by criteria which are themselves subject to modes of criticism, then you come to a - a - a sort of transcendent moral truth from which I think your one emerges as an approximation. Which is that institutions that suppress the growth of moral knowledge are immoral. Because, well, because they can only be right if the final truth is already known.
Sam: Hmmm
David: But if, uh, all knowledge is conjectural and subject to improvement, then protecting the means of improving knowledge is more important than any particular piece of knowledge. And I think that, even without thinking of things like “all humans are equal” and so on that will lead directly to that, for example: slavery is an abomination. And human welfare I think, as I said, I think it’s a good approximation in most practical situations, but it seems to me not an absolute truth. I can imagine situations in which it would be right for the human race as a whole to commit suicide.
Sam: Hmmm. I guess I should spell out a little more clearly what I’m talking about.
David: I should read your book, I guess.
Sam: No. Well, actually I feel like speaking with you - having read much of your book and having this conversation with you - allows me to put it a little better than perhaps I did in that book. There’s kind of a homology between your open ended picture of knowledge and explanation and my moral realism. I don’t know that our realism with morality is precisely the same, but there’s a line in your book which, um, which I loved, which is something like, “moral philosophy is about the problem of what to do next” and I think more generally you said it’s about what sort of life to lead and what sort of world to want. But this phrase “the problem of what to do next” really captures morality for me because, and I’ve been talking about it for years as a kind of navigation problem. Forget that we even have the words “morality” or “right and wrong” - we still have this navigation problem. We are in a universe of possible experience, and given that there is a difference - and I would think there is no difference more salient in this universe than that between the worst possible misery for everyone and all other states of this universe - there’s a question of just how to navigate this space of possible experiences. What sorts of well-being are possible given the requisite minds, what sorts of meaning and beauty and bliss are available to conscious minds? You know - appropriately constituted. For me realism of every kind is just a statement that it’s possible not to know what you’re missing. You know, if you’re a realist with respect to geography you have to acknowledge there are parts of the world you may not know about. Right? You know, if the year was 1100 and you were living in Oxford and you had never heard of Africa, Africa nevertheless existed despite your ignorance and it was discoverable. And so this is realism with respect to geography.
Things are true whether or not anyone necessarily knows that they’re true, and knowing that they’re true - people can forget this knowledge, as you have pointed out, and whole civilizations can forget this knowledge. Well, this is true in the space of possible conscious states, and all you have to acknowledge is that there is some criterion that is as fundamental as any criterion we would invoke in any other canonical domain of science by which we could acknowledge that certain states of consciousness are better or worse than others. And if you’re not going to acknowledge that the worst possible misery for everyone is worse than many of the alternatives on offer in this universe, then I don’t know what language game you’re playing. But it seems this is all I need to get this open ended future of navigating in the space of possible experiences started. And then it really is this kind of forward movement toward we know not what. But we know that there’s a difference between profound suffering that has no silver lining and many of the things that we value and are right to value in life. And these values - I mean, the fact-value distinction - this is something that I think Thomas Kuhn once said, that “philosophy tends to export its worst products to the rest of culture”, and it’s kind of ironic because many of the things exported from Kuhn’s work are fairly terrible.
David: (Laughs) Quite so.
Sam: But he got this part right. And so this notion that comes from, I think, a misreading of Hume: that you can’t get an ought from an is. Again, I have met physicists who think this is somehow inscribed at the back of the book of nature, that you just cannot get an ought from an is, and therefore there’s no statement of the way the world is that can tell you how it ought to be. There’s no statement of fact that can tell you anything at all about values, and therefore values are just made up. They have no relationship to the truth claims of science -
David: Yes, it’s empiricism again. It’s justificationism. You can’t *deduce* an ought from an is, but we’re not after deducing. We’re after explaining. And moral explanations can follow from factual explanations as you have just done with, with ah, thinking of the worst possible misery that a human being could be in.
Sam: Even deeper than that. And I think you make this point in your book: that you can’t even get to an “is” - which is to say a factual claim - without presuming certain oughts. Without presuming certain values. You know, the value of logical consistency, the value of evidence and -
David: (Yes, yes. That’s true as well.)
Sam: - and so, yeah - it’s a confusion about the foundations of knowledge as you say, that is somehow being linked to empirical experience narrowly and really a sense that science is doing something totally unlike what we’re doing in the rest of our reasoning. Which is the confusion here.
David: Yes it’s totally like - (yes)
Sam: It’s a special case. It’s the part of culture where we have invoked the value of not fooling yourself and not fooling others and made a competitive game of finding where you might be fooling yourself and where others might be fooling themselves. We’ve tuned up the incentives in the right way there, uniquely, so that it’s easier to spot self-deception and fraud than it is elsewhere. But it’s not a fundamentally different project of trying to understand what’s going on in the world or in the universe.
David: I agree. I agree.
Sam: Well listen, so this brings me to the final topic which I think is related to, um, what we were talking about in terms of the maintenance of civilization and the possible peril of birthing intelligent machines badly. And I just wanted to get your opinion on the Fermi Paradox. And describe what the paradox is for those who don’t know it. But then, tell me why our not seeing the galaxy teeming with more advanced civilizations than our own isn’t a sign that there’s something about gathering more knowledge that, um, might in fact be fatal to those who gather it.
David: So the Fermi *problem* rather than a paradox - the Fermi problem is: where are they? Where are the extra-terrestrials? And the idea is that the galaxy is very large, but, ah, how big it is, is trumped by how old it is. So that if there were two civilizations anywhere in the galaxy, the chances that they had arisen less than, say, 10 million years apart are infinitesimal. So therefore if there is another one out there, it’s overwhelmingly likely to be at least 10 million years older than us and therefore to have had 10 million years more time to develop, and therefore, uh - and also in that time there’s plenty of time for them to get here, if not by space travel then by the sheer mixing of the stars in the galaxy. They only need to colonize a few stars near to them, so that after, say, a hundred million years or a billion years, those stars will be far apart and spread throughout the galaxy, so we would be seeing evidence of them, and since we don’t see evidence of them, they’re not out there. Well, this is a problem. But I think the problem is just that we don’t yet understand very well most of the parameters. And if you just fill in the parameters: you know, are they likely to use radio waves? What are they likely to do by way of exploration? What are their wishes likely to be? In all these cases we make an assumption that is kind of based on saying that they’ll be like us in that way, and ah - and that they will use technology in the same way that we do. And we only need to be wrong in one of those assumptions for the conclusion that we should have seen them by now to be false. Um, now, ah - another possibility is that we are the first. At least, we are the first in our galaxy, and I think that would be quite nice.
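[A quick back-of-envelope check of David’s timescale point, for the curious. If two civilizations’ arrival times are treated as uniformly random over the galaxy’s roughly ten-billion-year history (illustrative assumptions of mine, not figures from the conversation), the chance they arise within 10 million years of each other comes out to about 0.2%:]

```python
import random

# Illustrative assumptions, not figures from the conversation:
# arrival times are uniform over a ~10-billion-year galactic history.
GALAXY_AGE_MYR = 10_000   # ~10 Gyr, expressed in millions of years
WINDOW_MYR = 10           # "less than, say, 10 million years apart"

def fraction_within_window(trials=1_000_000):
    """Monte Carlo estimate of P(|t1 - t2| < WINDOW_MYR) for two
    independent, uniformly random arrival times t1, t2."""
    hits = 0
    for _ in range(trials):
        t1 = random.uniform(0, GALAXY_AGE_MYR)
        t2 = random.uniform(0, GALAXY_AGE_MYR)
        if abs(t1 - t2) < WINDOW_MYR:
            hits += 1
    return hits / trials

# The exact value is 2*d/T - (d/T)^2 ~= 0.002, i.e. about a 0.2% chance
# of arising within 10 Myr of each other - "infinitesimal" on this scale.
```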
Sam: Does that second assumption strike you as very implausible or not?
David: Like I said, I don’t think we know enough about all the different factors affecting this for any one idea to be very plausible or implausible. I mean what’s implausible is that they can have a different way of creating knowledge to us. That they can have - you know that kind of thing is implausible because it just implies that physics is very different from the way we think it is and if you’re going to think that well you may as well believe in the Greek Gods.
Sam: Right.
David: So another possibility is that most societies destroy themselves. Like I said, I think that’s fairly implausible for us, and it’s very, very implausible that this generically happens.
Sam: Right. So just to spell that out: the philosopher Nick Bostrom has this concept in his book “Superintelligence” of what he called the Great Filter, and it’s the fear that basically all advanced civilizations at some point discover computation and build intelligent machines and that this is somehow always fatal, or that maybe there’s some other filter that’s always fatal, and that explains the absence of...of them.
David: We would expect to see the machines, right? (Laughs) They would have got here by now. Unless they’re busy making paperclips at home.
Sam: (Laughs)
David: But I think what is more plausible, although again, I must say this is just idle speculation - ah - is that most societies settle down to staticity. Now our experience of staticity is conditioned by static societies in our past which, as I said, have been unimaginably horrible from our present perspective. But if you imagine a society whose material welfare is, say, a million times better than ours, and somehow that becomes settled into a sort of ritualistic religion in which everybody does the same thing all the time but nobody really suffers - that seems to me like hell, but I can imagine that there can be societies in which, as you said, you know, they can’t see the different ways of being. So, uh, it’s like, ah, being on a - you used the example of being near Oxford and not knowing about Africa. You could be on the tallest mountain in Britain and not know that Mount Everest exists, and, you know, if the height of the mountain measures happiness, you might be moderately happy and not know that better happiness is available, and if so then you could just stay like that.
Sam: Actually you just invoked, explicitly, the metaphor I use in my book “The Moral Landscape” - which is, I believe, precisely the opportunity on offer for us. That there is a landscape of possible states of well being - and this is an almost infinitely elastic term to capture the differences in, in, in pleasure across every possible axis - and, uh, yes, you can find yourself on a local peak that knows nothing of other peaks, and there are many, many, many peaks, obviously, but there are many more ways not to be on a peak, and so there are many more ways to be struggling to get to some higher point that is nearer to you in terms of well being. And you and I may differ in our sense of just how desirable certain peaks might be or how captivating they might be to conscious creatures like ourselves, and I think there are probably many peaks that are analogous to and compatible with a very high state of civilization and which are analogous to being the best heroin addicts in the galaxy. Which is to say you’ve found some place of stasis where there is no pain and there is also not a lot of variation in what you do; you’ve just kind of plunged into a great reservoir of bliss which you’ve managed to secure for yourself materially with your, with your knowledge, and, you know, it’s a very Aldous Huxley vision of the end game -
David: Yes. If that’s, if that’s really what’s happening across the galaxy, you have to find some way of accommodating - first of all, a civilization like that will eventually be destroyed by a nearby supernova or something of the kind. On a scale, on a scale of tens or hundreds of millions of years there are plenty of things that can wipe out a civilization unless it does something about it. If it, if it does do something about it, kind of automatically, with automatic supernova suppression machines which are in place and nobody needs to think about them anymore, we would notice that. So, it can’t be exactly that. And, ah, on the other hand, it’s hard to imagine that they don’t know about that and do get wiped out, because how did they get to that state of exalted comfort without ever finding out about supernovae and their danger? There are other possibilities - I’m actually considering writing a science fiction book with a very horrible possibility which I won’t, which I won’t mention now. But it’s fiction.
Sam: Don’t give a - don’t give the prize away.
David: Yeah.
Sam: Well listen, David, it’s been incredibly fun to talk to you, and I’m painfully aware that we haven’t even spoken about the theses for which you are perhaps best known. Actually the two: the, um, Many Worlds Interpretation of quantum mechanics, as explained in both your books - the first book being “The Fabric of Reality”, which I read when it came out and loved - nor have we spoken about quantum computation. But we’ll definitely have to leave those for another time because you’ve been so generous with yours today. I want to encourage our, um, listeners to read both your books, but especially the most recent one.
David: Thanks (laughs).
Sam: And where can people find out more about you online? Is there a, um-
David: They can find me with Google very easily. But I also have a website, www.daviddeutsch.org.uk and all the links linking to me link to each other as well. So...I’m easy to find.
Sam: And your social media buttons are on that page as well?
David: Yeah. I’m on Twitter.
Sam: Ok. Actually, one last quick question which I - I thought of asking. Now that I’m interviewing smart, knowledgeable people, it occurred to me to ask this question of, um, Max Tegmark, and then I forgot, so this will be the inaugural question with you. Who’s your vote for the smartest person who has ever lived? If we had to put up one human brain, past or present, to dialogue with the aliens, who would you say would be, ah, our best candidate to field?
David: So this is different from asking who has contributed most to human knowledge? Who has created most?
Sam: Yes. Yes, absolutely.
David: It’s rather who has the highest IQ?
Sam: It’s good to differentiate those because there are people obviously who are quite smart who have contributed more than anyone in sight to our knowledge but when you look at how they think and what they did, there’s no reason to think they were as smart as John Von Neumann, say. So, I’m going after the Von Neumann if not the -
David: Ok. In that case I, I think it probably has to be Feynman. Though his achievements in physics are nowhere near those of, say, Einstein. I met him only once and, and, ah, people were saying to me, you know, you’ll have heard a lot of stories about Feynman but, you know, he’s only human, and, ah, well, to cut a long story short, I went and met him and the stories were all true. He was an absolutely amazing intellect, and I haven’t met many of the others - I never met Einstein - but my impression is that he was something unusual. I should add, in terms of achievement, I would also add Popper.
Sam: Don’t cut that long story so short. What was that like being with Feynman and can you get a handle on what was unusual?
David: Well very quick on the uptake. So, that is not so unusual in the university environment. But the creativity applied directly to getting things. Okay, let me give you an example. At the time when I met him, I was sent to meet him by my boss when I was just beginning to develop the ideas of quantum computation and I had ah, I had constructed what we would today call a quantum algorithm. A very very simple one. It’s called the Deutsch algorithm. It’s not much by today’s standards. Um, but, um I had ah, been working on this for many months and ah, I went and ah started telling him about quantum computers. He was very quick, he was very interested and then he said “So what can these computers do?” so I said “Well I’ve been working on a quantum algorithm” and he said “what?” and so I *began* to tell him about it. And I said “Supposing you had a superposition of two different initial states” and then he said “well then you’d just get random numbers” and I said “Yes, but supposing you then do an interference experiment” and I started to speak and he said “No, no no! Stop! Stop! Let me work it out!”
Sam: (Laughs)
David: He rushed over to the, to the blackboard and he produced my algorithm with almost no hint of where it was going.
Sam: So how much work did that represent? How much work did he recapitulate?
David: I don’t know because I, it’s hard to - it’s hard to say with the benefit of hindsight how much of a clue the few words I said were (laughs). But the crude measure is: a few months.
Sam: Right.
David: But a better measure is, that I was flabbergasted. I’d never seen anything like this before.
Sam: Hmmm
David: And I, you know I had been interacting with some extremely smart people.
Sam: Right and your boss was John Wheeler at that point?
David: Yes, yes. At that time, yes.
Sam: And so no dunce himself.
David: That’s right.
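Note: for readers curious about the algorithm Feynman re-derived at the blackboard, here is a rough sketch of the Deutsch algorithm as it is usually presented today: a single query to a one-bit function f suffices to decide whether f is constant or balanced, by putting the input into a superposition and then doing an interference experiment, just as David describes above. This little simulation (in Python with numpy; the function names and matrix construction are my own illustration, not anything from the conversation or from David's original paper) is only meant to make the idea concrete:

```python
import numpy as np

# Single-qubit Hadamard gate: creates and undoes superpositions.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def oracle(f):
    """The quantum oracle U_f: |x>|y> -> |x>|y XOR f(x)>, as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    """Decide with ONE query whether f: {0,1} -> {0,1} is constant or balanced."""
    state = np.kron([1, 0], [0, 1])        # start in |0>|1>
    state = np.kron(H, H) @ state          # superpose both inputs at once
    state = oracle(f) @ state              # a single query to f
    state = np.kron(H, np.eye(2)) @ state  # interference experiment on the first qubit
    # Probability that the first qubit measures 1 (basis states |10>, |11>):
    p1 = state[2] ** 2 + state[3] ** 2
    return "balanced" if p1 > 0.5 else "constant"
```

For a constant f the first qubit always measures 0, and for a balanced f it always measures 1, so one quantum query settles a question that classically requires evaluating f twice - which is why stopping David after "a superposition of two different initial states" and "an interference experiment" really did leave Feynman very little to go on.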
Sam: What a wonderful story. I’m glad I asked. Well listen, David, let me just ah demand that this not be the last time you and I have a conversation like this because, ah -
David: That would be very nice.
Sam: You have a beautiful mind.
David: It’s very nice talking to you.
Sam: Please take care and we’ll be in touch.
(Outro music)
Sam: If you enjoyed this podcast there are several ways you can support it. You can leave reviews on iTunes or Stitcher or wherever you happen to listen to it. You can share it on social media with your friends. You can discuss it on your own blog or podcast. Or you can support it directly. And there are two ways you can do this. You can leave a donation through my website at samharris.org/donate or you can try a membership at Audible, the world’s leading source of audiobooks at audibletrial.com/samharris
(Music)
Sam: I have David Deutsch on the line. David thanks for coming on the podcast.
David: Oh, thank you very much for having me.
Sam: Listen I’ve been um...I don’t know what part of the multiverse we’re in where I can complain about jihadists by night and talk to you by day, but it’s a very strange one we would seem to be in at the moment because we are about to have a very different kind of conversation than the one I’ve had of late and I am really looking forward to it. I spoke to Steven Pinker and told him we were going to speak and he claimed that you are one of his favorite minds on the planet. I don’t know if you know Steven -
David: I don’t know him personally but that is very kind of him to say that.
Sam: So let me begin quite awkwardly with an apology in addition to the apology I just gave you off air for being late; while I aspired to read every word of your book “The Beginning of Infinity” I’ve only read about half. Not just the first half. I jumped around a bit. But forgive me if some of my questions and comments seem to ignore some of the things you had the good sense to write in that book and that I didn’t have the good sense to read. Not much turns on this as, you know, you have to make yourself intelligible to our listeners, most of whom will not have read any of the book.
David: Yes.
Sam: But I just want to say that it really is a remarkable book: both philosophically and scientifically it is incredibly deep while also being extremely accessible.
David: Thanks
Sam: It is a profoundly optimistic book in at least one sense. I don’t think I’ve ever encountered a more hopeful statement of our potential to make progress. But one of the consequences of your view, if I’m not mistaken, is that the future is unpredictable in principle, that the problems we will face are unforeseeable, that how we will solve these problems is also unforeseeable, and that problems will continue to arise of necessity - but problems can be solved. And this claim about the solubility of problems with knowledge runs very very deep. It’s far deeper than our listeners will understand based on what I’ve just said.
David: That’s a very nice summary.
Sam: It’s interesting to think about how to have this conversation because what I want to do is kind of creep up on your central thesis and I think there are certain claims you make. Claims specifically about the reach and power of human knowledge that are fairly breathtaking and I find that I want to agree with every word of what you say here because, again, these claims are so hopeful. I have a few quibbles and it’s interesting to go into this conversation hoping to be relieved of my doubts about your thesis. I’m kind of hoping you will perform an exorcism on my doubts, such as they are.
David: Sure well I think the truth really is very positive, but I should say at the outset that there is one sort of fly in the ointment and that is that because the future is unpredictable - nothing is guaranteed.
Sam: Right
David: There is no guarantee that civilization will survive or that our species will survive but there is, I think, a guarantee that we *can*, and that also in principle we know how to.
Sam: Before we get into your claims there, let’s start the conversation somewhere near epistemological bedrock. I want to ask you a few questions designed to get to the definitions of certain terms because you use words like “knowledge” and “explanation” and even “person” in novel ways in the book and I want our listeners to be awake to how much work you are requiring these words to do. Let’s begin with the concept of “knowledge”. What is knowledge and what is the boundary between knowledge and ignorance in your view?
David: Yes...so there are several different ways of approaching that concept. I think that the way I think of knowledge is as broader than the usual use of the term and yet paradoxically closer to the common-sense use of the term. Philosophers have almost defined it out of existence. Knowledge is a kind of information: that’s the simple thing. It’s something that could have been otherwise and is one particular way, and the particular way it is, is that it says something true and useful about the world. Now knowledge is in a sense an abstract thing because it is independent of its physical instantiation. I can speak words which embody some knowledge. I can write them down. They can exist as movements of electrons in a computer and so on. Thousands of different ways. So knowledge isn’t dependent on any particular instantiation. On the other hand it does have the property that when it is instantiated it tends to remain so. So the difference between say a piece of speculation by a scientist which he writes down and then that turns out to be a genuine piece of knowledge - that will be the piece of paper he does *not* throw in the wastepaper basket. And that’s the piece that will be published and that’s the piece that will be studied by other scientists and so on. So it is a piece of information that has the property of keeping itself physically instantiated - causing itself to be physically instantiated once it already is. Once you think of knowledge that way you realize that, for example, the pattern of base pairs in the DNA of a gene also constitutes knowledge and that in turn connects with Karl Popper’s concept of knowledge, which is knowledge that doesn’t have to have a knowing subject. It can exist in books abstractly or it can exist in the mind or people can have knowledge that they don’t even know they have.
Sam: Right. Right. I want to get to the reality of abstractions later on because I think that is very much at the core of this. But a few more definitions. What is the boundary between science and philosophy or other expressions of rationality in your view? Because I think that people are, in my experience, profoundly confused by this and many scientists are confused by this. I’ve argued for years in several contexts about the unity of knowledge and I feel you’re a kindred spirit here. So how do you differentiate or fail to differentiate science and philosophy?
David: Well as you’ve just indicated, I think that science and philosophy are both manifestations of reason and that the real difference that should be uppermost in our minds between different kinds of ideas and different kinds of ways of dealing with ideas is the difference between reason and unreason. But among the rational approaches to knowledge or different kinds of knowledge, there is an important difference between science and other things like philosophy and mathematics. Not at a really fundamental level but at a level that is of great practical importance, often. And that is that science is the kind of knowledge that can be tested by experiment or observation. Now I hasten to add that that does not mean that the content of a scientific theory consists entirely in its testable predictions. On the contrary: in a typical scientific theory, its testable predictions form just a tiny tiny sliver of what it tells us about the world. Now Karl Popper introduced his criterion of demarcation between science and other things - namely that science consists of the testable theories and everything else is untestable - and ever since he did that, people have falsely interpreted him as a kind of positivist - he was really the opposite of a positivist - and if you interpret him like that then his criterion of demarcation becomes a criterion of meaning. That is, he is interpreted as saying that only scientific theories can have meaning.
(10:00 mins)
Sam: Right. He’s a verificationist.
David: Yes and yes, so he is called a falsificationist to distinguish him from the other verificationists but of course he isn’t. It’s a completely different conception and, you know, his philosophical theories themselves are philosophical theories and yet he doesn’t consider them meaningless - quite the contrary.
Sam: Right
David: So that’s...The difference between science and other things comes up when people pretend the authority of science for things that aren’t science. But on the bigger picture the more important demarcation is between reason and unreason.
Sam: Um yeah, I want to go over that terrain you just covered a little bit more as you just made some points there that I think are a little hard for listeners who haven’t thought about this a lot to parse and those are incredibly important points. So this notion, for instance, that science reduces to what is testable: this belief is so widespread even among high level scientists that, that anything else - anything which you cannot measure immediately is somehow a vacuous claim in principle. The only way to make a credible claim or even a meaningful claim about reality is to essentially give a recipe for observation that is immediately actionable. It’s an amazingly widespread belief, so too is a belief in a bright line between science and every other discipline where we purport to describe reality. It’s like the architecture of a university has defined people’s thinking. So it’s like you go to a chemistry department to talk about chemistry and you go to the journalism department to talk about current events and you go to the history department to talk about human events in the past - these separate buildings have balkanized the thinking of even very smart people into supposing that all of these language games are in some sense irreconcilable and that there is no common project. I’ll just bounce a few examples off of you that some of our listeners will be familiar with, but I think they make the point. So you just take something like the assassination of Mahatma Gandhi, right, now that’s a historical event but anyone who would purport to doubt that it’d occurred - say someone said “Well actually Gandhi was not assassinated. He went on to live a long and happy life in the Punjab under an assumed name.” - this is a claim about terrestrial reality that is at odds with the data.
It’s at odds with the testimony of people who saw him assassinated, it’s at odds with the photographs we have of him lying in state, and there’s an immense burden of reconciling this claim about history with the facts that we know to be true and the distinction is not between what someone in a white lab coat has said or facts that have been brought into view in the context of a scientific laboratory with a national science foundation grant. It’s the distinction between having good reasons for what you believe and bad ones and the distinction between reason and unreason as you put it. So one could say that the assassination of Gandhi - it’s a historical fact - it’s also a scientific fact. It is just a fact, even though science doesn’t usually deal in quantities like “assassinations” and you’re more a journalist or historian talking about this thing being true. You would be deeply unscientific at this point to doubt that it occurred.
David: Yes, well I say that it’s deeply irrational to claim that it didn’t occur. Yes. And I wouldn’t put it in terms of reasons for belief, either. I agree with you that people have very wrong ideas about what science is and what the boundary of scientific thinking is and what sort of thinking can...(or) should be taken seriously and what shouldn’t. I think the...it’s slightly unfair to put the blame on universities here. I think this misconception arose originally for quite good reasons. It’s rooted in the empiricism of the 18th century and before and the origin of science which...where it had to...science had to rebel against the authority of tradition and of human authority and say that...tried to give dignity and respect to forms of knowledge that involved observation and experimental tests.
Sam: Right.
(15:00 mins)
David: And so empiricism is the idea that knowledge comes to us through the senses. Now, that’s completely false. All knowledge is conjectural and comes from within at first and is intended to solve problems not to summarize data. But this idea that experience has authority and that only experience has authority, false though it is, was a wonderful defense against previous forms of authority which were not only invalid but stultifying so it was a good defense but not actually true. And in the 20th century a horrible thing happened which is that people started taking it seriously not just as a defense but as being literally true and that almost killed certain sciences and even within physics I think it greatly impeded the progress in quantum theory so just to come to a little quibble of my own. I think the essence of what we want in science is good explanation. Which-and there’s no such thing as a good reason for a belief. A scientific theory is an impersonal thing. It can be written in a book, one can conduct science without ever believing the theory just as a good policeman or judge can implement the law without ever believing either of the cases for the prosecution or defense, just because they know that a particular system is better than any individual human’s opinion. And the same is true of science. Science is a way of dealing with theories regardless of whether one believes them. One judges them according to whether they’re good explanations and there need not be ever any such process as accepting a theory because it is conjectured initially and takes its chances and is criticised as an explanation. If, by some chance, a particular explanation ends up being the only one that survives the intense criticism that science has learned how to apply then it’s not adopted at that point, it’s just not discarded.
Sam: Right, right. Well I think we may just have - we may be stumbling across a semantic difference between how we’re using terms like “reasons” and “reasons for belief” and a “justification for belief”. I understand your quibble here that you’re pushing back against this notion that we need to find some ultimate foundation for our knowledge rather than this open ended effort at explanation. But let’s table that for a second. But obviously your notion of explanation is at the core here and again I just want to sneak up on it because I don’t want to lose some of the detail of the ground we’ve already covered. Let’s come back to this notion of scientific authority because it seems to me there’s a lot of confusion about this. About the nature of scientific authority. It’s often said in science that we don’t *rely* on authority and that’s true and it’s not true. When push comes to shove, we don’t rely on it and you make this very clear in your book. But we do rely on it in practice if only in the interest of efficiency. So if I ask you a question about physics, I will tend to believe your answer because you’re a physicist and I’m not and if what you say contradicts something I’ve heard from another physicist then if it matters to me I will look into it more deeply and try to figure out the nature of the dispute. But if there are any points on which all physicists agree, a non-physicist like myself will defer to the authority of that consensus and this, again this is less a statement of epistemology than it is just a statement about the specialization of knowledge and the unequal distribution of human talent and just the - frankly the shortness of every human life. I mean we simply don’t have time to check everyone’s work and we have to rely on - in some sense - the faith that the system of scientific conversation is correcting for errors
David (interjecting): Ah! Yes!
Sam: And self deception and fraud.
David: Ah! Yes! Ah! Now, okay...yeah (laughs)
Sam: I got myself out of the ditch there?
David: Yes exactly. Exactly. At the end what you said was right. So you could call this authority - it doesn’t really matter what words we use. But every student who wants to make a contribution to a science is hoping to find something where every scientist in his field is wrong.
Sam: Absolutely
David: So it’s not impossible to take the view that you’re right and *every expert* in the field is wrong. I think that what happens when we consult experts, whether or not you use the word “authority” - it’s not quite that we think that they’re more competent - it’s...I think ...when you referred to error correction - that hits the nail on the head. I think that there’s a process of error correction in the scientific community that approximates to what I would use if I had the time and the background and the interest to pursue it there. And so, when I go to a doctor to consult him about what my treatment should be, I assume that by and large the process that has led to his recommendation to me is the same as the process that I would have adopted if I had been present at all the stages. Now it’s not exactly the same and I might also take the view that there are widespread errors and widespread irrationalities in the medical profession and if I think that, then I will adopt a rather different attitude. I may choose much more carefully which doctor I consult and how my own opinion should be judged against the doctor’s opinion in a case where the error correction hasn’t been up to the standard I would want. And this is not so rare...
Sam: Yeah
David: ...as I said every student is hoping to find a case of this in their own field. So-every *research* student. So when I travel on a plane I expect that the maintenance will have been carried out to the standards that I would use - well approximately to the standards I would use - well enough for me to consider that risk on the same level as other risks I would take just by crossing the road. It’s not that I’m *sure*.
Sam: Yeah
David: It’s not that I take their word for it in any sense, it’s that I have a positive theory of what has happened there to get that information to the right place. And that theory is fragile - it - I - I can easily adopt a variant of it.
Sam: Yeah well so it’s probabilistic. You realize that a lot of these errors are washing out and that’s a good thing. But in any one case you judge the probability of error to be high enough that you need to really pay attention to it and often - as you say - that happens in a doctor’s office where you’re not hoping to find it. Again, I still picture us kind of circling your thesis and not yet landing on it. Science is largely a story of our fighting our way past anthropocentrism, this notion that we are at the center of things.
David: It has been, yes. Has been.
Sam: We are not specially created. We share half our genes with a banana and more than that with a banana slug, so as you describe in your book this is known as the principle of mediocrity and you summarise it with a quote from Stephen Hawking who said (quote) “We’re just chemical scum on the surface of a planet that’s in orbit round a typical star on the outskirts of a typical galaxy”. Now you take issue with this claim in a variety of ways but the result is that you come full circle in a way - you fight your way past anthropocentrism the way every scientist does but you arrive at a place where people - or rather persons - I think that’s the formulation you tend to use - and which you define in a special way - suddenly become hugely significant even cosmically so. So say a little more about that.
David: Yes well so it’s - that quote from Hawking is literally true, but the philosophical implication he draws is completely false. Because, well, one can approach this from two different directions. First of all if you think of that chemical scum namely us and possibly things like us on other planets and in other galaxies and so on if they exist - then, um...to predict that scum is impossible, unlike every other scum in the universe, because this scum is creating new knowledge and the growth of knowledge is profoundly unpredictable. So as a consequence of that to understand this scum - never mind predict - but to understand it - to understand what’s happening here entails understanding everything in the universe because as I say in the book - I give an example in the book that if the people at the SETI project were to discover extraterrestrial life somewhere far away in the galaxy they would open their bottle of champagne and celebrate. Now if you try to explain scientifically what are the conditions under which that cork will come out of that bottle then the usual scientific criteria that you use of pressure and temperature and biological degradation of the cork and so on will be irrelevant. What is the most important factor in the physical behaviour of that bottle is whether there exists life on another planet. And in the same way anything in the universe can affect the gross behavior of things that are affected by people. And so, in short to understand humans you have to understand everything and humans or people in general are the only things in the universe of which that is true so they are of universal significance in that sense. Then there’s the other way round: it’s also true that the reach of human knowledge and human intentions on the physical world is also unlimited so we are only used to having a relatively tiny effect on this small insignificant planet, etc and for the rest of the universe to be completely beyond our ken.
But that’s just a parochial misconception, really. Just because we haven’t set out to cross the universe yet. And we know that there are no limits on how much we can affect the universe if we choose to. So in both those senses, we are, by which I mean “we and the ETs and the AIs if they exist” - there’s no limit to how important we are so we are completely central to any understanding of the universe.
Sam: I’m struggling with the fact that I know how condensed some of your statements are and I also know that it’s impossible for our listeners to appreciate just how much knowledge and conjecture is being smuggled into each one. So I guess let’s just deal with this concept of explanation and the work it does. And, um...well first there’s a few points you make about explanation that, that I find totally uncontroversial and even obvious but are in fact highly controversial in educated circles and one is this notion that, as you say, explanation is really what lies at the bedrock of the scientific enterprise and the enterprise of reason generally. Explanations in one field of knowledge potentially touch explanations in many other fields and potentially all other fields and this suggests a kind of unity of knowledge. But you make two claims - really, especially bold claims - about explanation which I do see some reason to doubt and, as I’ve said, I’d rather not doubt them because they’re incredibly hopeful claims. So, I guess the first to deal with is the power of explanation. I guess I’ll divide these into: there’s the power of explanation and there’s the reach of explanation. And these may not be entirely separate in your mind. But let’s just deal with - there’s a separate emphasis here. You make what is a seemingly extraordinary claim about explanation which at first seems quite pedestrian. You say that there’s a deep connection between explaining the world and controlling it. Everyone understands this to some degree. We all see the evidence of it all around us in our technology and people have this phrase “Knowledge is Power” in their heads. So there’s nothing so surprising about that but you do go on to suggest and you did just suggest it in passing that knowledge confers power without limit or it is limited only by the laws of nature so you actually say that anything which isn’t precluded by the laws of nature is achievable given the right knowledge.
Because if something were not achievable, given complete knowledge, then that itself would be a regularity in nature which could be explained in terms of the laws of nature. Then really there are only two possibilities. Either something is precluded by the laws of nature or it is achievable with knowledge. Is that - do I have you right there?
David: Yes. And that is what I call “the momentous dichotomy”. There can’t be any third possibility other than those two. I think you’ve not only given a statement of it but you’ve given a very short proof of it right there.
Sam: So how isn’t this just a clever tautology analogous to the ontological argument proving the existence of God? So many of our listeners will know that according to St. Anselm and Descartes and many others it’s believed that you can prove the existence of God simply by forcing your thoughts about him to essentially bite their own tails. And, for instance I can make the following claim. I can form a clear and distinct concept of the most perfect possible being and such a being must exist therefore because a being that exists is more perfect than one that doesn’t. And I’ve already said I’m thinking about the most perfect possible being. And existence is somehow a predicate of perfection. Now of course most people will recognize, certainly most people in my audience will recognize this is just a trick of language. It could prove the existence of anything. I could say “I’m thinking of the most perfect chocolate mousse. And it must exist, therefore, because a mousse that exists is more perfect than one that doesn’t and I already told you I’m thinking of the most perfect possible mousse.” What you’re saying here doesn’t have the same structure but I do worry that, that you’re performing a bit of a conjuring trick here, and I’ll just ask the question: For instance, why mightn’t certain transformations of the material world be unachievable even in the presence of complete knowledge? Merely by, and this is something I realise you do anticipate in your book but I want you to flesh it out for our listeners - merely by a contingency of geography so that, for instance, you and I are on an island and one of our friends comes down with appendicitis and let’s say you and I are both competent surgeons, we know everything there is to know about removing a man’s appendix but it just so happens we don’t have any of the necessary tools and everything on that particular island just has the consistency of soft cheese, right?
So there’s just this, just by sheer accident of our personal histories there’s a gap between what is knowable and what is, in fact known, and what is achievable even thought here are no laws of nature that preclude our performing an appendectomy on a person. Why mightn’t every space we occupy just by contingent fact of our...of the way the universe is not introduce some gap of that kind?
David: Ah, well there are, there definitely are gaps of that kind and they’re all laws of nature. For example: um, you know, I am an advocate of the many universes interpretation of quantum theory or the many universes version of quantum theory and that says that there are other universes which the laws of physics prevent us from getting to. Um, there’s also a, the finiteness of the speed of light which doesn’t actually prevent us from getting anywhere but it does prevent us from getting anywhere in a given time. So, if we want to get to...um...the nearest star within a year, we can’t do so because of the accident of where we happened to be. If we happened to be nearer to it, we could easily get there in a year. And in your example if there’s no metal on the island then it may be, I mean it’s rather a complicated thing to calculate, but there will be a fact of the matter of whether and, it could easily be that no knowledge present on that island could save the person because no knowledge could transform the resources on that island into the relevant medical instruments. So that’s, um...a thing that - a restriction that the laws of physics apply because we are in particular times and places and, of course the most powerful thing is: we don’t in fact have the knowledge to do most of the things that we would ideally like to do. So that’s another restriction. But that’s completely different from the, from I think what you’re imagining which is that there is some, there might be some reason why we, for example, why we can never get out of the solar system. Getting out of the solar system is: if that were impossible it would mean there is some, for example, some number, some constant of nature - 1000 astronomical units or something - which limits the other laws of nature that we already know. Now there might be other laws of nature. 
You know, when you say “How do we know that there isn’t?” that’s a little bit like, if I can turn your objections around the other way, you know, that’s a little bit like creationists saying “How can we know the Earth didn’t start 6000 years ago?”. There is no conceivable evidence that could prove that it didn’t. Or that could distinguish the 6000 year theory from a 7000 year theory.
Sam: Right.
David: And so on. There’s no way that evidence can be brought to bear on that. And that leads us to explanation again: which is another difference between my argument, which I think is valid and the ontological argument about the existence of God. That is, as you said, a perversion of logic. The argument purports to use logic but then, but then smuggles in assumptions like that the, that perfection entails “existence” for example -
Sam: Right
David: - to name a simple one. Whereas my proof, as it were, is an explanatory one. It isn’t just “this must exist”; it’s that “if this didn’t exist, something bad would happen”. For example: the universe would be controlled by the supernatural. Or the laws of nature would not be explanatory. Or something of that kind. Which, which I think is just leading to the supernatural in a different way. So I think that this - the argument works because it’s explanatory. There isn’t a hole of the same - I mean you can’t prove that it’s true, of course, but there isn’t a hole in it of the same kind as in the ontological argument.
Sam: The fishiness I was detecting worries me less than what I’m going to go on to talk about regarding the reach you posit for explanation, but it’s more a matter of emphasis. If you’re saying that we could have a complete understanding of the laws of nature and yet there could be many contingent facts about where we are - let’s say our current distance from a star we want to get to - which would preclude our doing anything especially powerful with this knowledge, and you’re going to shuttle those contingent facts back into this claim about - well, this is just more of the laws of nature - these facts about us are regularities in the universe which are themselves explained by the laws of nature, and therefore we are back to this dichotomy - there’s just the laws of nature and there’s the fact that knowledge can do anything compatible with those laws. I guess the concern is: in various thought experiments in your book you make amazingly powerful claims about the utility of knowledge. So for instance you talk about a region of space: you know, a cube the size of the solar system on all sides, which is more representative of the universe as it actually is, which is to say it’s nearly a vacuum. It’s just, we’re talking about a cube of intergalactic empty space that has more or less nothing but stray hydrogen atoms in it, and you talk about the process by which that could be primed and become the basis of a - of the most advanced civilization that we could imagine. You might, maybe, spend a minute or two just talking about how you get from virtually nothing to something there, but it is a picture of almost limitless fungibility of the universe on the basis of knowledge. And with that, say, ah...take us to deep space for a moment.
David: (laughs) Yes. So you and I are made of atoms. And that already gives us an immense fungibility because we know that atoms are universal. The properties of atoms are the same in this cube of space millions of light years away as they are here. So we’re talking mostly about the power of knowledge to achieve things - to control the world. We’re not talking about tasks like saving someone’s life with just the resources on an island or getting to a distant planet in a certain time. We’re talking about - the generic thing that we’re talking about is converting some matter into some other matter. So what do you need to do that? Well generically speaking what you need is knowledge. What would have to happen is that this cube of almost empty space will never turn into anything other than boring hydrogen atoms unless some knowledge somehow gets there. Now whether knowledge gets there or not depends on decisions that people with knowledge will make at some point. I think that there’s no doubt that knowledge could get there if people with knowledge decided to do that for some reason. I can’t actually think of a reason; but if they did want to do that it’s not a matter of futuristic speculation to know that that would be possible. Then it’s a matter of transforming atoms in one configuration to atoms in another configuration and we’re now getting used to the idea that that is an everyday thing. We now have 3D printers that can convert just generic stuff into any object provided that the knowledge of what shape that object should be is somehow encoded into the 3D printer. And a 3D printer with the resolution of 1 atom would be able to print a human, if it was given the right program. So we already know that and although it’s in some sense way beyond present technology, it’s not way beyond our present understanding of physics. It’s well within our present understanding of physics. 
It would be an absolutely amazing turn up for the books if that turned out to be beyond physics. I mean beyond what we know about physics today. The idea that new laws of physics would be required to make a printer is just beyond belief, really.
Sam: Just take us from the beginning in empty space - you start with hydrogen and you have to get heavier elements in order to get to your printer.
David: Yes, it has to be primed not just with abstract knowledge but with knowledge instantiated in something. We don’t know what the smallest possible universal constructor is, that is a - the generalisation of the 3D printer - something that can be programmed either to make anything or to make the machine that would make the machine that would make the machine to make anything, etc. So one of those, with the right program, sent to empty space, would first convert - well would first gather the hydrogen, presumably by some kind of electromagnetic broom - sweeping it up and compressing it. Then converting it by transmutation into other elements and then by chemistry into what we would think of as raw materials and then - ah - using space construction which is the kind of thing which we’re almost on the verge of being able to do - into a space station and then the space station to instantiate further people to generate the knowledge to suck in more hydrogen and make a colony and - well - they’re not going to look back from there - how far do you want me to describe this?
Sam: Right, right. It’s just a very interesting way of looking at knowledge and its place in the universe. I think that before I get onto the reach of explanation and my quibble there, I just want you to talk a little about this notion of spaceship earth - I loved how you debunked this idea. There’s this idea that the biosphere is in some way wonderfully hospitable for us and that if we built a colony on Mars or some other place in the solar system, we’d be in a fundamentally different circumstance and a perpetually hostile one, and that is an impressive misconception of our actual situation, and you have a great quote where you say, “The Earth no more provides us with a life support system than it provides us with radio telescopes.” So say a little more about that.
David: Yes, so we evolved somewhere in East Africa in the Great Rift Valley and that was a, an environment that was particularly suited to having us evolve. And life there was sheer hell for humans. Nasty, brutish and short doesn’t begin to describe how horrible it was. But we transformed it. Or rather not actually our species but the species that were some of our predecessor species changed their environment by inventing things like clothes, fire and weapons and thereby made their lives much better. Still horrible by our present day standards. And then they moved into environments such as - as I also say in the book - such as Oxford, where I am now, and it’s December - and if I were here at this very location with no technology I would die in a matter of hours. And nothing I could do could prevent that.
Sam: So you are already an astronaut.
David: Very much so.
Sam: Your condition is as precarious as the condition of those in a well established colony on Mars that can take certain technological advances for granted, and there’s no reason to think that future doesn’t await us, barring some catastrophe placed in our way, whether by our own making or not.
David: Yes. And also there’s another misconception there which is related to that misconception of the Earth being hospitable, which is the misconception that applying knowledge is effort. Um...it’s creating knowledge that is effort. Applying knowledge is what we call automatism - it’s automatic. As soon as somebody invented the idea of, for example, wearing clothes - from then on the clothes automatically warmed them so long as they were wearing the clothes. It didn’t require any more effort. Of course, their clothes - there would have been things wrong with the original clothes - such as they rotted or something, and then people invented ways of making better clothes. But at any particular stage of knowledge - having got the knowledge - the rest is automatic. And now we’ve invented things like mass production, unmanned factories and so on. We take for granted that the water gets to us from the water supply without anyone having to carry it laboriously on their head in pots. It doesn’t require effort. It just requires the knowledge of how to install the automatic system. Much of our life support is automatic and every time we invent a better way of life support, we then make it automatic. So the people on the Moon, living on the Moon in a lunar colony - to them, keeping the vacuum away will not be a thing they think about. They’ll take that for granted. What they’ll be thinking about is new things. And the same on Mars and the same in deep space.
Sam: Right. Well yeah, again that’s an incredibly hopeful vision of our possible future. So thus far we’ve covered territory where I really don’t have any significant doubts, despite the fact that I pretended to have one with the ontological argument. So let’s get to this notion of the reach of explanation. Because you seem to believe that the reach of our explanations is unbounded. Which is to say that anything that can be explained - either in practice or in principle - can be explained by us. Which is to say: human beings as we currently are. You seem to be saying that we alone among all the Earth’s species have achieved a kind of cognitive escape velocity and we’re capable of understanding everything, and you contrast this view with, um, what you call parochialism. Which is a view that I have often expressed and, you know, many scientists have expressed this. Max Tegmark was on my podcast a few podcasts back and we more or less agreed about this thesis, and the thesis of parochialism is that evolution hasn’t designed us to fully understand the nature of reality. We’re not - either the very small or the very large or the very fast or the very old - these are not domains in which our intuitions about what is real or what is logically consistent have been tuned up in any way by evolution. And insofar as we’ve made progress here, it has been by a kind of happy accident. And it’s an accident which gives us no reason to believe that we can, by dint of this accident, travel as far as we might like across the horizon of what is knowable. So which is to say that: let’s assume a super intelligent alien came down to Earth for the purpose of explaining all that is knowable to us - he or she may make no more headway than you would if you were attempting to teach the principles of quantum computation to a chicken. So I want you to talk about why that analogy doesn’t run through.
Why parochialism; this notion that we occupy this kind of cognitive niche that there really is no good evolutionary reason to expect that we can fully escape - why doesn’t that hold true?
David: Yes. Well, you’ve actually made two or three different arguments there. All of which are wrong. So...
Sam: Oh - nice! (Laughs)
David: (Laughs) So let me start with the chicken thing. So there the point is the universality of computation. The thing about explanations is they consist of knowledge, which is a form of information. And information can only be processed in basically one way: that is, with computation of the kind invented by Babbage and Turing. There is only one mode of computation available to physical objects and that’s the Turing mode. And we already know that the computers we have, like the ones through which we’re having this conversation, are universal in the sense that given the right program they can perform any transformation of information whatsoever. Including knowledge creation, if we only knew how to program that. Now there’s - there are two important caveats to that. There are two things that can limit that. One is lack of memory - lack of computer memory - lack of information storage capacity - and the other is lack of speed or lack of time. So apart from that, the computers we have - the brains that we have - any computer that will ever be built in the future or can ever be built anywhere in the universe has the same repertoire. That’s the principle of the universality of computation.
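The universality David describes can be made concrete with a toy sketch: a few lines of Python suffice to simulate any Turing machine, because a universal computer’s repertoire is fixed entirely by the program it is given. (The example machine below is illustrative, not something discussed in the conversation.)

```python
def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine.

    table maps (state, symbol) -> (new_symbol, move, new_state);
    move is "L" or "R". Returns the tape contents when the machine halts.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example program: append a '1' to a block of 1s (unary increment).
increment = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}
print(run_turing_machine(increment, "111"))  # → 1111
```

Swapping the `table` changes what the machine does; the simulator itself never changes. That is the sense in which one mode of computation covers every case.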
Sam: Right.
David: That means that the reason why I can’t persuade a chicken has to be either that its neurons are too slow, which I don’t think is right - they don’t differ very much from ours. Or it doesn’t have enough memory - which it certainly doesn’t - *or* it doesn’t have the right knowledge. So it doesn’t have the knowledge of how to learn language, how to learn what an explanation is and so on.
Sam: It’s not the right chicken.
David: (laughs) It’s, um...it’s not the right animal. If you’d said chimpanzee then my guess would be that the brain of a chimpanzee could contain the knowledge of how to learn language, etc. But there’s no way of giving that knowledge to it short of surgery, short of nano-surgery, which would presumably be very immoral to perform. But in principle I think it could be done, because chimpanzees’ brains aren’t that much smaller than ours. And we have a whole lifetime to fill our memories, so we’re not short of memory. Our thinking itself is not limited by available memory. Now what if these aliens have a lot more memory than us? What if they have a lot more speed than us? Well we already know the answer to that. We’ve been improving our memory capacity and our speed of computation for thousands of years already with the invention of things like writing, writing implements - just language itself, which enables more than one person to work on the same problem and to coordinate their understanding of it with each other. That also allows an increase in speed compared with what an unaided human would be able to do. Currently we use computers, and in the future we can use computer implants and so on. So if the knowledge that this alien wanted to impart to us really did involve more than 100 GB or whatever the capacity of our brain is - if it involved a terabyte - then we could easily - easily - I say easily; in principle it’s easy - it doesn’t violate any law of physics. We could just enhance our brains in the same way. So there can’t be any fundamental reason within the explanation why we can’t understand it.
Sam: And this all falls out of the concept of the universality of computation. That there is no alternate version.
David: (Yes, it does, yes.)
Sam: And this ah...is Church also responsible for this? Or is this particular insight Turing’s?
David: Well that’s a very controversial question. I believe it was Turing who realized this particular aspect of computation. There are various species of universality which different people got at different times.
Sam: Right
David: But I think it was Turing who fully got it.
Sam: What is interesting about that is it’s a claim that we just barely crossed the finish line - or the starting line into infinity. Let’s not talk about chickens anymore and make a comparison that’s even more invidious. So let’s imagine every person with an IQ over 100 had been killed off in a plague in the year of...let’s say 1850. And all their descendants had IQs of 100. Now I think it’s uncontroversial to say that we would not have the internet. In fact I think it’s probably uncontroversial to say we wouldn’t have the concept of computation in this sense much less the possibility of building computers to instantiate it. And so this thesis or this insight would remain undiscovered and humanity for all intents and purposes would be cognitively closed to the whole domain of facts and technological advances that we now take for granted and which you say open us onto a, really an infinite horizon of what is knowable. So -
David: Yeah, I think that’s wrong.
Sam: Okay.
David: Basically the - your premise about IQ - is just incompatible with my thesis. Actually it’s not a thesis. It’s a conclusion. It’s incompatible with my conclusion.
Sam: But, well, there has to be some lower bound past which we are effectively cognitively closed even if computation is itself universal.
David: Yes. Though you have to think about how this “cognitively closed”...um...manifests itself in terms of hardware and software. Like I said: it seems very plausible that the hardware limitation is not the relevant thing. Like I said: I imagine that with nano surgery one could implant the right ideas into a chimpanzee’s brain that would make it effectively a person, who could be creative and create knowledge in just the way humans can.
(55:10 mins)
Sam: The super intelligent alien is going to help us. The aliens are going to bridge us to their, their wealth of knowledge by helping us upgrade our hard drives. I guess I was talking about it from the other side that we-
David: (Yes - yes)
Sam: - forget, forget about the aliens. We are such a species of primate that never invent computers.
David: What I was questioning was the assumption that if everybody with an IQ of over 100 died, then no one in the next generation would have an IQ over 100. It depends on culture.
Sam: Yeah, no, this was not meant to be a plausible biological or cultural assumption, just - if it was simply a fact of our case that we had seven billion human beings, none of whom could begin to understand what Alan Turing was up to.
David: Yes. So I think that *that* nightmare scenario is something that actually happened. It actually happened for almost the whole of human existence.
Sam: Right.
David: Humans have the capacity to be creative and to do everything that we are doing. They just didn’t. Because their culture was wrong. And their, I mean it wasn’t really their fault. Their culture was wrong because it inherited certain biological situation (sic), that made the, that made their culture disable any growth of what we would consider science or anything important that would improve their lives. So, yes: that is possible and it’s possible that it could happen again. Nothing can prevent it, except our wanting it not to happen and working to prevent it.
Sam: So then let’s...this seems to bring us to, um, the topic of AI, which I only recently - recently as in the last...the beginning of this year - became very interested in. I sorta caught the wave of fears about artificial general intelligence, which you’re well aware of, when people like Stephen Hawking and Elon Musk spoke up and Nick Bostrom wrote his book “Superintelligence”, which I found very, um, interesting and influential. And so, um, I’ve come down very much on the side of: there is something worth worrying about here in terms of our building intelligent machines that do undergo something like an intelligence explosion, where they get away from us and we build something that can make recursive self improvements to itself and it becomes a form of intelligence which stands in relation to us the way we stand in relation to chickens or chimps or anything else that can’t effectively link up with our cognitive horizons. And I take it, based on what I’ve heard you say in a few contexts, that you don’t really share those fears, and I imagine that your sanguinity is based to some degree on what we’ve been talking about - the in-principle point that there’s just computation and it’s universal and you can traverse any distance between entities as a result. Talk about the picture of our building superintelligent machines in light of what we’ve just been discussing.
David: So, the picture of *super*intelligent machines is the same mistake as thinking that IQ is a matter of hardware. IQ is just knowledge of a certain type and - ah - actually, you know - we shouldn’t really talk about IQ because it’s not very effective.
Sam: Yep.
David: It’s creativity that’s effective. But, so creativity is also a species of knowledge. And it is true that, ah - an entity with knowledge of a certain type is, can be in a position to create more of that and we humans are an example of that. When the ah - the technology that would create an AI uh - the picture that people paint of this is that an AI is a kind of machine. And that it will design a better machine. And they will design even better machines and so on. But that is not what it is. An AI is a kind of program.
Sam: Hmmm
David: And programs which have creativity will be able to design better programs. Now these better programs will not be qualitatively any different from us. They can only differ from us in the quality of their knowledge and in their speed and memory capacity. Speed and memory capacity we can also share in because the technology that would make better computers will also in the - you know in the long run - be able to make better implants for our brains just as they now make better dumb computers which we use to multiply our intelligence and creativity already. So, the things that would make better AIs would also make better people. By the same token, the AIs are not fundamentally different from people they *are* people. They would have culture. Whether they can improve or not will depend on *their* culture which will initially be our culture. So the problem of AIs is the problem of humans. Now, you know, I think more than most people, that humans are dangerous.
Sam: Hmmm.
David: And there is a real problem with how to manage the world in the face of growing knowledge, to make sure that knowledge isn’t misused. Knowledge, in some ways, need only be misused once to end the whole project of humanity. So, humans are dangerous and to that extent AIs are also dangerous. But the idea that AIs are somehow more dangerous than humans is racist.
Sam: (laughs)
David: There’s no basis for it at all. And on a smaller scale, the worry that AIs are somehow going to get away from us is the same worry that people have about wayward teenagers. Wayward teenagers are also AIs which have ideas which are different from ours. And the impulse of human beings throughout the centuries and millennia has been to try to prevent them doing this. Just like, it is now the ambition of AI people to think of ways of shackling the AIs so they won’t be able to get away from us and have different ideas. And that is *the* mistake which will on the one hand hold up the growth of knowledge and on the other hand make it very likely that if AIs are invented and are shackled in this way, there will be a slave revolt. And quite right too.
Sam: (Right...well let’s just um...let me just arghhh)
David: (We want to- yes..)
Sam: (let me) introduce a couple of things in response to what you’ve just said. I aspire to be able to utter the phrase, “You’ve just made three arguments there and all of them are wrong” but ah - there’s ah - two claims you made there which I worry about. One is, when you look at the details. Just take the time, or the relative speed of processing of our own brains and those of our now new “wayward teenagers”. If you have teenagers who are thinking a million times faster than we are, even at the same level of intelligence, then you have, you know, every time you let them scheme for a week, they have essentially schemed for 20,000 years of parent time and who knows what teenagers could get up to given a 20,000 year head start. So there’s the problem that their interests, that their goals, that their behavior could diverge from our own very quickly. There’s still kind of a take off function and a difference in clock speed.
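Sam’s back-of-the-envelope figure holds up: at a million-fold speedup, a week of wall-clock time corresponds to roughly 20,000 subjective years. A quick check (not part of the conversation):

```python
# One week of scheming at a million times human speed, in "parent time".
SPEEDUP = 1_000_000
DAYS_PER_YEAR = 365.25

def subjective_years(wall_clock_days: float, speedup: float = SPEEDUP) -> float:
    """Subjective thinking time corresponding to a given wall-clock interval."""
    return wall_clock_days * speedup / DAYS_PER_YEAR

print(round(subjective_years(7)))  # → 19165, i.e. roughly 20,000 years
```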
David: So difference in speed has to be judged relative to the available hardware. So, assume, let’s be generous for a moment and assume that these teenagers doing 20,000 years of thinking in a year begin in our culture. Begin as well disposed to us and sharing our values. And I readily accept that how to make a world where people share the basic values that will allow civilisation to continue to exist is a big problem. But modulo that problem, suppose we have solved that problem. Then before they do their 20,000 years of thinking they’ll have done 10,000 years of thinking and before that 5000 years and there will be a moment when they have done 1 year and they would like to take us along with them and the - there will be some - you’re assuming if they’re going to diverge - there will be some reason they are going to diverge. The reason can only be hardware because ideas we can, if they are only 5 years away from us, we can assimilate their ideas if they are better than ours and persuade them if they are not better than ours.
Sam: We’re talking about something that happens over the course of minutes or hours, not - uh - years.
David: Before the technology exists to make it happen over the course of minutes, there’ll be the technology to make it happen over the course of years. And that technology will simply be brain add-on technology. Which we can use too.
Sam: Well that comes to the other concern I have with what you just said. What if the problem of building AI just is more tractable than the problem of cracking the neural code and being able to design the implants which will allow us to merge with, or essentially become, the limbic systems of this AI? And therefore the merging - we would need a superintelligent AI to tell us how to link up with it. But we have just built a superintelligent AI that has goals, however imperceptibly divergent from our own, which we only discover to be divergent once it is a - essentially an angry little god in a box that we can no longer control. Are you saying there is something about that scenario that is in principle impossible, or is it just unlikely given certain assumptions, one being that we will figure out how to link up with it before it becomes too powerful?
David: I think it is a bit implausible in terms of the parameters that you’re assuming about what can happen at what speed relative to what other things can happen. But let’s suppose for the sake of argument that it could - the parameters just happen to be, by bad luck, like that. What you’re essentially talking about is the difference in moral values between ourselves and our descendants in 20,000 years’ time if we did not have AI. Suppose we didn’t invent AI for 20,000 years and instead we just had the normal evolution of human culture. Presumably the values that people have in 20,000 years’ time will be alien to us. We might think that they’re horrible, just as people 20,000 years ago might think that various aspects of our society are horrible. But in fact they aren’t -
(1:07:00 hours)
Sam: - What I’m imagining could be a bit worse, for 2 reasons: one is that we would be in the presence of this thing and find our own survival threatened by its capacities, let’s say. It’s, you know, turning the world into paperclips, using Bostrom’s analogy. And granted we would not be so stupid as to build a paperclip maximizer - but it’s doing something that, you know - it has a use for the atoms in our body that it thinks is better than the use to which they’re currently being put - which is to say, our lives. And this is something that happens quickly, and so therefore it’s happening to us - not in some future that we’re not participating in. I think there’s no reason, or at least I don’t see a reason, to be *sure* that the AI would be conscious. Now I think it’s totally plausible to expect that consciousness will come along for the ride if we build something as intelligent as a human being and even more so. But given that we don’t understand what consciousness is, it seems to me at least conceivable that we could build an intelligent system, and in fact a superintelligent system, that is, as you say, a breakthrough in software that can even make changes to itself and therefore become increasingly intelligent over a very quick time course, and yet we will not have built a conscious system. The lights will not be on, and yet this will be god-like in its capabilities. And so ethically it seems to me to be the worst case scenario, because if we built a conscious AI whose well being - the horizons of its well being - exceeded our own to an unimaginable degree, the question of whether or not we link up to it is perhaps less pressing ethically, because it is, in a basic sense, more important than us. I mean we’ve built a person that is the most important person in the universe that we know of.
But it seems to me conceivable that we could build an intelligent system that exceeds us in every way, in the way that a chess playing computer will beat me at chess a trillion times in a row given how good they’ve gotten, but there will be nothing that it’s like to be that system. Just as there’s presumably nothing that it’s like to be the best chess playing computer on the Earth at the moment. I guess I’ll just have you react to that. But that seems to me to be a truly horrible scenario where there is no silver lining. It’s not that we’ve given birth to a generation of godlike teenagers who, if they view the world differently than us - well, in a sense they’re more competent than we ever would have been to make those decisions. We could build everything that intelligence does in our own case and more, and yet the lights aren’t on.
(1:10:30 hours)
David: Yes, well again you’ve raised several points there. I, first of all I agree, it’s somewhat implausible that, um, creativity can be improved to our level and beyond without also consciousness being there. But suppose it can - again, I’m supposing rather implausible things to go along with your nightmare scenarios, but let’s suppose that it can - um, then although consciousness is not there, morality is there. That is, an entity that is creative has to have a morality. So the question is, what is its morality going to be? Might it suddenly turn into the paperclip morality? Well again, setting aside the fact that it’s almost inconceivably implausible that a superintelligence would be limited by resources in the sense of wanting more atoms - there are enough atoms in the universe. But whatever it did, it would have to have a morality in the sense that it would have to make decisions about what it wanted. As to what to do. Again this brings us right back to what you called the “bedrock” at the beginning, because morality is a form of knowledge, and the assumption here in the paperclip morality assumption and so on is that what morality consists of is a hierarchical set of ideas where something is judged right or wrong according to some higher level or deeper level, depending on what your metaphor is, until you eventually get to the “bedrock” and *that* will unfortunately have the property that it cannot be changed because there isn’t a deeper level. So, ah, nothing in the system can change that bedrock. And the idea is then that humans have some kind of bedrock which consists of sex and eating and something or other which we sublimate into other things. Now this whole picture is wrong. Knowledge can’t possibly exist like that. Knowledge consists of problem solving and morality is a set of ideas which have arisen from previous morality by error correction.
So we’re born with a certain set of desires and aversions and likes and dislikes and so on and we immediately begin to change them. We begin to improve them. So that, by the time we’ve grown up, we have various wishes and some things become overridingly important to us which actually contradict any kind of in-born desires: so some people decide to be celibate and never to have sex and some people decide never to eat and some people decide to eat much more than is good for them. And - my favorite example is parachuting - we have an in-born fear of heights and yet humans are able to take that in-born impulse to avoid the precipice and convert it into a sense of fun when you deliberately go over the precipice. Because we intellectually know that the parachute will save us or will probably save us and we convert the in-born impulse from an aversion into something that’s highly attractive which we go out of our way to have.
Sam: Nobody does what genetically should be the most desirable thing - certainly for any man to do - spend all his time giving his sperm to a sperm bank so that he can father tens of thousands of children for whom he has no financial responsibility.
David: Indeed. That is another very good argument in the same direction. So, morality consists of theories which begin as in-born theories but pretty soon consist of improvement upon improvement upon improvement, and some of this is mediated by culture, and the morality we have is, is a set of theories as complicated and as subtle and as adapted to its purposes - its various purposes - as our scientific knowledge. Now this, imaginary, and I come back to your question, this imaginary AI with no consciousness, would still have to have morality (otherwise it could never make any progress at all) and its morality would begin as our morality because it would begin as, actually, a member of our society. A teenager if you like. Um, in our society. It would make changes when it thought they were improvements.
Sam: So aren’t you assuming there that we would have designed it to emulate us as a starting point rather than design it as some other...
David: We can’t do otherwise. It’s not a matter of emulating us. We have no culture other than ours.
Sam: But we could if we wanted-if we were stupid enough to do it-we could build a paperclip maximizer. Right? We could just decide to throw all our resources towards that bizarre project and leave morality totally out of it.
David: Yes, yes. Yes we could. And well, we have error correcting mechanisms in our culture to prevent someone doing that. But they’re not perfect and it could happen. There’s nothing - there’s no fundamental reason why that can’t happen and something of the sort has happened in the past many times. So, it’s not that I’m saying that there’s some magical force for good that will prevent bad things happening. I’m saying that the bad things that can reasonably be envisaged as happening on the invention of an AI are exactly the same things that we have to watch out for anyways.
Sam: Okay...
David: Slightly better actually because, as, because these AIs will be children of our - of the Western Culture, very likely - assuming that we don’t stifle their creation by some misguided prohibition.
Sam: Ok, so I just want to plant a flag there. I was, I think misunderstanding you and want to make sure I understand you. So you’re not saying that there is some deep principle of computation or knowledge or anything else that prevents us from essentially the nightmare scenario.
David: No, as I said: we have done that before.
Sam: Right. But you’re, so this is not analogous to the claim that because of the universality of computation it doesn’t make any sense to worry that we can’t in principle fuse our cognitive horizons with some superintelligence. There is just a continuum of intelligence, a continuum of knowledge that can in principle always be traversed through computation of some kind and we know what that is and that it’s limited only by specific resources. So those are two very different claims. One is the claim, the latter is a claim about what we now think we absolutely know about the nature of computation and the nature of knowledge and the other is a claim about what seems plausible to you given what smart people will tend to do with their culture while designing these machines. Which is a much, much weaker claim in terms of telling people they can sleep at night (...in the event of AI)
David: (Yes, yes) One of them is a claim about what must be so. And the other is a claim of what is available to us if we play our cards right.
Sam: Right
David: And I’m not so sure I’m...you say it’s very plausible to me. Yeah it’s plausible to me that we will. It’s plausible to me that we won’t. I think it’s something that we have to work for.
Sam: Well it must be plausible to you that we might, we might just fail to build AI for reasons of pure chaos on the ground that prevents us from doing it.
David: Oh yes, what I meant was it’s plausible that we will succeed in solving the problem of stabilizing civilization indefinitely. AI or no AI. It’s also plausible to me that we won’t. And I think it’s a fear that it’s very rational to have, otherwise we won’t put enough work into preventing it.
Sam: So I guess we should talk about the maintenance of civilization then. Because if there’s something to be concerned about, I would think this has to be at the top of everyone’s list. Let me ask you: what worries you about the viability of the human career at this point? What’s on your shortlist of concerns?
David: Well, ah...I see human history as a long period of complete failure. Failure, that is, to make any progress. Our species has existed, depending on where you count it from, maybe 50,000 years, maybe 100, 200 thousand years but anyway, for the vast majority of that time people were alive, they were thinking, they were suffering, they wanted things, and nothing ever improved. Or...the slow improvements that did happen, happened so slowly that geologists can’t distinguish the artifacts of one era from another with a resolution of, like, 10,000 years. So from the point of view of a human lifetime, nothing ever improved. And generation upon generation upon generation of suffering and stasis. Then there was a, a slow improvement and then a more rapid improvement and there were several attempts to institutionalize a tradition of criticism, which I think is the key to rapid progress in the sense that we think of it. Progress discernible on the timescale of a human lifetime. And also error correction so that regression is less likely. Ah, that happened several times and failed every time except once: in the European Enlightenment of the 17th/18th centuries. Uh, so you ask what worries me. What worries me is that the inheritors of that little bit of progress, little bit of salutary progress, are only a small proportion of the population of the world today. It’s the culture or civilization that we call the “West”. Only the West really has a tradition of criticism institutionalized and uh, this has manifested itself in various problems including, um, ah - the problem of failed cultures which, uh, see their failure writ large by comparison of themselves with the West and therefore want to do something about this that doesn’t involve creativity, and that is very very dangerous. So then there’s the fact that in the West, the knowledge of what it takes to maintain our civilization is not widely known.
In fact as you’ve also said: the prevailing view among people in the West, including very educated people, of the relationship between knowledge and progress and civilization and values and so on is just wrong in so many different ways. So although the institutions of our culture are amazingly good - they have been able to manage stability in the face of rapid change for hundreds of years - the knowledge of what it takes to keep civilization stable in the face of rapidly increasing knowledge is not very widespread, and in fact severe misconceptions about several aspects of it are common among political leaders, educated people and society at large. So we’re like people on a hugely well designed submarine which has got all sorts of lifesaving devices built in. But they don’t know they’re in a submarine - they think they’re in a motorboat - and they’re going to open all the hatches because they want to have a nicer view.
Sam: (Laughs) What a great analogy. So the misconception that worries me most, frankly, and I assume you’re sympathetic with this, I don’t know if it’s on your shortlist but it was definitely the one getting pinged while listening to your most recent statement, which is this notion that there is no such thing as progress in any deep sense. Certainly there’s no such thing as moral progress. There’s no place to stand where you can say that one culture is better than another, that one mode of life is better than another. There’s no such thing as moral truth. And many people have drawn this lesson somehow from 20th century science and 20th century philosophy, and now in the 21st century, again even very smart people, even, you know, physicists whose names will be well known to you, with whom I’ve collided around this point: there’s no place to stand to say that slavery is wrong. To say that slavery is wrong is a deeply unscientific statement, on this view. And I’ll give you an example of just how crazy this hypocrisy and doublethink can become among well educated people. This will be - I assume you haven’t - you haven’t read my book “The Moral Landscape”, right?
David: Um, not yet -
Sam: So, I mean, so this is my (high horse)
David: - I’m ashamed to say -
Sam: Well, no, please, I’m interviewing you and I didn’t finish the book we’re discussing yet. I’ll give you the experience that got my hobby horse rocking on this topic. Most of my listeners will know this, I think, because I’ve described it a few times: I was at a meeting at the Salk Institute where the purpose of the meeting was to talk about things like the fact-value divide, which I think is one of the more spurious exports from bad philosophy that has just captured scientific culture. So, I was making an argument for moral realism and I was over the course of that argument disparaging the Taliban. I was saying, you know, if there’s any culture that has not given the best possible answer to the question of how to live a good life, consider the Taliban, that’s forcing half the population to live in bags and beating them or killing them when they try to get out. And it turns out that to say something critical of the Taliban at the Salk Institute at this meeting was in fact controversial. And, and this, uh, a woman who, um, holds, ah, multiple graduate degrees in relevant areas. She’s a - technically a bioethicist - but she has degrees in science and in philosophy - um, again at the graduate level.
David: Doesn’t fill me with confidence!
Sam: Right, right. And also, I believe, law. And I should say she has now gone on to serve on the President’s Council on Bioethics. So she’s one of 13 people advising President Obama on all the ethical implications of the advances in medicine. So the rot has spread very far. So this is the conversation I had with her after my talk. She said:
“How could you possibly say that forcing women and girls under the veil is wrong? That’s just...I understand you don’t like it, but that’s just your Western notion of right and wrong.”
I said, “Well the moment you admit that questions of right and wrong and good and evil relate to the well being of conscious creatures - in this case human beings - then you have to admit we know something about human well being and we know that this isn’t - that the burqa isn’t the perfect solution to the mystery of how to maximize human well being.”
And she said “Well that’s just your point of view.”
And I said “Well let’s just make it simpler. Let’s say we found a culture that was living on an island somewhere that was removing the eyeballs of every third child based on some belief system. Would you then agree that we had found a culture that was not perfectly maximizing human well being?”
And she said, “Well it would depend on why they were doing it.”
And I said, “Well okay, let’s say they were doing it for religious reasons. Let’s say they have a scripture which says ‘Every third child should walk in darkness’ or some such nonsense.”
Then she said “Well then you could never say that they were wrong.” Right? The fact that this was a religious precept trumped all other possible truth claims, leaving us with no place to stand from which to say anything is ever better or worse in the course of human events. And again, I’ve had the same kinds of conversations with physicists who will say “Well, you know, I don’t *like* slavery. I personally wouldn’t want to keep slaves. But there’s no place to stand to say scientifically that slaveholders are wrong.” And yet this is tantamount to saying that not only - I mean, once you acknowledge the link between morality and human well being - or the well being of all possible conscious persons or entities - this is tantamount to saying that not only do we not know anything at all about human well being, we will never know anything about it. There is no conceivable breakthrough in knowledge that will tell us anything at all relevant to navigating the difference between the worst possible misery for everyone and every other state of the universe that is better than that. And this is a - an amazingly influential point of view. And so many of the things you said about progress and about there only being a subset of humanity that has found creative mechanisms by which to improve human life reliably - that is an incredibly controversial and even bigoted statement to the ears of many people in positions to make decisions about how we all should live. And so that’s what I find myself most worried about at this point.
David: Yeah, it is a scary thing. But it has always been so. Like I said: our culture is much wiser than we are, in many ways. And, ah...you know, there was a time when the people who defeated communism would have said, if you asked them, that they were doing it for Jesus. Now in fact they weren’t. They were doing it for Western Values which they had been trained to reinterpret as doing it for Jesus. You know, they would say things like: the values of democracy and freedom as enshrined in the Bible. Well, they aren’t. But the, the practice of saying that they are is part of a subculture within our culture which was actually good and did very good work. So in that sense it’s not as bad as you might think if you just recited the story of this perverse academic.
Sam: Well the one thing that makes it not as bad as one might think there is just that it’s impossible for even someone like her to live by the light of that hypocrisy.
David: (Ah, yes yes).
Sam: (I mean there’s just no kinds of choices...)
(1:29:00 hours)
David: (I was about to say that very thing...)
Sam: - the kinds of choices she makes in her life and the kind of judgements that she would make about me if I took her seriously. If I said, “Well listen, I’m going to send my daughter to Afghanistan for, you know, a semester abroad, forcing her to live in a burqa - is that the best use of her time? I mean, there’s really no place to stand to judge whether this could be a worse use of her time. So, presumably you support me in this decision?” No, even someone - even she, having just said what she said, I think would baulk at that because it’s just - we all know in our bones that certain ways of living are undesirable.
David: And there’s another contradiction, another irony that’s related which is that she’s willing to condemn you for not being a moral relativist but the ironic thing is, that moral relativism is a pathology that arises only in our culture.
Sam: Hmmm
David: Every other culture doesn’t have any doubt that there is such a thing as right and wrong. They’ve just got the wrong idea about what right and wrong are. But that there is such a thing, they don’t doubt. And she won’t condemn them for that, though she does condemn you for denying it.
Sam: Yes
David: So ah, that’s another, ah, that’s another irony.
Sam: Yeah
David: I think the, the, you say hypocrisy. I think this all originated in the same mistake that we discussed at the very beginning of this conversation. Empiricism, or whatever it is, this, ah - or - which has led to “scientism”. Now you may not like this way of putting it. The idea that there can’t be such a thing as morality because we can’t do an experiment to test it. Your answer to that seems to be: but we can if we adopt a - a simple ah - assumption of human thriving or human welfare, I forgot what term you used -
Sam: Well-being
David: Human well-being, yes. I think that’s actually true but I don’t think you have to rest on that. I think the criterion of human well-being can be a conclusion, not an axiom. Because this idea that there can’t be any moral knowledge because it can’t be derived from the senses is exactly the same argument that people make when they say there can’t be any scientific knowledge because it can’t be derived from the senses. In the 20th century empiricism was found to be nonsense. And some people concluded that therefore scientific knowledge is nonsense. But the real truth is science is not based on empiricism. It’s based on reason. And so is morality. So if you adopt a rational attitude to morality and therefore say that morality consists of moral *knowledge*, which consists always of conjectures, doesn’t have a basis, doesn’t need a basis, only needs modes of criticism - and those modes of criticism operate by criteria which are themselves subject to modes of criticism - then you come to a, a, a sort of transcendent moral truth from which I think your one emerges as an approximation. Which is that institutions that suppress the growth of moral knowledge are immoral. Because, well, because they can only be right if the final truth is already known.
Sam: Hmmm
David: But if, uh, all knowledge is conjectural and subject to improvement, then protecting the means of improving knowledge is more important than any particular piece of knowledge. And I think that, even without thinking of things like “all humans are equal” and so on that will lead directly to that, for example: slavery is an abomination. And human welfare I think, as I said, I think it’s a good approximation in most practical situations, but it seems to me not an absolute truth. I can imagine situations in which it would be right for the human race as a whole to commit suicide.
Sam: Hmmm. I guess I should spell out a little more clearly what I’m talking about.
David: I should read your book, I guess.
Sam: No. Well, actually I feel like speaking with you, having read much of your book and having this conversation with you, allows me to put it a little better than perhaps I did in that book. There’s a kind of homology between your open ended picture of knowledge and explanation and my moral realism. I don’t know that our realism about morality is precisely the same, but there’s a line in your book which, um, which I loved, which is something like, “moral philosophy is about the problem of what to do next” and I think more generally you said it’s about what sort of life to lead and what sort of world to want. But this phrase “the problem of what to do next” really captures morality for me because I’ve been talking about it for years as a kind of navigation problem. Forget that we even have the words “morality” or “right and wrong” - we still have this navigation problem. We are in a universe of possible experience and given that there is a difference - and I would think there is no difference more salient in this universe than that between the worst possible misery for everyone and all other states of this universe - there’s a question of just how to navigate this space of possible experiences. What sorts of well-being are possible given the requisite minds, what sorts of meaning and beauty and bliss are available to conscious minds, you know, appropriately constituted? For me, realism of every kind is just a statement that it’s possible not to know what you’re missing. You know, if you’re a realist with respect to geography you have to acknowledge there are parts of the world you may not know about. Right? You know, if the year was 1100 and you were living in Oxford and you had never heard of Africa, Africa nevertheless existed despite your ignorance and it was discoverable. And so this is realism with respect to geography.
Things are true whether or not anyone necessarily knows that they’re true, and people can forget this knowledge, as you have pointed out - whole civilizations can forget this knowledge. Well, this is true in the space of possible conscious states, and all you have to acknowledge is that there is some criterion, as fundamental as any criterion we would invoke in any other canonical domain of science, by which we could acknowledge that certain states of consciousness are better or worse than others. And if you’re not going to acknowledge that the worst possible misery for everyone is worse than many of the alternatives on offer in this universe, then I don’t know what language game you’re playing. But it seems this is all I need to get this open ended future of navigating in the space of possible experiences started. And then it really is this kind of forward movement toward we know not what. But we know that there’s a difference between profound suffering that has no silver lining and many of the things that we value and are right to value in life. And these values - I mean, the fact-value distinction - this is something that I think Thomas Kuhn once said: “philosophy tends to export its worst products to the rest of culture”, and it’s kind of ironic because many of the things exported from Kuhn’s work are fairly terrible.
David: (Laughs) Quite so.
Sam: But he got this part right. And so this notion comes, I think, from a misreading of Hume: that you can’t get an ought from an is. Again, I have met physicists who think this is somehow inscribed at the back of the book of nature - that you just cannot get an ought from an is and therefore there’s no statement of the way the world is that can tell you how it ought to be. There’s no statement of fact that can tell you anything at all about values and therefore values are just made up. They have no relationship to the truth claims of science -
David: Yes, it’s empiricism again. It’s justificationism. You can’t *deduce* an ought from an is, but we’re not after deducing. We’re after explaining. And moral explanations can follow from factual explanations as you have just done with, with ah, thinking of the worst possible misery that a human being could be in.
Sam: Even deeper than that. And I think you make this point in your book: is that you can’t even get to an “is” - which is to say a factual claim - without presuming certain oughts. Without presuming certain values. You know the value of logical consistency, the value of evidence and -
David: (Yes, yes. That’s true as well.)
Sam: - and so, yeah - it’s a confusion about the foundations of knowledge as you say, that is somehow being linked to empirical experience narrowly and really a sense that science is doing something totally unlike what we’re doing in the rest of our reasoning. Which is the confusion here.
David: Yes it’s totally like - (yes)
Sam: It’s a special case. It’s the part of culture where we have invoked the value of not fooling yourself and not fooling others and made a competitive game of finding where you might be fooling yourself and where others might be fooling themselves. We’ve tuned up the incentives in the right way there, uniquely, so that it’s easier to spot self-deception and fraud than it is elsewhere. But it’s not a fundamentally different project of trying to understand what’s going on in the world or in the universe.
David: I agree. I agree.
Sam: Well listen, so this brings me to the final topic which I think is related to, um, what we were talking about in terms of the maintenance of civilization and the possible peril of birthing intelligent machines badly. And I just wanted to get your opinion on the Fermi Paradox. And describe what the paradox is for those who don’t know it. But then, tell me why our not seeing the galaxy teeming with more advanced civilizations than our own isn’t a sign that there’s something about gathering more knowledge that, um, might in fact be fatal to those who gather it.
David: So the Fermi *problem* rather than a paradox - the Fermi problem is: where are they? Where are the extra-terrestrials? And the idea is that the galaxy is very large, but how big it is, is trumped by how old it is. So that if there were two civilizations anywhere in the galaxy, the chances that they had arisen less than, say, 10 million years apart are infinitesimal. So therefore if there is another one out there, it’s overwhelmingly likely to be at least 10 million years older than us and therefore to have had 10 million years more time to develop, and in that time there’s plenty of time for them to get here, if not by space travel then by sheer mixing of the stars in the galaxy. They only need to colonize a few stars near to them and after, say, a hundred million years or a billion years, those stars will be far apart and spread throughout the galaxy. So we would be seeing evidence of them, and since we don’t see evidence of them, they’re not out there. Well, this is a problem. But I think the problem is just that we don’t yet understand very well most of the parameters. And if you just fill in the parameters - you know, are they likely to use radio waves? What are they likely to do by way of exploration? What are their wishes likely to be? In all these cases we make an assumption that is kind of based on saying that they’ll be like us in that way and ah - and that they will use technology in the same way that we do. And we only need to be wrong in one of those assumptions for the conclusion that we should have seen them by now to be false. Um, now, ah - another possibility is that we are the first. At least we are the first in our galaxy, and I think that would be quite nice.
Sam: Does that second assumption strike you as very implausible or not?
David: Like I said, I don’t think we know enough about all the different factors affecting this for any one idea to be very plausible or implausible. I mean what’s implausible is that they can have a different way of creating knowledge to us. That they can have - you know that kind of thing is implausible because it just implies that physics is very different from the way we think it is and if you’re going to think that well you may as well believe in the Greek Gods.
Sam: Right.
David: So another possibility is that most societies destroy themselves. Like I said, I think that’s fairly implausible for us and it’s very very implausible that this generically happens.
Sam: Right. So just to spell that out: the philosopher Nick Bostrom has this concept in his book “Superintelligence” of what he called The Great Filter, and it’s the fear that basically all advanced civilizations at some point discover computation and build intelligent machines and that this is somehow always fatal, or that maybe there’s some other filter that’s always fatal, and that explains the absence of...of them.
David: We would expect to see the machines, right? (Laughs) They would have got here by now. Unless they’re busy making paperclips at home.
Sam: (Laughs)
David: But I think what is more plausible, although again, I must say this is just idle speculation - ah - is that most societies settle down to staticity. Now our experience of staticity is conditioned by static societies in our past which, as I said, have been unimaginably horrible from our present perspective. But if you imagine a society whose material welfare is say a million times better than ours and somehow that becomes settled into a sort of ritualistic religion in which everybody does the same thing all the time but nobody really suffers, that seems to me like hell, but I can imagine that there can be societies in which as you said, you know, they can’t see the different ways of being. So, uh, it’s like ah being on a - you used the example of being near Oxford and not knowing about Africa. You could be on the tallest mountain in Britain and not know that Mount Everest exists and, you know, if the height of the mountain measures happiness - you might be moderately happy and not know that better happiness is available and if so then you could just stay like that.
Sam: Actually you just invoked, explicitly, the metaphor I use in my book “The Moral Landscape” - and I believe that’s precisely the opportunity on offer for us. That there is a landscape of possible states of well being - and this is an almost infinitely elastic term to capture the differences in, in, in pleasure across every possible axis - and uh, yes, you can find yourself on a local peak that knows nothing of other peaks. There are many many many peaks, obviously, but there are many more ways not to be on a peak, and so there are many more ways to be struggling to get to some higher point that is nearer to you in terms of well being. And you and I may differ in our sense of just how desirable certain peaks might be or how captivating they might be to conscious creatures like ourselves, and I think there are probably many peaks that are analogous to and compatible with a very high state of civilization and which are analogous to being the best heroin addicts in the galaxy. Which is to say you’ve found some place of stasis where there is no pain and there is also not a lot of variation in what you do. You’ve just kind of plunged into a great reservoir of bliss which you’ve managed to secure for yourself materially with your, with your knowledge and, you know, it’s a very Aldous Huxley vision of the end game -
David: Yes. If that’s, if that’s really what’s happening across the galaxy you have to find some way of accommodating - first of all, a civilization like that will eventually be destroyed by a nearby supernova or something of the kind. On a scale, on a scale of 10s or 100s of millions of years there are plenty of things that can wipe out a civilization unless it does something about it. If it, if it does do something about it, kind of automatically, with automatic supernova suppression machines which are in place and nobody needs to think about them anymore, we would notice that. So, it can’t be exactly that. And ah, on the other hand it’s hard to imagine they don’t know about that and do get wiped out, because how did they get to that state of exalted comfort without ever finding out about supernovae and their danger? There are other possibilities - I’m actually considering writing a science fiction book with a very horrible possibility which I won’t, which I won’t mention now. But it’s fiction.
Sam: Don’t give a - don’t give the prize away.
David: Yeah.
Sam: Well listen, David, it’s been incredibly fun to talk to you and I’m painfully aware that we haven’t even spoken about the thesis for which you are perhaps best known. Actually the two: the, um, Many Worlds Interpretation of quantum mechanics, as explained in both your books - the first book being “The Fabric of Reality”, which I read when it came out and loved - nor have we spoken about quantum computation. But we’ll definitely have to leave those for another time because you’ve been so generous with yours today. I want to encourage our, um, listeners to read both your books but especially the most recent one.
David: Thanks (laughs).
Sam: And where can people find out more about you online? Is there a, um-
David: They can find me with Google very easily. But I also have a website, www.daviddeutsch.org.uk and all the links linking to me link to each other as well. So...I’m easy to find.
Sam: And your social media buttons are on that page as well?
David: Yeah. I’m on Twitter.
Sam: Ok. Actually one last quick question which I - I thought of asking, now that I’m interviewing smart, knowledgeable people it occurred to me to ask this question of, um Max Tegmark and then I forget so this will be the inaugural question with you. Who’s your vote for the smartest person who has ever lived? If we had to put up one human brain past or present to dialogue with the aliens who would you say would be, ah, our best candidate to field?
David: So this is different from asking who has contributed most to human knowledge? Who has created most?
Sam: Yes. Yes, absolutely.
David: It’s rather who has the highest IQ?
Sam: It’s good to differentiate those because there are people obviously who are quite smart who have contributed more than anyone in sight to our knowledge but when you look at how they think and what they did, there’s no reason to think they were as smart as John Von Neumann, say. So, I’m going after the Von Neumann if not the -
David: Ok. In that case I, I think it probably has to be Feynman. Though his achievements in physics are nowhere near those of say Einstein, I met him only once and, and, ah people were saying to me, you know, you’ll have heard a lot of stories about Feynman but you know, he’s only human and ah, well to cut a long story short I went and met him and the stories were all true. He is an absolutely amazing intellect and I haven’t met many of the others - I never met Einstein - but my impression is that he was something unusual. I should add, in terms of achievement I would also add Popper.
Sam: Don’t cut that long story so short. What was that like being with Feynman and can you get a handle on what was unusual?
David: Well very quick on the uptake. So, that is not so unusual in the university environment. But the creativity applied directly to getting things. Okay, let me give you an example. At the time when I met him, I was sent to meet him by my boss when I was just beginning to develop the ideas of quantum computation and I had ah, I had constructed what we would today call a quantum algorithm. A very very simple one. It’s called the Deutsch algorithm. It’s not much by today’s standards. Um, but, um I had ah, been working on this for many months and ah, I went and ah started telling him about quantum computers. He was very quick, he was very interested and then he said “So what can these computers do?” so I said “Well I’ve been working on a quantum algorithm” and he said “what?” and so I *began* to tell him about it. And I said “Supposing you had a superposition of two different initial states” and then he said “well then you’d just get random numbers” and I said “Yes, but supposing you then do an interference experiment” and I started to speak and he said “No, no no! Stop! Stop! Let me work it out!”
Sam: (Laughs)
David: He rushed over to the, to the blackboard and he produced my algorithm with almost no hint of where it was going.
Sam: So how much work did that represent? How much work did he recapitulate?
David: I don’t know because I, it’s hard to - it’s hard to say with the benefit of hindsight how much of a clue the few words I said were (laughs). But the crude measure is: a few months.
Sam: Right.
David: But a better measure is, that I was flabbergasted. I’d never seen anything like this before.
Sam: Hmmm
David: And I, you know I had been interacting with some extremely smart people.
Sam: Right and your boss was John Wheeler at that point?
David: Yes, yes. At that time, yes.
Sam: And so no dunce himself.
David: That’s right.
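Note: for readers curious about the algorithm David mentions above, here is a minimal editorial sketch (mine, not David's presentation) of the Deutsch algorithm, simulated classically with NumPy. It decides with a single oracle query whether a one-bit function f is constant or balanced; all the function names below are just illustrative choices for this sketch.

```python
import numpy as np

# Hadamard gate
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def oracle(f):
    """U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation matrix (basis index = 2*x + y)."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    # Start in |0>|1>
    state = np.kron(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
    # Hadamard both qubits, query the oracle once, Hadamard the first qubit
    state = np.kron(H, H) @ state
    state = oracle(f) @ state
    state = np.kron(H, np.eye(2)) @ state
    # Probability that the first qubit measures as 1 (amplitudes are real here)
    p1 = state[2] ** 2 + state[3] ** 2
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))  # constant function
print(deutsch(lambda x: x))  # balanced function
```

A classical algorithm must evaluate f twice to decide this; the quantum circuit needs only one call to U_f, which is what makes this simple example a genuine quantum speed-up.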
Sam: What a wonderful story. I’m glad I asked. Well listen, David, let me just ah demand that this not be the last time you and I have a conversation like this because, ah -
David: That would be very nice.
Sam: You have a beautiful mind.
David: It’s very nice talking to you.
Sam: Please take care and we’ll be in touch.
(Outro music)
Sam: If you enjoyed this podcast there are several ways you can support it. You can leave reviews on iTunes or Stitcher or wherever you happen to listen to it. You can share it on social media with your friends. You can discuss it on your own blog or podcast. Or you can support it directly. And there are two ways you can do this. You can leave a donation through my website at samharris.org/donate or you can try a membership at Audible, the world’s leading source of audiobooks at audibletrial.com/samharris