In this episode, Brian Beckcom speaks with Professor Robin Hanson about the unconscious motives that drive human behavior and their impact on our everyday lives. Brian and Professor Hanson talk about how to confront our hidden motives, examine them, and see clearly so that we can better understand ourselves and our fellow human beings.
Robin Hanson is the co-author of The Elephant in the Brain: Hidden Motives in Everyday Life. In his book, Robin explains how our minds actually work. He explains how and why we deceive ourselves and others. And he describes how our unconscious motives impact more than just our private behavior; they influence our institutions, art, medicine, schools, and politics.
Robin Hanson’s work is relevant today considering the bizarre place we find ourselves in history.
Watch this episode on YouTube
Brian and Professor Robin Hanson discuss:
- How he transitioned from STEM fields into social science
- Predetermined human behavior and the duplicity of free will
- Why humans act based on hidden motives and why we fail to detect them
- How our unconscious motives have shaped the political landscape we see today
- The essence of science and the differences between “experts” and “elites”
- How to effectively deal with disagreements on difficult topics
- Why so many spouses hate cryonics (the low-temperature freezing and storage of a human corpse or severed head)
- Why we haven’t seen aliens and when to expect them!
- And other things
Robin D. Hanson is an economics professor at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science from the California Institute of Technology, a master’s degree in physics and philosophy from the University of Chicago, and nine years of experience as a research programmer at Lockheed Martin and NASA. Professor Hanson has 4,510 citations, a citation h-index of 33, and over ninety academic publications ranging from Algorithmica and Information Systems Frontiers to Social Philosophy and Social Epistemology. Robin has diverse research interests, with papers on spatial product competition, product bans, evolutionary psychology, voter information incentives, incentives to fake expertise, self-deception in disagreement, wiretaps, image reconstruction, the origin of life, the survival of humanity, and interstellar colonization. To learn more about Professor Robin Hanson, please visit his bio at https://www.overcomingbias.com/bio.
Read the transcript:
Brian Beckcom: Welcome to the Lessons from Leaders podcast. I'm your host, Brian Beckcom.
Okay. Well, I did not think that in 2021, the first week of 2021, we would literally have people dressed up like Vikings with animal horns in our Nation's Capitol. What a bizarre, bizarre world we live in.
I have a lot of thoughts about all this stuff and how we got to the place that we got to in this country. I'm sure like a lot of you, when I see on TV these people storming our Capitol, I just think to myself, “What happened? These people have lost their fucking minds. This is ridiculous. How can people possibly believe that it would be a good idea to dress up with a Camp Auschwitz t-shirt or get weapons and zip ties or dress up and all this cosplay nonsense and storm our Nation's Capitol? How could anybody possibly think that would be a good idea?”
Well, my next guest has some good explanations for that. I'm talking about Robin Hanson. Robin is an associate professor of economics at George Mason University, as well as a research associate at the Future of Humanity Institute at Oxford University. He has physics degrees from the University of Chicago, and he worked for DARPA as well as Lockheed and NASA.
Professor Hanson has 4,510 citations of his work. He's published multiple books, including a book called The Elephant in the Brain: Hidden Motives in Everyday Life, a book that I read a couple of years ago that changed the way I saw the world. Professor Hanson's book basically demonstrates that in all different walks of life – religion, politics, art, sports, you name it – we behave in ways without fundamentally realizing why we are behaving in those ways. We fool ourselves all the time.
So, I thought Robin would be a perfect guest given the circumstances we find ourselves in as a country. In the podcast, Robin and I talk about how our minds actually work. How we deceive ourselves and others. Why we deceive ourselves. What the incentives are behind deceiving ourselves.
We talk about some other topics, as well. Robin is very interested in a broad array of topics including how to have better conversations, how to be less disagreeable. He's got some thoughts on cryonics and consciousness. Robin has signed up to have his brain frozen when he dies. And we talk about aliens and Robin's belief that we for sure will ultimately come in contact with alien civilizations and why he thinks that.
This is a perfectly-timed podcast. I shot it and released it as fast as I could because I think Robin has a lot to say that gives some structure and some explanation for why our country has turned into a bunch of raving lunatics when it comes to politics. Or at least what many of us would perceive as raving lunatics.
Now I give you Professor Robin Hanson.
Getting to Know Robin Hanson
Brian Beckcom: Hey everybody. Brian Beckcom here, and I have got Robin Hanson. I got to tell everybody, I am so fired up about this episode. It's hard to overstate. Robin and I have a lot of the same interests. We have a lot of interests in a lot of different things. The problem we're going to have, I can tell already in this podcast, is we have so many things we're interested in that we'd like to talk about that we're not going to be able to get to all of them.
We're shooting this podcast a few days after the siege or assault of the Capitol. Robin has some thoughts about – and he's actually written a book called The Elephant in the Brain, which I talk about in the introduction, that tries to explain – and Robin, I don't want to misstate the book, but, to my mind, your book is an effort to explain how people act on motives that they often don't even know themselves they're acting upon. And so, I want to relate that, if we can, Robin, to kind of the craziness of the public sphere or the political sphere that we're seeing right now.
Before we get into that – and I also want to talk to you a little bit about, you've got some really cool thoughts about aliens and The Great Filter. You've got some ideas about legal reforms. I mean, it's just, the ideas you have are unbelievable. But before we get into that, there will be a lot of people listening to this podcast who know exactly who Robin Hanson is, but there will be some people who don't. So, give us just, like, a brief biography: who you are, where you came from, and how you ended up where you are today.
Robin Hanson: Alright, well. I'm an economics professor, but I've been tenured for a while. So, I don't have to focus on economics. The nice thing about tenure is you can do whatever you want. And I'm one of those people who was pretty inclined to study a wide range of topics. So, I had to make myself focus a while so that I could get tenure. And I just squeaked by, luckily. But tenure is a great deal.
I'm now 61 years old. So, I've been around a while. And I've been across a lot of different fields in my life and I’ve had a lot of different moves. One of the most dramatic moves I made was at the age of 34, with two kids aged zero and two, I returned from my career as a research programmer to graduate school to get a PhD in social science at Caltech. And from then I got a postdoc and then the job I have now, where I eventually got tenure.
So, I'm really just interested in a wide range of topics, but I try to hold myself to the standard of if I'm going to go into a new topic, I should stay with it long enough to make a contribution. Maybe something substantial that we add to what the world knows. Otherwise, it's just dilettantism and, you know, goofing around and it's not valuable. So, I think I can meet that standard on most of the topics I've gone into. I've added something.
Brian Beckcom: You've got two degrees in physics and then a social science PhD. That's an interesting combination. You and I share that in common. I have an undergraduate degree in engineering, computer science, plus philosophy, and then a law degree. What – and you did some work for DARPA for the FutureMap program. Did some work for Lockheed Martin before you went into the Academy. Tell us a little bit about how you went from studying physics, working for DARPA, Lockheed Martin, into what you're doing today.
Robin Hanson: Well, it's a long history, so there's a lot of things to say that I don't want to take up all the time here with, but I guess I’ll riff on some key points.
When I was a physics undergraduate, I remember sort of standing in the elevator next to some other physics professors. I wasn't talking to them, I was just listening, and they were talking to each other about how if only they would bother, they could go over to those social science buildings across campus and straighten them all out. Because obviously that was all bullshit and obviously those people just didn't know how to think. And physicists knew how to think.
So, coming from a hard science or engineering background, STEM sort of thing, those people tend to be taught that social science is bullshit. It's just – it doesn't make sense. There's no content there. It's all just made-up fluff. And they're pretty arrogant about that. And they're wrong. Social scientists are often wrong, but it's not bullshit. There's a lot of knowledge there. But it’s a harder subject. And in some sense, the main reason it's harder is that people care.
In physics, people – actually hardly anyone fundamentally cares emotionally about physics in terms of one theory being proved or another. But they care a lot about social science. So that means social scientists have to do a lot more resisting all the temptation to believe what you want to believe or what somebody else pushes you to believe and that's what makes social science hard.
But, I think a great advantage of coming from physics is the expectation that there will be some simple answers. Simple enough to understand yet powerful enough to explain a lot of things. Some people only ever grew up in social science. They just have this sense, “Well, it's all, you know, intractably complicated and you can never really figure anything out except maybe a glimpse from a distance.” And it's all sort of all swirled up and mixed up with your own priors and emotions and politics and everything. And so, you know, they don't really expect to get a clear answer and to get strong evidence and to wait for strong evidence.
So, I think my physics background leads me to expect – to wait for clearer answers. To expect to be able to prove things to a wider range of people in a wider range of contexts. And I think that's served me well, because if you look for them, they are there.
Simplified Models, Causes, and Determination
Brian Beckcom: You know, I remember when I was studying computer science, I took an economics class. And this was back in the early ‘90s. And I remember thinking, this is before behavioral economics became a big thing. This was kind of more classical economics about the rational man and the rational actor. And I remember having this intuition, Robin. I couldn't put it in words, but as an engineer and a philosopher, I had this intuition that the entire foundation of classical economics was based on bullshit. Like, there is no rational actor. People don't act rationally. And that always kind of bothered me.
And like I said, at the time, Kahneman and cognitive biases and all these things that have become a big deal lately, behavioral economics, were not really that big a thing. So, talk to us a little bit about that issue. Like, in other words, you've done a lot of work on how – classical economics kind of tried to make things real clean and economics turns out to be much more messy than the –
Robin Hanson: So, let’s talk about physics for a moment. The physical world is obviously enormously complicated. And obviously almost anything around you, like the elephant standing behind you, according to what I see here, is enormously complicated. And so, the way we can only ever reason about anything in the physical world is through simplifications and simplified models which we know to be false because the world isn't that simple. But quite often, simplified models are quite productive and help us think about things.
So, almost all theoretical reasoning in any field is a trade-off in practice between, you know, getting it exactly right and getting it simple enough so that you can reason about it and do something with it. And that's true in every field, not just economics or physics, but every field whatsoever. And so, I think people get too hung up on the, you know, the simplification and say, “But the simplification isn't true.” But it's never true.
You know, we model a planet as a sphere. That's pretty good. Modeling a person as a sphere, not so good. But we often model people as spheres when we're doing something quick and dirty. And as you know in computer science, even, you know, most programs are really complicated, but we still make models of simple parts of programs and that gives us some insights, even if the programs themselves are far more complicated than these models.
And so, I would say, in fact, behavioral economics doesn't actually change economics that much. You know. If you believe that, “Ah, they were all wrong before, but now they've got behavioral so, phew, you know, they must’ve fixed their problems.” Well, no. Behavioral economics really doesn't change that much. So, what you really do need is some structure to cut through all the vast possibilities. Some simplifying structure. And the rationality assumption is actually not crazy as a simplifying assumption for a lot of things.
That is, you know, one of the slight variations on the simple rationality model is people do the rational profit or utility maximizing thing, plus noise. So, honestly, in almost every field of science, one of the standard things you do with an oversimplified model is add noise. You say, “Our model predicts this, plus noise.” And then you can fit it to data and you can, you know, get a sense of how well it fits. You almost never have any model without noise, because that never works. That's completely wrong. But models with noise are actually often pretty good, even if they're pretty simple models with noise.
So, the economist's model of rational people plus noise actually does a pretty good job in a wide range of scenarios. Now, but, relevant to my book, the key thing is rationality about achieving particular ends. What are the ends? So, the claim in our book, The Elephant in the Brain, which I co-authored with a wonderful coauthor, Kevin Simler, is that the way economists and other social scientists have often gone wrong is right at the beginning and making the wrong assumptions about what the end is. What the purpose of it is. What are the motives? That's where we go wrong.
But once we offer a new assumption about the motives, we're still gonna offer a pretty simple model of the motives and assume some sort of basic rationality in achieving those motives, and we're going to understand a lot of things in the world. So, we're not rejecting rationality, mostly. We're rejecting your assumption about what your ends are, what your goals are.
Brian Beckcom: And I don't want to get into any kind of tautological reasoning, but sometimes, like you say in your book, it's rational to be irrational. Or it's rational to be perceived to be thinking irrationally.
Robin Hanson: Right. So, right up at the beginning, when you're explaining human behavior, you need to make a distinction between proximate and distal causes. Everything in the universe that's caused has an immediate cause. The closest thing to that thing very nearby in space and time that caused it. And then there are other causes farther away. I mean, the reason why you're hot at the moment might be that the heat is on or something, but behind that, there's why the heat's turned on. You're on a planet with a sun nearby. All those are more distal causes that contribute to your current temperature at the moment.
So, anytime we're talking about causes, we have to think about what level or distance from the phenomenon we're talking about. So, in our book, Elephant in the Brain, we're mostly talking about pretty distant causes, and that's a thing to be clear about.
So, say the thing you do right now is, in the next moment, eat a bite of a cookie or something. The reason is probably pretty related to thoughts you had a few moments ago about, “Do I want to eat this cookie? I guess I do.” And you start chomping. And so, you know, often your stories about what you're doing at that moment have a sensible relationship to the thing you do the moment later.
So, we're not so much questioning that kind of connection between your actions. We're questioning more farther back to the more distant explanations. “Well, why did you eat the cookie?” Or, “Why did you go to the protest?” Or, “Why did you, you know, have the conversation?” And so, the farther back you go, the more – you're not really paying attention to whether the explanation is correct, because you usually just make some assumptions in there, right? And that's the place where you are the most wrong.
Brian Beckcom: How far does that go? How far back do you look for people's intentions? So, for instance, you have chapters on art. You have chapters on politics. You have chapters on religion. You have chapters on a lot of different – where you give specific examples of people thinking they have one intention, but in actuality have much deeper incentives or intention. So, how far back do you go on that, Robin?
Robin Hanson: Well, fundamentally, humans are a biological species who exist on earth, and we have some distinguishing features from other species, and we have some characteristic behaviors that we tend to do that other species don't. And those characteristic behaviors ought to be understood in terms of some evolutionary process that honed our species to be a certain kind of species with a certain lifestyle and a certain pattern of the kinds of things we eat and when we go to sleep, etc. And that's sort of the typical place where you might want to ground an explanation for human behavior.
You would say, you know, “Why do we have sex?” Well, sex produces children and, you know, it's good for a species to make children. And you might think, well, you know, in your head at the moment while you have sex, it’s about your pleasure and what you hope to experience the next few minutes. But the fundamental evolutionary explanation is evolution had to find a way to get you to do that to produce the next generation.
So, in our book, we're more focused on these more distant explanations. Just why is there any payoff or reinforcement of doing that sort of thing? What is the payoff or reinforcement process that tended to produce creatures who have a habit of doing that sort of thing? Even if they don't know exactly why.
Brian Beckcom: How much of our behavior do you think is determined? Like, are you a determinist? Do you think everything is predetermined? Do you believe – man, I hesitate to open this door, but how much free will do you think we actually have?
Robin Hanson: So, I think when people use the words “free will,” or phrases like that, they mean a wide range of different things. And sometimes, you know, if they want to make a philosophy argument go to some extreme, they pick some particular definition, but that's not necessarily the definition other people have in mind. So, the two obvious definitions that make sense are – one is a very subjective definition. Could I eat the cookie? Could I choose to pick up my hand and eat the cookie if I wanted to? Unless something's holding my hand down or there's some unusual circumstance, I have the free will to eat the cookie. There's a correlation between my perception about what I wanted to do – I eat the cookie – and what actually happens. I eat the cookie. Right?
Now, there's this larger philosophical perspective from which you might say, “Oh, but what if there was a common cause behind your wanting to eat the cookie and then your eating the cookie. Then that might've determined those things and you didn't really have the free will not to eat the cookie or not to want the cookie because something determined that.”
And I, you know, by physics – so, physics basically says it's all deterministic. It's all completely deterministic to a certain level. So, I buy that.
Brian Beckcom: Maybe deterministic plus chaos?
Robin Hanson: No, not even plus chaos. It’s really just fundamentally deterministic. So, our best theories in the universe say it's all actually completely deterministic. And I buy those theories. But that's not in contradiction with my perception that I can eat the cookie or not. Because usually when I ask, “Can I do what I want?” I don't really care what the cause of what I want was. I take what I want as given and I ask, “Can I do what I want?”
I think it's completely reasonable to think that you might be a creature where there was a process that caused your wants. Your wants didn't float out from some ethereal other place than the universe. You're in the universe. Your causes are in the universe. And you're, I mean, your wants are in the universe. And your wants were caused by other things.
The Elephant in the Brain
Brian Beckcom: Well, let's talk now a little bit more about your book. So, everybody, the book again is The Elephant in the Brain. I read this book – I was talking to Robin before we went on the air. I read this book a couple of years ago. I read tons of books every year and my favorite books I always make notes on. And I had a lot of notes on your book, Robin. I went back and looked at them and I think your book – basically, my notes at the very beginning say that the major point in your book is essentially, “We act on hidden motives all the time in public just like we act on hidden motives in private when we deceive ourselves.” So, that's kind of how I summarized the book to me.
But what I'd like to do, Robin, given the current state of affairs, is I'd like to talk about your book, some of the specific examples of how we deceive ourselves and others, and then I want to get your thoughts on how this applies to what is going on politically right now. All this craziness. I mean, who would have thought there would be an antler-wearing half-naked bearded face-painted man sitting on Nancy Pelosi's desk a week or so ago. There's something going on that we really, I think, need to figure out and get to the bottom of it.
Robin Hanson: If you read a little history, stranger things really have happened. That's really not anywhere close to the peak of strangeness of human behavior.
Brian Beckcom: Fair point. But, tell us a little bit, so, the basic idea behind the book, what's the main thrust of your book?
Robin Hanson: So, I'm an economist. And we economists tend to make the mistake of taking people at their word for why they do things. And so, we give elaborate theories, but we make that fundamental mistake. Over a career, I've realized that that seems to be the fundamental mistake.
And so, it's late in my career that I started to write books. I've only written two. I've written books that, like, I've been wanting to write for a long time because I thought were important. So, this I thought was important. So, the key point is that we are very social creatures, we humans. We humans evolved as very social creatures. So, the main environment that mattered for us was the people around us. And it was really important that they think well of us and not think badly of us. And humans have norms. There's rules about what you're supposed to do and not supposed to do.
And so, one of the most important things that happens to us is that we might get accused of violating a norm. And in that case, we want to be ready to defend ourselves to say that we aren't violating a norm. And that's overwhelmingly important. So important that your conscious mind is not really the president or king of your mind. It's the press secretary. Its job is mainly to keep track of what you're doing and always have a story about why what you're doing was okay and not violating norms.
That is your job. Your conscious mind, that's your job. You think you are running things, but you're not. You're the press secretary. So, that’s why that's so important. That means you don't know why you actually do things. What you know is a good story about why you do things that you can defend yourself with against norm violations now.
And an essential element of a lot of norms is intention. So, if I accidentally hit you, that's a lot more okay than if I hit you on purpose. So, what I intended in hitting you matters a lot. And that's why your motives matter a lot. That's why your conscious mind is not only looking at what you're doing and asking, you know, “Why was my arm anywhere near your face?” It's asking, “What were my motives?” So, I need to be very ready all the time to tell you what my motive in doing any particular thing is to justify that I didn't have the wrong motives that violate the norms.
So, this is why you don't know your motives. Because you aren't choosing most of your actions. You're explaining them or justifying them. And you're looking for good-looking motives. Good-enough-looking motives that will take away the accusations. So, that's the key story here about why you don't know your motives.
Now. In principle, that story could be consistent with people who are usually right, but just wrong every once in a while. And so, when we look out in the world, we do usually know that people are sometimes wrong about why they do things. It's a common-knowledge thing about humans. Sometimes people are self-deceived, right? Sometimes people are in denial, right? You might be in denial about whether your spouse is having an affair. So, you kind of know, you see the clues, but you look away, for example. That could be, you know, something that’s going on.
So, we know that's a thing that happens for people. The question is how often does it happen? And of course, does it happen for you? And so, our main message isn't just that it's possible sometimes that you might be self-deceived and wrong about your motives. Our main story’s going to be you’re self-deceived and wrong about your motives a lot. All through your lives.
Now, the only way to really convince you of that is to go through area by area one at a time and say, “Let's look at this area. This is what you say why you're doing it. And let's look at some other possible theories.” And so, the general structure of our book after the first third where we go through the abstract theory about why this could make some sense is to go through 10 areas of life and say in each one, “What's their standard story about why we're doing this? What are some puzzles that don't make that much sense from the point of view of that standard theory? And what's an alternative theory that makes more sense to these puzzles?”
And for each one, we come up with a different motive that makes sense of the puzzles, and it does make sense as something people might do, and we hope that with 10 of those areas, you'll start to believe that maybe you're wrong about a lot of things. Not just a couple.
Brian Beckcom: You know, one thing that I – so, I've done a lot of work on cognitive bias. A lot of reading. A lot of study. And I got to the point, Robin, earlier this year where I felt like I was essentially impervious. And I recognized that I was subject to cognitive biases, but I felt as if – and, you know, I recognize also that no matter what I did, I would –
Robin Hanson: That's kind of the problem with that literature. The main way people use it is to convince themselves that it's not a problem for them.
Brian Beckcom: And that's exactly what I was about to say. So, I got to the point where I was like, “Okay, I have cognitive biases. I know I have cognitive biases. I can correct and do certain things to check those cognitive biases.” And then I remembered, because I read this somewhere a month or so ago, that kind of the meta point that Kahneman and Tversky make in that book is the people that think that they can take care of their cognitive biases or be more aware of their cognitive biases almost always overestimate their ability to do that. Right? So, no matter how well informed you are, no matter how well you know about this stuff, it almost doesn't matter. Right?
Robin Hanson: Well, it makes some difference, but not the difference you want. So, humans have this ancient habit of disagreeing with each other. And all through, you know, the last million years of human history, when somebody disagrees with you, what we usually do is try to identify some theory of how they're wrong. And our story of them being wrong usually has some version of a mental mistake they made or a tendency of mistakes they made that they're unaware of.
And that is, in fact, our usual way of making sense of disagreement. We say, “Well, I'm right and they're wrong, but they have screwed up somehow and it could have been one of these mess ups.” So, in that sense, people have long been aware of the concept of bias or the concept of mental mistakes and they've invoked it systematically in their lives to explain other people. And they're happy with being able to more easily see other people's mistakes than their own because that lets them continue to decide, “I'm right and they're wrong and I don't need to change my mind when I hear someone disagree with me.”
So, the problem is, of course, you're wrong on average as much as they are. So, you need to think about different approaches to figuring out who's right or wrong. So, we can go down, like, how to handle disagreement if you'd like later on. But I think first you want to, like, sort of get to the core, basic theory here and you wanted to talk about recent political events.
Politics and Hidden Motives
Brian Beckcom: So, talk about – so, you have a chapter on politics and you talk about hidden motives in people's political behavior. Talk about that chapter a little bit and then apply the analysis to what we see going on right now.
Robin Hanson: Okay. So, first I have to say, some areas of life, we all sort of do the same things. And in other areas of life, we do different things. And so, when we do different things, we're more aware of the fact that we do different things and we're more eager to explain other people's differences via their mistakes. And so, hidden motives is one class of mistakes.
So, in areas where we do different things and have conflicts, we are quite open to the idea that other people are making mistakes and other people are biased and other people have hidden motives. And that's going to be the case in politics. Politics is obviously a scenario where we have conflicts. And so, the other side is going to be a plausible candidate to us of people who are falling for biases and mistakes and who have motives that they aren't aware of, because we're happy to attribute the other side's behavior to their terrible mistakes and motives. Whereas for our side, we don't think that needs to be invoked because we're doing the reasonable thing and they're doing the unreasonable thing.
Now, in other areas of life, like, say, medicine, we all do kind of the same thing. And this makes us much more blind to our hidden motives there because we can all be following a hidden motive that we're denying and then not really notice it because we don't see people doing the different thing. So, for example, in medicine, the usual thing we say is we go to the doctor or the hospital to get well. That is, we get sick and they can make us well and that's why we go. And since we all say that, and we all want to support everybody else for saying that, we don't question it very much.
What if that's not why we go? And so, in the book we say, "Actually, why you go to the doctor is to show that you care about other people and let them show they care about you." Which isn't a terrible thing, but it's not the thing you admit and talk about. But you're really blind to that, and it'll take a bit to convince you on this podcast. If you've just heard the last few seconds, you probably said, "What? Hanson's crazy." Because that sounds crazy.
Brian Beckcom: I've read the book and I was thinking, “Man, that sounds crazy.”
Robin Hanson: Right? But because you're – so, in politics, you're going to be more open to the idea that at least some other people have hidden motives. And that's why you have to be careful here not to be too eager to jump on the other people instead of yourself.
But, so, let's walk through politics. Politics, we say, "Why do you do politics?" So, if you were to ask people why they're involved in politics, the easy answer they will give is they're trying to help their nation or their city or their world. You know, they could just ignore politics and go about their lives, but they are altruistic, caring people who are willing to put a little time and energy into helping the world through their political action.
That's the simple story. And that's the story most people would want to tell about politics. But there's a number of puzzles that don't fit so well with that story. And just like in all the other chapters, we have the puzzle. So, we want to walk through the puzzles that don't fit so well with the simple story in politics. Now, obviously one thing is people are pretty gullible about politics. You know, try to sell them a crappy used car, they're going to look at that askance and perhaps, you know, not buy it. But in politics, they lower their standards often, and they're willing to believe a lot of pretty tenuous, loose reasoning, etc. They’re often pretty emotional about it.
People care a lot about the politics of people they're associated with. A survey I saw recently said a large fraction of parents would be disappointed if their child married someone from the other political party, or even had a romantic relationship with somebody from the other political party. So, people have enormous, you know, caring and concern about having people around them share their politics.
Politicians mostly have to take positions. Now, you know, politicians, like, work behind the scenes. They craft deals. They make compromises. They can work the system. And voters care almost nothing about that. They don't reward politicians for being good behind the scenes and making things happen. They just want to know what positions they've taken. Even positions on topics they can't do anything about. So, like, in the United States, the President can't do much about education. But, you know what? Everybody cares about the President's position on education, or presidential candidates' positions on education.
And another puzzling thing about politics is that there's a remarkable correlation across topics. So, you know, there are thousands of political and policy dimensions in the world. And yet you can explain a large fraction of the variation in those opinions by just one dimension of political position: left versus right. But why should there be this one dimension that explains so much variation in politics? That's kind of crazy, because it's just a big, complicated world.
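The claim that one latent dimension explains much of the variation in political opinions can be checked with standard dimensionality reduction. Below is a minimal, hypothetical sketch in Python using synthetic data (not a real survey): we simulate opinions driven by a single left-right factor plus issue-specific noise, then measure how much variance the first principal component captures.

```python
import numpy as np

# Hypothetical sketch: simulate survey opinions that are driven mostly by
# a single underlying left-right position, then check how much variance
# one principal component explains. All numbers here are synthetic.
rng = np.random.default_rng(0)

n_people, n_issues = 1000, 20
left_right = rng.normal(size=(n_people, 1))          # one latent dimension
loadings = rng.normal(size=(1, n_issues))            # how each issue maps to it
noise = 0.5 * rng.normal(size=(n_people, n_issues))  # issue-specific noise
opinions = left_right @ loadings + noise

# PCA via SVD of the centered data matrix; singular values come out
# sorted largest-first, so explained[0] is the first component's share.
centered = opinions - opinions.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"variance explained by first component: {explained[0]:.0%}")
```

With these assumed parameters the first component captures most of the variance, mirroring the empirical pattern Hanson describes; real survey data would of course be messier.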
And so, we'd say in the book that you are sort of being a Dudley Do-Right in politics. That's your pretense: I am a Dudley Do-Right. I go out and make the world better. And we're saying, "Well, you're actually a little closer to what we'd call an apparatchik." That's a name for an old Soviet politician who worked for the system.
Brian Beckcom: A bureaucrat, right?
Robin Hanson: Well, not just a bureaucrat, but a faithful political loyalist who supports the party. So, there's an old story about how there was a meeting and they were discussing many things. And then at one point the name of Stalin came up, and he was alive at the time and important, and everybody was eager to show, "Yay, Stalin." And so, they all stood up and started clapping. "Yay, Stalin." He's not in the room, but they're clapping for Stalin.
They keep clapping for 10 minutes. And of course, near the end, people wonder, "Is it time to stop clapping for Stalin?" And of course, they all think, "Well, I don't want to be the first one to sit down and stop clapping for Stalin, because that will make me look less loyal than the other guys." Eventually, one person was the first to sit down and stop clapping. Then the rest of them went, "Phew, I can sit down and stop clapping." And, of course, that guy went to Siberia.
Brian Beckcom: I think there's actually – I went back. I, you know, people constantly are talking about things being Orwellian and blah, blah, blah. I actually went back and read 1984 a couple months ago. I think there's a scene in 1984 about exactly what you're describing. They're having some sort of political rally and they're literally, the leaders are scanning the crowd to see who is not clapping and celebrating proudly enough.
Robin Hanson: Absolutely. So, that's the key idea: in politics, your main motivation, even if you're not aware of it, is to show loyalty to your allies. To your side. Now, you may have multiple tribes that overlap that you're part of, but for each one, you're trying to show loyalty. And that means you're not that rational about it. You're a bit irrational about your loyalty, because even if your side believes something that doesn't quite make sense, it's more important for you to show loyalty to your side than to try to, like, help your side make more sense.
"Well, you know this thing we've been saying? That doesn't make so much sense. How about we switch to that?" That doesn't sound so loyal, even though of course you're actually helping your side. And so, you're more eager to just, you know, "my country, right or wrong." Take your side. And so, that explains a lot of these puzzles in politics. It explains why we care so much about the people around us sharing our political views, why we get emotional, why we're not very careful about analyzing these things. It explains why there's this one-dimensional spectrum in politics. And, you know, that's the story here.
So, that means we are not very careful and reasonable in politics. My ex-co-blogger once put it in the phrase, "Politics is the mind-killer," which is evoking a phrase from the Dune novels.
Brian Beckcom: Yep. Great, great, great novels. Some of my favorite books of all time.
Robin Hanson: Right. And so, that means all of us sort of lose our minds a bit when we talk about politics. Now, there are other topics we also lose our minds about. It's not just politics. We all lose our minds a bit about romance, for example. Go a little crazy there. But, you know, we aren't in as much conflict there. So, we don't –
Brian Beckcom: Sports. People lose their mind about sports.
Robin Hanson: Right. But in politics, there's the clear opponents, and so we're more willing to notice that they've lost their mind and point it out.
Brian Beckcom: Let me ask you this question. Let me see if you've experienced this phenomenon. So, I've experienced this phenomenon mainly on social media and online. I'll see people who I know personally, who I spend time with, who our kids play sports together, go to school together. I know these people are tremendous, nice, compassionate, reasonable people. And then I go on social media and I look at it and I go, “That person's fucking insane.”
Now, the funny thing about that is, those people are going, “Man. I know Beckcom. I know him personally. He's compassionate. He's nice. He's a friendly guy. But on Facebook, Beckcom is fucking insane.” So, they think the same thing about me as I think about them.
But the question I have for you, Robin, is why is it, do you think, that when you and I are interacting like we are now, or even better, face-to-face over a cup of coffee or lunch or something, we behave one way. And then we go online, and we feel like we have license to behave in a completely different way. Like where does that dichotomy come from?
Robin Hanson: So, the first thing to notice is we are all very complicated creatures. And we are certainly smart enough to make our behavior depend on context. So, it doesn't make sense to always act the same way in all contexts because different contexts have different payoffs and different incentives, and so, you should change your behavior with context. That's an issue.
However, we are wary of doing it for the reason that it actually helps to have an identity that we project to people. So, we want other people around us to think they can understand us and predict what we do. That's important to us because otherwise, if it were just a huge, complicated, random thing that does random things in situations, they can't rely on us and they can't put us in sensitive roles. So, in order to convince the people around us that they can rely on us and trust us, we have to simplify ourselves. We have to take this vast variety of potential that we have and squash it down to a smaller space so that we can show the people around us, "Hey, you can trust me."
Now, if that context-dependence is the same way everybody depends on context, well, that's okay. So, if I'm, you know, reasonable when I'm cooking and crazy about politics, and you are too, and everybody around us is, well, that doesn't make me harder to predict, because I'm very predictable. I'm following the same way of depending on context as all the other people.
So, we need to be predictable. That doesn't mean we have to be the same in each context, as long as the way we change with context is very predictable. So, now the key thing is we don't notice this so much about ourselves, but we're more able to notice it about other people. But, you know, depending on the topic I could bring up right now, your reasonableness would change dramatically, and so would mine. But I wouldn't notice it at the time. I would remember feeling reasonable a moment ago, and from that I would conclude that I'm still being reasonable.
And so, this is our key blind spot. As soon as we go into these other more sensitive topics, we're usually not aware of how we're switching modes. We are changing the standards we apply and how careful we are and how attentive we are to being loyal. But we don't feel it.
How our unconscious motives have shaped the political landscape we see today
Brian Beckcom: What do we do about – so, offer some prescriptions if you would about – so, when I look at these people that stormed the Capitol, for instance, especially the people that were dressed up in this weird cosplay stuff, you know, the different uniforms and animal outfits and stuff like that. When I look at these people, I think they've lost their minds. They've completely lost their minds. What do we do – first of all, do you agree with that, or is it a little bit deeper than that? And second of all, what do we do about it, Robin?
Robin Hanson: Again, you know, we're much more critical of the other side in politics than our own side, and so, you know, I think you should just be careful about presuming, you know, who's more crazy than you about politics. Because it's really hard to judge your own craziness. And, of course, you should expect a lot of craziness.
And of course we're telling you there's a lot of craziness, not just in areas where you know about some craziness. There's a lot of craziness about, say, medicine, where you don't feel crazy and you don't think anyone else is crazy. But I'm telling you, you are. You're still crazy, even there, in the sense that what you're doing doesn't fit with what you say you do. But what you're doing is a thing that makes sense to do.
So, I might even say, you know, if people are wearing funny costumes in a protest, that can make complete sense as a way to show loyalty to their associates. It's a functional thing. It's not crazy. It's deviant from what you might expect, and it doesn't show loyalty to you and you don't feel very tied to them, but they may well be succeeding and producing –
Brian Beckcom: So, maybe one way to put it, Robin, would be, you know, what's the difference between the guy in the horns and the antlers and the face paint at the Capitol and the guy at the football stadium who's got his shirt off when it's 28 degrees and he's got his team there and he, you know, looks like a completely insane person. But, I'll tell you what, if he's rooting for the same team we are, we're going to go, "Man, that guy is awesome. I love that guy," right?
Robin Hanson: What a fan, you say.
Brian Beckcom: What a fan. So, kind of the same thing, maybe, politically. I bet you the people that believe in this Capitol attack are looking at those folks going, “Man. That's awesome.”
Robin Hanson: "I wish I had the courage to do that. I wish I had thought of that first." So, I mean, a lot depends on whether you think they're helping or hurting, of course. Whether you praise or criticize them.
So, our book is primarily about telling you how the world is different than you thought. Its focus is not how to fix the world, or even how to fix your world. It's about telling you that things aren't what you feel, and you need to update your beliefs about how the world works and how you work. So, we think that's enough for a book, and so we think we succeeded at that.
But we do have some words about the topic of what to do, it just may not be enough for you. So, first of all, there's the question of whether we're doing you a favor by telling you this stuff, right at the beginning. The idea is humans evolved and your ancestors evolved to turn a blind eye to this stuff. Evolution decided that for its purposes, you are better off not knowing. So, if evolution was still right today about its purposes equaling your purposes on this being the best strategy, then we are doing you a disservice by telling you about this. Or we're making you see something maybe you can't unsee so well.
Now, fortunately, most people are able to take a podcast or a book like this, and if it's not what they want to see, look the other way and forget about it quickly. And so, that's probably an option for you. But, you know, the first question to ask is, well, how could we be doing you any sort of favor here? So, we'd say, well, it might be that, you know, evolution's purpose for you doesn't equal your purpose for yourself, or that the environment has changed.
So, in particular, you might be, say, nerdy like me, someone who doesn't have the right social intuitions. We nerdy people don't just sail smoothly through the social world. We move somewhat clumsily through it. And so, consciously thinking about things can often help us in ways that are less useful for other people because our intuitions are just bad.
You might be a salesperson or manager for whom, you know, being able to read other people's motives is especially important to doing your job well and so you might be willing to pay the price of making it a little harder to self-deceive to learn about these things. Or you might be a policy maker or a social scientist whose job it is to describe the world and what's going on. You know, this is pretty central to your job. If you are getting people's motives wrong, you're just going to be completely misjudging whole areas of the world that may be the specialty that you're focused on. That may be your thing.
If you study medicine for example, and medical behavior, and you assume people are going in to get healthy, then you're just wrong, right from the get-go. So, you know, it's important for you to figure this stuff out.
Brian Beckcom: You know, I had a personal experience with this – and I will say, I consider myself to be a nerd. I'm sure you consider yourself to be a, quote, nerd. And I agree with you, Robin, to a great extent. Having learned about all this stuff, I can tell you, I know it hasn't made my life any easier. And in many ways, it's made my life harder because, like you said, once you've seen it, you can't unsee it. You literally see it everywhere. When I say "it," I'm talking about hidden motives, hidden biases, things like that. So, I think it has made my life more difficult. But, you know, my personal belief is I'd rather know the truth than be deceived, even if the truth is a little bit painful.
Robin Hanson: Now, unfortunately, that might be another topic you'd be self-deceived on, right? This belief that we want to know the truth, I'm not so sure it's always true. I think one of the things you learn when you confront the truth is that maybe you weren't as interested in the truth as you thought.
Brian Beckcom: Exactly. Especially if the truth doesn't get you what you want. Like, oftentimes, the truth will push you away from what you want. But let me ask you this question, because I've seen this, especially during the pandemic. Everybody's become an epidemiologist and an immunologist. Everybody's an expert on this stuff.
Now, what I've seen a lot on social media is people talking about, "Well, this model or that model turned out not to be exactly right. So, why are we trusting science to do all this stuff?" And I want to bang my head against the wall when I hear that, because in my mind, that's exactly what science is supposed to do. Science is supposed to falsify hypotheses. Science is not physics. Science is not math. Science is not chemistry. Science is a way of thinking. And so, one of the ideas behind the scientific way of thinking, I think, is to check these biases. But maybe the scientists are deceived about that, too, right?
Robin Hanson: I mean, square one is to say if we want to reason about something, we have to at least entertain two possible theories of the world. So, in this context, let's identify two possible theories. One is that these epidemiologists, etc. are doing a good job of having reasonable guesses and then updating them in the context of learning more. That's one theory about them and who they are and what they're doing.
Another theory about who they are and what they are doing is that they take on the image and prestige of science, but they aren't actually doing a good job of science. They are basically parroting whatever somebody wants them to say, or whatever elites around them are saying, or whatever seems politically, you know, in their interest to say. And they are just using the appearance and format and style of scientific reasoning without actually following what you might say is the key process: being careful and honest about deciding whether their theories are correct or incorrect and updating them.
So, those are the two theories on the table, right? And so now when we see them out there, okay. They have the theory, and it didn't do so well. The key thing we have to ask is: under which theory was that more likely? That's the Bayesian way to reason: try to slightly move in the direction of one theory or the other. And I've got to say, that particular data point doesn't really tell you much in either direction. Merely the fact that some of their models weren't so good is consistent with both of these stories.
So, if you want to draw a conclusion about one story or the other, you're going to have to do it on some other evidence than the mere fact that sometimes the models were a bit off. If you say, “The models have consistently been way off,” okay. That's more consistent with the story that they're bullshitting and don't know what they're doing, and they're just spitting out, you know, things that look nice that don't make sense. On the other hand, if you say, “Well, look. A lot of the things they predicted, they got it right and those things were kind of surprising. So, I'm impressed.” Well, that supports the other theory that says, well, it looks like they know some stuff and they're telling you some stuff, but you can't expect them to get it all right.
But, you know, the key point is for most people, you're not going to know enough to distinguish between these two theories merely on the basis of that sometimes they got some things wrong. I mean, come on. That's just going to be true no matter what.
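The two-theory comparison Hanson walks through is ordinary Bayesian updating: weigh how likely the evidence is under each theory, then shift belief toward the theory that made the evidence more likely. Here is a small Python sketch; every probability below is an illustrative assumption, not an estimate.

```python
def posterior(prior_a, prior_b, likelihood_a, likelihood_b):
    """Update the probability of two competing theories after one observation.

    prior_*      : probability assigned to each theory before the evidence
    likelihood_* : probability of seeing the evidence if that theory is true
    Returns the posterior probabilities, normalized via Bayes' rule.
    """
    joint_a = prior_a * likelihood_a
    joint_b = prior_b * likelihood_b
    total = joint_a + joint_b
    return joint_a / total, joint_b / total

# Theory A: experts are competent and honestly updating.
# Theory B: experts are merely parroting elite opinion.

# Evidence 1: "one model was somewhat off" -- roughly as likely under
# either theory, so it barely moves the needle.
p_a, p_b = posterior(0.5, 0.5, likelihood_a=0.6, likelihood_b=0.7)
print(round(p_a, 3), round(p_b, 3))  # -> 0.462 0.538

# Evidence 2: "models have been consistently, badly wrong" -- much more
# likely under the parroting theory, so it shifts belief substantially.
p_a2, p_b2 = posterior(0.5, 0.5, likelihood_a=0.05, likelihood_b=0.5)
print(round(p_a2, 3), round(p_b2, 3))  # -> 0.091 0.909
```

The point matches Hanson's: evidence that is about equally likely under both theories (a model being "a bit off") leaves your beliefs nearly unchanged, while evidence that strongly discriminates between them does the real work.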
Experts vs. Elites
Brian Beckcom: This is a perfect segue, I think, into talking about something that you've been interested in – we were talking about it right before the show – you've been interested in this recently. The difference between expertise, experts, and elites. Talk about that a little bit, Robin, if you don't mind, because I think this is a really fascinating topic.
Robin Hanson: Well, I just, like, looked up a couple of Google pages where I put "expert elites" in the search, and basically a number of pages conflate them. Most of the time when people use the two concepts, they say they're basically the same. They use them interchangeably. And they are not interchangeable.
So, let me give you some of the distinctions where it makes a difference. So, for example, there's a difference between a newspaper reporter and an op-ed columnist. The reporter is more of an expert, and the columnist is more of an elite. Or think about the difference between boards of directors and boards of advisors. Advisors are the experts. The directors are the elites. Or think about, at a conference, the difference between a talk and a panel. In the talk, you're in an expert mode and you are speaking as an expert. In the panel, you are supposed to sort of be an elite. You are, you know, going to open up to questions that you hadn't thought about ahead of time and maybe haven't done research on, but you're still going to opine about them and you're still going to talk about them as if you were an elite.
And if you think about, like, most – even a Nobel prize winner, what's the first thing most of the Nobel prize winners do? Well, they try to make a bid to be an elite. They have just reached the peak of expertness and they've decided this gives them an entree maybe to become an elite. So, they start to write op-eds and they start to express opinions on a wider range of topics because they think maybe now they can be treated as an elite.
Now, in most firms, the structure is that employees at the bottom are in a sense the experts. They know the most about each part of the job, and the manager at the top is the least expert on any particular job, but they are the elite. They make the key decisions. Now, if the company makes a decision and somebody challenges the company about it, then the manager or their PR person will usually point to the experts and say, "Hey, I've got all these good employees and I just do what they tell me."
But of course the manager really is making the decisions; they'd just rather hide behind the expertise of their employees. And even when you think about promotion, usually the story is going to be, well, we're going to take people who are the best at their jobs and promote them, because promotion goes by expertise. But, of course, we know that in fact promotion often goes by eliteness. And eliteness is more your general suitability for being a general all-around respected person.
Brian Beckcom: How good you are at the water cooler, basically.
Robin Hanson: Right, for example. There's a whole bunch of complicated things going into choosing elites, but basically they are two different games played by different rules that overlap. And so, one of the more interesting things is elites often try to hide the elite game they play and pretend to be experts or pretend to be something else. But often, in some sense, the Nobel prize winner shows you that, in fact, the elite game is the game most people would like to join if they could. Even Nobel prize winners say, "Too bad I'm only an expert. I'm not an elite. Because I want to go try to be one of these elites."
And so, the bigger claim is that society, in general, is following the same pattern. So, if you look at, like, government agencies or newspaper media or something, almost all the major institutions in our society present themselves as if they were trying to follow the experts. Newspaper reporters are just trying to interview the experts; government agencies are trying to hire the experts and do what they decide, right? All these institutions are basically presenting themselves as, "If we have elites, it's only that these are people who are experts, and that's what makes them elite." And they're denying that there's any difference between elites and experts. It's all the same thing, you see, from the image they present.
Now, if we realize, “No, no, no. Elites are a different group of people than experts, and they have a different process and a different game,” then we see that, in fact, in our world, it's elites who make the key decisions, not experts. And I think this was dramatically shown early in the pandemic. This is what really highlighted it for me, made me think about it over the last nine months.
Pandemic experts have had their standard story about what to do in a pandemic that goes back decades. And, you know, you can look at all their standard writings about, you know, what to do about travel bans or what to do about masks or what to do about quarantines and all those sorts of things. And they've had their standard story about what to do in a pandemic. And there was no particularly new information that showed up, except as soon as we had a pandemic, all the elites in the world suddenly decided, "That's a subject to talk about." The elites went wild talking to each other about pandemics, and the elites decided that they did want masks and they did want quarantines and lockdowns, and they did want travel bans. And so, the elites declared that was the better thing. And they, the experts, caved immediately. As soon as the elites declared that that was better, the experts changed their mind about what the expert judgment was, just like in 1984.
Of course, I was just reading this section last night, where in the middle of a speech, they declare that they are no longer at war with Eurasia; they're now at war with Eastasia.
Brian Beckcom: Yeah, now we’re at war with the other ones, yeah. And the narrative completely –
Robin Hanson: But the speaker didn't even acknowledge that the change had happened midsentence. And, you know, similarly in the pandemic, people didn't really acknowledge that suddenly all these pandemic experts had caved and changed their minds entirely about what to do. And this shows something fundamental, which is also sort of a feature of politics and many other things.

When the elites don't care about a topic, the experts rule. The experts do their expert things and they decide. But as soon as elites turn their attention to a topic and talk about it and come to some sort of elite consensus, then the elites rule. Their decision is what happens.
Brian Beckcom: So, I've been thinking about this, and tell me if this is kind of similar to what you're talking about. I've been thinking about this for a year or so, and here's what it is. So, you hear, for instance, like, LeBron James starts opining on certain political issues and he's told, "Just go dribble the ball." Or Kim Kardashian is talking about criminal justice reform. Or you sit and listen to Rush Limbaugh, or whoever you want on cable news, offering all these opinions about things that they demonstrably know almost nothing about. So, in other words, why would anybody listen to Rush Limbaugh about any topic at all that he has no –
Robin Hanson: But you’re asking, “Why are there elites at all?” And so that's –
Brian Beckcom: Yeah, that's what I'm getting at. Yeah. That's what I'm getting at. It seems to me like, Robin, what you're saying is we don't really care that much about expertise. Or at least we don't care as much about expertise as we pretend we do.
Robin Hanson: Right. So, what we fundamentally care about is status and affiliation with status. So, humans, like many other animals, have status hierarchies, and we are really eager to be seen as high in status as we can. And so, eliteness is about this prestige and status. There are a lot of things that go into being considered a prestigious person, but, you know, having an elite education, or elite academic credentials, or having a TV show, or being, you know, in the news a lot: those are all things that add to your eliteness.
And so, we're mostly looking at people's overall status when we judge how elite they are. And then what happens is elites talk among each other. They gossip. And that gossip forms an opinion that's a strong anchor for everybody else. We want to agree with the elites. We want to follow the elites. And, in fact, you know, there are many studies that suggest elite opinion drives most policy in most countries. It's not popular opinion that drives policy. It's elite opinion. And it's the opinion that the elites form by talking among themselves that actual policy follows.
So, we are each mainly interested in showing that we are a good candidate for elitehood. That is, we should be considered high status. So, one way to do that is to agree with the elites and to try to share as many features as we can with them and to try to associate with them. And that's the game the humans have played for a very long time. And we continue to play it, but it's a complicated game because not only are we just trying to show that we're smart or pretty or rich or whatever else goes into that, but the elites have all these coalition dynamics among them. So, one side of elites is going to try to knock out another side of the elites and make some more room for people like themselves.
So, that happens politically, for example, where, you know, one – left elites try to knock out and cancel the right elites so that all the elites are left. Or, for example, elites who are academics will try to get rid of non-academic elites, etc. And so basically there's all these different factions among the elites who are vying for a position. But fundamentally they want to disagree somewhat to knock down the other team. But if there's any strong consensus among the elites, they want to seem like they're agreeing with that.
Brian Beckcom: So, and maybe an example in my own personal life would be, you know, I profess to be very interested in logic and reason and science and stuff like that. But maybe part of that is because the people that I want to impress, i.e. the Robin Hansons of the world, the Ben Hunts of the world, the Robert Wrights of the world, they're interested in that too, right? And so, I'm basically speaking the same language that my peer group and the people that I want to impress speak.
Robin Hanson: So, the most obvious thing to notice is, take any group of experts. I don't know, chemists. When they get together, they try to switch into elite mode. They try to be like they were a panel at a conference. They talk about things way outside their area of expertise. They try to talk about them smartly and with the agreement with others, but they're trying to convince the people around them, “Hey, I'm an elite candidate. I'm elite material here. You should think of me not just as a chemist, but as a potential elite.”
And elites very rarely try to convince you they're good chemists. So, for example, say a reporter writes a book on a subject. I don't know, it might be about COVID or something. Well, if they write this book, they will interview many experts on COVID, and then their book will have some expertise on COVID, and then they might give a talk and somebody might ask them questions about COVID, and in that quick Q&A, they will kind of pretend to be an expert. If the question is one they know the answer to because they wrote this book, they will be happy to answer it, but they're not that eager to be seen as an expert because, hey, they're the journalist who wrote this book everybody likes. It's much more that the experts are eager to be seen as elites than the elites are eager to be seen as experts. Except that, in some sense, the elites all want to pretend that they're expert enough at whatever it is.
How to effectively deal with disagreements on difficult topics
Brian Beckcom: Talk about – you mentioned something earlier. Cause I think this is very germane to, again, the times we live in now. You talked about how to deal with disagreement, or how to disagree. Talk about that a little bit, if you don't mind, Robin.
Robin Hanson: It's a very tough subject. It's a subject I spent a lot of years researching. And so, the key point is that it's in fact not reasonable to knowingly disagree. That is –
Brian Beckcom: I disagree with that! No, I'm just kidding.
Robin Hanson: We do disagree a lot. But there's this literature on what a rational creature would do in the context of disagreement, and consistently what rational creatures do is not disagree. They try hard not to disagree. Basically, the main reason they don't disagree is that they take what other people say very seriously, as a lot of evidence. They are persuaded by what other people say to a much greater degree than we are.
So, we are insufficiently persuaded by what other people say. We discount what they say relative to what we thought a moment before. And so that's fundamentally, on average, the reason why we disagree, and so that, you know, if you want to be more accurate in your beliefs, the most simple thing to do is just, you know, be more reluctant to disagree. Be more persuaded by what people around you say. Now, in our world where people disagree, of course you can't agree with all of them because they say different things. And so, you are somewhat forced to make choices, although you could try to sit in the middle.
But of course the key thing to realize is you have incentives to disagree. That's the reason you were built to disagree. So, if you seem to be too agreeable, then people think, first, you're low status, and second, you're stupid. Because maybe the stupid, low status people would be submissive and let other people push them around in terms of opinions. But the, you know, high status, dominant, smart people will less often listen to other people and more often, you know, go with their own thoughts. So, we are in some sense more eager to show that we are, you know, dominant or at least not submissive and that we're smart and that we're creative and that we are thoughtful than we are to be right. So that you can know about yourself, but now the question is, well, what are you going to do about that, exactly?
So, it seems to me like one thing some people do is they try, as you mentioned before, to have a long list of biases and to learn them and to try to check everything they say against their list of biases to try to fix them against the biases. I think that works a bit, but not a lot.
Brian Beckcom: Yeah, I agree with you on that.
Robin Hanson: Your subconscious is just too good at fooling you.
Brian Beckcom: I'll give you a good example of that, okay? So, and one of the reasons I think that people have such a hard time when people challenge their beliefs is because we build up these identities, egos, whatever you want to call them. And people feel often as if when you're challenging somebody’s idea about something, you’re literally challenging their identity. And that’s harmful, right?
But I'll give you a good example of this. So, my dad voted for Trump two times. He's a conservative Republican. He's a great, great guy. I just released a podcast with him, as a matter of fact. And when we talk about politics – I will literally, before I call my dad, tell myself, “Don't sit there and preach and argue. Just ask questions.” And what do I do two minutes into the conversation? I start telling my dad why he's wrong about everything, right? Like, I can't help myself. So, but, what you’re saying is –
Robin Hanson: I do have a fix. It's not a great – you might not like it, but I have a fix. Instead of collecting lists of biases, or just making a note to yourself to disagree less, here's the fix: change your incentives. So, for example, one way to change your incentives about almost any topic is to make a bet about it. As soon as someone says to you, after something you said, “Do you want to bet?”, your mental process immediately switches. From the moment you said it, it sounded clear and clean and believable, obvious even, and as soon as someone says “wanna bet,” you immediately start to wonder how you could be wrong, how the words you used might be ambiguous, and how bad it would look if you went through with this bet and it turned out you were wrong. Do you really want to take that chance?
That's what happens the moment you start to think about betting on something. And so, a habit of betting on things and being in an environment where people challenge other people to bet or where you even challenge other people to bet, will change your incentives to be more honest. And similarly, you know, in an environment where there are experts on something and you express an opinion, if the experts are the sorts who are likely to like call you on it if you're wrong, you will suddenly get a little more cautious about what you say.
Brian Beckcom: You know, you’re right about that. Although I've had a couple experiences – like, the other day I posted something on Facebook about a legal issue. And every single non-lawyer on my comment thread was offering all these, you know, amazingly complicated legal arguments, and, you know, near the end of the thread, I said to these people, “You know, I've been a practicing board certified lawyer for over 20 years.” It's almost like people – it's almost like they don't care. They just don't. They have their own opinions and they don't care if they’re talking to –
Robin Hanson: So, one interesting thing I've seen – I don't know what these people are, your friends [1:00:51], but let's test the theory. When people are in a cocktail party conversation or something, there's different kinds of experts. A lot depends on the relative status of the kinds of experts.
So, for example, if there's a physicist and a lawyer, at least in many contexts, the physicist won't mind expressing opinions on law. The lawyer would be pretty cautious about expressing opinions on physics. It's not that the lawyer actually knows less about physics than the physicist knows about law; it's just that physics is higher status. And so, in some sense, higher status people are allowed to opine on other topics more and get away with it, because that goes along with their status.
Brian Beckcom: For sure. Here's another idea I wanna run by you about how to deal with disagreement. And before I run this by you, I gotta tell you, I'm not very good at it, but, so, I'll ask you a question: Did you watch the football game last night? The national championship?
Robin Hanson: No. I didn't.
Brian Beckcom: Okay. But you thought about that for a second, didn't you? When I asked you that question?
Robin Hanson: I wanted to make sure I understood you enough to answer the question.
Brian Beckcom: Yeah. So briefly, very briefly, I just took control of your mind by asking you a question. I led you in a direction that I wanted you to go. And so maybe one way to deal with disagreements – and again, I'm really bad at this. I'm trying to get better. But rather than tell people what I think and what they should think and this argument or that argument, just ask better questions.
Robin Hanson: Or make statements that are harder – that are less disagreeable. So, for example, standard advice in an interpersonal conversation where people are tempted to be sensitive is to keep saying, “I feel.” Because your feelings about something are something they're less likely to disagree with. You know, if you said, “You neglected me,” then they are going to disagree with that. If you say, “I feel neglected,” then, you know, they can more easily accept that.
Brian Beckcom: “Well, why do you feel neglected? You shouldn't feel that way,” right?
Robin Hanson: Well, yeah, but the point is, yes, make weaker statements; that's one thing. So, another thing – probably the, you know, biggest podcast I ever did, a little bigger than yours, sorry, is Sam Harris.
Brian Beckcom: Yeah, for sure. Love Sam Harris. Love Sam Harris.
Robin Hanson: Okay. And you know, a thing that came up that a lot of audience members liked, which is an important thing, is just, I said, “Just have fewer opinions on topics. You don't need as many opinions as you usually have.” You should pick the topics on which you're going to be somewhat expert and you're going to invest in those and you're going to tell people what you know there.
On other topics, you don't necessarily need opinions. You can just go with what other people say, and that can be okay. Have fewer opinions. And on each topic, ask yourself, “Do I need an opinion on this? Am I, you know, especially good at this?” And if you don't need an opinion, or you don't have any special expertise compared to other people you could rely on, then don't have an opinion on it. That's a way to disagree less: just have fewer opinions. Certainly have fewer poorly thought out, poorly considered opinions. All the more reason to get rid of those.
Brian Beckcom: There's another, I think this is Shane Parrish from The Knowledge Project. I think I got this from Shane. But one of my favorite sayings kind of along the lines of what you're talking about is, “I have strong opinions loosely held.” Like, I have some strong opinions about things, but when presented with evidence to the contrary, I'm willing to abandon those.
And, you know, I tell people one of my favorite things in the world is finding out that I was wrong about something. Because when I find out I was wrong about something, I can correct and get closer to the truth. Of course, you know, this gets back to what we've been talking about basically the entire podcast. I'm probably, to some extent, deceiving myself by thinking –
Robin Hanson: The problem is that you can say that you're going to hold up opinions loosely and that you're going to allow yourself to change your mind easily, but how do you actually do that? What knob do you turn in your head for that? I think the knob of not having an opinion on the subject is a little easier to check.
Brian Beckcom: For sure. For sure. Well, Robin, we've been gone for a little bit over an hour now. There's a couple more things that I wanted to cover with you. Do you have a couple more minutes or so?
Robin Hanson: Sure.
Brian Beckcom: Okay. We were talking before the podcast, and I feel like I really want to talk about this – so, you've been into cryonics. Basically, you've signed up to have your brain frozen. There, you're pulling out your – what is that? Your cryonic –
Robin Hanson: Medical alert tag, yes. It tells people, should I die, to call them up if there's a need.
Brian Beckcom: So, talk to us about it – you're going to have your brain frozen when you die. I think Ted Williams did the same thing, by the way. So did a lot of other people. So, talk to us about –
Robin Hanson: Well, not a lot, actually. Not very many at all. That's a surprising thing.
Brian Beckcom: So, talk to us about that.
Robin Hanson: So, the key idea is that when current medical science gives up on you, we can let worms eat you, or we can freeze you in the hope that future medical technology will be able to revive you. Now, we're not talking five years in the future. We're talking 50 or a hundred years later.
That might not happen. But, you know, the continued growth of the world economy and technology suggests that it's a good bet. The more likely failure is that somehow your frozen brain won't be preserved until that future date when you could be revived, perhaps, and fixed.
I have this book called The Age of Em: Work, Love, and Life when Robots Rule the Earth. And it's a scenario about brain emulations and you'd have a shot at becoming a brain emulation in that future world if you are cryonically frozen. It's not going to be easy to unfreeze you and maybe, in fact, turning you into brain emulation is the main thing that would be possible later. But the key point is it costs a modest amount of money, and it gives you this chance of a future revival.
Now, there's another thing you can do that costs about the same and that about the same number of people do, which is to have your ashes thrown into space. And we all know that that's not going to bring you back. And some people do that. It's kind of weird, and their spouses are okay with it. It's kind of quirky, but okay. But a lot of people's spouses really hate cryonics. And that's kind of interesting because, again, it costs about the same amount of money, about the same number of people do it, and it, you know, has very little chance of working. So, apparently, what bothers people about cryonics is the thought that you think it might work.
Brian Beckcom: Either that, or maybe your spouse doesn't want you to come back.
Robin Hanson: I think it’s sort of a betrayal on amendment and [1:07:19] thing. You're willing to come back without them, is the story. And in some sense, that's not true for the ashes into space. But, you know, that's an interesting fact about human behavior.
So, the most striking thing about cryonics is that if you do surveys, large fractions of people think, “Yeah, that kind of makes sense.” But if you look at the actual numbers, even the number of people who have signed up for it is less than 3,000, and the number of people it's ever been done to is less than 300. And it's gotten free international publicity for 50 years.
So, there's a really big difference between the number of people who say they'd be willing to do it and the number of people who actually do it. There's an enormous chasm there.
Brian Beckcom: Two books I want to recommend to you, one of which the title is not coming to my mind right now, but both approaching this from a kind of a scientific perspective and more of a philosophical perspective. So, are you a Neal Stephenson fan? He's a science fiction writer?
Robin Hanson: Sure.
Brian Beckcom: Okay. He wrote a –
Robin Hanson: I read Reamde long ago, but I read many of his books, yes.
Brian Beckcom: All right. He’s got a book that he wrote pretty recently. And when I say that, I think within the last two years. And it's called Fall – I think it's called Fall, or Dodge in Hell. And it's about –
Robin Hanson: I've read that one, yes.
Brian Beckcom: So, isn't that an interesting scientific exploration of kind of what you're talking about? Like, what happens – could you upload consciousness into a computer?
Robin Hanson: So, there was also a recent TV show that came out on Netflix, I think, or Amazon, called Upload. And it had a similar theme to that book, which is it focused on the idea that you could come back and have your brain be emulated, and it would be like a heaven or hell. So, the usual story framing is that this is sort of a replacement for a religious heaven or hell. It's a way to sort of actually make a heaven or hell for people, right? And I think that just misses the main thing, which is this isn't a heaven or hell, it's a real world.
And so, for example, in the TV show Upload, they make it a rule that you're not allowed to work in this upload world. So, you have to pay for it ahead of time, and then it's luxury all the way afterwards. But of course, in reality, I think what happens is that they work, and it becomes the world, the main world where people do things and work. It's not some assigned heaven or hell. It's the new world.
Brian Beckcom: There's another book by, I think he's a Dutch philosopher. I'll find the book and I'll send you an email about it. But basically, the idea behind this book from a philosophical standpoint is death, the ceasing of life, is what gives a meaning to everything. And without death, there's essentially no meaning for – there's no reason to do anything. Like, in other words, if we were immortal, we would have no motivation to do anything. Like, being immortal would be terrible. And that's the basic philosophical argument on that.
Robin Hanson: I mean, there isn't any realistic chance of eliminating death. It's all about the rate of death. So obviously, you know, there are mayflies that live less than a month, and they die quickly. We aren't mayflies. Would our lives be more valuable as mayflies? I don't think so. We live a long time relative to mayflies, but a short time compared to cosmology.
So, even if we lived a million years, we would still die and life can still have meaning because there's an eventual death. So, it seems to me, if you – I don't want to argue about whether death gives meaning to life, because there really is no prospect of not having death. The lack of death scenario is just not a scenario that can work. It's just all about how long is plausible to go before you die.
The Hard Problem of Consciousness
Brian Beckcom: Do you have any thoughts about – because I think this is, to me, this is the most important question in science right now. Do you have any thoughts about consciousness and, in particular, the hard problem of consciousness? In other words, let's say you're frozen –
Robin Hanson: I do, but I'm going to disappoint you. So, I'm a pretty hardcore physics person. And so, you know, I'm just going to take the standard physics story pretty literally, and be done with it, basically, you know? Clearly physical stuff is capable of consciousness because we are physical stuff and we're conscious. So, QED, if we rearrange our physical stuff or make new physical stuff, that stuff could be conscious, too, because it's more physical stuff.
There is no other extra thing; that's the key point. There is just physical stuff, which obviously can be conscious, and that's all there is to say. There is nothing more to say. There's never going to be anything we will learn beyond that. We will never know anything else. We will just know that there are physical things and some of them claim to feel conscious. And that's it.
Brian Beckcom: Do you think there's anything special about the biology of the human brain as it relates to consciousness, or do you think that if we had the right information and the right processors and put them in the right order that consciousness could arise outside the context of the human brain?
Robin Hanson: So, of course it could. So, the thing that's distinctive about the human brain compared to other sorts of computer organizations is that we have this PR person, as we talked about earlier. We go to the trouble to have a central narrative about what we're doing and why. Now, there are many computer systems in the world that don't bother with that. They just do stuff, and they don't construct a central narrative of what they're doing and why. We construct that central narrative.
So, the place from which we describe our consciousness is that central narrative. We organize our discussions of consciousness around that narrative because that's the narrative we have in our heads. If you were a creature without the narrative, then if you were going to be conscious, it would have to be in some way other than having that central narrative be conscious.
So, that's the main thing that's different about our brains: they are organized around this PR person with a central story about what they've been doing. But any other computer program that also organized itself around such a central narrative could certainly also be conscious in the same way. It remembers being conscious. It tells you it's conscious. It tells you it's important for it to be conscious. It tells you why it wants to be conscious. It tells you it remembers being conscious. Etc.
Brian Beckcom: Do you have any – have you thought about or do you have any ideas about the point of consciousness? Like, the evolutionary point? Like, it seems to me like evolutionarily, consciousness is a bad thing in many ways, right?
Robin Hanson: No, it's tied into the central narrative, again. So many animals –
Brian Beckcom: Well, why do we need a central narrative? Like, what's the point of that?
Robin Hanson: So, that’s what we talked about before. We have norms. Norms are expressed in terms of motives. We need a story about our motives to defend ourselves from accusations of norm violation. So, that's the point. The point is to tell our story in terms of the motives we want to tell about why we did things which hopefully protect us from accusations that we broke rules.
Brian Beckcom: But a zebra doesn't need that. A zebra doesn't need to explain why he ran away from the lion. Why do we need to have this narrator? What's the point of the narrator?
Robin Hanson: So, that’s the key point again. We have norms. Zebras don't have norms. So, we have rules that we've worked out among ourselves about what we're supposed to do and not supposed to do. And we enforce those norms by watching what each other is doing and, if we see a norm violation, calling attention to it and having a discussion about what to do. That's the distinctive human social strategy. Because we have that social strategy, we need to protect ourselves against accusations by having this narrative, and therefore, that's why we have consciousness and the PR agent who is ready to defend us there.
Aliens and the Great Filter
Brian Beckcom: Nice. Last topic. We got to talk about this. Aliens.
Robin Hanson: We don’t have much time here.
Brian Beckcom: I know. I know we don’t have a lot of time, but you do have some ideas about aliens. And also, I don't know if the Great Filter fits in here perfectly.
Robin Hanson: It does. It does fit, actually.
Brian Beckcom: It does. Okay. So, talk about it briefly -- first of all –
Robin Hanson: We’ll have to do it very briefly.
Brian Beckcom: Yeah. Tell people real quick what the Great Filter is and what your ideas about aliens are.
Robin Hanson: All right. So, once upon a time, the entire universe was dead. Then each little part slowly evolved over time, moving through levels of evolution, becoming simple life, and then more complicated life, then intelligence, then civilization. And then eventually we might become a space-faring civilization that spreads out in the universe.
The Great Filter is about that whole process and how hard it is. And so, it seems pretty obvious that whole process is, in general, hard. So, if we say on a planet like Earth, what's the chance that something could go from the beginning to the end of that process in only 4 billion years that we've seen so far, you have to say, well, that's pretty unlikely. On most planets it doesn't happen. It happens every once in a while somewhere.
So, the Great Filter is how hard that is and, like, whether there are more steps ahead of us, because if so, then we're probably not going to make it. And we can think of that filter not only per planet; we can think about it per galaxy, even. What's the chance that a galaxy will eventually give rise to an advanced civilization that would spread out? And the universe has basically been going through this process in each galaxy, in each place, slowly having a chance to maybe advance to the next step.
Eventually, it produces civilizations, which you might call aliens, except here. And the question is, well, how many of them are there and where are they and at what level are they? So, I want to make the key distinction between quiet aliens who don't expand very far, don't last very long, don't do very much, and loud aliens, which are big and do a lot.
And so, in particular, I want to define grabby aliens. Grabby aliens are aliens who, as soon as they can, expand as fast as they can, and wherever they go and whatever they control, they prevent other grabby aliens from showing up there and doing the same. If this is what happens in the universe, then eventually the universe is full of grabby aliens and no new grabby aliens can show up. And this is what I offer as an explanation for why we seem to be early in the history of the universe. This Great Filter process favors things happening not at some random time, but farther into the future, so that the process has more time to achieve each of its steps. So, the chance of appearing by a given time actually goes as a power law, which really pushes toward later times.
So, we're really early compared to this distribution. So, if we didn't exist, would there be another alien around here eventually in this part of the universe? If the answer is no, then we're really rare and we should have appeared at some random time. But if there would likely be other aliens around if we weren't here, then eventually they're going to fill it up. And eventually there'll be a deadline after which we couldn't appear. And that's why we're as early as we are. So, that's our explanation for being so early [1:17:58].
And so, in this model of grabby aliens, we actually say roughly 40% of the universe out there at this moment is full, controlled by grabby aliens. A lot. And if we could see them in the sky, they'd be huge. And the reason we don't see them here is there's a selection effect. Because they expand at near the speed of light, almost all of our past volume, our past light cone, is excluded: if they had appeared there, they would have shown up here and prevented us from existing. And that's why we don't see them in this past light cone, and that's why the universe looks so empty, but it's not.
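[Editor's note: a toy sketch of the power-law point above, not part of the conversation; the step counts, window, and trial counts are made-up parameters.] If reaching an advanced civilization requires several individually very unlikely "hard steps" within a fixed time window, then, conditional on all steps succeeding in time, the finish time is distributed like the maximum of that many uniform draws over the window, so the chance of having finished by time t goes as (t/window)^n, pushing successes toward the end of the window:

```python
import random

def sample_finish_times(n_steps, window=1.0, trials=100_000, seed=1):
    """Toy hard-steps model: each of n_steps is so unlikely that,
    conditional on all finishing within the window, the total finish
    time is distributed like the max of n uniform draws over the
    window (CDF ~ (t / window) ** n_steps)."""
    rng = random.Random(seed)
    return [window * max(rng.random() for _ in range(n_steps))
            for _ in range(trials)]

for n in (1, 3, 6):
    times = sorted(sample_finish_times(n))
    median = times[len(times) // 2]
    # Analytic median is window * 0.5 ** (1 / n): 0.50, 0.79, 0.89.
    # More hard steps push successful arrivals later in the window.
    print(f"{n} hard steps -> median finish time {median:.2f}")
```

With six hard steps, half of all successful civilizations appear in the last ~11% of the window, which is the sense in which the power law "pushes toward later times" and makes our early arrival surprising unless a deadline (such as grabby-alien expansion) cuts the window short.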
Brian Beckcom: This reminds me of Fermi's paradox. That's what this brings to mind.
Robin Hanson: Well, it’s the same subject, yes. Why don’t we see anything?
Brian Beckcom: Yeah. Why do we not see aliens? It could be either because there are no aliens or because they haven't contacted us or because we're not sophisticated enough to receive the signals or something like that.
Robin Hanson: Well, that would work for the quiet ones. But what about the really big, loud ones? In our analysis, if an alien civilization were visible in the sky, it would be much bigger than the full moon. It would be huge. And so you really couldn't miss it. But we don't see it. And so, the selection effect is our story: the reason we don't see them is that if they were there, we wouldn't be here. We'd only really see things that are just at the edge of visibility, and that's not very many; that's a rare thing.
And so, the point is most of the universe right now is full of aliens. These aliens have taken over. Within a billion years, they will get here. Then, you know, we will have contact. But, so, we've got maybe a hundred million to a billion years left before we make contact. So, it's not an urgent thing, but they're out there and we will see them.
Brian Beckcom: They’re coming. Well, what a great way to end the podcast, Robin. We went a little bit longer over our time. I really, really appreciate it, all of this. You're a really, really bright guy with a lot of really good things to say.
Tell everybody, before I let you go, if they're interested in learning more about you, your writing and things like that, where they can find you online.
Brian Beckcom: Nice. Well Robin, have a great day, and again, thank you so much. This was a beautifully intellectual conversation. I really liked it a lot. We could have talked for three hours.
Robin Hanson: Maybe we will someday.
Brian Beckcom: Maybe we will. Thank you very much, Robin.
Robin Hanson: Okay. Take care.