Are we alone in the universe—or already living alongside an ancient alien intelligence? In this mind-bending exploration, Professor Robin Hanson (George Mason University & Oxford's Future of Humanity Institute) breaks down the statistical odds that alien life exists and why it may have already been found in our own solar system. From AI-driven extraterrestrials silently observing us, to the chilling theory that humans are being domesticated by advanced alien civilizations, Hanson reveals where alien life is most likely to emerge, why UFO sightings might actually be real, and how our understanding of "quiet" vs. "loud" aliens could change everything we know about our future.

Robin Hanson's book, The Elephant in the Brain: Hidden Motives in Everyday Life: https://www.elephantinthebrain.com/

BialikBreakdown.com
YouTube.com/mayimbialik
In this episode of Faster, Please! — The Podcast, I talk with economist Robin Hanson about a) how much technological change our society will undergo in the foreseeable future, b) what form we want that change to take, and c) how much we can ever reasonably predict.

Hanson is an associate professor of economics at George Mason University. He was formerly a research associate at the Future of Humanity Institute at Oxford, and is the author of the Overcoming Bias Substack. In addition, he is the author of the 2017 book, The Elephant in the Brain: Hidden Motives in Everyday Life, as well as the 2016 book, The Age of Em: Work, Love, and Life When Robots Rule the Earth.

In This Episode

* Innovation is clumpy (1:21)
* A history of AI advancement (3:25)
* The tendency to control new tech (9:28)
* The fallibility of forecasts (11:52)
* The risks of fertility-rate decline (14:54)
* Window of opportunity for space (18:49)
* Public prediction markets (21:22)
* A culture of calculated risk (23:39)

Below is a lightly edited transcript of our conversation.

Innovation is clumpy (1:21)

Do you think that the tech advances of recent years — obviously in AI, and what we're seeing with reusable rockets, or CRISPR, or different energy advances, fusion, perhaps, even Ozempic — do you think that the collective cluster of these technologies has put humanity on a different path than perhaps it was on 10 years ago?

. . . most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years.

That's a pretty big standard. As you know, the world has been growing exponentially for a very long time, and new technologies have been appearing for a very long time, and the economy doubles roughly every 15 or 20 years, and that can't happen without a whole lot of technological change, so most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years.
So to say that we're going more than that is really a high standard here. I don't think it meets that standard. Maybe the standard it meets is to say people were worried about maybe a stagnation or slowdown a decade or two ago, and I think this might weaken your concerns about that. I think you might say, well, we're still on target.

Innovation's clumpy. It doesn't just come out entirely smooth . . . There are some lumpy ones once in a while, lumpier innovations than usual, and those boost higher than expected sometimes, lower than expected sometimes, and maybe in the last ten years we've had a higher-than-expected clump. The main thing that does is make you not doubt as much as you did when you had the lower-than-expected clump in the previous 10 or 20 years, because people had seen this long-term history and they thought, "Lately we're not seeing so much. I wonder if this is done. I wonder if we're running out." I think the last 10 years tells you: well, no, we're kind of still on target. We're still having big important advances, as we have for two centuries.

A history of AI advancement (3:25)

People who are especially enthusiastic about the recent advances with AI — would you tell them their baseline should probably be informed by economic history rather than science fiction?

[Y]es, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.

By technical history! We have 70-odd years of history of AI. I was an AI researcher full-time from '84 to '93. If you look at the long sweep of AI history, we've had some pretty big advances. We couldn't be where we are now without a lot of pretty big advances all along the way.
You just think about the very first digital computer in 1950 or something and all the things we've seen since: we have made large advances — and they haven't been completely smooth, they've come in clumps a bit.

I was enticed into the field in 1984 because of a recent set of clumps then, and for a century, roughly every 30 years, we've had a burst of concern about automation and AI, and we've had big concern in the sense people said, "Are we almost there? Are we about to have pretty much all jobs automated?" They said that in the 1930s, they said it in the 1960s — there was a presidential commission in the 1960s: "What if all the jobs get automated?" I jumped in in the late '80s when there was a big burst there, and I as a young graduate student said, "Gee, if I don't get in now, it'll all be over soon," because I heard, "All the jobs are going to be automated soon!"

And now, in the last decade or so, we've had another big burst, and I think for people who haven't seen that history, it feels to them like it felt to me in 1984: "Wow, unprecedented advances! Everybody's really excited! Maybe we're almost there. Maybe if I jump in now, I'll be part of the big push over the line to just automate everything." That was exciting, it was tempting, I was naïve, and I was sucked in, and we're now in another era like that. Yes, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.

I like that you mentioned the automation scare of the '60s. Just going back and looking at that, it really surprised me how prevalent and widespread it was and how seriously people took it. I mean, you can find speeches by Martin Luther King talking about how our society is going to deal with the computerization of everything. So it does seem to be a recurrent fear.
What would you need to see to think it is different this time?

The obvious relevant parameter to be tracking is the percentage of world income that goes to automation, and that has been creeping up over the decades, but it's still less than five percent.

What is that statistic?

If you look at the percentage of the economy that goes to computer hardware and software, or other mechanisms of automation, you're still looking at less than five percent of the world economy. So it's been creeping up, maybe decades ago it was three percent, even one percent in 1960, but it's creeping up slowly, and obviously, when that gets to be 80 percent, game over, the economy has been replaced. But that number is creeping up slowly, and you can track it, so when you start seeing that number going up much faster or becoming a large number, then that's the time to say, "Okay, looks like we're close. Maybe automation will, in fact, take over most jobs, when it's getting most of world income."

If you're looking at economic statistics, and you're looking at different forecasts, whether by the Fed or CBO or Wall Street banks, and the forecasts are, "Well, we expect, maybe because of AI, productivity growth to be 0.4 percentage points higher over this kind of time. . ." Those kinds of numbers, where we're talking about a tenth of a point here, that's not the kind of singularity-emergent world that some people think or hope or expect that we're on.

Absolutely. If you've got young enthusiastic tech people, et cetera — they're exaggerating. The AI companies, even, are trying to push as big and dramatic an image as they can. And then all the stodgy conservative old folks, they're afraid of seeming behind the times, and not up with things, and not getting it — that was the big phrase in the Internet Boom: Who "gets it" that this is a new thing?

I'm proud to be a human, to have been part of the civilization to have done this . . .
but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.

Now it would be #teamgetsit.

Exactly, something like that. They're trying to lean into it, they're trying to give it the best spin they can, but they have some self-respect, so they're going to give you, "Wow, 0.4 percent!" They'll say, "That's huge! Wow, this is a really big thing, everybody should be into this!" But they can't go above 0.4 percent because they've got some common sense here. But we've even seen management consulting firms over the last decade or so make predictions that 10 years in the future, half of all jobs would be automated. So we've seen this long history of really crazy extreme predictions a decade out, and none of those remotely happened, of course. But people do want to be in with the latest thing, and this is obviously the latest round of technology, and it's impressive. I'm proud to be a human, to have been part of the civilization to have done this, and I'd like to try them out, and see what I can do with them, and think of where they could go. That's all exciting and fun, but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.

The tendency to control new tech (9:28)

Not to talk just about AI, but do you think AI is important enough that policymakers need to somehow guide the technology to a certain outcome?
Daron Acemoglu, one of the Nobel Prize winners, has for quite some time, and certainly recently, said that this technology needs to be guided by policymakers so that it helps people, it helps workers, it creates new tasks, it creates new things for them to do, not automate away their jobs or automate a bunch of tasks. Do you think that there's something special about this technology that we need to guide it to some sort of outcome?

I think those sorts of people would say that about any new technology that seemed like it was going to be important. They are not actually distinguishing AI from other technologies. This is just what they say about everything.

It could be "technology X," we must guide it to the outcome that I have already determined.

As long as you've said, "X is new, X is exciting, a lot of things seem to depend on X," then their answer would be, "We need to guide it." It wouldn't really matter what the details of X were. That's just how they think about society and technology. I don't see anything distinctive about this, per se, in that sense, other than the fact that — look, in the long run, it's huge.

Space, in the long run, is huge, because obviously in the long run almost everything will be in space, so clearly, eventually, space will be the vast majority of everything. That doesn't mean we need to guide space now or to do anything different about it, per se. At the moment, space is pretty small, and it's pretty pedestrian, but it's exciting, and the same for AI. At the moment, AI is pretty small, minor; AI is not remotely threatening to cause harm in our world today. If you look at harmful technologies, this is way down the scale. Demonstrated harms of AI in the last 10 years are minuscule compared to things like construction equipment, or drugs, or even television, really. This is small.

Ladders for climbing up on your roof to clean out the gutters, that's a very dangerous technology.

Yeah, somebody should be looking into that.
We should be guiding the ladder industry to make sure they don't cause harm in the world.

The fallibility of forecasts (11:52)

I'm not sure how much confidence we should ever have in long-term economic forecasts, but have you seen any reason to think that they might be less reliable than they always have been? That we might be approaching some sort of change? That those 50-year forecasts of entitlement spending might be all wrong because the economy's going to be growing so much faster, or longevity is going to be increasing so much faster?

Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.

It was just a little over two centuries ago when the world saw this enormous revolution. Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.

So you might say we can't trust these trends to continue maybe more than 10 doublings, and then who knows what might happen? You could just say — that's 200 years, say, if you double every 20 years — we just can't trust these forecasts more than 200 years out. Look at what's happened in the past after that many doublings: big changes happened, and you might say, therefore, expect, on that sort of timescale, something else big to happen. That's not crazy to say. That's not very specific.

And then if you say, well, what is the thing people most often speculate could be the cause of a big change?
They do say AI, and then we actually have a concrete reason to think AI would change the growth rate of the economy: That is the fact that, at the moment, we make most stuff in factories, and factories typically push out from the factory as much value as the factory itself embodies, in economic terms, in a few months.

If you could have factories make factories, the economy could double every few months. The reason we can't now is we have humans in the factories, and factories don't double them. But if you could make AIs in factories, and the AIs made factories, that made more AIs, that could double every few months. So the world economy could plausibly double every few months when AI has dominated the economy.

That's of the magnitude of doubling every few months versus doubling every 20 years. That's a magnitude similar to the magnitude we saw before from farming to industry, and so that fits together as saying: sometime in the next few centuries, expect a transition that might increase the growth rate of the economy by a factor of 100. Now that's an abstract thing in the long frame; it's not in the next 10 years, or 20 years, or something. It's saying that economic modes only last so long, something should come up eventually, and this is our best guess of a thing that could come up, so it's not crazy.

The risks of fertility-rate decline (14:54)

Are you a fertility-rate worrier?

If the population falls, the best models say innovation rates would fall even faster.

I am, and in fact, I think we have a limited deadline to develop human-level AI, before which we won't for a long pause, because falling fertility really threatens innovation rates.
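The growth-rate comparisons in the passages above are simple doubling-time arithmetic: a continuously compounded growth rate is ln 2 divided by the doubling time, so the speedup between two eras is just the ratio of their doubling times. A minimal sketch; the specific doubling times chosen here are illustrative assumptions, not figures asserted in the conversation:

```python
import math

def growth_rate(doubling_time_years: float) -> float:
    """Continuously compounded annual growth rate implied by a doubling time."""
    return math.log(2) / doubling_time_years

# Illustrative doubling times in years (assumptions for this sketch).
farming_era = 1000    # pre-industrial world economy: doubled roughly every millennium
industrial_era = 17   # modern economy: doubles roughly every 15-20 years
ai_era = 0.2          # "every few months," if factories made AIs that made factories

# Industrial-revolution speedup: 1000 / 17 ≈ 59, matching the "factor of 60"
print(growth_rate(industrial_era) / growth_rate(farming_era))

# Hypothetical AI-era speedup: 17 / 0.2 = 85, the order of the "factor of 100"
print(growth_rate(ai_era) / growth_rate(industrial_era))
```

Picking a doubling time of two to three months instead of 0.2 years moves the second ratio between roughly 70 and 100, which is why the transcript hedges the AI-era figure as "a factor of 100" rather than an exact number.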
This is something we economists understand that I think most other people don't: You might've thought that a falling population could be easily compensated for by a growing economy, and that we would still have rapid innovation because we would just have a bigger economy with a lower population, but apparently that's not true. If the population falls, the best models say innovation rates would fall even faster.

So say the population is roughly predicted to peak in three decades and then start to fall. If it falls, it would fall roughly by a factor of two every generation or two, depending on which populations dominate, and then if it fell by a factor of 10, the innovation rate would fall by more than a factor of 10, and that means just a slower rate of new technologies and, of course, also a reduction in the scale of the world economy.

And I think that plausibly also has the side effect of a loss in liberality. I don't think people realize how much it was innovation and competition that drove much of the world to become liberal, because the winning nations in the world were liberal and the rest were afraid of falling too far behind. But when innovation goes away, they won't be so eager to be liberal in order to be innovative, because innovation just won't be a thing, and so much of the world will just become a lot less liberal.

There's also the risk that — basically, computers are a very durable technology, in principle. Typically we don't make them that durable, because every two years they get twice as good, but when innovation goes away, they won't get good very fast, and then you'll be much more tempted to just make very durable computers. And after the first generation that makes very durable computers that last hundreds of years, the next generation won't want to buy new computers; they'll just use the old durable ones as the economy is shrinking, and then the industry that makes computers might just go away.
And then it could be a long time before people felt a need to rediscover those technologies.

I think the larger-scale story is there's no obvious process that would prevent this continued decline, because there's no level at which, when you get there, some process kicks in and makes us say, "Oh, we need to increase the population." The most likely scenario is just that the Amish and [Hutterites] and other insular, fertile subgroups who have been doubling every 20 years for a century will just keep doing that and then come to dominate the world, much like Christians took over the Roman Empire: They took it over by doubling every 20 years for three centuries. That's my default future, and then if we don't get AI or colonize space before this decline, which I've estimated would be roughly 70 years' worth more of progress at previous rates, then we don't get it again until the Amish not only take over the world, but also rediscover a taste for technology and economic growth, and then eventually all of the great stuff could happen, but that could be many centuries later.

This does not sound like an issue that can be fundamentally altered by tweaking the tax code.

You would have to make a large —

— Large turn of the dial, really turn that dial.

People are uncomfortable with larger-than-small tweaks, of course, but we're not in an era that's at all eager for vast changes in policy; we are in a pretty conservative era that just wants to tweak things. Tweaks won't do it.

Window of opportunity for space (18:49)

We can't do things like Daylight Saving Time, which some people want to change. You mentioned this window — Elon Musk has talked about a window for expansion into space, and this is a couple of years ago, he said, "The window has closed before. It's open now. Don't assume it will always be open."

Is that right? Why would it close? Is it because of higher interest rates? Because the Amish don't want to go to space?
Why would the window close?

I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy.

There's demand for space stuff, mostly, at the moment, to service Earth, like the internet circling the earth, say, which is Elon's big project to fund his spaceships. And there's also demand for satellites to do surveillance of the earth, et cetera. As the earth economy shrinks, the demand for that stuff will shrink. At some point, they won't be able to afford the fixed costs.

A big question is about marginal costs versus fixed costs. How much is the fixed cost just to have this capacity to send stuff into space, versus the marginal cost of adding each new rocket? If it's dominated by marginal costs and they make the rockets cheaper, okay, they can just do fewer rockets less often, and they can still send satellites up into space. But if you're thinking of something where there's a key scale that you need to get past even to support the industry, then that's a different thing.

So thinking about a Mars economy, or even a moon economy, or a solar-system economy, you're looking at a scale thing. That thing needs to be big enough to be self-sustaining and economically cost-effective, or it's just not going to work. So I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy. A space economy needs to be big enough just to support itself, et cetera, and that's a problem, because it's the same humans in space who are down here on earth, who are going to have the same fertility problems up there unless they somehow figure out a way to make a very different culture.

A lot of people just assume, "Oh, you could have a very different culture on Mars, and so they could solve our cultural problems just by being different," but I'm not seeing that.
I think they would just have a very strong interconnection with earth culture, because they're going to have rapid-bandwidth stuff going back and forth, and their fertility culture and all sorts of other culture will be tied closely to earth culture, so I'm not seeing how a Mars colony really solves earth's cultural problems.

Public prediction markets (21:22)

The average person is aware that these things, whether betting markets or online consensus prediction markets, exist: that you can bet on presidential races, and you can make predictions about a superconductor breakthrough, or something like that, or about when we're going to get AGI. To me, it seems like they have, to some degree, broken through the filter, and people are aware that they're out there. Have they come of age?

. . . the big value here isn't going to be betting on elections, it's going to be organizations using them to make organization decisions, and that process is being explored.

In this presidential election, there's a lot of discussion that points to them. And people were pretty open to that until Trump started to be favored, and people said, "No, no, that can't be right. There must be a lot of whales out there manipulating, because it couldn't be that Trump's winning." So the openness to these things often depends on what their message is.

But honestly, the big value here isn't going to be betting on elections, it's going to be organizations using them to make organization decisions, and that process is being explored. Twenty-five years ago, I invented this concept of decision markets for use in organizations, and now, in the last year, I've actually seen substantial experimentation with them, so I'm excited to see where that goes, and I'm hopeful there, but that's not so much about the presidential markets.

Roughly a century ago, there was more money bet in presidential betting markets than in stock markets at the time.
Betting markets were very big then, and then they declined, primarily because scientific polling was declared a more scientific approach to estimating elections than betting markets, and all the respectable people wanted to report on scientific polls. And then, of course, the stock market became much, much bigger. The interest in presidential markets will wax and wane, but there's actually not that much social value in having a better estimate of who's going to win an election. That doesn't really tell you who to vote for, so there are other markets that would be much more socially valuable, like predicting the consequences of who's elected as president. We don't really have many markets on those, but maybe we will next time around. But there is a lot of experimentation going on in organizational prediction markets at the moment, compared to, say, 10 years ago, and I'm excited about those experiments.

A culture of calculated risk (23:39)

I want a culture such that, when one of these new nuclear reactors, or one of the nuclear reactors that are restarting, or one of these new small modular reactors has some sort of leak, or when a new SpaceX Starship flies and some astronaut gets killed, we just don't collapse as a society. That we're like: well, things happen, we're going to keep moving forward. Do you think we have that kind of culture? And if not, how do we get it, if at all? Is that possible?

That's the question: Why has our society become so much more safety-oriented in the last half-century? Certainly one huge sign of it is the way we way overregulated nuclear energy, but we've also now been overregulating even kids going to school. Apparently they can't just take their bikes to school anymore; they have to go on a bus because that's safer. In a whole bunch of ways, we are just vastly more safety-oriented, and that seems to be a pretty broad cultural trend.
It's not just in particular areas, and it's not just in particular countries.

I've been thinking a lot about long-term cultural trends and trying to understand them. The basic story, I think, is that we don't have a good reason to believe long-term cultural trends are actually healthy when they are shared trends of norms and status markers that everybody shares. Cultural things that can vary within cultures, like different technologies and firm cultures, those we're doing great on. We have great evolution of those things, and that's why we're having all these great technologies. But something like safetyism is more of a shared cultural norm, and we just don't have good reasons to think those changes are healthy, and they don't fix themselves, so this is just another example of something that's going wrong.

They don't fix themselves, because if you have a strong, very widely shared cultural norm, and someone has a different idea, they need to be prepared to pay a price, and most of us aren't prepared to pay that price.

If we had a healthy cultural evolution competition among even nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces conformity on everybody.

Right. If, for example, we have 200 countries, and they were actually independent experiments that had different cultures going different directions, then I'd feel great: okay, the cultures that choose too much safety will lose out to the others, and eventually it'll be worn out. If we had a healthy cultural evolution competition among even nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces conformity on everybody.

At the beginning of Covid, all the usual public health experts said all the usual things, and then world elites got together and talked about it, and a month later they said, "No, that's all wrong. We have a whole different thing to do.
Travel restrictions are good, masks are good, distancing is good." And then the entire world did it the same way, and there was strong pressure on any deviation, even Sweden, that would dare to deviate from the global consensus.

If you look at many kinds of regulation, there's very little deviation worldwide. We don't have 200, or even 100, independent policy experiments; we basically have one main global civilization that does it the same way, and maybe one or two deviants that are allowed to have somewhat different behavior, but pay a price for it.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Micro Reads

▶ Economics
* The Next President Inherits a Remarkable Economy - WSJ
* The surprising barrier that keeps us from building the housing we need - MIT
* Trump's tariffs, explained - Wapo
* Watts and Bots: The Energy Implications of AI Adoption - SSRN
* The Changing Nature of Technology Shocks - SSRN
* AI Regulation and Entrepreneurship - SSRN

▶ Business
* Microsoft reports big profits amid massive AI investments - Ars
* Meta's Next Llama AI Models Are Training on a GPU Cluster "Bigger Than Anything" Else - Wired
* Apple's AI and Vision Pro Products Don't Meet Its Standards - Bberg Opinion
* Uber revenues surge amid robust US consumer spending - FT
* Elon Musk in funding talks with Middle East investors to value xAI at $45bn - FT

▶ Policy/Politics
* Researchers "in a state of panic" after Robert F. Kennedy Jr. says Trump will hand him health agencies - Science
* Elon Musk's Criticism of "Woke AI" Suggests ChatGPT Could Be a Trump Administration Target - Wired
* US Efforts to Contain Xi's Push for Tech Supremacy Are Faltering - Bberg
* The Politics of Debt in the Era of Rising Rates - SSRN

▶ AI/Digital
* Alexa, where's my Star Trek Computer? - The Verge
* Toyota, NTT to Invest $3.3 Billion in AI, Autonomous Driving - Bberg
* Are we really ready for genuine communication with animals through AI? - NS
* Alexa's New AI Brain Is Stuck in the Lab - Bberg
* This AI system makes human tutors better at teaching children math - MIT
* Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games - Arxiv

▶ Biotech/Health
* Obesity Drug Shows Promise in Easing Knee Osteoarthritis Pain - NYT
* Peak Beef Could Already Be Here - Bberg Opinion

▶ Clean Energy/Climate
* Chinese EVs leave other carmakers with only bad options - FT Opinion
* Inside a fusion energy facility - MIT
* Why aren't we driving hydrogen powered cars yet? There's a reason EVs won. - Popular Science
* America Can't Do Without Fracking - WSJ Opinion

▶ Robotics/AVs
* American Drone Startup Notches Rare Victory in Ukraine - WSJ
* How Wayve's driverless cars will meet one of their biggest challenges yet - MIT

▶ Space/Transportation
* Mars could have lived, even without a magnetic field - Big Think

▶ Up Wing/Down Wing
* The new face of European illiberalism - FT
* How to recover when a climate disaster destroys your city - Nature

▶ Substacks/Newsletters
* Thinking about "temporary hardship" - Noahpinion
* Hold My Beer, California - Hyperdimensional
* Robert Moses's ideas were weird and bad - Slow Boring
* Trading Places? No Thanks. - The Dispatch
* The Case For Small Reactors - Breakthrough Journal
* The Fourth Industrial Revolution and the Future of Work - Conversable Economist

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Why might our brains be keeping us in the dark about our own motives? What's the reason humans give to charity? How do cultural norms lead to continual efforts to signal to our potential allies?

Robin Hanson is a professor of economics at George Mason University. His latest two books are titled The Elephant in the Brain: Hidden Motives in Everyday Life and The Age of Em: Work, Love, and Life when Robots Rule the Earth.

Robin and Greg discuss the discrepancies between what we say and our true intentions. Robin shares how human interaction within our discussions is less about the content and more about social positioning and signaling. Robin talks about the intricate dance of conversations, where showing status, expressing care, and signaling allyship are at the forefront. They also wrestle with the concept of luxury goods and their role in consumer behavior, challenging the conventional wisdom about why we buy what we buy and the messages we're really sending with our choices.

*unSILOed Podcast is produced by University FM.*

Episode Quotes:

On conscious minds and social norms

23:39: Humans have rules about what you're supposed to do and not supposed to do, especially regarding each other. And we really care a lot about our associates not violating those norms, and we're very eager to find rivals violating them and call them out on that. And that's just a really big thing in our lives. And in fact, it's so big that plausibly your conscious mind, the part of your mind I'm talking to, isn't the entire mind, you have noticed. You've got lots of stuff going on in your head that you're not very conscious of, but your conscious mind is the part of you whose job it is mainly to watch what you're doing and at all moments have a story about why you're doing it, and why this thing you're doing, for the reason you're doing it, isn't something violating norms.
If you didn't have this conscious mind all the time putting together the story, you'd be much more vulnerable to other people claiming that you're violating norms and accusing you of being a bad person for doing bad things.
Our individual incentives don't care much about norms (20:25): Sometimes norms are functional and helpful, and sometimes they're not. Our individual incentive doesn't care much about that. Our incentive is to not violate the norms and not be caught violating the norms, regardless of whether they're good or bad norms, regardless of what function they serve.
Why do people not want to subsidize luxury items, but they do subsidize education? (46:34): So part of the problem is that we often idealize some things and even make them sacred. And then, in their role as something sacred, we are willing to subsidize them and sacrifice for them. And then it's less about maybe their consequences and more about showing our devotion to the sacred. In some sense, sacred things are the things we are most eager to show our devotion to. And that's why people who want to promote things want us to see them as sacred. So, schools have succeeded in getting many people to see schools as a sacred venture and therefore worthy of extra subsidy. And they're less interested in maybe the calculation of the job consequences of education because they just see education itself as sacred.
On the notion of cultural drift (47:55): The human superpower is cultural evolution. This is why we can do things so much better than other animals. The key mechanism of culture is that we copy the behaviors of others. In order to make that work, we have to differentially copy the behavior that's better, not the behavior that's worse. And to do that, we need a way to judge who is more successful so that we will copy the successful. So our estimate of what counts as success—who are the people around us who we will count as successful and worthy of emulation—is a key element of culture.
And that's going to drive a lot of our choices, including our values and norms. We're going to have values compatible with, and matching, our concept of who around us is the most admirable, the most worthy of celebration and emulation.
Show Links:
Recommended Resources:
* François de La Rochefoucauld
* Microsociology
* Patek Philippe Watches
* Consumption
* Parochialism
* The Case against Education: Why the Education System Is a Waste of Time and Money
* Evolution
Guest Profile:
* Faculty Profile at George Mason University
* Blog - Overcoming Bias
* Podcast - Minds Almost Meeting
* Profile on LinkedIn
* Social Profile on X
His Work:
* Amazon Author Page
* The Elephant in the Brain: Hidden Motives in Everyday Life
* The Age of Em: Work, Love, and Life when Robots Rule the Earth
Read the full transcript here.
What is futarchy? Why does it seem to be easier to find social innovations than technical innovations? How does futarchy differ from democracy? In what ways might a futarchy be gamed? What are some obstacles to implementing futarchy? Do we actually like for our politicians to be hypocritical to some degree? How mistaken are we about our own goals for social, political, and economic institutions? Do we enjoy fighting (politically) more than actually governing well and improving life for everyone? What makes something "sacred"? What is a tax career agent?
Robin Hanson is associate professor of economics at George Mason University and research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science from the California Institute of Technology, master's degrees in physics and philosophy from the University of Chicago, and nine years' experience as a research programmer at Lockheed and NASA. He has over ninety academic publications in major journals across a wide variety of fields and has written two books: The Age of Em: Work, Love and Life When Robots Rule the Earth (2016), and The Elephant in the Brain: Hidden Motives in Everyday Life (2018, co-authored with Kevin Simler). He has pioneered prediction markets, also known as information markets and idea futures, since 1988, and he suggests "futarchy" as a form of governance based on prediction markets. He also coined the phrase "The Great Filter" and has recently estimated it numerically via a model of "Grabby Aliens". Learn more about Robin at his GMU page or follow him on the-website-formerly-known-as-Twitter at @robinhanson.
BIO: In 2013, Zachary Resnick began to make a living from playing poker cash games and investing in other poker players, providing a unique understanding of risk management that is largely shaped through leveraging volatility to outperform others in the high-risk, high-reward situations of poker.
STORY: Zach invested in two founders with a brilliant idea and overlooked the fact that they were not A+ founders. He ended up riding the company down by more than 80%.
LEARNING: Back people that completely blow you away. People are super important, especially at the earlier stage of the business that you invest in.
"When investing in early-stage companies, the qualities of the founders are paramount and almost inarguably the most important thing for that company."
Guest profile
In 2016, Zach made his first personal investment in Bitcoin and, by 2017, was focused on investing and trading crypto full-time. In 2018, he founded Unbounded Capital, an early-stage venture capital firm focused on payment infrastructure. He is also the founder of FlyFlat, a luxury concierge service that specializes in last-minute, heavily discounted business and first-class air travel.
Worst investment ever
Zach's company invested in two founders who loved the company's media content on the blockchain world. The founders were building a solution that Zach believed was A+. It would be a 100x improvement on existing solutions. There was one problem, though: the founders were not A+ founders.
This became the first startup Zach's company rode down by more than 80% since he started the investment firm.
Lessons learned
* Back people that completely blow you away.
* People are super important, especially at the earlier stage of the business that you invest in.
* Know your investing style.
Andrew's takeaways
* When investing in a startup, you've got to trust the founders, believe in the idea, have a ready market, and ensure the startup has the muscle to execute the vision.
Actionable advice
If you're in the startup investing business, especially in the early stage, meet with founders in person before investing.
Zachary's recommendations
For frequent, flexible travelers who fly business class and want to save money, Zach recommends checking out FlyFlat. To enhance deeper thinking, he recommends reading great books such as The Elephant in the Brain: Hidden Motives in Everyday Life and Thinking, Fast and Slow. He also recommends reading his first e-book, How A Scalable Blockchain Will Win, to learn more about how scalable and efficient blockchains will transform the internet and how data and payments operate worldwide.
No. 1 goal for the next 12 months
Zachary's number one goal for the next 12 months is to have more spaciousness in his life so he can spend more quality time with his amazing partner.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A list of EA-relevant business books I've read, published by Drew Spartz on February 21, 2023 on The Effective Altruism Forum. Some have suggested EA is too insular and needs to learn from other fields. In this vein, I think there are important mental models from the for-profit world that are underutilized by non-profits. After all, business can be thought of as the study of how to accomplish goals as an organization - how to get things done in the real world. EA needs the right mix of theory and real world execution. If you replace the word “profit” with “impact”, you'll find a large percentage of lessons can be cross-applied. Eight months ago, I challenged myself to read a book a day for a year. I've been posting daily summaries on social media and had enough EAs reach out to me for book recs that, inspired by Michael Aird and Anna Riedl, I thought it might be worth sharing my all-time favorites here. Below are the best ~50 out of the ~500 books I read in the past few years. I'm an entrepreneur so they're mostly business-related. Bold = extra-recommended. If you'd like any more specific recommendations feel free to leave a comment and I can try to be helpful. Also - I'm hosting an unofficial entrepreneur meetup at EAG Bay Area. 
Message me on SwapCard for details or if you think it might be high impact to connect :)
The best ~50 books:
Fundraising:
* Fundraising
* The Power Law: Venture Capital and the Making of the New Future
Leadership/Management:
* The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers
* The Advantage: Why Organizational Health Trumps Everything Else In Business
* The Coaching Habit: Say Less, Ask More & Change the Way You Lead Forever
Entrepreneurship/Startups:
* Running Lean
* The Founder's Dilemmas: Anticipating and Avoiding the Pitfalls That Can Sink a Startup
* Zero to One: Notes on Startups, or How to Build the Future
* The Startup Owner's Manual: The Step-By-Step Guide for Building a Great Company
Strategy/Innovation:
* The Mom Test: How to talk to customers & learn if your business is a good idea when everyone is lying to you
* Scaling Up: How a Few Companies Make It...and Why the Rest Don't
Operations/Get Shit Done:
* The Goal: A Process of Ongoing Improvement
* The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win
* Making Work Visible: Exposing Time Theft to Optimize Work & Flow
Statistics/Forecasting:
* How to Measure Anything: Finding the Value of Intangibles in Business
* Superforecasting: The Art and Science of Prediction
* Antifragile: Things That Gain from Disorder
Writing/Storytelling:
* Wired for Story: The Writer's Guide to Using Brain Science to Hook Readers from the Very First Sentence
* The Story Grid: What Good Editors Know
Product/Design/User Experience:
* The Cold Start Problem: How to Start and Scale Network Effects
* The Lean Product Playbook: How to Innovate with Minimum Viable Products and Rapid Customer Feedback
Psychology/Influence:
* SPIN Selling (unfortunate acronym)
* The Elephant in the Brain: Hidden Motives in Everyday Life
* Influence: The Psychology of Persuasion
Outreach/Marketing/Advocacy:
* 80/20 Sales and Marketing: The Definitive Guide to Working Less and Making More
* Traction: How Any Startup Can Achieve Explosive Customer Growth
How to learn things faster:
* Ultralearning: Master Hard Skills, Outsmart the Competition, and Accelerate Your Career
* Make It Stick: The Science of Successful Learning
* The Little Book of Talent: 52 Tips for Improving Your Skills
Personal Development:
* The Confident Mind: A Battle-Tested Guide to Unshakable Performance
* The Almanack of Naval Ravikant: A Guide to Wealth and Happiness
* Atomic Habits
Recruiting/Hiring:
* Recruiting
* Who: The A Method for Hiring
Negotiating:
* Negotiation Genius
* Never Split the Difference: Negotiating As If Your Life Depended On It
* Secrets of Power Negotiating: I...
Mind Love • Modern Mindfulness to Think, Feel, and Live Well
We will learn: Why it is hard for us to think clearly about our nature and the explanations for our behavior. The 4 strands of research that all led to the same conclusion: that we are strangers to ourselves. How to better read signals to understand other people's hidden motives.
No one wants to be called a liar, but what if I told you that you, and all of us, lie to each other and ourselves all the time? We are basically political animals and schemers that are constantly looking out for number one without even knowing that we're doing it. To better deceive others, we also deceive ourselves. So if we're lying to ourselves, why would we even want to learn this information? Aren't we just opening Pandora's box? Kind of. But here's a little secret: you automatically have an edge in the game if you know how the game works. So that's what we're talking about today.
Our guest is Robin Hanson. He is an Associate Professor of Economics and received his Ph.D. in 1997 in social sciences from Caltech. He's also the author of one of the top best sellers of the last decade, “The Elephant in the Brain: Hidden Motives in Everyday Life.”
Links from the episode: Show Notes: https://mindlove.com/272
Why do you go to the hospital, take medicine, or go to school? Robin Hanson comes on the podcast to chat about motivations and why we often get them wrong.
Links from the show:
* The Elephant in the Brain: Hidden Motives in Everyday Life
* Link to Robin's papers
* Connect with Robin on Twitter
* Connect with Ryan on Twitter
* Subscribe to the newsletter
About my guest:
Robin Hanson is an Associate Professor of Economics and received his Ph.D. in 1997 in social sciences from Caltech. He joined George Mason's economics faculty in 1999 after completing a two-year post-doc at U.C. Berkeley. His major fields of interest include health policy, regulation, and formal political theory. Dr. Hanson's personal homepage includes his work in academic economics, class materials, and a sampling of his broader interests in economics, philosophy, political theory, alternative institutions, and the economics of science fiction.
In this episode of Faster, Please! — The Podcast, I'm continuing last week's discussion with Robin Hanson, professor of economics at George Mason University and author of the Overcoming Bias blog. His books include The Age of Em: Work, Love and Life when Robots Rule the Earth and The Elephant in the Brain: Hidden Motives in Everyday Life.
(Be sure to check out last week's episode for the first part of my conversation with Robin. We discussed futurism, innovation, and economic growth over the very long run, among other topics. Definitely worth the listen!)
In part two, Robin and I talk about the possibility of extraterrestrial life. Earlier this year, the US House of Representatives held a hearing on what Washington now calls "unexplained aerial phenomena." While the hearing didn't unveil high-def, close-up footage of little green men or flying saucers, it did signal that Washington is taking UAPs more seriously. But what if we really are being visited by extraterrestrials? What would contact with an advanced alien civilization mean for humanity? It's exactly the kind of out-there question Robin considers seriously and then analyzes with rigorous economic thinking.
In This Episode:
* The case for extraterrestrial life (1:34)
* A model to explain UFOs (6:49)
* Could aliens be domesticating us right now? (13:23)
* Would advanced alien civilization renew our interest in progress? (17:01)
* Is America on the verge of a pro-progress renaissance? (18:49)
Below is an edited transcript of our conversation.
The case for extraterrestrial life
James Pethokoukis: In the past few years there have been a lot of interesting developments on the UFO — now UAP — front. The government seems to be taking these sightings far more seriously. Navy pilots are testifying. What is your take on all this?
Robin Hanson: There are two very different discussions and topics here. One topic is, “There are these weird sightings. What's with that?
And could those be aliens?” Another more standard, conservative topic is just, “Here's this vast empty universe. Are there aliens out there? If so, where?” So that second topic is where I've recently done some work and where I feel most authoritative, although I'm happy to also talk about the other subject as well. But I think we should talk first about the more conservative subject.
The more conservative subject, I think, is — and I probably have this maybe 50 percent correct — once civilizations progress far enough, they expand. When they expand, they change things. If there were a lot of these civilizations out there, we should be able to, at this point, detect the changes they've made. Either we've come so early that there aren't a lot of these kinds of civilizations out there … let me stop there and then you can begin to correct me.
The key question is: it looks like we soon could go out expanding and we don't see limits to how far we could go. We could fill the universe. Yet, we look out and it's an empty universe. So there seems to be a conflict there.
Where are the giant Dyson spheres?
One explanation is, we are so rare that in the entire observable universe, we're the only ones. And therefore, that's why there's nobody else out there. That's not a crazy position, except for the fact that we're early. The median star will last five trillion years. We're here on our star after only five billion years, a factor of 1000. Our standard best theory of when advanced life like us should appear, if the universe would stay empty and wait for it, would be near the end of a long-lived planet's lifetime. That's when it would be most likely to appear.
There's this power of the number of hard steps, which we could go into, but basically, the chance of appearing should go as a power of this time. If there are, say, six hard steps, which is a middle estimate, then the chance of appearing 1000 times later would go as 1000 to the power of six, which would be 10 to the 18th.
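Hanson's back-of-the-envelope arithmetic here is easy to check. A minimal sketch, using only the numbers he states above (a factor-of-1000 time ratio and his middle estimate of six hard steps):

```python
# Hard-steps model: if advanced life requires n "hard" (individually unlikely)
# evolutionary steps, the chance it has appeared by time t scales roughly as t**n.
def relative_chance(time_ratio, hard_steps):
    """How much likelier life is to have appeared time_ratio times later."""
    return time_ratio ** hard_steps

# Hanson's numbers: median star lifetime ~5 trillion years vs. our ~5 billion
# so far, a time ratio of 1000, with a middle estimate of 6 hard steps.
print(f"{relative_chance(1000, 6):.0e}")  # 1e+18
```

So appearing near the end of a long-lived star's life would be roughly 10^18 times more likely than appearing as early as we have, which is why he calls our earliness "crazy" under the empty-universe assumption.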
We are just crazy early with respect to that analysis. There is a key assumption of the analysis, which is that the universe would sit and wait empty until we showed up. The simplest way to resolve this is to deny that assumption: to say, “The universe is not sitting and waiting empty. In fact, it's filling up right now. And in a billion years or two, it'll be all full. And we had to show up before that deadline.” And then you might say, “If the universe is filling up right now, if right now the universe is half full of aliens, why don't we see any?”
We should be detecting signals, seeing things. We have this brand new telescope out there sitting a million miles away.
If we were sitting at a random place in the universe, that would be true. But we are the subject of a selection effect. Here's the key story: We have to be at a place where the aliens haven't gotten to yet. Because otherwise, they would be here instead of us. That's the key problem. If aliens expand at almost the speed of light, then you won't see them until they're almost here. And that means if you look backwards in our light cone — from our point, all the way backwards — almost all of that light cone is excluded. Aliens couldn't be there because, again, if they had arisen there, they would be here now instead of us. The only places aliens could appear that we could see now would have to be just at the edge of that cone.
Therefore, the key explanation is that aliens are out there, but we can't see them, because the aliens are moving so fast that we don't see them until they're almost here. So the date on the clock is the thing telling you aliens are out there right now. That might seem counterintuitive. “How's the clock supposed to tell me about aliens? Shouldn't I see pictures of weird guys with antennae?” Something, right? I'm saying, “No, it's the clock.
The clock is telling you that they're out there.” Because the clock is saying you're crazy early, and the best explanation for why you're crazy early is that they're out there right now.
But if we take a simple model in which they arise at random places and random times, and we fit it to three key datums we know, we can actually get estimates for this basic model of aliens out there. It has the following key parameter estimates: They're expanding at, say, half the speed of light or faster; they appear roughly once per million galaxies, so pretty rare; and if we expand out soon, we'd meet them in a billion years or so. The observable universe has a trillion galaxies in it. So once per million galaxies means there are a lot of them that will appear in our observable universe. But it's not like they're a few stars over. This is really rare. Once per million galaxies. And we're not going to meet them soon. Again, in a billion years. So there's a long time to wait here.
A model to explain UFOs
Based on this answer, I don't think your answer to my first question is “We are making contact with alien intelligence.”
This simple model predicts strongly that there's just no way that UFOs are aliens. If this were the only possible model, that would be my answer. But I have to pause and ask, “Can I change the model to make it more plausible?” I tried to do this exercise; I tried to say, “How could I most plausibly make a set of assumptions that would have as their implication that UFOs are aliens and they're really here?”
Is this a different model or are you just changing something key in that model?
I'm going to change some things in this model; I'll have to change several things. I'm going to make some assumptions so that I get the implication that some UFOs are aliens and they're doing the weird things we see. And the key question is going to be, “How many assumptions do you have to make, and how unlikely are they?” This is the argument about the prior on this theory.
Think of a murder trial. In a murder trial, somebody says A killed B. You know that the prior probability of that is like one in a million: roughly one in 1000 people is killed in a murder, and each victim knows about 1000 people, so the chance that any one particular acquaintance killed them is about one in a million. So you might say, “Let's just dismiss this murder trial, because the prior is so low.” But we don't do that. Why? Because it's actually possible in a typical murder trial to get concrete, physical evidence that overcomes a one-in-a-million prior. So the analogy for UFOs would be, people say they see weird stuff. They say you should maybe think that's aliens. The first question you have to ask is, how a priori unlikely is that? If it was one in 10 to the 20 unlikely, you'd say, “There's nothing you could tell me to make me believe this. I'm just not going to look, because it's just so crazy.”
There are a lot of pretty crazy explanations that aren't as crazy as that.
Exactly. But my guess is the prior is roughly one in a thousand. And with a one-in-a-thousand prior, you've got to look at the evidence. You don't just draw the conclusion on one in a thousand, because that's still low. But you've got to be willing to look at the evidence if it's one in a thousand. That's where I'd say we are.
Then the question is, how do I get one in a thousand [odds]? I'm going to try to generate a scenario that is as plausible as possible and consistent with the key datums we have about UFOs. Here are the key datums. One is, the universe looks empty. Two is, they're here now. Three is, they didn't kill us. We're still alive. And four is, they didn't do the two obvious things they could do. They could have come right out and been really obvious and just slapped us in the face and said, “Here we are.” That would've been easy. Or they could have been completely invisible. And they didn't do either of those. What they do is hang out at the edge of visibility. What's with that?
Why do that weird intermediate thing? We have to come up with a hypothesis that explains these things, because those are the things that are weird here.
The first thing I need to do is correlate aliens and us in space-time. Because if it was once randomly per million galaxies, that doesn't work. The way to do that is panspermia. Panspermia siblings, in fact. That is, Earth life didn't start on Earth. It started somewhere else. And that somewhere else seeded our stellar nursery. Our star was born with a thousand other stars, all in one place at the same time, with lots of rocks flying back and forth. If life was seeded in that stellar nursery, it would've seeded life not just on our Earth, but on many of those other thousand stars. And then they would've drifted apart over the last four billion years. And now they're in a ring around the galaxy. The scenario would be that one of those other planets developed advanced life before us.
The way we get it is we assume panspermia happened. We assume there are siblings, and that one of them came to our level before us. If that happened, the average time gap would be maybe 100 million years. It wouldn't have happened in the last thousand years, or even the last million years. It would be a long time. Given this, we have to say, “Okay, they reached our level of advancement a hundred million years ago. And they're in the same galaxy as us; they're not too far away.” We know that they could find us. We can all find the rest of the stellar siblings just by the spectra. We all were in the same gas with the same mixture of chemicals. We just find the same mixture of chemicals, and we've found the siblings. They could look out and find our siblings.
We have this next piece of data: The universe is empty. The galaxy is empty. They've been around for 100 million years; if they wanted to take over the galaxy, they could have. Easy, in 100 million years. But they didn't.
To explain that, I think we have to postulate that they have some rule against expansion. They decided that they did not want to lose their community and central governance and allow their descendants to change and be strange and compete with them. They chose to keep their civilization local and, therefore, to effectively ban any colonists from leaving. And we have to assume not only that that was their plan, but that they succeeded … for 100 million years. That's really hard.
They didn't allow their generation ships to come floating through our solar system.
No, they did not allow any substantial colonization away from their home world for a hundred million years. That's quite a capability. They may have stagnated in many ways, but they have maintained order in this thing. Then they realize that they have siblings. They look out and they can see them. And now they have to realize we are at risk of breaking the rule. If they just let us evolve without any constraints, then we might well expand out. The rule they maintained for a hundred million years to preserve their precious coherence would be for naught. Because we would violate it. We would become the competitors they didn't want.
That creates an obvious motive for them to be here. A motive to allow an exception. Again, they haven't allowed pretty much any expansion. But they're going to travel thousands of light-years from there to here to allow an expedition here, which risks their rule. If this expedition goes rogue, the whole game is over. So we are important enough that they're going to allow this expedition to come here to try to convince us not to break the rule. But not just to kill us, because they could have just killed us. Clearly, they feel enough of an affiliation or a sibling connection of some sort that they didn't just kill us. They want us to follow their rule, and that's why they're here.
So that all makes sense.
Could aliens be sort of “domesticating” us right now?
But then we still have the last part to explain. How, exactly, do they expect to convince us? And how does hanging out at the edge of our visibility do that? You have to realize whoever from home sent out this expedition, they didn't trust this expedition very much. They had to keep them pretty constrained. So they had to prove some strategy early on that they thought would be pretty robust, that could plausibly work, that isn't going to allow these travelers to have much freedom to go break their rules. Very simple, clean strategy. What's that strategy? The idea is, pretty much all social animals we know have a status hierarchy. The way we humans domesticate other animals is … what we usually do is swap in and sit at the top of their status hierarchy. We are the top dog, the top horse, whatever it is. That's how we do it. That's a very robust way that animals have domesticated other animals. So that's their plan. They're going to be at the top of the status hierarchy. How do they do that? They just show up and be the most impressive. They just fly around and say, “Look at me. I'm better.”
You don't need to land on the National Mall. You just need to go 20 times faster than our fastest jet. That says something right there.
Once we're convinced they exist, we're damn impressed. In order to be at the top of our status hierarchy, they need to be impressive. But they also need to be here and relatively peaceful. If they were doing it from light-years away, then we'd be scared and threatened. They need to be here at the top of our status hierarchy, being very impressive. Now it would be very impressive, of course, if they landed on the White House lawn and started talking to us, too. But that's going to risk us not liking something.
As you know, we humans have often disliked other humans for pretty minor things: just because they don't eat the kind of foods we do or marry the way we do or things like that.
If they landed on the White House lawn, someone would say, “We need to plan for an invasion.”
The risk is that if they showed up and told us a lot about themselves, gave us their whole history and videos of their home world and everything else, we're going to find something we hate. We might like nine things out of 10. But that one thing we hate, we're going to hate a lot. And unfortunately, humans are not very forgiving of that, right? Or most creatures. This is their fear scenario. If they showed too much, then game over. We're not going to defer to them as the top of our status hierarchy, because they're just going to be these weird aliens. They need to be here, but not show very much to us. The main thing they need to show is how impressive they are and that they're peaceful. And their agenda — but we can figure out the agenda. Just right now, we can see why they're here: because the universe is empty, so they didn't fill it; they must have a rule against that, and we'd be violating the rule. Ta-da. They can be patient. They're in no particular rush. They can wait for us to figure out what we believe or not. Because they just have to hang around and be there until we decide we believe it. And then everything else follows from that.
As you were describing that, it reminded me of the television show The Young Pope. We have a young Pope, and he starts off by not appearing because he thinks part of his power comes from an air of mystery and this mystique. In a way, what you're saying is that's what these aliens would be doing.
Think of an ancient emperor. The ancient emperor was pretty weird. Typically, an emperor came from a whole different place and was a different ethnicity or something from the local people.
How does an emperor in the ancient world get the local people to obey them? They don't show them a lot of personal details, of course. They just have a really impressive palace and impressive parades and an army. And then everybody goes, “I guess they're the top dog.” Right. And that's worked consistently through history.

I like “top dog” better than apex predator, by the way.

Would an advanced alien civilization renew our interest in progress?

I wrote about this, and the scenario I came up with is kind of what you just described: We know they're here, and we know they have advanced technology. But that's it. We don't meet them. I would like to think that we would find it really aspirational. That we would think, “Wow. We are nowhere near the end. We haven't figured it all out. We haven't solved all we need to know about physics or anything else.” What do you think of that idea? And what do you think would be the impact of that kind of scenario where they didn't give us their gadgets, we just know they're there and advanced? What does that do to us?

All through history, humans haven't quite dared to think that they could rule their fate. They had gods above them who were more in control. It's only in the last few centuries where we've taken on ourselves this sense that we're in charge of ourselves and we get to decide our future. If real aliens show up and they really are much more powerful, then we have to revise that back to the older stance of, “Okay, there are gods. They have opinions, and I guess we should pay attention.” But if these are gods who once were us, that's a different kind of god. And that wasn't the ancient god. That's a different kind of god that we could then aspire to. We can say, “These gods were once like us. We could become like them. And look how possible it is.”

Now, of course, we will be suspicious of whether we can trust them and whether we should admire them. And that's where not saying very much will help.
They just show up and they are just really powerful. They just don't tell us much. And they say, “We're going to let you guys work that out. You get the basics.” I think we would be inspired, but also deflated a bit that we aren't in charge of ourselves. If they have an agenda and it's contradicting ours, they're going to win. We lose. It's going to be pretty hard.

Is America on the verge of a pro-progress renaissance?

We've had this stagnation relative to what our expectations were in the immediate postwar decades. I would like to think I'm seeing some signs that maybe that's changing. Maybe our attitude is changing. Maybe we're getting to more of a pro-progress, progress-embracing phase of our existence. Maybe 50 years of this after 50 years of that.

There are two distinctions here that are importantly different. One is the distinction between caution and risk. The other is between fear and hope. Unfortunately, it just seems that fear and hate are just much stronger motives for most humans than hope. We've had this caution, due to fear. I think the best hope for aggression or risk-taking is also fear or hate. That is, if we can find a reason, say, “We don't want those Russians to win the war, and therefore we're going to do more innovation.” Or those people tell us we can't do it, and therefore we're going to show that we can. Many people recently have entered the labor force and then been motivated by, “Those people don't think we're good enough, and we're going to show we're good enough and what we can do.”

If you're frightened enough about climate change, then at some point you'll think, “We need all of the above. If that's nuclear, that's fine. If it's digging super deep into the Earth…”

If you could make strong enough fear. I fear that's just actually showing that people aren't really that afraid yet. If they were more afraid, they would be willing to go more for nuclear. But they're not actually very afraid. Back in 2003, I was part of this media scandal about the Policy Analysis Market.
Basically, we had these prediction markets that were going to make estimates about Middle Eastern geopolitical events. And people thought that was a terrible sort of thing to do. It didn't fit their ideals of how foreign policy estimates should be produced. And one of the things I concluded from that event was that they just weren't actually very scared of bad things happening in the Middle East. Because if so, they wouldn't have minded this, if this was really going to help them make those things go better.

And we actually saw that in the pandemic. I don't think we ever got so scared in the pandemic that we did what we did in World War II. As you may know, in the beginning of World War II we were losing. We were losing badly, and we consistently were losing. And we got scared and we fired people and fired contractors and changed things until we stopped losing. And then we eventually won. We never fired anybody in the pandemic. Nobody lost their job. We never reorganized anything and said, “You guys are doing crap, and we're going to hand the job to this group.” We were never scared enough to do that. That's part of why it didn't go so well. The one thing that went well is when we said, “Let's set aside the usual rules and let you guys go for something.”

We got scared of Sputnik and 10 years later there's an American flag on the Moon.

Right. And that was quite an impressive spurt, initially driven by fear.

Perhaps if we're scared enough of shortages or scared enough of climate change or scared enough that the Chinese are going to come up with a super weapon, then that would be a catalyst for a more dynamic, innovative America, maybe.

I'm sorry for this to be a negative sign, but I think the best you can hope for optimism is that some sort of negative emotion would drive more openness and more risk taking.

Innovation is a fantastic free lunch, it seems like. And we don't seem to value it enough until we have to.

For each one of us, it risks these changes.
And we'd rather play it safe. You might know about development in the US. We have far too little housing in the US. The main reason we have far too little housing is we've empowered a lot of local individual critics to complain about various proposals. They basically pick just all sorts of little tiny things that could go wrong. And they say, “You have to fix this and fix that.” And that's what takes years. And that's why we don't have enough housing and building, because we empower those sorts of very safety-oriented, tiny, “if any little things go wrong, then you've got to deal with it” sort of thinking. We have to be scared enough of something else. Otherwise those fears dominate.
Few economists think more creatively and also more rigorously about the future than Robin Hanson, my guest on this episode of Faster, Please! — The Podcast. So when he says a future of radical scientific and economic progress is still possible, you should take the claim seriously. Robin is a professor of economics at George Mason University and author of the Overcoming Bias blog. His books include The Age of Em: Work, Love, and Life When Robots Rule the Earth and The Elephant in the Brain: Hidden Motives in Everyday Life.

In This Episode:

* Economic growth over the very long run (1:20)
* The signs of an approaching acceleration (7:08)
* Global governance and risk aversion (12:19)
* Thinking about the future like an economist (17:32)
* The stories we tell ourselves about the future (20:57)
* Longtermism and innovation (23:20)

Next week, I'll feature part two of my conversation with Robin, where we discuss whether we are alone in the universe and what alien life means for humanity's long-term potential.

Below is an edited transcript of our conversation.

Economic growth over the very long run

James Pethokoukis: Way back in 2000, you wrote a paper called “Long-Term Growth as a Sequence of Exponential Modes.” You wrote, “If one takes seriously the model of economic growth as a series of exponential … [modes], then it seems hard to escape the conclusion that the world economy will likely see a very dramatic change within the next century, to a new economic growth mode with a doubling time perhaps as short as two weeks.” Is that still your expectation for the 21st century?

Robin Hanson: It's my expectation for the next couple of centuries. Whether it's the 21st isn't quite so clear.

Has anything happened in the intervening two decades to make you think that something might happen sooner rather than later … or rather, just later?

Just later, I'm afraid.
I mean, we have a lot of people hyping AI at the moment, right?

Sure, I may be one of them on occasion.

There are a lot of people expecting rapid progress soon. And so, I think I've had a long enough baseline there to think, “No, maybe not.” But let's go with the priors.

Is it a technological mechanism that will cause this? Is it AI? Is it that we find the right general-purpose technology, and then that will launch us into very, very rapid growth?

That would be my best guess. But just to be clear for our listeners, we just look at history, and we seem to see these exponential modes. There are, say, four of them so far (if we go pre-human). And then the modes are relatively steady and then have pretty sharp transitions. That is, the transition to a growth rate of 50 or 200 times faster happens within less than a doubling time.

So what was the last mode?

We're in industry at the moment: doubles roughly every 15 years, started around 1800 or 1700. The previous mode was farming, doubled every thousand years. And so, in roughly less than a thousand years, we saw this rapid transition to our current thing, less than the doubling time. The previous mode before that was foraging, where humans doubled roughly every quarter million years. And in definitely less than a quarter million years, we saw a transition there. So then the prediction is that we will see another transition, and it will happen in less than 15 years, to a faster growth mode. And then if you look at the previous increases in growth rates, they were, again, a factor of 60 to 200. And so, that's what you'd be looking for in the next mode. Now, obviously, I want to say you're just looking at a small data set here. Four events. You can't be too confident. But, come on, you've got to guess that maybe a next one would happen.

If you go back to that late ‘90s period, there was a lot of optimism.
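As a back-of-the-envelope check, the growth-mode arithmetic Hanson describes above can be sketched in a few lines. The doubling times are round illustrative numbers matching the conversation, not his precise estimates, and the helper name is just for this sketch:

```python
# Approximate doubling times for each growth mode, in years (round
# numbers from the conversation, not precise estimates).
modes = {"foraging": 250_000, "farming": 1_000, "industry": 15}

def annual_growth_rate(doubling_years):
    """Annual growth rate implied by a doubling time T: 2**(1/T) - 1."""
    return 2 ** (1 / doubling_years) - 1

prev = None
for name, t in modes.items():
    rate = annual_growth_rate(t)
    note = f", {prev / t:.0f}x faster than the prior mode" if prev else ""
    print(f"{name:>8}: doubles every {t:,} yr (~{rate:.3%}/yr){note}")
    prev = t

# A hypothetical next mode doubling roughly monthly would be
# 15 years / 1 month = 180x faster than industry, in the same ballpark
# as the 60-200x jumps of past transitions.
print(f"monthly doubling would be {15 * 12}x faster than industry")
```

The industry-mode rate works out to roughly 4.7 percent per year, which lines up with the "current 4 percent growth rate, doubling every 15 years" figure mentioned later in the conversation.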
If you pick up Wired magazine back then, [there was] plenty of optimism that something was happening, that we were on the verge of something. One of my favorite examples — and a sort of non-technologist example — was a report from Lehman Brothers from December 1999. It was called “Beyond 2000.” And it was full of predictions, maybe not talking about exponential growth, but how we were in for a period of very fast growth, like 1960s-style growth. It was a very bullish prediction for the next two decades. Now Lehman did not make it another decade itself. These predictions don't seem to have panned out — maybe you think I'm being overly pessimistic on what's happened over the past 20 years — but do you think it was because we didn't understand the technology that was supposedly going to drive these changes? Did we do something wrong? Or is it just that a lot of people who love tech love the idea of growth, and we all just got too excited?

I think it's just a really hard problem. We're in this world. We're living with it. It's growing really fast. Again, doubling every 15 years. And we've long had this sense that it's possible for something much bigger. So automation, the possibility of robots, AI: It sat in the background for a long time. And people have been wondering, “Is that coming? And if it's coming, it looks like a really big deal.” And roughly every 30 years, I'd say, we've seen these bursts of interest in AI and public concern, like media articles, you know…

We had the ‘60s. Now we have the ‘90s…

The ‘60s, ‘90s, and now again, 2020. Every 30 years, a burst of interest and concern about something that's not crazy. Like, it might well happen. And if it was going to happen, then the kind of precursor you might expect to see is investors realizing it's about to happen and bidding up assets that were going to be important for that to really high levels. And that's what you did see around ‘99. A lot of people thought, “Well, this might be it.”

Right.
The market test for the singularity seemed to be passing.

A test that is not actually being passed quite so much at the moment.

Right.

So, in some sense, you had a better story then in terms of, look, the investors seem to believe in this.

You could also look at harder economic numbers, productivity numbers, and so on.

Right. And we've had a steady increase in automation over, you know, centuries. But people keep wondering, “We're about to have a new kind of automation. And if we are, will we see that in new kinds of demos or new kinds of jobs?” And people have been looking out for these signs of, “Are we about to enter a new era?” And that's been the big issue. It's like, “Will this time be different?” And so, I've got to say this time, at the moment, doesn't look different. But eventually, there will be a “this time” that'll be different. And then it'll be really different. So it's not crazy to be watching out for this and maybe taking some chances betting on it.

The signs of an approaching acceleration

If we were approaching a kind of acceleration, a leap forward, what would be the signs? Would it just be kind of what we saw in the ‘90s?

So the scenario is, within a 15-year period, maybe a five-year period, we go from a current 4 percent growth rate, doubling every 15 years, to maybe doubling every month. A crazy-high doubling rate. And that would have to be on the basis of some new technology, and therefore, investment. So you'd have to see a new promising technology that a lot of people think could potentially be big. And then a lot of investment going into that, a lot of investors saying, “Yeah, there's a pretty big chance this will be it.” And not just financial investors. You would expect to see people — like college students deciding to major in that, people moving to wherever it is. That would be the big sign: investment moving toward that thing. And the key thing is, you would see actual big, fast productivity increases.
There'd be some companies in cities who were just booming. You were talking about stagnation recently: The ‘60s were faster than now, but that's within a factor of two. Well, we're talking about a factor of 60 to 200.

So we don't need to spend a lot of time on the data measurement issues. Like, “Is productivity up 1.7 percent, 2.1?”

If you're a greedy investor and you want to be really in on this early so you buy it cheap before everybody else, then you've got to be looking at those early indicators. But if you're like the rest of us wondering, “Do I change my job? Do I change my career?” then you might as well wait until you see something really big. So even at the moment, we've got a lot of exciting demos: DALL-E, GPT-3, things like that. But if you ask for commercial impact and ask them, “How much money are people making?” they shrug their shoulders and they say, “Soon, maybe.” But that's what I would be looking for in those things. When people are generating a lot of revenue — so it's a lot of customers making a lot of money — then that's the sort of thing to maybe consider.

Something I've written about, probably too often, is the Long Bets website. And two economists, Robert Gordon and Erik Brynjolfsson, have made a long bet. Gordon takes the role of techno-pessimist, Brynjolfsson techno-optimist. Let me just briefly read the bet in case you don't happen to have it memorized: “Private Nonfarm business productivity growth will average over 1.8 percent per year from the first quarter of 2020 to the last quarter of 2029.” Now, if it does that, that's an acceleration. Brynjolfsson says yes. Gordon says no…

But you want to pick a bigger cutoff. Productivity growth in the last decade is maybe half that, right? So they're looking at a doubling. And a doubling is news, right? But, honestly, a doubling is within the usual fluctuation. If you look over, say, the last 200 years, we see that sometimes some cities grow faster, some industries grow faster.
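To put rough numbers on the bet just quoted, here is a minimal sketch. The 0.9 percent per year baseline is an illustrative assumption (the transcript only says the prior decade was "maybe half" of 1.8 percent), and the regime-change figures use Hanson's 60-to-200x range:

```python
# Rough arithmetic on the Gordon-Brynjolfsson bet vs. a true mode change.
# Assumption: ~0.9%/yr prior-decade baseline ("maybe half" of 1.8%).

def level_after(annual_rate, years=10):
    """Cumulative growth factor after compounding at a constant annual rate."""
    return (1 + annual_rate) ** years

# The bet's threshold vs. the assumed recent baseline:
print(f"1.8%/yr for a decade -> {level_after(0.018):.2f}x level")
print(f"0.9%/yr for a decade -> {level_after(0.009):.2f}x level")

# A transition like the historical ones would multiply the growth rate
# itself by 60-200, e.g. a 4%/yr economy jumping to 240-800%/yr.
low, high = 0.04 * 60, 0.04 * 200
print(f"regime-change growth rate: {low:.0%}/yr to {high:.0%}/yr")
```

The point of the comparison is Hanson's: winning the bet means roughly a 1.20x rather than 1.09x decade, a doubling of the growth rate, while a mode change would be two orders of magnitude larger.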
You know, we have this steady growth rate, but it contains fluctuations. I think the key thing, as always, when you're looking for a regime change, is you're looking at — there's an average and a fluctuation — when is a new fluctuation out of the range of the previous ones? And that's when I would start to really pay attention, when it's not just the typical magnitude. So honestly, that's within the range of the typical magnitudes you might expect if we just had an unusually productive new technology, even if we stay in the same mode for another century.

When you look at the enthusiasm we had at the turn of this century, do you think we did the things that would encourage rapid growth? Did we create a better ecosystem of growth over the past 20 years or a worse one?

I don't think the past 20 years have been especially a deviation. But I think slowly since around 1970, we have seen a decline in our support for innovation. I think increasing regulations, increasing size of organizations in response to regulation, and just a lot of barriers. And even more disturbingly, I think it's worth noting, we've seen a convergence of regulation around the world. If there were 150 countries, each of which had different independent regulatory regimes, I would be less concerned. Because if one nation messes it up and doesn't allow things, some other nation might pick up the slack. But we've actually seen pretty strong convergence, even in this global pandemic. So, for example, challenge trials were an idea voiced early on, but no nation allowed them. Anywhere. And even now, they've hardly been tried. And if you look at nuclear energy, electromagnetic spectrum, organ sales, medical experimentation — just look at a lot of different regulatory areas, even airplanes — you just see an enormous convergence worldwide. And that's a problem because it means we're blocking innovation the same everywhere.
And so there's just no place to go to try something new.

Global governance and risk aversion

There's always concern in Europe about their own productivity, about their technological growth. And they're always putting out white papers in Europe about what [they] can do. And I remember reading that somebody decided that Europe's comparative advantage was in regulation. Like that was Europe's superpower: regulation.

Yeah, sure.

And speaking of convergence, a lot of people who want to regulate the tech industry here have been looking to what Europe is doing. But Europe has not shown a lot of tech progress. They don't generate the big technology companies. So that, to me, is unsettling. Not only are we converging, but we're converging sometimes toward the least productive areas of the advanced world.

In a lot of people's minds, the key thing is the dangers that tech might pose. And they look to Europe and they say, “Look how they're providing security there. Look at all the protections they're offering against the various kinds of insecurity we could have. Surely, we want to copy them for that.”

I don't want to copy them for that. I'm willing to take a few risks.

But many people want that level of security. So I'm actually concerned about this over the coming centuries. I think this trend is actually a trend toward not just stronger global governance, but stronger global community, or even mobs, if we call it that. That is the reason why nuclear energy is regulated the same everywhere: the regulators in each place are part of a world community, and they each want to be respected in that community. And in order to be respected, they need to conform to what the rest of the community thinks.
And that's going to just keep happening more over the coming centuries, I fear.

One of my favorite shows, and one of the more realistic science-fiction shows and book series, is The Expanse, which takes place a couple hundred years in the future, where there's a global government, which seems to be a democratic global government. I'm not sure how efficient it is. I'm not sure how entrepreneurial it is. Certainly the evidence seems to be that global governance does not lead to a vibrant, trial-and-error, experimenting kind of ecology. But just the opposite: one that focuses on safety and caution and risk aversion.

And it's going to get a lot worse. I have a book called The Age of Em: Work, Love, and Life When Robots Rule the Earth, and it's about very radical changes in technology. And most people who read about that, they go, “Oh, that's terrible. We need more regulations to stop that.” I think if you just look toward the longer run of changes, most people, when they start to imagine the large changes that will be possible, they want to stop that and put limits and control it somehow. And that's going to give even more of an impetus to global governance. That is, once you realize how our children might become radically different from us, then that scares people. And they really, then, want global governance to limit that.

I fear this is going to be the biggest choice humanity ever makes, which is, in the next few centuries we will probably have stronger global governance, stronger global community, and we will credit it for solving many problems, including war and global warming and inequality and things like that. We will like the sense that we've all come together and we get to decide what changes are allowed and what aren't. And we limit how strange our children can be. And even though we will have given up on some things, we will just enjoy … because that's a very ancient human sense, to want to be part of a community and decide together.
And then a few centuries from now, there will come this day when it's possible for a colony ship to leave the solar system to go elsewhere. And we will know by then that if we allow that to happen, that's the end of the era of shared governance. From that point on, competition reasserts itself, war reasserts itself. The descendants who come out there will then compete with each other and come back here and impose their will here, probably. And that scares the hell out of people.

Indeed, that's the point of [The Expanse]. It's kind of a mixed bag with how successful Earth's been. They didn't kill themselves in nuclear war, at least. But the geopolitics just continues and that doesn't change. We're still human beings, even if we happen to be living on Mars or Europa. All that conflict will just reemerge.

Although, I think it gets the scale wrong there. I think as long as we stay in the solar system, a central government will be able to impose its rule on outlying colonies. The solar system is pretty transparent. Anywhere in the solar system you are, if you're doing something somebody doesn't like, they can see you and they can throw something at you and hit you. And so I think a central government will be feasible within the solar system for quite some time. But once you get to other star systems, that ends. It's not feasible to punish colonies 20 light-years away when you don't get the message of what they did [until] 20 years later. That just becomes infeasible then. I would think The Expanse is telling a more human story because it's happening within this solar system. But I think, in fact, this world government becomes a solar-system government, and it allows expansion to the solar system on its terms. But it would then be even stronger as a centralized governance community which prevents change.

Thinking about the future like an economist

In a recent blog post, you wrote that when you think about the future, you try to think about it as an economist.
You use economic analysis “to predict the social consequences of a particular envisioned future technology.” Have futurists not done that? Futurism has changed. I've written a lot about the classic 1960s futurists, who were these very big, imaginative thinkers. They tended to be pretty optimistic. And then they tended to get pessimistic. And then futurism became kind of like marketing, like these were brand-awareness people, not really big thinkers. When they approached it, did they approach it as technologists? Did they approach it as sociologists? Are economists just not interested in this subject?

Good question. So I'd say there are three standard kinds of futurists. One kind of futurist is a short-term marketing consultant who's basically telling you which way the colors will go or the market demand will go in the short term.

Is neon green in or lime green in, or something.

And that's economically valuable. Those people should definitely exist. Then there's a more aspirational, inspirational kind of futurist. And that's changed over the decades, depending on what people want to be inspired by or afraid of. In the ‘50s and ‘60s, it might be about America going out and becoming powerful. Or later it's about the environment, and then it's about inequality and gender relations. In some sense, science fiction is another kind of futurism. And these two tend to be related in the sense that science fiction mainly focuses on an indirect way to tell metaphorical stories about us. Because we're not so interested in the future, really; we're interested in us. Those are futures serving various kinds of communities, but neither of them is that realistically oriented. They're not focused on what's likely to actually happen.
They're focused on what will inspire people or entertain people or make people afraid or tell a morality tale.

But if you're interested in what's actually going to happen, then my claim is you want to just take our standard best theories and just straightforwardly apply them in a thoughtful way. So many people, when they talk about the future, they say, “It's just impossible to say anything about the future. No one could possibly know; therefore, science fiction speculations are the best we can possibly do. You might as well go with that.” And I think that's just wrong. My demonstration in The Age of Em is to say, if you take a very specific technology scenario, you can just turn the crank with Econ 101, Sociology 101, Electrical Engineering 101, all the standard things, and just apply it to that scenario. And you can just say a lot. But what you will find out is that it's weird. It's not very inspiring, and it doesn't tell the perfect horror story of what you should avoid. It's just a complicated mess. And that's what you should expect, because that's what we would seem to our ancestors. [For] somebody 200 or 2000 years ago, our world doesn't make a good morality tale for them. First of all, they would just have trouble getting their head around it. Why did that happen? And [what] does that even mean? And then they're not so sure what to like or dislike about it, because it's just too weird. If you're trying to tell a nice morality tale, [you have] simple heroes and villains, right? And this is too messy. The real futures you should just predict are going to be too messy to be a simple morality tale. They're going to be weird, and that's going to make them hard to deal with.

The stories we tell ourselves about the future

Do you think it matters, the kinds of stories we tell ourselves about what the future could hold? My bias is, I think it does.
I think it matters: If all we paint for people is a really gloomy picture, then not only is it depressing, it's like, “What are we even doing here?” Because if we're going to move forward, if we're going to take risks with technology, there needs to be some sort of payoff. But yet, it seems like a lot of the culture continues. We mentioned The Expanse, which by the modern standard of a lot of science fiction, I find to be pretty optimistic. Some people say, “Well, it's not optimistic because half the population is on a basic income and there's war.” But, hey, there are people. Global warming didn't kill everybody. Nuclear war didn't kill everybody. We continued. We advanced. Not perfect, but society seems to be progressing. Has that mattered, do you think, the fact that we've been telling ourselves such terrible stories about the future? We used to tell much better ones.

The first-order theory about change is that change doesn't really happen because people anticipated or planned for it or voted on it. Mostly this world has been changing as a side effect of lots of local economic interests and technological interests and pursuits. The world is just on this train with nobody driving, and that's scary and should be scary, I guess. So to the first order, it doesn't really matter what stories we tell or how we think about the future, because we haven't actually been planning for the future. We haven't actually been choosing the future.

It kind of happens while we're doing something else.

The side effect of other things. But that's the first order, that's the zeroth-order effect. The next-order effect might be … look, places in the world will vary in the extent to which they win or lose over the long run. And there are things that can radically influence that. So being too cautious and playing it safe too much and being comfortable, predictably, will probably lead you to not win the future.
If you're interested in having us — whoever “us” is — win the future or have a bright, dynamic future, then you'd like “us” to be a little more ambitious about such things. I would think it is a complement: The more we are excited about the future, and the future requires changes, the more we are telling ourselves, “Well, yeah, this change is painful, but that's the kind of thing you have to do if you want to get where we're going.”

Long-term thinking and innovation

If you've been reading the New York Times lately or the New Yorker, you've likely come across something called “effective altruism.” The idea is that there are big, existential problems facing the world, and we should be thinking a lot harder about them because people in the future matter too, not just us. And we should be spending money on these problems. We should be doing more research on these problems. What do you think about this movement? It sounds logical.

Well, if you just compare it to all the other movements out there and their priorities, I've got to give this one credit. Obviously, the future is important.

They are thinking directly about it. And they have ideas.

They are trying to be conscious about that and proactive and altruistic about that. And that's certainly great compared to the vast majority of other activity. Now, I have some complaints, but overall, I'm happy to praise this sort of thing. The risk is, as with most futurism, that even though we're not conscious of it, what we're really doing is sort of projecting our issues now into the future and sort of arguing about future stuff by talking about our stuff. So you might say people seem to be really concerned about the future of global warming in two centuries, but all the other stuff that might happen in two centuries, they're not at all interested in. It's like, what's the difference there?
They might say global warming lets them tell this anti-materialist story that they'd want to tell anyway: why it's bad to be materialist, and so cutting back on material stuff is good. And it's sort of a pro-environment story. I fear that that's also happening to some degree in effective altruism. But that's just what you should expect for humans in general. Effective altruists, in terms of their focus on the future, are overwhelmingly focused, as far as I can tell, on artificial intelligence risk. And I think that's a bit misdirected. In a big world, I don't mind it …

My concern is that we'll be super cautious, and before we have developed anything that could really create existential risk … we will never get to the point where it's so powerful because, like the Luddites, we'll have quashed it early on out of fear.

A friend of mine is Eric Drexler, who years ago was known for talking about nanotechnology. Nanotechnology is still a technology of the future. And he experienced something that made him a little unsure whether he should have said all the things he said, which is that once you can describe a vivid future, the first thing everybody focuses on is almost all the things that can go wrong. Then they set up policy to try to focus on preventing the things that can go wrong. That's where the whole conversation goes. And then people distance themselves from it. He found that many people distanced themselves from nanotechnology until they could take over the word, because in their minds it reflected these terrible risks. So people wanted to not even talk about that. But you could ask: If he had just inspired people to make the technology but not talked about the larger policy risks, maybe that would be better?
It might in fact be true that the world today is so broken that if ordinary people and policymakers don't know about a future risk, the world's better off, because at least they won't mess it up by trying to limit it and control it too early and too crudely.

Then the challenge is, maybe you want the technologists who might make it to hear about it and get inspired, but you don't want everybody else to be inspired to control it and correct it and channel it and prepare for it. Because honestly, that seems to go pretty badly. I guess the question is: what technology that people did see well ahead of time did they not come up with terrible scenarios to worry about?

For example, television: People didn't think about television very much ahead of time. And when it came, a lot of people watched it. And a lot of people complained about that. But imagine predicting ahead of time that in 20 years people were going to spend five hours a day watching this thing. If that's an accurate prediction, people would've freaked out.

Or cars: As you may know, in the late 1800s, people just did not envision the future of cars. When they envisioned the future of transportation, they saw dirigibles and trains and submarines, even, but not cars. Because cars were these individual things. And if they had envisioned the actual future of cars — automobile accidents, individual people controlling a thing going down the street at 80 miles an hour — they might have thought, "That's terrible. We can't allow that." And you have to wonder… It was only in the United States, really, that cars took off. There's a sense in which the world had rapid technological progress around 1900 or so because the US was an exception worldwide.
A lot of technologies were only really tried in the US, like even radio, and then the rest of the world copied and followed because the US had so much success with them.

I think if you want to pick a point where that optimistic '90s came to an end, it might have been, speaking of Wired magazine, the Bill Joy article … "Why the Future Doesn't Need Us." Talking about nanotech and gray goo… Since you brought up nanotech and Eric Drexler, do you know what the state of that technology is? We had this nanotechnology initiative, but I don't think it was working on that kind of nanotech.

No, it wasn't.

It was more like materials science. But as far as creating these replicating tiny machines…

The federal government had a nanotechnology initiative, where they basically took all the stuff they were doing that was dealing with small stuff and relabeled it. They didn't really add more money. They just put it under a new initiative. And then they made sure nobody was doing anything like the sort of dangerous stuff that could cause what Eric was talking about.

Stuff you'd put in sunscreen…

Exactly. So there was still never much funding there. There's a sense in which, in many kinds of technology areas, somebody can envision ahead of time a new technology that would be possible if a concentrated effort went into a certain area in a certain way. And they're trying to inspire that. But absent that focused effort, you might not see it for a long time. That would be the simplest story about nanotech: We haven't seen the focused effort and resources that he had proposed. Now, that doesn't mean that had we had those efforts he would've succeeded. He could just be wrong about what was feasible and how soon. But nevertheless, that still seemed to be an exciting, promising technology that would've been worth the investment to try.
And still is, I would say.

One concern I have about the notion of longtermism is that it seems to place a lot of emphasis on our ability to rally people, get them thinking long term, taking preparatory steps. And we've just gone through a pandemic which showed that we don't do that very well. And the way we dealt with it was not through preparation, but by being a rich, technologically advanced society that could come up with a vaccine. That's my kind of longtermism, in a way: being rich and technologically capable so you can react to the unexpected.

And that's because we allowed an exception in how vaccines were developed in that case. Had we gone with the usual way vaccines had been developed before, it would've taken a lot longer. So the problem is that when we make too many structures that restrain things, then we aren't able to quickly react to new circumstances. You probably know that most companies might have a forecasting department, but they don't fund it very much. They don't actually care that much. Almost everything most organizations do is reactive. That's just the fact of how most organizations work. Because, in fact, it is hard to prepare. It's hard to anticipate things.

I'm not saying we shouldn't try to figure out ways to deflect asteroids. We should. To have this notion of longtermism over a broad scope of issues … that's fine. But I hope we don't forget the other part, which is making sure that we do the right things to create those innovative ecosystems where we do increase wealth, we do increase our technological capabilities, so as to not be totally dependent on our best guesses right now.

Here's a scary example of how this thinking can go wrong, in my mind. In the longtermism community, there's this serious proposal that many people like, which is called the Long Reflection.

The Long Reflection, which is: we've solved all the problems, and then we take a time out.

We stop allowing change for a while.
And for a good long time, maybe a thousand years or even longer, we're in this period where no substantial change happens. Then we talk a lot about what we could do to deal with things when things are allowed to change again. And we work it all out, and then we turn it back on and allow change. That's giving a lot of credit to this system of talking.

Who's talking? Are these post-humans talking? Or is it people like us?

It would be before the change, remember. So it would be people like us. I actually think this is this ancient human intuition from the forager world, before the farming era, where in the small band the way we made most important decisions was to sit down around the campfire and discuss it, and then decide together, and then do something. And that's, in some sense, how everybody wants to make all the big decisions. That's why they like a world government and a world community, because it goes back to that. But I honestly think we have to admit that just doesn't go very well lately. We're not actually very capable of having a discussion together and weighing all the options and making choices and then deciding together to do it. That's how we want to be able to work, and that's how we maybe should, but it's not how we are. I feel that with the Long Reflection, once we institutionalize a world where change isn't allowed, we would get pretty used to that world.

It seems very comfortable, and we'd start voting for security.

And then we wouldn't really allow the Long Reflection to end, because that would be this risky step into the strange world. We would like the stable world we were in. And that would be the end of that.

I should say that I very much like Toby Ord's book, The Precipice. He's also one of my all-time favorite guests. He's really been a fantastic guest. Though I do have concerns about the Long Reflection.

Come back next Thursday for part two of my conversation with Robin Hanson.
Experts on how the price of chicken, a staple protein for most South Africans, will increase in the coming weeks as the war in Eastern Europe persists. Chris Yelland, energy analyst and MD at EE Business Intelligence, on how the changing global economy will make bid-window 6 for renewable energy more costly. Bronwyn Williams, trend translator and future finance specialist at Flux Trends, reviews the book "The Elephant in the Brain: Hidden Motives in Everyday Life" by Kevin Simler and Robin Hanson. See omnystudio.com/listener for privacy information.
Dr. Robin Hanson is an associate professor of economics at George Mason University and the coauthor of The Elephant in the Brain: Hidden Motives in Everyday Life. He joins us along with guest host Dr. Alex Williams to explore what clients are subconsciously signaling when they seek therapy – and what therapists are signaling to others when they enter the field. Plus, why is feedback-informed treatment similar to paying a friend $100 for a home-cooked meal? Thank you for listening. To support the show and receive access to regular bonus episodes, check out the Very Bad Therapy Patreon community. Introduction: 0:00 – 4:54 Part One: 4:54 – 1:18:27 Very Bad Therapy: Website / Facebook / Bookshelf / Tell Us Your Story Show Notes: Robin Hanson: Contact / Twitter Alex Williams: Contact / Twitter The Elephant in the Brain: Hidden Motives in Everyday Life
On 'current history', or what might be going on out there. Subscribe at: paid.retraice.com Details: what's GOOT; current history; hypotheses [and some predictions]; What's next? Complete notes and video at: https://www.retraice.com/segments/re17 Air date: Monday, 7th Mar. 2022, 4:20 PM Eastern/US. 0:00:00 what's GOOT; 0:01:35 current history; 0:04:30 hypotheses [and some predictions]; 0:13:38 What's next? Copyright: 2022 Retraice, Inc. https://retraice.com
Human beings are primates, and primates are political animals. Our brains, therefore, are designed not just to hunt and gather but also to help us get ahead socially, often via deception and self-deception. But while we may be self-interested schemers, we benefit by pretending otherwise. The less we know about our own ugly motives, the better - and thus, we don't like to talk, or even think, about the extent of our selfishness. This is "the elephant in the brain". Such an introspective taboo makes it hard for us to think clearly about our nature and the explanations for our behaviour. The aim of this book, then, is to confront our hidden motives directly - to track down the darker, unexamined corners of our psyches and blast them with floodlights. Then, once everything is clearly visible, we can work to better understand ourselves: Why do we laugh? Why are artists sexy? Why do we brag about travel? You won't see yourself - or the world - the same after confronting the elephant in the brain. We welcome Robin Hanson, author of a multitude of titles, including the focus of today's episode: "The Elephant in the Brain: Hidden Motives in Everyday Life."
A pioneer of prediction markets since the 1980s, Robin Hanson is the author of two books (The Elephant in the Brain, The Age of Em) and has a popular blog called Overcoming Bias. Hanson is also an Associate Professor of Economics at George Mason University, and Research Associate at the Future of Humanity Institute of Oxford University.

In this episode:
(00:00) — Episode begins
(01:07) — What inspired idea futures?
(06:00) — Decision markets for organisations
(09:16) — Using prediction markets to overcome bias
(14:25) — Honesty razors in today's world
(17:08) — "An autist in the C-suite"
(24:25) — Ideamarket discussion begins
Round 1
(43:10) — Why Ideamarket results are not tied to any external truth
Round 2
(55:44) — Informed traders vs Noise traders
Round 3
(1:16:34) — The Colour Wheel of Truth — is Ideamarket a Keynesian Beauty Contest?

EPISODE LINKS:
- Robin on Twitter
- Overcoming Bias
- Robin's Bio
- Book: The Age of Em: Work, Love, and Life when Robots Rule the Earth
- Book: The Elephant in the Brain: Hidden Motives in Everyday Life

IDEAMARKET LINKS:
- Ideamarket Website
- Ideamarket on Twitter
- Ideamarket Discord
- Apple Podcasts
- Spotify

The Ideamarket Podcast is where venture philosophers share the ideas, trends, and concepts they're most bullish on.

About Ideamarket: Ideamarket is the credibility layer of the internet. Ideamarket allows the public to mainstream the world's best information using market signals, replacing media corporations as the arbiter of credibility. Get started now.
We kick off Season 9 with a classic: Part I of the a16z story. How did this brand new venture firm charge out of the gates in 2009, going from zero to disrupting the entire venture industry overnight? You probably know Marc & Ben's history with Netscape and Loudcloud/Opsware... but what about the Black Panthers, Nintendo 64, Al Gore, Doug Leone, Masayoshi Son, and an epic feud with Benchmark Capital that became Silicon Valley's version of the Hatfields and the McCoys? Buckle up, Acquired's got the truth. If you love Acquired and want more, join our LP Community for access to over 50 LP-only episodes, monthly Zoom calls, and live access for big events like emergency pods and book club discussions with authors. We can't wait to see you there. Join here at: https://acquired.fm/lp/ Sponsors: Thank you to Pilot for being our presenting sponsor for all of Acquired Season 9! Pilot takes care of startups' bookkeeping, tax and CFO services so busy founders can focus on what matters, which is building the company. To paraphrase Jeff Bezos's famous AWS analogy: bookkeeping and tax don't make your product any better — so you should let Pilot handle them for you. In fact Pilot is backed by Bezos himself via Bezos Expeditions, along with an all-star roster of other investors including Sequoia, Index, and Stripe. They are truly the gold standard for startup bookkeeping, and many of the companies we work with run on them. You can get in touch with Pilot here: https://bit.ly/acquiredfmpilot , and Acquired listeners get 20% off their first 6 months! (use the link above) Thank you as well to Pitchbook and to Nord Security. 
You can learn more about them at: https://bit.ly/acquiredpitchbook https://bit.ly/acquirednord

Links:
David Streitfeld's great NYT piece on the Horowitz family: https://www.nytimes.com/2017/07/22/technology/one-family-many-revolutions-from-black-panthers-to-silicon-valley-to-trump.html
Marc on the Tim Ferriss Show: https://tim.blog/2018/01/01/the-tim-ferriss-show-transcripts-marc-andreessen/
2003 Marc in SF Gate: https://www.sfgate.com/business/ontherecord/article/OPSWARE-INC-On-the-record-Marc-Andreessen-2525822.php#photo-2684736

Carve Outs:
Ben: The Elephant in the Brain: Hidden Motives in Everyday Life: https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995
David: Resonant Arc Podcast / YouTube Channel: https://www.youtube.com/channel/UCFzWAEPDGiY34bGpwM_DWmA
https://www.nytimes.com/1995/08/10/us/with-internet-cachet-not-profit-a-new-stock-is-wall-st-s-darling.html https://www.nytimes.com/2017/07/22/technology/one-family-many-revolutions-from-black-panthers-to-silicon-valley-to-trump.html https://www.quora.com/How-did-Netscape-Navigator-make-money https://www.sec.gov/Archives/edgar/data/1660134/000119312517080301/d289173ds1.htm https://www.sfgate.com/business/ontherecord/article/OPSWARE-INC-On-the-record-Marc-Andreessen-2525822.php#photo-2684736 https://www.statista.com/statistics/203734/global-smartphone-penetration-per-capita-since-2005/ https://www.theinformation.com/articles/these-guys-are-very-different-inside-andreessen-horowitzs-rise https://www.theringer.com/2017/6/8/16045766/jeff-jordan-andreessen-horowitz-vc-pickup-basketball-ab4e54928186 https://www.wired.com/1999/02/aol-names-andreessen-cto/ https://www.wired.com/story/andreessen-horowitz-new-crypto-fund-iii/ https://www.worth.com/a-decade-later-how-has-andreessen-horowitz-changed-silicon-valley/ https://www.wsj.com/articles/andreessen-horowitzs-returns-trail-venture-capital-elite-1472722381 https://www.wsj.com/articles/SB10001424053111903480904576512250915629460 https://www.wsj.com/articles/SB984080550858322401 https://youtu.be/PbW-1k3ZOA4 https://youtu.be/k5pbximmZdI http://cseweb.ucsd.edu/~little/OldSites/CSE_Uptime/v4.7-8/xmosaic.html https://www.baltimoresun.com/news/bs-xpm-1999-09-11-9909110235-story.html https://www.wikiwand.com/en/List_of_web_browsers http://www.internethistorypodcast.com/2014/01/mosaic/ http://www.internethistorypodcast.com/2014/01/chapter-1-part-2-netscape-the-big-bang/ http://www.internethistorypodcast.com/2014/02/chapter-1-part-3-netscape-the-big-bang/
“Humans play with social rules - that way we can test the boundaries without actually violating the rules.”

Robin Hanson is a pioneer in rigorous futurism, an economics professor at George Mason University, a Future of Humanity Institute research associate, and the founder of OvercomingBias. In this podcast episode, Hanson discusses his book The Elephant in the Brain. Hanson argues that our brains are designed to help us get ahead socially, often via deception and self-deception. The less we know about our own ugly motives, the better - and thus we don't like to talk about how selfish we might really be. This is "the elephant in the brain."

Session Summary: Robin Hanson: Enlightening Hidden Motives & Social Agendas @Foresight Institute - YouTube

Music: Comfortable Mystery 4 - Film Noire by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ I Knew a Guy by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/

The Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.

Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize. Apply to Foresight's virtual salons and in-person workshops here! We are entirely funded by your donations.
If you enjoy what we do, please consider donating through our donation page. Visit our website for more content, or join us here: Twitter, Facebook, LinkedIn. Every word ever spoken on this podcast is now AI-searchable using Fathom.fm, a search engine for podcasts. Hosted on Acast. See acast.com/privacy for more information.
In this episode of the CoachDrague Podcast: ✓ Should you follow dating advice that comes from women? ✓ Why, when it comes to seduction, should you not listen to women? ✓ When should you listen to women about seduction? When should you follow their advice, and when is it better not to? ✓ What should you pay attention to in order to know whether or not to follow a woman's dating advice? What criteria should you use? ✓ What are the best sources of information on seduction? ✓ According to women, what attracts them most in a man: physical appearance, or status and power? ✓ According to women, do pickup techniques work or not? ✓ What do women think of the classic advice given to men, "be natural and just be yourself"? ✓ Can a woman sleep with a guy she doesn't find attractive? ✓ Why do women sometimes give men unsolicited dating advice? What are their motives? ✓ A dating coach tells you to do X, a woman tells you to do Y. Who is right? Who should you listen to? LINKS AND RESOURCES MENTIONED IN THIS PODCAST: How to tell if a girl is in love with you ↪ https://www.coachdrague.com/blog/fille-amoureuse-comment-savoir/ TED Talks ↪ https://www.ted.com/talks The Dunning-Kruger effect ↪ https://fr.wikipedia.org/wiki/Effet_Dunning-Kruger Ultracrepidarianism ↪ https://fr.wikipedia.org/wiki/Ultracr%C3%A9pidarianisme The Elephant in the Brain: Hidden Motives in Everyday Life ↪ https://amzn.to/3fx66pR I, Mammal: How to Make Peace With the Animal Urge for Social Power ↪ https://amzn.to/3cGGllr Le Script: what to say, word for word, to approach a girl, keep the conversation going, get her contact details, set up a romantic date, and close on that date?
↪ http://www.coachdrague.com/produits/le-script/ Micro-coaching ↪ http://www.coachdrague.com/services/micro-coaching/ The "testimonials" page on the CoachDrague blog ↪ https://www.coachdrague.com/blog/temoignages-et-remerciements/
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science, master's degrees in physics and philosophy, and nine years of experience as a research programmer in artificial intelligence and Bayesian statistics. With over 3100 citations and sixty academic publications, he's recognized not only for his contributions to economics (especially pioneering the theory and use of prediction markets), but also for the wide range of fields in which he's been published. He is the author of The Age of Em: Work, Love, and Life When Robots Rule the Earth. Robin has strong and controversial views (backed by his research) regarding various institutions in society, and discusses how many routine activities we take for granted carry hidden motives rooted in the evolution of ourselves and our society. Some of the points we touch on include how charities don't really exist to help others, how our schools don't really exist to educate students, and how our political expression isn't actually about choosing wise policies. Show Links https://twitter.com/robinhanson https://overcomingbias.com Book Links (Aff Links) The Elephant in the Brain: Hidden Motives in Everyday Life - https://amzn.to/38sIPRD The Age of Em: Work, Love, and Life When Robots Rule the Earth - https://amzn.to/3epFuqj The Hanson-Yudkowsky AI-Foom Debate - https://amzn.to/3cd4Che Show Sponsor (25% Off Code: SUCCESS) https://getmr.com/
“At every single stage [of processing information]—from its biased arrival, to its biased encoding, to organizing it around false logic, to misremembering and then misrepresenting it to others—the mind continually acts to distort information flow in favor of the usual goal of appearing better than one really is.” —Robert Trivers

In this episode, I speak with author and intellectual Robin Hanson. Robin is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. Robin has a bachelor's and a master's degree in physics, a Ph.D. in social science, and he has also researched artificial intelligence at Lockheed and NASA. The topic of conversation for this episode centered around a recent book of his, one which he co-authored with Kevin Simler, titled The Elephant in the Brain: Hidden Motives in Everyday Life.

What is the Elephant in the Brain?
Basically, it is a blind spot about how our minds work. As social creatures, we are wired to greatly care about what others think of us. And like all primates, our complex social behavior involves the politics of coalitions and norm enforcement—although grooming does serve a hygienic purpose, primates like chimpanzees use grooming for political purposes as well. Human beings don't groom each other this obviously, but we are constantly judging each other. We are watching each other to make sure that our social norms are being followed and to judge whether people will be good allies. And we are worried about them judging us the same way. So in this desire to look good, we often downplay our more selfish motives and amplify our more altruistic ones. And the disturbing thing is that our brain does this unconsciously, keeping “us” in the dark. To quote from the book: "We, human beings, are a species that’s not only capable of acting on hidden motives—we’re designed to do it.
Our brains are built to act in our self-interest while at the same time trying hard not to appear selfish in front of other people. And in order to throw them off the trail, our brains often keep “us,” our conscious minds, in the dark. The less we know of our own ugly motives, the easier it is to hide them from others."

When it comes to choosing who we want in our social circles, we tend to want teammates who value the group over their selfish desires. And we rely on social signals to get this information and to make sure the signals are honest. But lying is a cheap signal—a strategy that allows one to reap the benefits without paying the price. And this setup created an evolutionary arms race between lying and lie detection.

George Costanza's Lying
While we may think that the contents of our minds are private, we signal much more than we realize. And people monitor each other closely. So it turns out that the best way to lie is to follow George Costanza's advice: "Remember—it's not a lie if you believe it." Because of this, our selfish motives remain hidden away in our subconscious so that our conscious minds can believe—and thus convincingly communicate to others—our nicer sounding and more group-oriented motives. And the same goes for our institutions, which are often acting out secret agendas alongside the accepted and better sounding official agendas. Another quote from the book: “And they aren’t mere mouse-sized motives, scurrying around discreetly in the back recesses of our minds. These are elephant-sized motives large enough to leave footprints in national economic data."

Red Pill or Blue Pill?
It can be disturbing to get into the workings of the mind like this—it is a brutally honest view of human beings and our institutions. It means you have to get rid of the nicer and more prosocial explanations for human behavior and replace them with the hidden selfish motives that actually drive us.
And while this might be easy to do on other people, it's quite difficult to do on yourself. In this... Support this podcast
In this episode, Brian Beckcom speaks with Professor Robin Hanson about the unconscious motives that drive human behavior and their impact on our everyday lives. Brian and Professor Hanson talk about how to confront our hidden motives, examine them, and see clearly so that we can better understand ourselves and our fellow human beings. Robin Hanson is the co-author of The Elephant in the Brain: Hidden Motives in Everyday Life. In his book, Robin explains how our minds actually work. He explains how and why we deceive ourselves and others. And he describes how our unconscious motives impact more than just our private behavior; they influence our institutions, art, medicine, schools, and politics. Robin Hanson’s work is especially relevant today, considering the bizarre place we find ourselves in history.

Brian and Professor Robin Hanson discuss:
How he transitioned from STEM fields into social science
Predetermined human behavior and the duplicity of free will
Why humans act based on hidden motives and why we fail to detect them
How our unconscious motives have shaped the political landscape we see today
The essence of science and the differences between “experts” and “elites”
How to effectively deal with disagreements on difficult topics
Why so many spouses hate cryonics (the low-temperature freezing and storage of a human corpse or severed head)
Why we haven’t seen aliens and when to expect them!
And other things

Robin D. Hanson is an economics professor at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science from the California Institute of Technology, master’s degrees in physics and philosophy from the University of Chicago, and nine years of experience as a research programmer at Lockheed Martin and NASA.
Professor Hanson has 4510 citations, a citation h-index of 33, and over ninety academic publications ranging from Algorithmica and Information Systems Frontiers to Social Philosophy and Social Epistemology. Robin has diverse research interests, with papers on spatial product competition, product bans, evolutionary psychology, voter information incentives, incentives to fake expertise, self-deception in disagreement, wiretaps, image reconstruction, the origin of life, the survival of humanity, and interstellar colonization. To learn more about Professor Robin Hanson, please visit his bio at https://www.overcomingbias.com/bio.
My guest today is Robin Hanson, the co-author of The Elephant in the Brain: Hidden Motives in Everyday Life. Robin is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. In his book, Robin explores the hidden (and oftentimes darker) motives in everyday life. In our conversation, Robin talks about the hidden meaning of body language, how humans deceive themselves and others, the dark motivation behind charity donations, how sex, status, and politics all play a role in our motives, and much more. If you're into evolutionary psychology or have ever wondered why humans act the way they do, you'll find this conversation fascinating.

TIMESTAMPS:
[00:45] - What is the elephant in the brain?
[03:45] - Hidden motives in chimps
[05:54] - The 3 main games people play: sex, status, and politics
[07:58] - Social norms & hiding our darker intentions
[13:20] - Understanding the secret message of body language
[18:12] - The hidden motives behind buying luxury items
[23:32] - The darker motives of why people give to charities
[27:47] - Hidden motives of speaking
[31:29] - Why people laugh & how it serves as a signal to others
[36:17] - How this book can help you develop a new mental model of the world
[41:06] - The hidden motive behind the educational system
[44:52] - Two books that had a big impact on Robin

Learn more about the author:
Twitter: @robinhanson
Website: hanson.gmu.edu

If you enjoyed this podcast, please subscribe & write a positive review. Every week, I send out a free weekly newsletter with actionable advice from amazing books. Join 3,100+ readers here.

Connect with Alex & Books:
Twitter: @alexandbooks_
Instagram: @alexandbooks_
YouTube: Alex and Books
The hidden motives of everyday life
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a PhD in social science from Cal Tech, master's degrees in physics and philosophy, and nine years of experience as a research programmer in artificial intelligence and Bayesian statistics. He's recognized not only for his contributions to economics (pioneering the theory and use of prediction markets) but also for his work in a wide range of other fields. He is the author (along with Kevin Simler) of The Elephant in the Brain: Hidden Motives in Everyday Life. Connect with Robin Hanson: Get the book - https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995 http://www.overcomingbias.com/author/robin-hanson https://www.ted.com/speakers/robin_hanson Twitter: @robinhanson http://mason.gmu.edu/~rhanson/home.html Connect with Nick Holderbaum: https://www.primalosophy.com/ https://twitter.com/primalosophy https://www.youtube.com/channel/UCBn7jiHxx2jzXydzDqrJT2A If you enjoy the podcast, please leave a review on iTunes. https://podcasts.apple.com/us/podcast/the-primalosophy-podcast/id1462578947 If you would like to set up a consult call with Nick Holderbaum, you can schedule with him at https://www.primalosophy.com/health-coaching
Jared Janes and Jason Snyder reflect on recent episodes & themes, their evolving perspectives on memetic mediation, and a laundry list of related topics they think are worth exploring more. In this Episode of Both/And Episode #1 Who What Why? Baha'i Politics Tweet: Why meta? Episode #15 Being a Baha'i Creative with Samah Tokmachi Episode #16 Mind Fitness & Gender with Chance Lunceford Chance on Twitter Tweet: The plural universe Tweet: Diverging parallel realities The Four Quadrants Article: Memetic Engineering Scott Adams Book: The Elephant in the Brain: Hidden Motives in Everyday Life Emerge Podcast Episode: Daniel Schmachtenberger - Utopia or Bust Episode #10 Culture, Tech & Collaboration with Jess Euvie Ivanova Future Guests: Evan Driscoll, What Is Metamodern? & Jordan Hall Support Both/And by becoming a patron &/or subscribing & reviewing us on iTunes Jared Janes participates in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn commissions by linking to Amazon. In more human terms, this means that whenever you buy a book on Amazon from a link on here, a small percentage of its price is sent to us.
Why did you choose to listen to this episode? Why are you listening to it in one app versus another? Why are we asking you all these questions? Robin Hanson, our guest today, is here to shed some light on these questions and more. As the author of The Elephant in the Brain: Hidden Motives in Everyday Life, Robin has turned his focus to what makes humans tick and what motivates us to act and behave as we do. In this interview, Robin and Chad discuss behavior patterns, the mixed motivations that influence people, and how we can become more aware of these “elephants” in our brains. Mission Daily and all of our podcasts are created with love by our team at Mission.org. We own and operate a network of podcasts and a brand story studio designed to accelerate learning. Our clients include companies like Salesforce, Twilio, and Katerra, who work with us because we produce results. To learn more and get our case studies, check out Mission.org/Studios. If you’re tired of media and news that promotes fear, uncertainty, and doubt and want an antidote, you’ll want to subscribe to our daily newsletter at Mission.org. When you do, you’ll receive a mission-driven newsletter every morning that will help you start your day off right!
Robin Hanson (@robinhanson) is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University, with a doctorate in social science from CalTech and master's degrees in physics and philosophy from the University of Chicago. He has spent nine years as a research programmer at Lockheed and NASA, has 3500 citations, 60 publications, and 700 media mentions, and he blogs at OvercomingBias. Robin is also the co-author of The Elephant in the Brain: Hidden Motives in Everyday Life and the author of The Age of Em: Work, Love, and Life When Robots Rule the Earth. Hanson is credited with originating the concept of the Policy Analysis Market, a DARPA project to implement a market for betting on future developments in the Middle East. Hanson also created and supports a proposed system of government called futarchy, where policies would be determined by prediction markets. Hanson is a man willing to challenge conventional wisdom and norms, and has lately drawn criticism for his unconventional economics positions on sex, gender dynamics, and problems with today's society. You can listen right here on iTunes.

In today's episode we discuss:
* Unconventional approaches to legal reform which just might work
* How Robin proposes we fix America's immigration problem
* The sadistic yet serious future where your insurance company enforces the law
* Ways to use prediction markets to create better outcomes
* Why governments are outdated and often need a reset switch
* The reason social signaling is one of the biggest problems in society
* Separating facts and values and why it's almost impossible
* How true libertarianism could lead to Soviet-like surveillance and oppression
* The truth about isolationism and innovation
* Why governmental structure is making society more ineffective
* An interesting approach to end the prison system

Make a Tax-Deductible Donation to Support The Disruptors. The Disruptors is supported by the generosity of its readers and listeners. If you find our work valuable, please consider supporting us on Patreon, via Paypal, or with DonorBox powered by Stripe. Donate
Welcome to episode number 202, with Dr. Robin Hanson, co-author of The Elephant in the Brain: Hidden Motives in Everyday Life. Robin Hanson is associate professor of economics at George Mason University, and research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science from California Institute of Technology, master’s degrees in … The post 202: Robin Hanson | Career, Viewpoints And Articles From His Blog “Overcoming Bias” appeared first on The Armen Show.
This episode features:
-Why breakups are always the other person’s fault
-Why does love cause us to see our partner as better than they really are
-How much do people lie
-What do people lie about in their online dating profile
-Is it possible to detect lies
-What traits make somebody likable vs unlikable
-How do we deceive ourselves
-Why we often don’t understand our own motivations

Full transcript

References - Apply Psychology:
Anderson, N. H. (1968). Likableness ratings of 555 personality-trait words. Journal of Personality and Social Psychology, 9(3), 272.
Bond Jr, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214-234.
DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129(1), 74.
Helweg-Larsen, M., Sadeghian, P., & Webb, M. S. (2002). The stigma of being pessimistically biased. Journal of Social and Clinical Psychology, 21(1), 92-107.
Kurzban, R. (2011). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton University Press.
Simler, K., & Hanson, R. (2017). The Elephant in the Brain: Hidden Motives in Everyday Life. Oxford University Press.
Tetlock, P. E. (2017). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press.
Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39(5), 806.
OKCupid data on lying in online dating profiles
Deflategate poll data

Check This Rec: Edge.org
I’ve known Tucker Max for 2 decades now. 20 years ago, he emailed me for advice. I told him to quit his job. And he did. He’s the bestselling author of “I Hope They Serve Beer in Hell.” And I always tell people that if they want to study good writing, go read 10 pages of Tucker's books. “My stuff blew up... All I did was stand up and use my real name and tell my real stories about my real life.” So now Tucker wants to help other people get out there and tell their truth. Part 1 of my interview will tell you how to separate the truth from everything else. Then (in part 2), Tucker will tell you how to deal with fear, how to find your audience, and how to write. Links and Resources Scribe Media - scribemedia.com Tucker’s series of articles “Asshole to CEO” “I Hope They Serve Beer In Hell” by Tucker Max “Assholes Finish First” by Tucker Max “Hilarity Ensues” by Tucker Max “Sloppy Seconds: The Tucker Max Leftovers” by Tucker Max Tuckermax.com Follow Tucker on Facebook + Twitter Also Mentioned: "The Ultimate Guide to Self Publishing." This is my free guide for anyone who wants to write and self-publish their own book. I put together this guide to help you get started. Because, in my own experience, writing a book has led to more opportunities than anything else. Get my guide today at jamesaltucher.com/publish Billions Rounders My interview with Brian Koppelman Hunter S. Thompson author of “Fear and Loathing in Las Vegas” Nassim Taleb author of “Antifragile: Things That Gain from Disorder” My interview with Nassim Taleb Jordan Peterson author of “12 Rules for Life: An Antidote to Chaos” My interview with Jordan Peterson The four great titans of psychological thought: 1. William James 2. Sigmund Freud 3. Carl Jung 4. 
Alfred Adler “The Courage to Be Disliked: The Japanese Phenomenon That Shows You How to Change Your Life and Achieve Real Happiness” by Fumitake Koga and Ichiro Kishimi “Tall Poppy Syndrome” “The Last Black Unicorn” by Tiffany Haddish My interview with Tiffany Haddish “Linchpin: Are You Indispensable?” by Seth Godin Eric Weinstein Sarah Jeong Candace Owens “The Elephant in the Brain: Hidden Motives in Everyday Life” by Robin Hanson and Kevin Simler JT McCormick - CEO of Scribe Media Alex Jones Karl Marx I write about all my podcasts! Check out the full post and learn what I learned at jamesaltucher.com/podcast. Thanks so much for listening! If you like this episode, please subscribe to “The James Altucher Show” and rate and review wherever you get your podcasts: Apple Podcasts Stitcher iHeart Radio Spotify Follow me on Social Media: Twitter Facebook Linkedin Instagram See omnystudio.com/listener for privacy information.
I've known Tucker Max for 2 decades now. 20 years ago, he emailed me for advice. I told him to quit his job. And he did. He's the bestselling author of "I Hope They Serve Beer in Hell." And I always tell people that if they want to study good writing, go read 10 pages of Tucker's books. "My stuff blew up... All I did was stand up and use my real name and tell my real stories about my real life." So now Tucker wants to help other people get out there and tell their truth. This part 1 of my interview will tell you how to separate the truth from everything else. Then (in part 2), Tucker will tell you how to deal with fear, how to find your audience, and how to write. Links and Resources Scribe Media - scribemedia.com Tucker's series of articles "Asshole to CEO" "I Hope They Serve Beer In Hell" by Tucker Max "Assholes Finish First" by Tucker Max "Hilarity Ensues" by Tucker Max "Sloppy Seconds: The Tucker Max Leftovers" by Tucker Max Tuckermax.com Follow Tucker on Facebook + Twitter Also Mentioned: "The Ultimate Guide to Self Publishing." This is my free guide for anyone who wants to write and self-publish their own book. I put together this guide to help you get started. Because, in my own experience, writing a book has lead to more opportunities than anything else. Get my guide today at jamesaltucher.com/publish Billions Rounders My interview with Brian Koppleman Hunter S. Thompson author of "Fear and Loathing in Las Vegas" Nassim Taleb author of "Antifragile: Things That Gain from Disorder" My interview with Nassim Taleb Jordan Peterson author of "12 Rules for Life: An Antidote to Chaos" My interview with Jordan Peterson The four great titans of psychological thought: 1. William James 2. Sigmund Freud 3. Carl Jung 4. 
Alfred Adler "The Courage to Be Disliked: The Japanese Phenomenon That Shows You How to Change Your Life and Achieve Real Happiness" by Fumitake Koga and Ichiro Kishimi "Tall Poppy Syndrome" "The Last Black Unicorn" by Tiffany Haddish My interview with Tiffany Haddish "Linchpin: Are You Indispensable?" by Seth Godin Eric Weinstein Sarah Jeong Candace Owens "The Elephant in the Brain: Hidden Motives in Everyday Life" by Robin Hanson and Kevin Simler JT McCormick - CEO of Scribe Media Alex Jones Karl Marx I write about all my podcasts! Check out the full post and learn what I learned at jamesaltucher.com/podcast. Thanks so much for listening! If you like this episode, please subscribe to "The James Altucher Show" and rate and review wherever you get your podcasts: Apple Podcasts Stitcher iHeart Radio Spotify Follow me on Social Media: Twitter Facebook Linkedin Instagram ------------ What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience! Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air! ------------ Visit Notepd.com to read our idea lists & sign up to create your own! My new book, Skip the Line, is out! Make sure you get a copy wherever books are sold! Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President. I write about all my podcasts! Check out the full post and learn what I learned at jamesaltuchershow.com ------------ Thank you so much for listening! If you like this episode, please rate, review, and subscribe to "The James Altucher Show" wherever you get your podcasts: Apple Podcasts iHeart Radio Spotify Follow me on social media: YouTube Twitter Facebook LinkedIn
Robin Hanson is an economist, futurist, and blogger at overcomingbias.com. I've been following Robin for a while now because he's a genuine intellectual: he thinks, speaks, and writes intensely and prolifically about whatever he wants, even if it seems weird to other people. His recently published book, co-authored with Kevin Simler, is called The Elephant in the Brain: Hidden Motives in Everyday Life. In this podcast, we talked about the new book; his larger motivation behind the book; which minds Robin would like to change; the internet; Robin's strategic insights on how to be an intellectual, especially for young-ish academic types such as myself; the near future; Robin's ideas about "futarchy"; Robin's book The Age of Em, the profit motive and the space of institutions beyond the profit motive; and a few other things.
Robin Hanson (@robinhanson) is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University, with a doctorate in social science from Caltech and master's degrees in physics and philosophy from the University of Chicago. He spent nine years as a research programmer at Lockheed and NASA, has 3,500 citations, 60 publications, and 700 media mentions, and he blogs at OvercomingBias. Robin is the co-author (with Kevin Simler) of The Elephant in the Brain: Hidden Motives in Everyday Life and the author of The Age of Em: Work, Love, and Life when Robots Rule the Earth. Hanson is credited with originating the concept of the Policy Analysis Market, a DARPA project to implement a market for betting on future developments in the Middle East. Hanson also created and supports a proposed system of government called futarchy, in which policies would be determined by prediction markets. Hanson is a man willing to challenge conventional wisdom and norms, and has lately drawn criticism for his unconventional economics positions on sex, gender dynamics, and problems with today's society.
You can listen right here on iTunes. In our wide-ranging conversation, we cover many things, including: * The reasons our cultural values and norms are quickly changing * Why physics forced Robin to become an atheist * How Robin sees artificial intelligence progressing * The power of prediction markets and why we haven't seen more uptake * Why brain emulation may be the most likely future scenario * The reason Robin prefers to be more like a historian than a futurist * Why we'll never answer the hard problem of consciousness * Why economics is a great way to forecast the future * The problem with academia and education * Why Robin isn't worried about breakout AI * Why Robin is sceptical of blockchains * The reason Robin signed up for cryonics * What folks should know about AI boom and bust cycles. Transcript: Producing this podcast and transcribing the episode takes tons of time and resources. If you support FringeFM and the work we do, please consider making a tax-deductible donation. If you can't afford to support us, we completely understand as well, but an iTunes review or share on Twitter can go a long way too! So, brain emulation is the scenario where we port the software that's in the human brain now. Today, if you have an old computer running software that you like and you want that same kind of software running on a new computer, one approach is to stare at the software, try to guess how it works, and then write software on the new computer that works how you think it works on the old computer. But another approach is to write an emulator on the new computer that just makes the new computer look like the old computer to the software. If you can write an emulator, you can just move the software over and it works. You don't have to understand it, and you don't have to rewrite it. That's a big saving, because the software is complicated and messy. So the idea is to do that for the human brain: to make an emulator for the software in the human brain.
AI, automation, robotics: it seems as if every startup and large business today is looking at AI and the implications of automating both jobs and tasks to achieve greater efficiency and output.
We tell ourselves stories about why we do what we do. The reality is far more complicated. Robin Hanson is the co-author of The Elephant in the Brain: Hidden Motives in Everyday Life. See acast.com/privacy for privacy and opt-out information.
On February 2, 1685, England’s King Charles II was struck by a sudden illness. Fortunately, his physicians were the best of the best. To reassure the public, they kept it abreast of the King’s treatment regimen. King Charles was made to swallow a toxic metal; had blistering agents applied to his scalp; had pigeon droppings attached to his feet; was prodded with a red-hot poker; was given forty drops of ooze from “the skull of a man that was never buried”; and, finally, had crushed stones from the intestines of an East Indian goat forced down his throat. Sadly, despite these heroic efforts, he passed away the following week. Why did the doctors go this far? Prof. Robin Hanson, Associate Professor of Economics at George Mason University, suspects that on top of any medical beliefs they also had a hidden motive: it needed to be clear, to the king and the public, that the physicians cared enormously about saving His Royal Majesty. Only by going ‘all out’ would they be protected against accusations of negligence should the King die. Full transcript, summary, and links to articles discussed in the show. If you believe Hanson, the same desire to be seen to care about our family and friends explains much of what’s perverse about our medical system today. And not just medicine - Robin thinks we’re mostly kidding ourselves when we say our charities exist to help others, our schools exist to educate students, and our politics are about choosing wise policies. So important are hidden motives for navigating our social world that we have to deny them to ourselves, lest we accidentally reveal them to others. Robin is a polymath economist who has come up with surprising and novel insights in a range of fields, including psychology, politics, and futurology.
In this extensive episode we discuss his latest book with Kevin Simler, *The Elephant in the Brain: Hidden Motives in Everyday Life*, but also: * What was it like being part of a competitor group to the ‘World Wide Web’, and being beaten to the post? * If people aren’t going to school to learn, what’s education all about? * What split-brain patients tell us about our ability to justify anything * The hidden motivations that shape religions * Why we choose the friends we do * Why is our attitude to medicine mysterious? * What would it look like if people were focused on doing as much good as possible? * Are we better off donating now, when we’re older, or even waiting until well after our deaths? * How much of the behavior of ‘effective altruists’ can we assume is genuinely motivated by wanting to do as much good as possible? * What does Robin mean when he refers to effective altruism as a youth movement? Is that a good or bad thing? * And much more...
Sam Harris speaks with Robin Hanson about our hidden motives in everyday life. They discuss selfishness, hypocrisy, norms and meta-norms, cheating, deception, self-deception, education, the evolutionary logic of conversation, social status, signaling and counter-signaling, common knowledge, AI, and many other topics. Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a PhD in social science from Caltech, master’s degrees in physics and philosophy, and nine years of experience as a research programmer in artificial intelligence and Bayesian statistics. He’s recognized not only for his contributions to economics (pioneering the theory and use of prediction markets) but also for work in a wide range of other fields. He is the author (along with Kevin Simler) of The Elephant in the Brain: Hidden Motives in Everyday Life. Twitter: @robinhanson
Matt Granite has super savings on sneakers, Robin Hanson talks about "The Elephant in the Brain: Hidden Motives in Everyday Life", Cleveland Independents are in studio for the Morning Show Feud, and Paul Reiser talks about his March 24th appearance at the Hard Rock Rocksino--and stranger things... (get it?)
I got some new books in and some new content to share as always. This one includes some talk about those books, tangents like Kaylia’s book club episode, and other variety with people. new books I got from the library after I returned Homo Deus The Elephant In The Brain, the next book I will read … Continue reading "86: The Elephant In The Brain, Hidden Motives, And Science Authors" The post 86: The Elephant In The Brain, Hidden Motives, And Science Authors appeared first on The Armen Show.
Robin Hanson and Kevin Simler have written a book about the hidden motives in all of us: quite often, our brains get up to activities that we know little or nothing about. This isn’t just a question of regulating hormone levels or involuntary reflexes. Many of these involuntary behaviors are social signals, such as laughter or tears. Involuntary motives appear to underlie many forms of human sociability, including family formation, art, religion, and recreation. What are the implications for public policy? How can we understand politics and governance better in light of our hidden motives? Our discussion of The Elephant in the Brain: Hidden Motives in Everyday Life will focus on just these questions. See acast.com/privacy for privacy and opt-out information.
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a PhD in social science from Caltech, Master's in physics and philosophy from the University of Chicago and worked for nine years in artificial intelligence as a research programmer at Lockheed and NASA. He helped pioneer the field of prediction markets, and published The Age of Em: Work, Love and Life when Robots Rule the Earth, which was the topic of our discussion in a previous podcast episode back in 2016. His most recent book is entitled, The Elephant in the Brain: Hidden Motives in Everyday Life. He also blogs at OvercomingBias.com. The big mistake we are making – the ‘elephant in the brain’. the elephant in the room, n. An important issue that people are reluctant to acknowledge or address; a social taboo. the elephant in the brain, n. An important but unacknowledged feature of how our minds work; an introspective taboo. The elephant in the brain is the reason that people don’t do things they want to do. They have a lot of hidden motives. People think they do certain things for one reason but really do these things for a different reason. Some of the motives are unconscious. This may be due to many reasons but one of them is the desire/need to conform to social norms. The book, The Elephant in the Brain includes 10 areas of hidden motives in everyday life. These include: Body language Laughter Conversation Consumption Art Charity Education – one reason people really go to school is to ‘show off’ Medicine – it isn’t just about health – it’s also about demonstrating caring Religion Politics The puzzle of social status in the workplace is one to be explored. People are always working to improve their position within an organization but often the competition is ‘hidden’ by socially expected terms like ‘experience’ or ‘seniority’. To discuss one’s social status in the workplace is not acceptable. 
So, continuing to explore and think about people’s true motives can be beneficial. What you will learn in this episode: Why people have hidden motives. Are people just selfish? Why do companies have sexual harassment workshops? What could be alternative reasons to hold workplace meetings? How Robin and co-author Kevin Simler researched the book. Do we have the power to change our self-deceptive ways?
Why do we do the things we do? We like to think we have good reasons for the choices we make, but we may very well be fooling ourselves. In their intriguing new book, The Elephant in the Brain: Hidden Motives in Everyday Life, authors Robin Hanson and Kevin Simler explain how hardwired primate behavior, social norms, and evolution combine to obscure our motives...even (or maybe especially) from ourselves. While it’s easy to see how hiding our motives from others might bring about certain advantages, it’s harder to imagine why we would ever try to hide our reasons from ourselves. But Hanson argues that it’s no great mystery. "We prefer to attribute our behavior to the highest-minded motives,” he explains. “But often our behavior is better explained by less high-minded motives -- i.e., more selfish motives -- and we'd rather not look at that and acknowledge it." About Our Guest Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science, master's degrees in physics and philosophy, and nine years of experience as a research programmer in artificial intelligence and Bayesian statistics. With over 3600 citations and sixty academic publications, he's recognized not only for his contributions to economics, but also for the wide range of fields in which he's been published. His amazing blog OvercomingBias.com has had some eight million visits. He is the author of The Age of Em: Work, Love, and Life when Robots Rule the Earth. Music: www.bensound.com FF 003-698
Robin Hanson is a professor of economics at George Mason University. He is the author of The Age of Em and co-author of the new book The Elephant in the Brain: Hidden Motives in Everyday Life, which is a fascinating look into the real reasons that motivate people and society. Topics Discussed: - Robin's new book, The Elephant in the Brain - Split-brain patients - The reasons for vague language - The lack of correlation between medicine and health - Using gifts to show how much we care - The cost of violating norms - Why religious people are healthier and wealthier - Demonstrating group loyalty through ritual - Religion and politics - Signaling and education - Our choices have audiences - How signals change For full show notes visit isaacmorehouse.com/podcast
Firstly, I’d like to apologise to all listeners to the Economic Rockstar podcast for what seemingly appears to be me turning my back on the podcast and on you. I honestly never had planned for this and I had always intended to work hard and deliver great quality episodes to the best of my abilities with the most amazing, thought-provoking and inspiring economists to you every week. However, personal circumstances changed in my life and this impacted on the podcast. I felt that I couldn’t commit 100% to the time I had allocated to the podcast. In the meantime I’d like to thank all of you who have contacted me on Facebook, Twitter and by email enquiring about the podcast and wishing me well. I truly appreciate it and it was really nice to have my listeners get in touch and show a desire and hunger for more interviews. The realisation kicked in when I struggled to feel the natural enthusiasm that I previously had in the lead-up to and during each interview process. I honestly felt that it wasn’t fair to my guests and to you by not being fully present. The last interview that I recorded (prior to this most recent one) was early in 2017 with the distinguished economist and Nobel laureate Professor Vernon Smith, and ironically I felt that it was my best interview to date. I decided that I just wanted to know about the person rather than the discipline and I felt that this approach uncovered great insights into Professor Smith’s thinking and role as an economist. And perhaps it’s a coincidence that I’m releasing the first episode in almost a year on the day of Professor Smith’s birthday, January 1st. Happy 91st birthday, Vernon. I’ll release my interview with Vernon soon.
If you’re a fan of the podcast and would like to show your support in any way, please check out my Patreon page at patreon.com/economicrockstar where you can sign up for any of the awards for as little as $1 a month, or you can simply follow me on the Economic Rockstar Facebook page or on Twitter, or simply recommend the show to a friend, especially if they have never had the opportunity to study economics. So to begin again… In this week’s episode of the Economic Rockstar podcast I speak to Professor Robin Hanson, associate professor of economics at George Mason University. Professor Hanson has been on the podcast on two previous occasions, episodes 73 and 91, and has kindly joined me again for a hat-trick of episodes. We talk about his new book The Elephant in the Brain: Hidden Motives in Everyday Life, co-authored with Kevin Simler and available to buy in all good bookstores and, of course, online through Amazon, Barnes and Noble, Book Depository and more. Check Robin and Kevin’s website elephantinthebrain.com to explore the book in finer detail as well as some great content such as interviews, reviews and a TED talk on the subject. You can download or stream this 122nd episode as well as find all the links mentioned above at economicrockstar.com/robinhanson3
Robin Hanson returns to the podcast to discuss his new book, The Elephant in the Brain: Hidden Motives in Everyday Life, co-authored with Kevin Simler. As the subtitle suggests, the book looks at humans' hidden motives. Robin argues that these hidden motives are much more prevalent than our conscious minds assume. We are not conscious of the vast majority of the functions of our brains. This extends beyond the most basic things our brains do (such as commanding our hearts to beat every second or so) to many things we think of as higher-level cognitive tasks. Hanson and Simler argue that, if the brain were a corporation, the conscious mind wouldn't be the CEO but the press secretary. Most of the reasons our conscious brains give for our actions are actually ex-post rationalizations for decisions that have been made unconsciously and for reasons that aren't immediately obvious to us. As a press secretary, the conscious mind is better off not knowing if we are doing things for selfish reasons since that would make it more difficult to justify our actions to others. Some very compelling evidence for this thesis comes from studies of people with split brains. People with severe epilepsy have sometimes been treated by severing the connections between the two halves of their brains. Researchers noticed that when one side of the brain was fed information that led to a particular action (e.g. an instruction from the researcher to "stand up") the other side would construct a reason for the action (e.g. "I was thirsty and I got up to get a drink"). If the brain were truthfully answering these questions, it would say "I don't know." However, the split-brain patients confidently gave false answers apparently without realizing they were false. Hanson argues that neurotypical minds are doing the same thing: constructing justifications for our actions even if we aren't really aware of our true underlying motives. 
From the book's online description, "The aim of this book is to confront our hidden motives directly---to track down the darker, unexamined corners of our psyches and blast them with floodlights. Then, once our minds are more clearly visible, we can work to better understand human nature: Why do people laugh? Why are artists sexy? Why do we brag about travel? Why do we prefer to speak rather than listen?" We discuss this theory of the brain and how it applies to many areas of everyday life from medicine to body language. The Amazon links on this page are affiliate links. If this podcast convinced you to buy a copy of The Elephant in the Brain, doing so through one of these links will provide revenue to the podcast at no additional cost to yourself.
My guest today is Robin Hanson, an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is an expert on idea futures and markets, and was involved in the creation of the Foresight Institute's Foresight Exchange and DARPA's Future MAP project. He is co-author of “The Elephant in the Brain: Hidden Motives in Everyday Life.” And today Robin and Michael dive right into the heart of our hidden motives. Robin shows that once our brains are able to confront these blind spots, we can get a better grasp on ourselves and the motivations behind how we think, which of course can then lead to better policy. The topic is his book The Elephant in the Brain: Hidden Motives in Everyday Life. In this episode of Trend Following Radio we discuss: Hidden motives Humans as political animals Deception vs. self-deception Selfishness Understanding your motivations Jump in! --- I'm MICHAEL COVEL, the host of TREND FOLLOWING RADIO, and I'm proud to have delivered 10+ million podcast listens since 2012. Investments, economics, psychology, politics, decision-making, human behavior, entrepreneurship and trend following are all passionately explored and debated on my show. To start? I'd like to give you a great piece of advice you can use in your life and trading journey… cut your losses! You will find much more about that philosophy here: https://www.trendfollowing.com/trend/ You can watch a free video here: https://www.trendfollowing.com/video/ Can't get enough of this episode? You can choose from my thousand plus episodes here: https://www.trendfollowing.com/podcast My social media platforms: Twitter: @covel Facebook: @trendfollowing LinkedIn: @covel Instagram: @mikecovel Hope you enjoy my never-ending podcast conversation!
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is an expert on idea futures and markets, and was involved in the creation of the Foresight Institute’s Foresight Exchange and DARPA’s Future MAP project. He is co-author of “The Elephant in the Brain: Hidden Motives in Everyday Life.” And today Robin and Michael dive right into the heart of our hidden motives. “The Elephant in the Brain” helps confront hidden motives embedded in the brain, things people don’t like to talk about, also known as elephants in the room. Robin shows that once our brains are able to confront these blind spots, we can get a better grasp on ourselves and the motivations behind how we think, which of course can then lead to better policy. Think about it: Why does one person find another attractive? Why do we laugh? Robin answers these questions and more throughout his work. He forces you to dig into the deeper, darker parts of your psyche and look in the mirror. And Michael takes great pleasure in letting Robin reveal his awesome insights on today’s show. In this episode of Trend Following Radio: Hidden motives Humans as political animals Deception vs. self-deception Selfishness Understanding your motivations