Why might our brains be keeping us in the dark about our own motives? What's the reason humans give to charity? How do cultural norms lead to continual efforts to signal to our potential allies?

Robin Hanson is a professor of economics at George Mason University. His latest two books are titled The Elephant in the Brain: Hidden Motives in Everyday Life, and The Age of Em: Work, Love, and Life when Robots Rule the Earth.

Robin and Greg discuss the discrepancies between what we say and our true intentions. Robin shares how human interaction within our discussions is less about the content and more about social positioning and signaling. Robin talks about the intricate dance of conversations, where showing status, expressing care, and signaling allyship are at the forefront. They also wrestle with the concept of luxury goods and their role in consumer behavior, challenging the conventional wisdom about why we buy what we buy and the messages we're really sending with our choices.

*unSILOed Podcast is produced by University FM.*

Episode Quotes:

On the conscious mind and social norms
23:39: Humans have rules about what you're supposed to do and not supposed to do, especially regarding each other. And we really care a lot about our associates not violating those norms, and we're very eager to find rivals violating them and call them out on that. And that's just a really big thing in our lives. And in fact, it's so big that plausibly your conscious mind, the part of your mind I'm talking to, isn't the entire mind, you have noticed. You've got lots of stuff going on in your head that you're not very conscious of, but your conscious mind is the part of you whose job it is mainly to watch what you're doing and at all moments have a story about why you're doing it and why this thing you're doing, for the reason you're doing it, isn't something violating norms. If you didn't have this conscious mind all the time putting together the story, you'd be much more vulnerable to other people claiming that you're violating norms and accusing you of being a bad person for doing bad things.

Our individual incentive doesn't care much about norms
20:25: Sometimes norms are functional and helpful, and sometimes they're not. Our individual incentive doesn't care much about that. Our incentive is to not violate the norms and not be caught violating the norms, regardless of whether they're good or bad norms, regardless of what function they serve.

Why do people not want to subsidize luxury items, but they do subsidize education?
46:34: So part of the problem is that we often idealize some things and even make them sacred. And then, in their role as something sacred, we are willing to subsidize them and sacrifice for them. And then it's less about maybe their consequences and more about showing our devotion to the sacred. In some sense, sacred things are the things we are most eager to show our devotion to. And that's why people who want to promote things want us to see them as sacred. So, schools have succeeded in getting many people to see schools as a sacred venture and therefore worthy of extra subsidy. And they're less interested in maybe the calculation of the job consequences of education because they just see education itself as sacred.

On the notion of cultural drift
47:55: So the human superpower is cultural evolution. This is why we can do things so much better than other animals. The key mechanism of culture is that we copy the behaviors of others. In order to make that work, we have to differentially copy the behavior that's better, not the behavior that's worse. And to do that, we need a way to judge who is more successful so that we will copy the successful. So our estimate of what counts as success—who are the people around us who we will count as successful and worthy of emulation—is a key element of culture. And that's going to drive a lot of our choices, including our values and norms, which we're going to have compatible and matching with our concept of who around us is the most admirable, the most worthy of celebration and emulation.

Show Links:

Recommended Resources:
- François de La Rochefoucauld
- Microsociology
- Patek Philippe Watches
- Consumption
- Parochialism
- The Case against Education: Why the Education System Is a Waste of Time and Money
- Evolution

Guest Profile:
- Faculty Profile at George Mason University
- Blog - Overcoming Bias
- Podcast - Minds Almost Meeting
- Profile on LinkedIn
- Social Profile on X

His Work:
- Amazon Author Page
- The Elephant in the Brain: Hidden Motives in Everyday Life
- The Age of Em: Work, Love, and Life when Robots Rule the Earth
In addition to everything else, that is. Bradley sorts through recent developments in transportation with his comms guy Cory Epstein – who previously worked at Citi Bike, Lyft, and Transportation Alternatives. Plus, why the latest banking meltdown forces us to rethink our approach to regulation.

[1:20] First Republic Bank failure
[12:55] David Byrne's bike at the Met Gala
[14:46] Vision Zero and why traffic deaths haven't gone down
[20:20] How community boards and politics have hampered safe streets progress
[25:58] When, and how, do we get robotaxis in New York City?
[32:22] Should Citi Bike be free?
[36:58] What's the timeframe for flying cars?
[38:43] Recommendation of the Week: The Diplomat on Netflix

This episode was taped at P&T Knitwear at 180 Orchard Street — New York City's only free podcast recording studio.

Send us an email with your thoughts on today's episode: info@firewall.media

Subscribe to Bradley's weekly newsletter, follow Bradley on Twitter, and visit the Firewall website.

Mentioned on today's episode:
- American Road Deaths Show an Alarming Racial Gap by Adam Paul Susaneck, The New York Times (4/26/23)
- When a Walkable City Becomes a Death Trap by Ginia Bellafante, The New York Times (4/28/23)
- Weekly Dose of Optimism, Packy McCormick (4/28/23)
- Citi Bike Is Amazing. Citibank Should Pay For Every New Yorker's Membership by Amos Barshad, Hell Gate (4/26/23)
- Forget Tesla: You Can Get to Work Even Faster in This Vehicle (for Just $98,000) by Rob Lenihan, The Street (4/26/23)
“The machine that builds the machine” is Elon Musk's oft-cited vision of the automotive factory of the future. Robots will rule the roost and humans will only go there as customers. Other robot fans have even more radical visions of where we're headed.
Shanny Luft speaks with Vera Klekovkina (Department of World Languages), Joshua Horn (Department of Philosophy), and Tomi Heimonen (Department of Computer Science) about their thoughts and concerns about robots.

This episode was inspired by the UW-Stevens Point College of Letters and Science Community Engagement Series, "When Robots Rule the World." From the futuristic portrayal of robots in film to robots in the operating room, from the daily use of artificial intelligence (AI) in mundane tasks to the latest advances in the field of human-centered AI, a wide range of university faculty will explore the implications of When Robots Rule the World, in lectures and public events in Stevens Point.

More information about the Community Engagement Series is available here: https://www.uwsp.edu/2022-2023-community-engagement-series/

Here are some links to books and articles referenced in this episode:
- Kathleen Richardson, Sex Robots: The End of Love (https://www.amazon.com/Sex-Robots-Love-Kathleen-Richardson/dp/1509530282/ref=sr_1_1?crid=BSU2LR46L8H4&keywords=sex+robots+the+end+of+love&qid=1664985872&sprefix=sex+robots+the+end+of+love%2Caps%2C93&sr=8-1)
- David Levy, Love and Sex with Robots: The Evolution of Human-Robot Relationships (https://www.amazon.com/Love-Sex-Robots-Human-Robot-Relationships-ebook/dp/B000XUACXM/ref=sr_1_1?crid=JU9PPZX8YKQ7&keywords=love+and+sex+with+robots&qid=1664985898&qu=eyJxc2MiOiIxLjQ4IiwicXNhIjoiMS4yMyIsInFzcCI6IjEuMzAifQ%3D%3D&sprefix=love+and+sex+with+robots%2Caps%2C100&sr=8-1)
- Jean Baudrillard, Simulacra and Simulation (https://www.amazon.com/Simulacra-Simulation-Body-Theory-Materialism/dp/0472065211/ref=sr_1_1?keywords=simulation+and+simulacra&qid=1664985831&qu=eyJxc2MiOiIxLjE2IiwicXNhIjoiMC42NCIsInFzcCI6IjAuOTcifQ%3D%3D&sprefix=simulation+and+simul%2Caps%2C103&sr=8-1)
- Report submitted to the EU in 2017 about robot rights: www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html?redirect

---------------

Please rate and review No Cure for Curiosity in your favorite podcast app. Five-star reviews help other people find No Cure for Curiosity!

Our intro music was written by UWSP music student Derek Carden and our logo is by artist and graphic designer Ryan Dreimiller.
In this episode of Faster, Please! — The Podcast, I'm continuing last week's discussion with Robin Hanson, professor of economics at George Mason University and author of the Overcoming Bias blog. His books include The Age of Em: Work, Love and Life when Robots Rule the Earth and The Elephant in the Brain: Hidden Motives in Everyday Life.

(Be sure to check out last week's episode for the first part of my conversation with Robin. We discussed futurism, innovation, and economic growth over the very long run, among other topics. Definitely worth the listen!)

In part two, Robin and I talk about the possibility of extraterrestrial life. Earlier this year, the US House of Representatives held a hearing on what Washington now calls "unidentified aerial phenomena." While the hearing didn't unveil high-def, close-up footage of little green men or flying saucers, it did signal that Washington is taking UAPs more seriously. But what if we really are being visited by extraterrestrials? What would contact with an advanced alien civilization mean for humanity? It's exactly the kind of out-there question Robin considers seriously and then applies rigorous economic thinking to.

In This Episode:

* The case for extraterrestrial life (1:34)
* A model to explain UFOs (6:49)
* Could aliens be domesticating us right now? (13:23)
* Would advanced alien civilization renew our interest in progress? (17:01)
* Is America on the verge of a pro-progress renaissance? (18:49)

Below is an edited transcript of our conversation.

The case for extraterrestrial life

James Pethokoukis: In the past few years there have been a lot of interesting developments on the UFO — now UAP — front. The government seems to be taking these sightings far more seriously. Navy pilots are testifying. What is your take on all this?

Robin Hanson: There are two very different discussions and topics here. One topic is, "There are these weird sightings. What's with that?
And could those be aliens?" Another more standard, conservative topic is just, "Here's this vast empty universe. Are there aliens out there? If so, where?" So that second topic is where I've recently done some work and where I feel most authoritative, although I'm happy to also talk about the other subject as well. But I think we should talk first about the more conservative subject.

The more conservative subject, I think, is — and I probably have this maybe 50 percent correct — once civilizations progress far enough, they expand. When they expand, they change things. If there were a lot of these civilizations out there, we should be able to, at this point, detect the changes they've made. Either we've come so early that there aren't a lot of these kinds of civilizations out there … let me stop there and then you can begin to correct me.

The key question is: it looks like we soon could go out expanding and we don't see limits to how far we could go. We could fill the universe. Yet, we look out and it's an empty universe. So there seems to be a conflict there.

Where are the giant Dyson spheres?

One explanation is, we are so rare that in the entire observable universe, we're the only ones. And therefore, that's why there's nobody else out there. That's not a crazy position, except for the fact that we're early. The median star will last five trillion years. We're here on our star after only five billion years, a factor of 1000. Our standard best theory of when advanced life like us should appear, if the universe would stay empty and wait for it, would be near the end of a long-lived planet. That's when it would be most likely to appear.

There's this power of the number of hard steps, which we could go into, but basically, the chance of appearing should go as a power of this time. If there are, say, six hard steps, which is a middle estimate, then the chance of appearing 1000 times later would go as 1000 to the power of six. Which would be 10 to the 18th.
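For readers who want to check the hard-steps arithmetic, here is a minimal sketch in Python. The six-step count and the factor-of-1000 timing are the figures quoted in the conversation; the function name is ours, not Hanson's:

```python
# Hard-steps model: if advanced life requires n sequential "hard steps,"
# the chance of life having appeared by time t scales as t**n.
def relative_appearance_chance(time_factor: float, hard_steps: int) -> float:
    """How much likelier appearance is when `time_factor` more time has passed."""
    return time_factor ** hard_steps

# Median star lifetime (~5 trillion years) vs. our sun's age when we
# appeared (~5 billion years): a factor of 1000 more time available.
print(f"{relative_appearance_chance(1000, 6):.0e}")  # 1e+18
```

The point is how steep the power law is: under these assumptions, life "should" have been roughly a billion billion times more likely to appear near the deadline than now.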
We are just crazy early with respect to that analysis. There is a key assumption of the analysis, which is that the universe would sit and wait empty until we showed up. The simplest way to resolve this is to deny that assumption, to say, "The universe is not sitting and waiting empty. In fact, it's filling up right now. And in a billion years or two, it'll be all full. And we had to show up before that deadline." And then you might say, "If the universe is filling up right now, if right now the universe is half full of aliens, why don't we see any?"

We should be detecting signals, seeing things. We have this brand new telescope out there sitting a million miles away.

If we were sitting at a random place in the universe, that would be true. But we are the subject of a selection effect. Here's the key story: We have to be at a place where the aliens haven't gotten to yet. Because otherwise, they would be here instead of us. That's the key problem. If aliens expand at almost the speed of light, then you won't see them until they're almost here. And that means if you look backwards in our light cone — from our point, all the way backwards — almost all that light cone is excluded. Aliens couldn't be there because, again, if they had arisen there, they would be here now instead of us. The only places aliens could appear that we could see now would have to be just at the edge of that cone.

Therefore, the key explanation is aliens are out there, but everywhere the aliens are, we can't see them, because the aliens are moving so fast we don't see them until they're almost here. So the day on the clock is the thing telling you aliens are out there right now. That might seem counterintuitive. "How's the clock supposed to tell me about aliens? Shouldn't I see pictures of weird guys with antennae?" Something, right? I'm saying, "No, it's the clock.
The clock is telling you that they're out there." Because the clock is saying you're crazy early, and the best explanation for why you're crazy early is that they're out there right now.

But if we take a simple model of, they're arising in random places and random times, and we fit it to three key datums we know, we can actually get estimates for this basic model of aliens out there. It has the following key parameter estimates: They're expanding at, say, half the speed of light or faster; they appear roughly once per million galaxies, so pretty rare; and if we expanded out soon and met them, we'd meet them in a billion years or so. The observable universe has a trillion galaxies in it. So once per million galaxies means there are a lot of them that will appear in our observable universe. But it's not like a few stars over. This is really rare. Once per million galaxies. We're not going to meet them soon. Again, in a billion years. So there's a long time to wait here.

A model to explain UFOs

Based on this answer, I don't think your answer to my first question is "We are making contact with alien intelligence."

This simple model predicts strongly that there's just no way that UFOs are aliens. If this were the only possible model, that would be my answer. But I have to pause and ask, "Can I change the model to make it more plausible?" I tried to do this exercise; I tried to say, "How could I most plausibly make a set of assumptions that would have as their implication UFOs are aliens and they're really here?"

Is this a different model or are you just changing something key in that model?

I'm going to change some things in this model; I'll have to change several things. I'm going to make some assumptions so that I get the implication that some UFOs are aliens and they're doing the weird things we see. And the key question is going to be, "How many assumptions do you have to make, and how unlikely are they?" This is the argument about the prior on this theory.
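The scale those parameter estimates imply is easy to verify; a small sketch in Python, using only the rough figures quoted above:

```python
# Scale implied by the quoted estimates: civilizations arise roughly
# once per million galaxies, and the observable universe holds about
# a trillion galaxies.
GALAXIES_IN_OBSERVABLE_UNIVERSE = 1e12
GALAXIES_PER_CIVILIZATION = 1e6

civilizations = GALAXIES_IN_OBSERVABLE_UNIVERSE / GALAXIES_PER_CIVILIZATION
print(f"~{civilizations:,.0f} civilizations appearing in the observable universe")
# Rare locally (one per million galaxies), yet about a million in total.
```

That is the sense in which the model makes aliens both very rare and very numerous at the same time.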
Think of a murder trial. In a murder trial, somebody says A killed B. You know that the prior probability of that is like one in a million: One in 1000 people is killed in a murder, and they each know 1000 people. The idea that any one of those people killed them would be one in a million. So you might say, "Let's just dismiss this murder trial, because the prior is so low." But we don't do that. Why? Because it's actually possible in a typical murder trial to get concrete, physical evidence that overcomes a one-in-a-million prior.

So the analogy for UFOs would be, people say they see weird stuff. They say you should maybe think that's aliens. The first question you have to ask is, how a priori unlikely is that? If it was one in 10 to the 20 unlikely, you'd say, "There's nothing you could tell me to make me believe this. I'm just not going to look, because it's just so crazy."

There are a lot of pretty crazy explanations that aren't as crazy as that.

Exactly. But my guess is the prior is roughly one in a thousand. And with a one-in-a-thousand prior, you've got to look at the evidence. You don't just draw the conclusion on one in a thousand, because that's still low. But you've got to be willing to look at the evidence if it's one in a thousand. That's where I'd say we are.

Then the question is, how do I get one-in-a-thousand [odds]? I'm going to try to generate a scenario that is as plausible as possible and consistent with the key datums we have about UFOs. Here are the key datums. One is, the universe looks empty. Two is, they're here now. Three is, they didn't kill us. We're still alive. And four is, they didn't do the two obvious things they could do. They could have come right out and been really obvious and just slapped us in the face and said, "Here we are." That would've been easy. Or they could have been completely invisible. And they didn't do either of those. What they do is hang out at the edge of visibility. What's with that?
Why do that weird intermediate thing? We have to come up with a hypothesis that explains these things, because those are the things that are weird here.

The first thing I need to do is correlate aliens and us in space-time. Because if it was once randomly per million galaxies, that doesn't work. The way to do that is panspermia. Panspermia siblings, in fact. That is, Earth life didn't start on Earth. It started somewhere else. And that somewhere else seeded our stellar nursery. Our star was born with a thousand other stars, all in one place at the same time, with lots of rocks flying back and forth. If life was seeded in that stellar nursery, it would've seeded not just our Earth, but seeded life on many of those other thousand stars. And then they would've drifted apart over the last four billion years. And now they're in a ring around the galaxy. The scenario would be that one of those other planets developed advanced life before us.

The way we get it is we assume panspermia happened. We assume there are siblings, and that one of them came to our level before us. If that happened, the average time duration would be maybe 100 million years. It wouldn't have happened in the last thousand years or even million years. It would be a long time. Given this, we have to say, "Okay, they reached our level of advancement a hundred million years ago. And they're in the same galaxy as us; they're not too far away." We know that they could find us. We can all find the rest of the stellar siblings by just the spectra. We all were in the same gas with the same mixture of chemicals. We just find the same mixture of chemicals, and we've found the siblings. They could look out and find our siblings.

We have this next piece of data: The universe is empty. The galaxy is empty. They've been around for 100 million years; if they wanted to take over the galaxy, they could have. Easy, in 100 million years. But they didn't.
To explain that, I think we have to postulate that they have some rule against expansion. They decided that they did not want to lose their community and central governance and allow their descendants to change and be strange and compete with them. They chose to keep their civilization local and, therefore, to ban or prohibit, effectively, any colonists from leaving. And we have to assume not only that was their plan, but that they succeeded … for 100 million years. That's really hard.

They didn't allow their generation ships to come floating through our solar system.

No, they did not allow any substantial colonization away from their home world for a hundred million years. That's quite a capability. They may have stagnated in many ways, but they have maintained order in this thing. Then they realize that they have siblings. They look out and they can see them. And now they have to realize we are at risk of breaking the rule. If they just let us evolve without any constraints, then we might well expand out. The rule they maintained for a hundred million years to try to preserve their precious coherence would be for naught. Because we would violate it. We would become the competitors they didn't want.

That creates an obvious motive for them to be here. A motive to allow an exception. Again, they haven't allowed pretty much any expansion. But they're going to travel thousands of light-years from there to here to allow an expedition here, which risks their rule. If this expedition goes rogue, the whole game is over. So we are important enough that they're going to allow this expedition to come here to try to convince us not to break the rule. But not just to kill us, because they could have just killed us. Clearly, they feel enough of an affiliation or a sibling connection of some sort that they didn't just kill us. They want us to follow their rule, and that's why they're here.
So that all makes sense.

Could aliens be sort of “domesticating” us right now?

But then we still have the last part to explain. How, exactly, do they expect to convince us? And how does hanging out at the edge of our visibility do that? You have to realize whoever from home sent out this expedition, they didn't trust this expedition very much. They had to keep them pretty constrained. So they had to approve some strategy early on that they thought would be pretty robust, that could plausibly work, that isn't going to allow these travelers to have much freedom to go break their rules. A very simple, clean strategy. What's that strategy?

The idea is, pretty much all social animals we know have a status hierarchy. The way we humans domesticate other animals is … what we usually do is swap in and sit at the top of their status hierarchy. We are the top dog, the top horse, whatever it is. That's how we do it. That's a very robust way that animals have domesticated other animals. So that's their plan. They're going to be at the top of the status hierarchy. How do they do that? They just show up and are the most impressive. They just fly around and say, "Look at me. I'm better."

You don't need to land on the National Mall. You just need to go 20 times faster than our fastest jet. That says something right there.

Once we're convinced they exist, we're damn impressed. In order to be at the top of our status hierarchy, they need to be impressive. But they also need to be here and relatively peaceful. If they were doing it from light-years away, then we'd be scared and threatened. They need to be here at the top of our status hierarchy, being very impressive. Now it would be very impressive, of course, if they landed on the White House lawn and started talking to us, too. But that's going to risk us not liking something.
As you know, we humans have often disliked other humans for pretty minor things: just because they don't eat the kind of foods we do or marry the way we do or things like that.

If they landed on the White House lawn, someone would say, "We need to plan for an invasion."

The risk is that if they showed up and told us a lot about themselves, if they gave us their whole history and videos of their home world and everything else, we're going to find something we hate. We might like nine things out of 10. But that one thing we hate, we're going to hate a lot. And unfortunately, humans are not very forgiving of that, right? Or most creatures. This is their fear scenario. If they showed too much, then game over. We're not going to defer to them as the top of our status hierarchy, because they're just going to be these weird aliens. They need to be here, but not show very much to us. The main thing they need to show is how impressive they are and that they're peaceful. And their agenda — but we can figure out the agenda. Just right now, we can see why they're here: because the universe is empty, so they didn't fill it; they must have a rule against that, and we'd be violating the rule. Ta-da. They can be patient. They're in no particular rush. They can wait for us to figure out what we believe or not. Because they just have to hang around and be there until we decide we believe it. And then everything else follows from that.

As you were describing that, it reminded me of the television show The Young Pope. We have a young Pope, and he starts off by not appearing because he thinks part of his power comes from an air of mystery and this mystique. In a way, what you're saying is that's what these aliens would be doing.

Think of an ancient emperor. The ancient emperor was pretty weird. Typically, an emperor came from a whole different place and was a different ethnicity or something from the local people.
How does an emperor in the ancient world get the local people to obey them? They don't show them a lot of personal details, of course. They just have a really impressive palace and impressive parades and an army. And then everybody goes, "I guess they're the top dog." Right. And that's worked consistently through history.

I like "top dog" better than apex predator, by the way.

Would advanced alien civilization renew our interest in progress?

I wrote about this, and the scenario I came up with is kind of what you just described: We know they're here, and we know they have advanced technology. But that's it. We don't meet them. I would like to think that we would find it really aspirational. That we would think, "Wow. We are nowhere near the end. We haven't figured it all out. We haven't solved all we need to know about physics or anything else." What do you think of that idea? And what do you think would be the impact of that kind of scenario where they didn't give us their gadgets, we just know they're there and advanced? What does that do to us?

All through history, humans haven't quite dared to think that they could rule their fate. They had gods above them who were more in control. It's only in the last few centuries that we've taken on ourselves this sense that we're in charge of ourselves and we get to decide our future. If real aliens show up and they really are much more powerful, then we have to revise that back to the older stance of, "Okay, there are gods. They have opinions, and I guess we should pay attention." But if these are gods who once were us, that's a different kind of god. And that wasn't the ancient god. That's a different kind of god that we could then aspire to. We can say, "These gods were once like us. We could become like them. And look how possible it is."

Now, of course, we will be suspicious of whether we can trust them and whether we should admire them. And that's where not saying very much will help.
They just show up and they are just really powerful. They just don't tell us much. And they say, "We're going to let you guys work that out. You get the basics." I think we would be inspired, but also deflated a bit that we aren't in charge of ourselves. If they have an agenda and it's contradicting ours, they're going to win. We lose. It's going to be pretty hard.

Is America on the verge of a pro-progress renaissance?

We've had this stagnation relative to what our expectations were in the immediate postwar decades. I would like to think I'm seeing some signs that maybe that's changing. Maybe our attitude is changing. Maybe we're getting to more of a pro-progress, progress-embracing phase of our existence. Maybe 50 years of this after 50 years of that.

There are two distinctions here that are importantly different. One is the distinction between caution and risk. The other is between fear and hope. Unfortunately, it just seems that fear and hate are just much stronger motives for most humans than hope. We've had this caution, due to fear. I think the best hope for aggression or risk-taking is also fear or hate. That is, if we can find a reason to say, "We don't want those Russians to win the war, and therefore we're going to do more innovation." Or those people tell us we can't do it, and therefore you can. Many people recently have entered the labor force and been motivated by, "Those people don't think we're good enough, and we're going to show we're good enough and what we can do."

If you're frightened enough about climate change, then at some point you'll think, "We need all of the above. If that's nuclear, that's fine. If it's digging super deep into the Earth…"

If you could make strong enough fear. I fear that's just actually showing that people aren't really that afraid yet. If they were more afraid, they would be willing to go more for nuclear. But they're not actually very afraid. Back in 2003, I was part of this media scandal about the policy analysis market.
Basically, we had these prediction markets that were going to make estimates about Middle Eastern geopolitical events. And people thought that was a terrible sort of thing to do. It didn't fit their ideals of how foreign policy estimates should be produced. And one of the things I concluded from that event was that they just weren't actually very scared of bad things happening in the Middle East. Because if so, they wouldn't have minded this, if this was really going to help them make those things go better.

And we actually saw that in the pandemic. I don't think we ever got so scared in the pandemic that we did what we did in World War II. As you may know, in the beginning of World War II we were losing. We were losing badly, and we consistently were losing. And we got scared and we fired people and fired contractors and changed things until we stopped losing. And then we eventually won. We never fired anybody in the pandemic. Nobody lost their job. We never reorganized anything and said, "You guys are doing crap, and we're going to hand the job to this group." We were never scared enough to do that. That's part of why it didn't go so well. The one thing that went well is when we said, "Let's set aside the usual rules and let you guys go for something."

We got scared of Sputnik, and 10 years later there's an American flag on the Moon.

Right. And that was quite an impressive spurt, initially driven by fear.

Perhaps if we're scared enough of shortages or scared enough of climate change or scared enough that the Chinese are going to come up with a super weapon, then that would be a catalyst for a more dynamic, innovative America, maybe.

I'm sorry for this to be a negative sign, but I think the best you can hope for with optimism is that some sort of negative emotion would drive more openness and more risk-taking.

Innovation is a fantastic free lunch, it seems like. And we don't seem to value it enough until we have to.

For each one of us, it risks these changes.
And we'd rather play it safe. You might know about development in the US. We have far too little housing in the US. The main reason we have far too little housing is we've empowered a lot of local individual critics to complain about various proposals. They basically pick just all sorts of little tiny things that could go wrong. And they say, "You have to fix this and fix that." And that's what takes years. And that's why we don't have enough housing and building, because we empower those sorts of very safety-oriented, tiny, "if any little things go wrong, then you've got to deal with it" sorts of thinking. We have to be scared enough of something else. Otherwise those fears dominate.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Few economists think more creatively and also more rigorously about the future than Robin Hanson, my guest on this episode of Faster, Please! — The Podcast. So when he says a future of radical scientific and economic progress is still possible, you should take the claim seriously. Robin is a professor of economics at George Mason University and author of the Overcoming Bias blog. His books include The Age of Em: Work, Love, and Life when Robots Rule the Earth and The Elephant in the Brain: Hidden Motives in Everyday Life.

In This Episode:
* Economic growth over the very long run (1:20)
* The signs of an approaching acceleration (7:08)
* Global governance and risk aversion (12:19)
* Thinking about the future like an economist (17:32)
* The stories we tell ourselves about the future (20:57)
* Longtermism and innovation (23:20)

Next week, I'll feature part two of my conversation with Robin, where we discuss whether we are alone in the universe and what alien life means for humanity's long-term potential.

Below is an edited transcript of our conversation.

Economic growth over the very long run

James Pethokoukis: Way back in 2000, you wrote a paper called “Long-Term Growth as a Sequence of Exponential Modes.” You wrote, “If one takes seriously the model of economic growth as a series of exponential … [modes], then it seems hard to escape the conclusion that the world economy will likely see a very dramatic change within the next century, to a new economic growth mode with a doubling time perhaps as short as two weeks.” Is that still your expectation for the 21st century?

Robin Hanson: It's my expectation for the next couple of centuries. Whether it's the 21st isn't quite so clear.

Has anything happened in the intervening two decades to make you think that something might happen sooner rather than later … or rather, just later?

Just later, I'm afraid.
I mean, we have a lot of people hyping AI at the moment, right?

Sure, I may be one of them on occasion.

There are a lot of people expecting rapid progress soon. And so, I think I've had a long enough baseline there to think, “No, maybe not.” But let's go with the priors.

Is it a technological mechanism that will cause this? Is it AI? Is it that we find the right general-purpose technology, and then that will launch us into very, very rapid growth?

That would be my best guess. But just to be clear for our listeners: if we just look at history, we seem to see these exponential modes. There are, say, four of them so far (if we go pre-human). And the modes are relatively steady and then have pretty sharp transitions. That is, the transition to a growth rate 50 or 200 times faster happens within less than a doubling time.

So what was the last mode?

We're in industry at the moment: it doubles roughly every 15 years and started around 1800 or 1700. The previous mode was farming, which doubled every thousand years. And so, in roughly less than a thousand years, we saw this rapid transition to our current mode, less than the doubling time. The mode before that was foraging, where humans doubled roughly every quarter million years. And in definitely less than a quarter million years, we saw a transition there. So then the prediction is that we will see another transition, that it will happen in less than 15 years, and that it will be to a faster growth mode. And then if you look at the previous increases in growth rates, they were, again, a factor of 60 to 200. And so, that's what you'd be looking for in the next mode. Now, obviously, I want to say you're just looking at a small data set here. Four events. You can't be too confident. But, come on, you've got to guess that maybe a next one would happen.

If you go back to that late ‘90s period, there was a lot of optimism.
If you pick up Wired magazine back then, [there was] plenty of optimism that something was happening, that we were on the verge of something. One of my favorite examples — a sort of non-technologist example — was a report from Lehman Brothers from December 1999. It was called “Beyond 2000.” And it was full of predictions, maybe not talking about exponential growth, but how we were in for a period of very fast growth, like 1960s-style growth. It was a very bullish prediction for the next two decades. Now, Lehman did not make it another decade itself. These predictions don't seem to have panned out — maybe you think I'm being overly pessimistic about what's happened over the past 20 years — but do you think it was because we didn't understand the technology that was supposedly going to drive these changes? Did we do something wrong? Or is it just that a lot of people who love tech love the idea of growth, and we all just got too excited?

I think it's just a really hard problem. We're in this world. We're living with it. It's growing really fast. Again, doubling every 15 years. And we've long had this sense that something much bigger is possible. So automation, the possibility of robots, AI: it sat in the background for a long time. And people have been wondering, “Is that coming? And if it's coming, it looks like a really big deal.” And roughly every 30 years, I'd say, we've seen these bursts of interest in AI and public concern, like media articles, you know…

We had the ‘60s. Now we have the ‘90s…

The ‘60s, ‘90s, and now again, 2020. Every 30 years, a burst of interest and concern about something that's not crazy. Like, it might well happen. And if it was going to happen, then the kind of precursor you might expect to see is investors realizing it's about to happen and bidding up assets that were going to be important for that to really high levels. And that's what you did see around ‘99. A lot of people thought, “Well, this might be it.”

Right.
The market test for the singularity seemed to be passing.

A test that is not actually being passed quite so much at the moment.

Right.

So, in some sense, you had a better story then in terms of, look, the investors seem to believe in this.

You could also look at harder economic numbers, productivity numbers, and so on.

Right. And we've had a steady increase in automation over, you know, centuries. But people keep wondering, “We're about to have a new kind of automation. And if we are, will we see that in new kinds of demos or new kinds of jobs?” And people have been looking out for these signs of, “Are we about to enter a new era?” And that's been the big issue: “Will this time be different?” And so, I've got to say, this time, at the moment, doesn't look different. But eventually, there will be a “this time” that'll be different. And then it'll be really different. So it's not crazy to be watching out for this and maybe taking some chances betting on it.

The signs of an approaching acceleration

If we were approaching a kind of acceleration, a leap forward, what would be the signs? Would it just be kind of what we saw in the ‘90s?

So the scenario is, within a 15-year period, maybe a five-year period, we go from a current 4 percent growth rate, doubling every 15 years, to maybe doubling every month. A crazy-high doubling rate. And that would have to be on the basis of some new technology, and therefore, investment. So you'd have to see a new promising technology that a lot of people think could potentially be big. And then a lot of investment going into that, a lot of investors saying, “Yeah, there's a pretty big chance this will be it.” And not just financial investors. You would expect to see people — like college students deciding to major in that, people moving to wherever it is. That would be the big sign: investment moving toward that thing. And the key thing is, you would see actual big, fast productivity increases.
There'd be some companies and cities that were just booming. You were talking about stagnation recently: the ‘60s were faster than now, but that's within a factor of two. Well, we're talking about a factor of 60 to 200.

So we don't need to spend a lot of time on the data measurement issues. Like, “Is productivity up 1.7 percent, or 2.1?”

If you're a greedy investor and you want to be really in on this early, so you buy it cheap before everybody else, then you've got to be looking at those early indicators. But if you're like the rest of us, wondering, “Do I change my job? Do I change my career?” then you might as well wait until you see something really big. So even at the moment, we've got a lot of exciting demos: DALL-E, GPT-3, things like that. But if you ask for commercial impact and ask them, “How much money are people making?” they shrug their shoulders and say, “Soon, maybe.” But that's what I would be looking for in those things. When people are generating a lot of revenue — so a lot of customers making a lot of money — then that's the sort of thing to maybe consider.

Something I've written about, probably too often, is the Long Bets website. Two economists, Robert Gordon and Erik Brynjolfsson, have made a long bet. Gordon takes the role of techno-pessimist, Brynjolfsson techno-optimist. Let me just briefly read the bet in case you don't happen to have it memorized: “Private Nonfarm business productivity growth will average over 1.8 percent per year from the first quarter of 2020 to the last quarter of 2029.” Now, if it does that, that's an acceleration. Brynjolfsson says yes. Gordon says no…

But you want to pick a bigger cutoff. Productivity growth in the last decade is maybe half that, right? So they're looking at a doubling. And a doubling is news, right? But, honestly, a doubling is within the usual fluctuation. If you look over, say, the last 200 years, we see that sometimes some cities grow faster, some industries grow faster.
You know, we have this steady growth rate, but it contains fluctuations. I think the key thing, as always, when you're looking for a regime change — there's an average and a fluctuation — is: when is a new fluctuation out of the range of the previous ones? That's when I would start to really pay attention, when it's not just the typical magnitude. So honestly, that bet is within the range of the typical magnitudes you might expect if we just had an unusually productive new technology, even if we stayed in the same mode for another century.

When you look at the enthusiasm we had at the turn of this century, do you think we did the things that would encourage rapid growth? Did we create a better ecosystem for growth over the past 20 years, or a worse one?

I don't think the past 20 years have been especially a deviation. But I think slowly, since around 1970, we have seen a decline in our support for innovation: increasing regulations, increasing size of organizations in response to regulation, and just a lot of barriers. And even more disturbingly, I think it's worth noting, we've seen a convergence of regulation around the world. If there were 150 countries, each of which had different, independent regulatory regimes, I would be less concerned. Because if one nation messes it up and doesn't allow things, some other nation might pick up the slack. But we've actually seen pretty strong convergence, even in this global pandemic. So, for example, challenge trials were an idea voiced early on, but no nation allowed them. Anywhere. And even now, they've hardly been tried. And if you look at nuclear energy, the electromagnetic spectrum, organ sales, medical experimentation — just look at a lot of different regulatory areas, even airplanes — you just see an enormous convergence worldwide. And that's a problem, because it means we're blocking innovation the same way everywhere.
And so there's just no place to go to try something new.

Global governance and risk aversion

There's always concern in Europe about their own productivity, about their technological growth. And they're always putting out white papers about what [they] can do. And I remember reading that somebody decided that Europe's comparative advantage was in regulation. Like that was Europe's superpower: regulation.

Yeah, sure.

And speaking of convergence, a lot of people who want to regulate the tech industry here have been looking to what Europe is doing. But Europe has not shown a lot of tech progress. They don't generate the big technology companies. So that, to me, is unsettling. Not only are we converging, but we're converging sometimes toward the least productive areas of the advanced world.

In a lot of people's minds, the key thing is the dangers that tech might pose. And they look to Europe and they say, “Look how they're providing security there. Look at all the protections they're offering against the various kinds of insecurity we could have. Surely, we want to copy them for that.”

I don't want to copy them for that. I'm willing to take a few risks.

But many people want that level of security. So I'm actually concerned about this over the coming centuries. I think this trend is actually a trend toward not just stronger global governance, but a stronger global community, or even mobs, if we want to call it that. That is the reason why nuclear energy is regulated the same everywhere: the regulators in each place are part of a world community, and they each want to be respected in that community. And in order to be respected, they need to conform to what the rest of the community thinks.
And that's going to just keep happening more over the coming centuries, I fear.

One of my favorite shows, and one of the more realistic science-fiction shows and book series, is The Expanse, which takes place a couple hundred years in the future, where there's a global government, which seems to be a democratic global government. I'm not sure how efficient it is. I'm not sure how entrepreneurial it is. Certainly the evidence seems to be that global governance does not lead to a vibrant, trial-and-error, experimenting kind of ecology, but just the opposite: one that focuses on safety and caution and risk aversion.

And it's going to get a lot worse. I have a book called The Age of Em: Work, Love, and Life when Robots Rule the Earth, and it's about very radical changes in technology. And most people who read about that go, “Oh, that's terrible. We need more regulations to stop that.” I think if you just look toward the longer run of changes, most people, when they start to imagine the large changes that will be possible, want to stop that and put limits on it and control it somehow. And that's going to give even more of an impetus to global governance. That is, once you realize how our children might become radically different from us, that scares people. And they really, then, want global governance to limit that.

I fear this is going to be the biggest choice humanity ever makes. In the next few centuries we will probably have stronger global governance, a stronger global community, and we will credit it for solving many problems, including war and global warming and inequality and things like that. We will like the sense that we've all come together and we get to decide what changes are allowed and what aren't. And we limit how strange our children can be. And even though we will have given up on some things, we will just enjoy it… because that's a very ancient human sense, to want to be part of a community and decide together.
And then, a few centuries from now, there will come a day when it's possible for a colony ship to leave the solar system to go elsewhere. And we will know by then that if we allow that to happen, that's the end of the era of shared governance. From that point on, competition reasserts itself, war reasserts itself. The descendants out there will then compete with each other and come back here and impose their will here, probably. And that scares the hell out of people.

Indeed, that's the point of [The Expanse]. It's kind of a mixed bag with how successful Earth's been. They didn't kill themselves in nuclear war, at least. But the geopolitics just continues, and that doesn't change. We're still human beings, even if we happen to be living on Mars or Europa. All that conflict will just reemerge.

Although, I think it gets the scale wrong there. I think as long as we stay in the solar system, a central government will be able to impose its rule on outlying colonies. The solar system is pretty transparent. Anywhere in the solar system you are, if you're doing something somebody doesn't like, they can see you, and they can throw something at you and hit you. And so I think a central government will be feasible within the solar system for quite some time. But once you get to other star systems, that ends. It's not feasible to punish colonies 20 light-years away when you don't get the message of what they did [until] 20 years later. It just becomes infeasible then. I would think The Expanse is telling a more human story because it's happening within this solar system. But I think, in fact, this world government becomes a solar system government, and it allows expansion to the solar system on its terms. But it would then be even stronger as a centralized governance community which prevents change.

Thinking about the future like an economist

In a recent blog post, you wrote that when you think about the future, you try to think about it as an economist.
You use economic analysis “to predict the social consequences of a particular envisioned future technology.” Have futurists not done that? Futurism has changed. I've written a lot about the classic 1960s futurists, who were these very big, imaginative thinkers. They tended to be pretty optimistic. And then they tended to get pessimistic. And then futurism became kind of like marketing: these were brand-awareness people, not really big thinkers. When they approached it, did they approach it as technologists? Did they approach it as sociologists? Are economists just not interested in this subject?

Good question. So I'd say there are three standard kinds of futurists. One kind of futurist is a short-term marketing consultant who's basically telling you which way the colors will go or the market demand will go in the short term.

Is neon green in, or lime green, or something.

And that's economically valuable. Those people should definitely exist. Then there's a more aspirational, inspirational kind of futurist. And that's changed over the decades, depending on what people want to be inspired by or afraid of. In the ‘50s and ‘60s, it might be about America going out and becoming powerful. Or later it's about the environment, and then it's about inequality and gender relations. In some sense, science fiction is another kind of futurism. And these two tend to be related, in the sense that science fiction mainly focuses on an indirect way to tell metaphorical stories about us. Because we're not so interested in the future, really; we're interested in us. Those are futures serving various kinds of communities, but neither of them is that realistically oriented. They're not focused on what's likely to actually happen.
They're focused on what will inspire people or entertain people or make people afraid, or tell a morality tale.

But if you're interested in what's actually going to happen, then my claim is you want to just take our standard best theories and straightforwardly apply them in a thoughtful way. Many people, when they talk about the future, say, “It's just impossible to say anything about the future. No one could possibly know; therefore, science fiction speculations are the best we can possibly do. You might as well go with that.” And I think that's just wrong. My demonstration in The Age of Em is to say: if you take a very specific technology scenario, you can just turn the crank with Econ 101, Sociology 101, Electrical Engineering 101, all the standard things, and apply them to that scenario. And you can just say a lot. But what you will find out is that it's weird. It's not very inspiring, and it doesn't tell the perfect horror story of what you should avoid. It's just a complicated mess. And that's what you should expect, because that's how we would seem to our ancestors. [For] somebody 200 or 2,000 years ago, our world doesn't make a good morality tale. First of all, they would just have trouble getting their heads around it. Why did that happen? And [what] does that even mean? And then they're not so sure what to like or dislike about it, because it's just too weird. If you're trying to tell a nice morality tale, [you have] simple heroes and villains, right? And this is too messy. The real futures you should predict are going to be too messy to be a simple morality tale. They're going to be weird, and that's going to make them hard to deal with.

The stories we tell ourselves about the future

Do you think it matters, the kinds of stories we tell ourselves about what the future could hold? My bias is, I think it does.
I think it matters because if all we paint for people is a really gloomy picture, then not only is it depressing, it's like, “What are we even doing here?” Because if we're going to move forward, if we're going to take risks with technology, there needs to be some sort of payoff. But yet, it seems like a lot of the culture continues this way. We mentioned The Expanse, which, by the modern standard of a lot of science fiction, I find to be pretty optimistic. Some people say, “Well, it's not optimistic, because half the population is on a basic income and there's war.” But, hey, there are people. Global warming didn't kill everybody. Nuclear war didn't kill everybody. We continued. We advanced. Not perfect, but society seems to be progressing. Has it mattered, do you think, the fact that we've been telling ourselves such terrible stories about the future? We used to tell much better ones.

The first-order theory about change is that change doesn't really happen because people anticipated it or planned for it or voted on it. Mostly this world has been changing as a side effect of lots of local economic interests and technological interests and pursuits. The world is just on this train with nobody driving, and that's scary, and should be scary, I guess. So to the first order, it doesn't really matter what stories we tell or how we think about the future, because we haven't actually been planning for the future. We haven't actually been choosing the future.

It kind of happens while we're doing something else.

A side effect of other things. But that's the zeroth-order effect. The next-order effect might be: look, places in the world will vary in the extent to which they win or lose over the long run. And there are things that can radically influence that. So being too cautious, playing it safe too much, and being comfortable will, predictably, probably lead you to not win the future.
If you're interested in having us — whoever “us” is — win the future, or have a bright, dynamic future, then you'd like “us” to be a little more ambitious about such things. I would think it works as a complement: the more we are excited about the future, and the more the future requires changes, the more we are telling ourselves, “Well, yeah, this change is painful, but that's the kind of thing you have to do if you want to get where we're going.”

Long-term thinking and innovation

If you've been reading the New York Times lately, or the New Yorker, the coverage is related to something called “effective altruism,” which is the idea that there are big, existential problems facing the world, and we should be thinking a lot harder about them, because people in the future matter too, not just us. And we should be spending money on these problems. We should be doing more research on these problems. What do you think about this movement? It sounds logical.

Well, if you just compare it to all the other movements out there and their priorities, I've got to give this one credit. Obviously, the future is important.

They are thinking directly about it. And they have ideas.

They are trying to be conscious about that, and proactive and altruistic about that. And that's certainly great compared to the vast majority of other activity. Now, I have some complaints, but overall, I'm happy to praise this sort of thing. The risk is, as with most futurism, that even though we're not conscious of it, what we're really doing is projecting our issues now into the future, and arguing about future stuff by talking about our stuff. So you might say people seem to be really concerned about the future of global warming in two centuries, but all the other stuff that might happen in two centuries, they're not at all interested in. It's like, what's the difference there?
They might say global warming lets them tell this anti-materialist story that they'd want to tell anyway: why it's bad to be materialist, and why cutting back on material stuff is good. And it's sort of a pro-environment story. I fear that that's also happening to some degree in effective altruism. But that's just what you should expect for humans in general. Effective altruists, in terms of their focus on the future, are overwhelmingly focused, as far as I can tell, on artificial intelligence risk. And I think that's a bit misdirected. In a big world, I don't mind it…

My concern is that we'll be super cautious, and before we have developed anything that could really create existential risk… we will never get to the point where it's so powerful because, like the Luddites, we'll have quashed it early on out of fear.

A friend of mine is Eric Drexler, who years ago was known for talking about nanotechnology. Nanotechnology is still a technology of the future. And he experienced something that made him a little unsure whether he should have said all these things he said, which is that once you can describe a vivid future, the first thing everybody focuses on is almost all the things that can go wrong. Then they set up policy to try to prevent the things that can go wrong. That's where the whole conversation goes. And then people distance themselves from it. He found that many people distanced themselves from nanotechnology until they could take over the word, because in their minds it reflected these terrible risks. So people wanted to not even talk about it. But you could ask: if he had just inspired people to make the technology, but not talked about the larger policy risks, maybe that would have been better?
It might in fact be true that the world today is so broken that, if ordinary people and policymakers don't know about a future risk, the world is better off, because at least they won't mess it up by trying to limit it and control it too early and too crudely.

Then the challenge is: maybe you want the technologists who might make it to hear about it and get inspired, but you don't want everybody else to be inspired to control it and correct it and channel it and prepare for it. Because honestly, that seems to go pretty badly. I guess the question is: for what technology that people did see well ahead of time did they not come up with terrible scenarios to worry about? For example, television. People didn't think about television very much ahead of time. And when it came, a lot of people watched it. And a lot of people complained about that. But imagine you could have predicted ahead of time that, in 20 years, people were going to spend five hours a day watching this thing. If that accurate prediction had been made, people would've freaked out.

Or cars: as you may know, in the late 1800s, people just did not envision the future of cars. When they envisioned the future of transportation, they saw dirigibles and trains and submarines, even, but not cars. Because cars were these individual things. And if they had envisioned the actual future of cars — automobile accidents, individual people controlling a thing going down the street at 80 miles an hour — they might have thought, “That's terrible. We can't allow that.” And you have to wonder… It was only in the United States, really, that cars took off. There's a sense in which the world had rapid technological progress around 1900 or so because the US was an exception worldwide.
A lot of technologies were only really tried in the US, like even radio, and then the rest of the world copied and followed because the US had so much success with them.

I think if you want to pick a point where that optimistic ‘90s mood came to an end, it might have been, speaking of Wired magazine, the Bill Joy article, “Why the Future Doesn't Need Us,” talking about nanotech and gray goo… Since you brought up nanotech and Eric Drexler, do you know what the state of that technology is? We had this nanotechnology initiative, but I don't think it was working on that kind of nanotech.

No, it wasn't.

It was more like materials science. But as far as creating these replicating tiny machines…

The federal government had a nanotechnology initiative, where they basically took all the stuff they were doing that dealt with small stuff and relabeled it. They didn't really add more money. They just put it under a new initiative. And then they made sure nobody was doing anything like the sort of dangerous stuff that could cause what Eric was talking about.

Stuff you'd put in sunscreen…

Exactly. So there was still never much funding there. There's a sense in which, in many kinds of technology areas, somebody can envision ahead of time a new technology that would be possible if a concentrated effort went into a certain area in a certain way. And they're trying to inspire that. But absent that focused effort, you might not see it for a long time. That would be the simplest story about nanotech: we haven't seen the focused effort and resources that he had proposed. Now, that doesn't mean that had we made those efforts, he would've succeeded. He could just be wrong about what was feasible and how soon. But nevertheless, that still seemed to be an exciting, promising technology that would've been worth the investment to try.
And still is, I would say.

One concern I have about the notion of longtermism is that it seems to place a lot of emphasis on our ability to rally people, get them thinking long-term, taking preparatory steps. And we've just gone through a pandemic, which showed that we don't do that very well. And the way we dealt with it was not through preparation, but by being a rich, technologically advanced society that could come up with a vaccine. That's my kind of longtermism, in a way: being rich and technologically capable so you can react to the unexpected.

And that's because we allowed an exception in how vaccines were developed in that case. Had we gone with the usual way vaccines had been developed before, it would've taken a lot longer. So the problem is that when we make too many structures that restrain things, we aren't able to quickly react to new circumstances. You probably know that most companies might have a forecasting department, but they don't fund it very much. They don't actually care that much. Almost everything most organizations do is reactive. That's just the fact of how most organizations work. Because, in fact, it is hard to prepare. It's hard to anticipate things.

I'm not saying we shouldn't try to figure out ways to deflect asteroids. We should. To have this notion of longtermism over a broad scope of issues… that's fine. But I hope we don't forget the other part, which is making sure that we do the right things to create those innovative ecosystems where we do increase wealth, and we do increase our technological capabilities, so as not to be totally dependent on our best guesses right now.

Here's a scary example of how this thinking can go wrong, in my mind. In the longtermism community, there's this serious proposal that many people like, which is called the Long Reflection.

The Long Reflection, which is: we've solved all the problems, and then we take a time-out.

We stop allowing change for a while.
And for a good long time, maybe a thousand years or even longer, we're in this period where no substantial change happens. Then we talk a lot about what we could do to deal with things when change is allowed again. And we work it all out, and then we turn it back on and allow change. That's giving a lot of credit to this system of talking.

Who's talking? Are these post-humans talking? Or is it people like us?

It would be before the change, remember. So it would be people like us. I actually think this is an ancient human intuition from the forager world, before the farming era, where in the small band the way we made the most important decisions was to sit down around the campfire, discuss it, decide together, and then do something. And that's, in some sense, how everybody wants to make all the big decisions. That's why they like a world government and a world community, because it goes back to that. But I honestly think we have to admit that just doesn't go very well lately. We're not actually very capable of having a discussion together, weighing all the options, making choices, and then deciding together to act. That's how we want to be able to work, and that's maybe how we should work, but it's not how we are. I feel that, with the Long Reflection, once we institutionalize a world where change isn't allowed, we would get pretty used to that world.

It seems very comfortable, and we'd start voting for security.

And then we wouldn't really allow the Long Reflection to end, because that would be a risky step into a strange world. We would like the stable world we were in. And that would be the end of that.

I should say that I very much like Toby Ord's book, The Precipice. He's also one of my all-time favorite guests. He's really been a fantastic guest. Though the Long Reflection, I do have concerns about.

Come back next Thursday for part two of my conversation with Robin Hanson.
From umpires to referees – the boys debate the officiating in baseball and the NBA playoffs. Matt, John and Manny also look at what could have been the points system in the NHL and what the league can do better with its Winter Classic. Plus perfect games, intentional walks and exceptional status – all hot topics on Rapid Fire with a new song for Pump It or Dump It.
Campo saw robots everywhere in Jeff Bezos's space warehouse in the Shatner in Space doco.
A pioneer of prediction markets since the 1980s, Robin Hanson is the author of two books (The Elephant in the Brain, The Age of Em) and writes the popular blog Overcoming Bias. Hanson is also an Associate Professor of Economics at George Mason University, and Research Associate at the Future of Humanity Institute of Oxford University.

In this episode:
(00:00) — Episode begins
(01:07) — What inspired idea futures?
(06:00) — Decision markets for organisations
(09:16) — Using prediction markets to overcome bias
(14:25) — Honesty razors in today's world
(17:08) — "An autist in the C-suite"
(24:25) — Ideamarket discussion begins

Round 1
(43:10) — Why Ideamarket results are not tied to any external truth

Round 2
(55:44) — Informed traders vs. noise traders

Round 3
(1:16:34) — The Colour Wheel of Truth — is Ideamarket a Keynesian beauty contest?

EPISODE LINKS:
- Robin on Twitter
- Overcoming Bias
- Robin's Bio
- Book: The Age of Em: Work, Love, and Life when Robots Rule the Earth
- Book: The Elephant in the Brain: Hidden Motives in Everyday Life

IDEAMARKET LINKS:
- Ideamarket Website
- Ideamarket on Twitter
- Ideamarket Discord
- Apple Podcasts
- Spotify

The Ideamarket Podcast is where venture philosophers share the ideas, trends, and concepts they're most bullish on.

About Ideamarket: Ideamarket is the credibility layer of the internet. Ideamarket allows the public to mainstream the world's best information using market signals, replacing media corporations as the arbiter of credibility. Get started now.
Join correspondent Tom Wilmer at the Tennessee State Library & Archives in Nashville, Tennessee with State Librarian and Archivist Charles Sherrill. The new 165,000 square-foot, 3-story facility is a game changer for anyone seeking information about any nuance of Tennessee's history via robotic access to historical documents. The multi-story robotic retrieval system archives are so extensive that if all the historical documents were set on one linear bookshelf, it would be 26 miles long. The Library and Archives' extensive and wide-ranging collections of books and original historical documents include state and county records, censuses and genealogical information, military records, penitentiary records, newspapers, city directories and telephone books, bibliographies, ledgers, manuscripts, letters, diaries, maps, photographs, broadsides, prints, postcards, oral histories, films, sheet music and general reference materials. The Library and Archives is home to several notable
Today's guest is Dr. Robin Hanson, Associate Professor of Economics at George Mason University with a Ph.D. in Social Science from the California Institute of Technology. Dr. Hanson's first book, “The Age of Em: Work, Love, and Life when Robots Rule the Earth,” was published in 2016 and focused on the concept of emulating human consciousness by scanning the human brain. This series is all about the question of what the future of the human experience is like, and Dr. Hanson shares his eye-opening perspective of this emulation technology alongside strong artificial intelligence and how the two might come together to change the future of work and life. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple podcasts, and let us know what you learned, found helpful, or liked most about this show!
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science, master's degrees in physics and philosophy, and nine years of experience as a research programmer in artificial intelligence and Bayesian statistics. With over 3100 citations and sixty academic publications, he's recognized not only for his contributions to economics (especially pioneering the theory and use of prediction markets), but also for the wide range of fields in which he's been published. He is the author of The Age of Em: Work, Love, and Life when Robots Rule the Earth. Robin has strong and controversial views (backed by his research) regarding various institutions in society, and discusses how many routine activities we take for granted carry hidden motives based on the evolution of ourselves and our society. Some of the points we touch on include how charities don't really exist to help others, how our schools don't really exist to educate students, and how our political expression isn't actually about choosing wise policies. Show Links https://twitter.com/robinhanson https://overcomingbias.com Book Links (Aff Links) The Elephant in the Brain: Hidden Motives in Everyday Life - https://amzn.to/38sIPRD The Age of Em: Work, Love, and Life When Robots Rule the Earth - https://amzn.to/3epFuqj The Hanson-Yudkowsky AI-Foom Debate - https://amzn.to/3cd4Che Show Sponsor (25% Off Code: SUCCESS) https://getmr.com/
Robin Hanson is an economist and the author of "The Age of Em: Work, Love and Life when Robots Rule the Earth." He joins the show to discuss his theory that in the future the most intelligent and productive people in society will be uploaded to computers and indefinitely duplicated, to supercharge the economy.
Got a birthday coming up or a special school event for which you want a shout out? Maybe there's a news story you want Squiz Kids to cover? Get in touch at https://www.squizkids.com.au/contact/.

LINKS:
Handwash challenge: https://www.facebook.com/717545176/videos/10163073179040177/
Babe: the pig sheepdog: https://www.youtube.com/watch?v=LGoKKhQPggM
Coldplay's Chris Martin home concert: https://www.youtube.com/watch?v=YMBK9OfsKO4&feature=youtu.be

Squiz Kids is a news podcast just for kids. A short weekday podcast, created here in Australia, that gives kids (and their adults) the rundown on the big news stories, delivered without opinion, and with positivity and humour. 'Kid-friendly news that keeps them up to date without all the nasties' (A Squiz Parent). This Australian podcast for kids easily fits into the morning routine - helping curious kids stay informed about the world around them. Squiz Kids is supported by the Judith Neilson Institute for Journalism and Ideas.
SUCK IT --- Send in a voice message: https://anchor.fm/xdrcft-hdysi/message
Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and one of the world's most influential futurists. We talk to Robin about how rigorous social science can help us describe what a society in which "mind uploading" – the idea of simulating whole brains on digital hardware – might actually look like. What does a society look like where most minds live their lives in virtual reality, immortals in a world where labour is plentiful? Will the emulated humans be rich or poor, happy or miserable, care-free or stressed, honest or false, lazy or industrious, diverse or all the same? Will they fall in love, have friends, swear, distrust others, commit suicide, and find meaning in their lives? Robin's book about this topic is "The Age of Em: Work, Love and Life when Robots Rule the Earth" (Oxford University Press, 2016). Robin blogs about rationality at http://www.overcomingbias.com.
Robots will rule us all. I feel that's already been established by more sci-fi writers than can be credited in one podcast. You may have recently seen that an Uber self-driving car killed a pedestrian in a headline-grabbing frenzy that re-awakened the humanity in all of us. Questions like "What if… insert dystopian human-based car fiasco here" arise, where AI (artificial intelligence) is deciding the fate of human lives with alarming regularity. During my recent discussions I found some common themes where these infinite ethical conundrums often came back to questions of responsibility. Who do you blame if AI kills one of our humans? AND… assuming the AI is forced into making a decision which means one life over another, how does it make that choice? Heavy stuff folks. Fasten your seatbelts. Or don't. Doing my best and simplifying my position, I'll start with defining for the curious (hey that's why you listen to podcasts isn't it!) how self-driving cars work, and then we'll start talking philosophy and how we are trying to tackle those tough questions, or ask ourselves if we're just being too darn human. Read the notes at https://codifyre.com/culture/self-driving-cars/ Follow us on... Twitter: https://www.twitter.com/codifyre Facebook: https://www.facebook.com/codifyre Instagram: https://www.instagram.com/codifyre.co.uk Web: https://www.codifyre.com
Meet Kenneth Ford, an incredibly well-read, highly accomplished and fascinating man. Kenneth Ford is Founder and Chief Executive Officer of the Florida Institute for Human & Machine Cognition (IHMC) — a not-for-profit research institute located in Pensacola, Florida. IHMC has grown into one of the nation's premier research organizations with world-class scientists and engineers investigating a broad range of topics related to building technological systems aimed at amplifying and extending human cognition, perception, locomotion and resilience. Richard Florida has described IHMC as "a new model for interdisciplinary research institutes that strive to be both entrepreneurial and academic, firmly grounded and inspiringly ambitious." IHMC headquarters are in Pensacola with a branch research facility in Ocala, Florida. In 2004 Florida Trend Magazine named Dr. Ford one of Florida's four most influential citizens working in academia. Dr. Ford is the author of hundreds of scientific papers and six books. Dr. Ford's research interests include: artificial intelligence, cognitive science, human-centered computing, and entrepreneurship in government and academia. Dr. Ford received his Ph.D. in Computer Science from Tulane University. He is Emeritus Editor-in-Chief of AAAI/MIT Press and has been involved in the editing of several journals. Ford is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a charter Fellow of the National Academy of Inventors, a member of the Association for Computing Machinery, a member of the IEEE Computer Society, and a member of the National Association of Scholars. Ford has received many awards and honors including the Doctor Honoris Causa from the University of Bordeaux in 2005 and the 2008 Robert S. Englemore Memorial Award for his work in artificial intelligence (AI). In 2012 Tulane University named Ford its Outstanding Alumnus in the School of Science and Engineering.
In 2015, the Association for the Advancement of Artificial Intelligence named Dr. Ford the recipient of the 2015 Distinguished Service Award. Also in 2015, Dr. Ford was elected as Fellow of the American Association for the Advancement of Science (AAAS). In January 1997, Dr. Ford was asked by NASA to develop and direct its new Center of Excellence in Information Technology at the Ames Research Center in Silicon Valley. He served as Associate Center Director and Director of NASA's Center of Excellence in Information Technology. In July 1999, Dr. Ford was awarded the NASA Outstanding Leadership Medal. That same year, Ford returned to private life and to the IHMC. In October of 2002, President George W. Bush nominated Dr. Ford to serve on the National Science Board (NSB) and the United States Senate confirmed his nomination in March of 2003. The NSB is the governing board of the National Science Foundation (NSF) and plays an important role in advising the President and Congress on science policy issues. In 2005, Dr. Ford was appointed and sworn in as a member of the Air Force Science Advisory Board. In 2007, he became a member of the NASA Advisory Council and on October 16, 2008, Dr. Ford was named as Chairman – a capacity in which he served until October 2011. In August 2010, Dr. Ford was awarded NASA's Distinguished Public Service Medal – the highest honor the agency confers. In February of 2012, Dr. Ford was named to a two-year term on the Defense Science Board (DSB) and in 2013, he became a member of the Advanced Technology Board (ATB) which supports the Office of the Director of National Intelligence (ODNI). Also, on November 6th, Dr. Ford will be inducted into the Florida Inventors Hall of Fame, bringing the total to 28 members, including the likes of Thomas Edison, Henry Ford and other luminaries. Interestingly, 4 of the 28 members are associated with IHMC.
During our discussion, you'll discover: -How Ken is developing an exoskeleton for human performance...[9:17] -What Ken thinks about the idea that artificial intelligence robots start going down the street and killing people...[14:30 & 17:10] -How Ken became involved with the ketogenic diet and exogenous ketones, even before ketosis became the latest sexy diet trend...[19:25] -The amazing research Ken is currently doing on exogenous ketones (including avoidance of age-related loss of muscle mass and function, and anabolic resistance)...[22:10] -How ketones can lower blood sugar even if you still consume glucose...[36:05] -Ken's best exercise biohacks and moves, including hierarchical sets, blood flow restriction training, and kettlebell bottoms-up training...[41:00 & 52:20] -Why maintaining your grip strength is one of the most important things you can do...[55:30] -What Ken has found about maintaining and building muscle by studying muscle loss in space...[58:50] -The vibration training platform you can "take on a plane"...[63:45] -Ken's thoughts on exercise "mimetic" supplements like acetyl-l-carnitine and SARMs...[66:15] -Ken's research-based approach to cardiovascular training...[71:40] -And much more! Resources from this episode: - - - - (IHMC is 15% discount code) - - - - - - (pdf download) - (pdf download) Show Sponsors: -Human Charger - Go to and use the code BEN20 for 20% off. -Molekule - Go to and enter promo code BEN for $75 off your order! -Thrive Market - Go to to get $60 of free organic groceries now! Do you have questions, thoughts or feedback for Ken or me? Leave your comments at and one of us will reply!
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a PhD in social science from Caltech, master's degrees in physics and philosophy from the University of Chicago, and worked for nine years in artificial intelligence as a research programmer at Lockheed and NASA. He helped pioneer the field of prediction markets, and published The Age of Em: Work, Love and Life when Robots Rule the Earth, which was the topic of our discussion in a previous podcast episode back in 2016. His most recent book is entitled, The Elephant in the Brain: Hidden Motives in Everyday Life. He also blogs at OvercomingBias.com. The big mistake we are making – the 'elephant in the brain'. the elephant in the room, n. An important issue that people are reluctant to acknowledge or address; a social taboo. the elephant in the brain, n. An important but unacknowledged feature of how our minds work; an introspective taboo. The elephant in the brain is the reason that people don't do things they want to do. They have a lot of hidden motives. People think they do certain things for one reason but really do these things for a different reason. Some of the motives are unconscious. This may be due to many reasons but one of them is the desire/need to conform to social norms. The book, The Elephant in the Brain, includes 10 areas of hidden motives in everyday life. These include: Body language Laughter Conversation Consumption Art Charity Education – one reason people really go to school is to 'show off' Medicine – it isn't just about health – it's also about demonstrating caring Religion Politics The puzzle of social status in the workplace is one to be explored. People are always working to improve their position within an organization, but often the competition is 'hidden' behind socially expected terms like 'experience' or 'seniority'. To discuss one's social status in the workplace is not acceptable.
So continuing to explore and think about people's true motives can be beneficial. What you will learn in this episode: Why people have hidden motives. Are people just selfish? Why do companies have sexual harassment workshops? What could be alternative reasons to hold workplace meetings? How Robin and co-author Kevin Simler researched the book. Do we have the power to change our self-deceptive ways?
Why do we do the things we do? We like to think we have good reasons for the choices we make, but we may very well be fooling ourselves. In their intriguing new book, The Elephant in the Brain: Hidden Motives in Everyday Life, authors Robin Hanson and Kevin Simler explain how hardwired primate behavior, social norms, and evolution combine to obscure our motives...even (or maybe especially) from ourselves. While it’s easy to see how hiding our motives from others might bring about certain advantages, it’s harder to imagine why we would ever try to hide our reasons from ourselves. But Hanson argues that it’s no great mystery. "We prefer to attribute our behavior to the highest-minded motives,” he explains. “But often our behavior is better explained by less high-minded motives -- i.e., more selfish motives -- and we'd rather not look at that and acknowledge it." About Our Guest Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science, master's degrees in physics and philosophy, and nine years of experience as a research programmer in artificial intelligence and Bayesian statistics. With over 3600 citations and sixty academic publications, he's recognized not only for his contributions to economics, but also for the wide range of fields in which he's been published. His amazing blog OvercomingBias.com has had some eight million visits. He is the author of The Age of Em: Work, Love, and Life when Robots Rule the Earth. He blogs at Overcoming Bias. Music: www.bensound.com FF 003-698
This week, we've got Powerline's John Hinderaker in the Long Chair®, John Yoo protecting us from sentient robots (read his new book Striking Power: How Cyber, Robots, and Space Weapons Change the Rules for War ), and the Hoover Institution's Kori Schake with some thoughts on how to take down Rocket Man. Also, Minnesota statues and other assorted ephemera. Music from this week's podcast: Rocket Man... Source
This week we bring our first impressions and several bits of news from CES, the consumer electronics trade show held annually in Las Vegas. I'm here while Kevin avoids the lines by staying in Pennsylvania, but we're both happy to talk about connected grooming products, robots and the onslaught of Echo-related news. I also noticed …
A robot-driven world is often a mainstay of science fiction titles like Terminator and I, Robot. While that future may be far off, emulations — computers that scan and reproduce human brains — could be the first step into the age of robotics. Their society could evolve at the pace of software, not hardware or biology — allowing for radical transformations in less time than it takes humans to get their dry cleaning back. So what might an emulation-based society look like? How would emulation technology affect how humans live in the future? Joining Berin to discuss is Professor Robin Hanson of George Mason University, author of The Age of Em: Work, Love, and Life when Robots Rule the Earth. For more, see the book's website.
The Age of Em by Robin Hanson is the best worst book I have read in a very long while. It is the best because Robin has a very effective, efficient and eloquent writing style and a personality to match it. Thus he is able to say utterly horrendous things – like “the 3rd Reich […]
Topics Include: - Predictions for the Presidential Debates! - The Dalai Lama Said What Bout Trump?! - Holiday Weight Gaining Facts! Arm Yourself With Evidence! - Cheat Days Vs. Diets! - Leonardo DiCaprio and the Invention of Robutts! - Robutts!? - A History of the First Robutts and Robutts Through Time! - The Future of Robutts and the End of Humanity! And so much more! So sit back, relax, and enjoy the most downloaded podcast in the world! The Unimaginary Friendcast! The Unimaginary Friendcast is hosted by David Monster, Erin Marie Bette Davis Jr. and Nathan Edmondson. www.unimaginaryfriend.com/friendcast And find us on Facebook!
So in this EM world, what would EMs do? In Hanson's view they would take over all of the work from the humans. Some EMs would do virtual jobs and some would do physical jobs, so they would be able to switch from a physical form to a virtual form in an instant, as easily as we get in and out of our car to go somewhere. EMs would live mostly in city centers and interact with each other as humans do. And what would humans be doing during this time? Well, first they would all have to retire. After EMs are around, humans wouldn't be able to compete for jobs, so they would retire to live off of their savings and live a life of leisure. Hanson believes some humans would have money from creating EMs, because in the beginning the people who are the best in their fields would be sought out, and paid well, to have their brains scanned for EMs. Later on, younger people would most likely be sought out to create EMs, as they would be able to learn new things the quickest. Some may also make money from investments or have money saved up. Those who don't have money at this time probably wouldn't survive; it would just depend on how areas would take care of each other, divide money, and provide for humans. EMs would most likely run 1,000 times faster than humans, so they would evolve much more quickly than humans have. Therefore, the EM Age may only last 1-2 years, so in that time humans probably won't have time to change much. There are different views that people have when they read about EMs: either they think it is fun and exciting to learn and think about, or they think it is crazy, scary, or impossible. For people who think it is impossible, Hanson explains that we have had three major eras of humans – foragers, farmers, and industry – and in each era there has been a sudden change to bring about the next era. So the next era after ours could be the EM Age. People who lived 1,000 years ago would probably think that the innovations we have today are crazy or impossible.
Regardless of what the future holds, it will still be strange to those of us who are living in the current era. Hanson's book touches on several aspects of the EM Age, including the basics, organization, economics, sociology and physics. In the way of physics, Hanson touches on things such as the relationship between the body size and mind speed of EMs, as well as the energy and cooling usage that the EMs would need. In the section on economics, Hanson discusses many things, including the fact that EMs will happen when it is feasible to make them at a low cost. Even if we had the technology now to create them, it would be too expensive; it would have to cost as much as or less than it costs to pay humans to do those jobs now. When Hanson talks about organization, he talks about how EMs will have similar units as we have among humans: cities, families, firms. However, they will also have clans. Clans will be EMs that are copies of the same human, and they will be more identical than twins. And in the section on sociology, Hanson talks about how sex and mating will be different for the EMs. On the one hand, they are copies of humans, and therefore the need for love, sex and connection would be ingrained in them. However, there would be factors that would make this difficult, such as their work drive not allowing them to focus on anything else, and the fact that the ratio between male and female probably wouldn't be equal. Many people may ask how we could get a future that no one wants. It is hard to imagine anyone in today's age who would want all humans to have their jobs taken over by machines, with the possibility that humans would be without money and therefore not be able to survive. However, it would not be a result of what we all want together. No one is choosing technology collectively; it's not something we vote on or agree on. It is done by individuals who are innovating in order to move forward and make money.
The EM Age could come as a result of decentralized competition. Each of us trying to individually get what we want could end in all of us together getting what we don't want.

What you will learn in this episode:
- What an EM is
- What the next 100 years look like with EMs
- Why we should care about EMs now
- How robots and automation will affect the way we live and work in the future
- How EMs are different from automation and AI
- How EMs will live and work
- What the role of humans will be in an EM Age
- What would be needed to create an EM Age

Link From The Episode: The Age of Em on Amazon (Music by Ronald Jenkees)
In a century or so, humans might be able to scan our brains and run emulations or "ems" on futuristic supercomputers. That means conscious, digital minds working much faster than our own, and rendering us all but obsolete! So how should we plan for that? Today David interviews Robin Hanson, George Mason University economist and author of "The Age of Em - Work, Love and Life when Robots Rule the Earth."
Robin Hanson is a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is an expert on idea futures and markets, and he was involved in the creation of the Foresight Institute's Foresight Exchange and DARPA's FutureMAP project. He invented market scoring rules used by prediction markets and has conducted research on signaling. Hanson received a B.S. in physics from the University of California, Irvine in 1981, an M.S. in physics and an M.A. in Conceptual Foundations of Science from the University of Chicago in 1984, and a Ph.D. in social science from Caltech in 1997. Before getting his Ph.D. he researched artificial intelligence, Bayesian statistics, and hypertext publishing. In addition, he started the first internal corporate prediction market at Xanadu in 1990.

Robin's Challenge: Pursue your interests and start new projects in your free time. Your life will be long. You'll have lots of time to pursue a lot of odd projects.

Check out Robin's Book: The Age of Em: Work, Love and Life when Robots Rule the Earth

Connect with Robin:
Twitter
Website
hanson@gmu.edu

If you liked this interview, check out episode 118 with Kevin Kelly where we discuss the inevitable technological trends shaping our future.
My guest today is Robin Hanson, an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known for his work on idea futures and markets, and he was involved in the creation of the Foresight Institute's Foresight Exchange and DARPA's FutureMAP project. He invented market scoring rules like LMSR (Logarithmic Market Scoring Rule) used by prediction markets such as Consensus Point (where Hanson is Chief Scientist), and has conducted research on signalling. The topic is his book The Age of Em: Work, Love and Life when Robots Rule the Earth. In this episode of Trend Following Radio we discuss: Singularity Robots taking over Artificial intelligence Slavery Reversible computing Virtual reality Future of politics Democracy in the future Jump in! --- I'm MICHAEL COVEL, the host of TREND FOLLOWING RADIO, and I'm proud to have delivered 10+ million podcast listens since 2012. Investments, economics, psychology, politics, decision-making, human behavior, entrepreneurship and trend following are all passionately explored and debated on my show. To start? I'd like to give you a great piece of advice you can use in your life and trading journey… cut your losses! You will find much more about that philosophy here: https://www.trendfollowing.com/trend/ You can watch a free video here: https://www.trendfollowing.com/video/ Can't get enough of this episode? You can choose from my thousand plus episodes here: https://www.trendfollowing.com/podcast My social media platforms: Twitter: @covel Facebook: @trendfollowing LinkedIn: @covel Instagram: @mikecovel Hope you enjoy my never-ending podcast conversation!
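The notes above name-drop Hanson's Logarithmic Market Scoring Rule (LMSR) without explaining it. For the curious, the standard published form is a cost function C(q) = b·log(Σᵢ exp(qᵢ/b)) over outstanding share quantities q; a trader pays the change in C, and the implied prices are the softmax of the quantities. A minimal Python sketch (the liquidity parameter b and the trade sizes below are illustrative numbers of my own choosing, not from the episode):

```python
import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Implied outcome prices: the softmax of share quantities; always sums to 1."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(old, new, b=100.0):
    """A trader moving outstanding shares from `old` to `new` pays C(new) - C(old)."""
    return lmsr_cost(new, b) - lmsr_cost(old, b)

# Two-outcome market with no shares outstanding: prices start at 50/50.
p = lmsr_prices([0.0, 0.0])
# Buying 10 shares of outcome 0 costs a bit more than 10 * 0.50,
# because the price rises as the trader buys.
c = trade_cost([0.0, 0.0], [10.0, 0.0])
```

The key property, which is why the rule matters for prediction markets, is that the market maker can always quote a price, so trades never need a matching counterparty; the worst-case subsidy is bounded by b times the log of the number of outcomes.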
A whole brain emulation, or "em," is a fully functional computational model of a specific human brain. As such, it thinks and feels much like the copied human mind would. Economist Robin Hanson predicts that the age of em is not that far off, and that copied human minds may soon be more common than biological ones. That's a bold prediction, to be sure. Hanson's new book, The Age of Em, explores the economic, social, and policy questions that we may face in this possible future. It also touches on the science of forecasting: What can we know about the future, using what tools, and with what degree of reliability? Even those who find farfetched his claims about brain emulation will do well to consider how sure they are of their own predictions of the future, and on what foundations they rest.
We’re back! After a prolonged hiatus, Ted and Jon return joined by guest Robin Hanson, the economics professor and blogger at Overcoming Bias, who discusses the central concept of his new book, The Age of Em: Work, Love and Life when Robots Rule the Earth. We discuss his assumption that whole brain emulations will emerge before theoretically-driven AGI, […]
We live at a time when artificial intelligence is booming and major breakthroughs are happening, with a lot of people thinking about what is coming and how it will impact society. Robin Hanson is an economics professor at GMU with a background that ranges from philosophy to physics and computer research. He joins me today to talk about his book 'The Age of Em: Work, Love and Life when Robots Rule the Earth', which is shipping as we speak, where he outlines what he thinks will happen when humans become able to emulate a human brain in a machine. We discuss which things might be different, which will change less than we expect, and how social institutions will change once AI reaches such a level. Don't skip his blog overcomingbias.com, and you can order his new book from Amazon: http://www.amazon.com/Age-Em-Work-Robots-Earth/dp/0198754620