More To The Story: OpenAI became the world's most valuable private company last week after a stock deal pushed the value of the artificial intelligence developer to $500 billion. But when OpenAI was founded a decade ago, the company's approach to artificial intelligence wasn't taken seriously in Silicon Valley. Tech journalist Karen Hao has been covering OpenAI's astounding rise for years and recently wrote a book about the company, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. She says that while many in Silicon Valley warn of AI's sci fi–like threats, the real risks are already here. (The Center for Investigative Reporting, which produces Reveal and More To The Story, is currently suing OpenAI and Microsoft for copyright infringement.) On this week's More To The Story, Hao sounds the alarm about the risks to the planet from AI's growth, examines the Trump administration's efforts to deregulate the industry, and explains why the version of AI being developed by Silicon Valley could destabilize democracy.

Producer: Josh Sanburn | Editor: Kara McGuirk-Allison | Theme music: Fernando Arruda and Jim Briggs | Copy editor: Nikki Frick | Deputy executive producer: Taki Telonidis | Executive producer: Brett Myers | Executive editor: James West | Host: Al Letson

Donate today at Revealnews.org/more
Subscribe to our weekly newsletter at Revealnews.org/weekly
Follow us on Instagram and Bluesky

Read: America's Worst Polluters See a Lifeline in Power-Gobbling AI—and Donald Trump (Mother Jones)
Listen: Is AI Pushing Us Closer to Nuclear Disaster? (More To The Story)
Read: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI (Penguin Press)
Read: The Center for Investigative Reporting Sues OpenAI, Microsoft for Copyright Violations (Mother Jones)
This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com

Katie is a journalist, podcaster, and longtime friend of the Dish. She's a former staff writer at The Stranger, and she's contributed to The Atlantic, The Guardian, The Free Press, and The Weekly Dish. She hosts the podcast “Blocked and Reported” alongside Jesse Singal, and she just wrote her first book, Drink Your Way Sober: The Science-Based Method to Break Free from Alcohol.

For two clips of our convo — how Katie's drinking became a problem, and why naltrexone isn't widely known — head to our YouTube page.

Other topics: raised in western Carolina; Katie's first drink; studying abroad in England for the lower drinking age; Churchill's boozing; pub culture in the UK; being energized by alcohol vs sedated; chasing the buzz; the cycle of denial; the AA notion that one drink is too many; how rats react to alcohol; the parallels with Ozempic; why I started smoking weed; Ken Burns on Prohibition; the founder of AA; the belladonna and Antabuse treatments; the Sinclair Method; why Mormons are so great; why Gen Z is drinking less; Covid alcoholism; the unsightly effects of booze; drinking in secret; the shame of addiction; PrEP; the meth crisis among gays; the high rates of lesbian divorce; Nancy Mace and Megyn Kelly going radical; the belief that recovery should be hard and medication is cheating; AA's hold on the legal system; opioids; and the massive death toll of alcohol.

Browse the Dishcast archive for an episode you might enjoy. Coming up: Michael Wolff on Epstein, Karen Hao on AI, Charles Murray on finding religion, Michel Paradis on Eisenhower, David Ignatius on the Trump effect globally, Mark Halperin on the domestic front, and Arthur Brooks on the science of happiness. As always, please send any guest recs, dissents, and other comments to dish@andrewsullivan.com.
This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com

Wesley is an essayist and podcaster. He's written extensively for Tablet, Esquire, and New York Magazine, and many of his essays were compiled in a book, The Souls of Yellow Folk. More of his writing and podcasting can be found on his Substack, “Year Zero.” He's been chronicling the gender revolution aspect of the successor ideology on X these past few years — and he eloquently lets rip in this conversation.

For two clips of our convo — on the violence that can spring from trans ideology, and the paralysis of Dems on trans issues — head to our YouTube page.

Other topics: his lifelong musical talent; getting a song on Gilmore Girls; Judith Butler and critical gender theory; postmodernism vs nature; Germaine Greer and TERFs; the woke targeting Chimamanda Adichie; tomboys and effeminate boys; fearing puberty; Jazz Jennings; the Dutch protocol and gatekeeping; the gray market of puberty blockers and HRT; Planned Parenthood; gender identity as “mystical”; adults unable to pass; Chase Strangio against gay marriage; autism; the surge of girls seeking transition; Tumblr and social contagion; the suicide canard; the “cisfag” slur; women's shelters; Tavistock; the Cass Review; Hannah Barnes' Time to Think; JK Rowling; Labour backpedaling; the NC bathroom bill and corporate boycotts; Dave Chappelle; Eric Adams' working-class defense of sexed bathrooms; Mamdani; Newsom and fairness in sports; detransitioners; Charlie Kirk; the Minneapolis killer Robin Westman; Zizians; authoritarian vs totalitarian; MLK envy; the empty promises of Dem leaders; the private regret of parents; and how trans ideology helped Trump.

Browse the Dishcast archive for an episode you might enjoy. Coming up: Katie Herzog on drinking your way sober, Michael Wolff on Epstein, Karen Hao on AI, Michel Paradis on Ike, Charles Murray on finding religion, David Ignatius on the Trump effect globally, and Arthur Brooks on the science of happiness. As always, please send any guest recs, dissents, and other comments to dish@andrewsullivan.com.
My fellow pro-growth/progress/abundance Up Wingers,

Artificial intelligence may prove to be one of the most transformative technologies in history, but like any tool, its immense power for good comes with a unique array of risks, both large and small.

Today on Faster, Please! — The Podcast, I chat with Miles Brundage about extracting the most out of AI's potential while mitigating harms. We discuss the evolving expectations for AI development and how to grapple with the technology's most daunting challenges.

Brundage is an AI policy researcher. He is a non-resident fellow at the Institute for Progress, and formerly held a number of senior roles at OpenAI. He also writes his own Substack.

In This Episode

* Setting expectations (1:18)
* Maximizing the benefits (7:21)
* Recognizing the risks (13:23)
* Pacing true progress (19:04)
* Considering national security (21:39)
* Grounds for optimism and pessimism (27:15)

Below is a lightly edited transcript of our conversation.

Setting expectations (1:18)

It seems to me like there are multiple vibe shifts happening at different cadences and in different directions.

Pethokoukis: Earlier this year I was moderating a discussion between an economist here at AEI and a CEO of a leading AI company, and when I asked each of them how AI might impact our lives, the economist said, “Well, I could imagine, for instance, a doctor's productivity increasing because AI could accurately and deeply translate and transcribe an appointment with a patient in a way that's far better than what's currently available.” So that was his scenario. And then I asked the same question of the AI company CEO, who said, by contrast, “Well, I think within a decade, all human death will be optional thanks to AI-driven medical advances.” On that rather broad spectrum — more efficient doctor appointments and immortality — how do you see the potential of this technology?

Brundage: It's a good question. I don't think those are necessarily mutually exclusive. I think, in general, AI can both augment productivity and substitute for human labor, and the ratio of those things is kind of hard to predict and might be very policy dependent and social-norm dependent. What I will say is that, in general, it seems to me like the pace of progress is very fast, and so both augmentation and substitution seem to be picking up steam.

It's kind of interesting watching the debate between AI researchers and economists, and I have a colleague who has said that the AI researchers sometimes underestimate the practical challenges in deployment at scale. Conversely, the economists sometimes underestimate just how quickly the technology is advancing. I think there's maybe some happy middle to be found, or perhaps one of the more extreme perspectives is true. But personally, I am not an economist, I can't really speak to all of the details of substitution, and augmentation, and all the policy variables here, but what I will say is that at least the technical potential for very significant amounts of augmentation of human labor, as well as substitution for human labor, seems pretty likely even in well under 10 years — but certainly within 10 years things will change a lot.

It seems to me that the vibe has shifted a bit. When I talk to people from the Bay Area and I give them the Washington or Wall Street economist view, to them I sound unbelievably gloomy and cautious.
But it seems the vibe has shifted, at least recently, to where a lot of people think that major advancements like superintelligence are further out than they previously thought — like we should be viewing AI as an important technology, but more like what we've seen before with the Internet and the PC.

It's hard for me to comment. It seems to me like there are multiple vibe shifts happening at different cadences and in different directions. It seems like several years ago there was more of a consensus that what people today would call AGI was decades away or more, and it does seem like that kind of timeframe has shifted closer to the present. There's still debate between the “next few years” crowd versus the “more like 10 years” crowd. But that is a much narrower range than we saw several years ago when there was a wider range of expert opinions. People who used to be seen as on one end of the spectrum, for example, Gary Marcus and François Chollet, who were seen as kind of the skeptics of AI progress, even they now are saying, “Oh, it's like maybe 10 years or so, maybe five years for very high levels of capability.” So I think there's been some compression in that respect. That's one thing that's going on.

There's also a way in which people are starting to think less abstractly and more concretely about the applications of AI and seeing it less as this kind of mysterious thing that might happen suddenly and thinking of it more as incremental, more as something that requires some work to apply in various parts of the economy, that there's some friction associated with.

Both of these aren't inconsistent, they're just kind of different vibe shifts that are happening. So getting back to the question of is this just a normal technology, I would say that, at the very least, it does seem faster in some respects than some other technological changes that we've seen. So I think ChatGPT's adoption going from zero to double-digit percentages of use across many professions in the US, in a matter of a high number of months or a low number of years, is quite stark.

Would you be surprised if, five years from now, we viewed AI as something much more important than just another incremental technological advance, something far more transformative than technologies that have come before?

No, I wouldn't be surprised by that at all. If I understand your question correctly, my baseline expectation is that it will be seen as one of the most important technologies ever. I'm not sure that there's a standard consensus on how to rate the internet versus electricity, et cetera, but it does seem to me like it's of the same caliber as electricity in the sense of essentially converting one kind of energy into various kinds of useful economic work.
Similarly, AI is converting various types of electricity into cognitive work, and I think that's a huge deal.

Maximizing the benefits (7:21)

There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications.

However you want to define society or the aspect of society that you focus on — government, businesses, individuals — are we collectively doing what we need to do to fully exploit the upsides of this technology over the next half-decade to decade, as well as minimizing potential downsides?

I think we are not, and something I sometimes find frustrating about the way the debate plays out is that there's this zero-sum mentality of doomers versus boomers — a term that Karen Hao uses — and this idea that there's an inherent tension between mitigating the risks and maximizing the benefits. There are some tensions, but I don't think that we are on the Pareto frontier, so to speak, of those issues.

Right now, I think there's a lot of value being left on the table in terms of fairly low-cost risk mitigations. There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications. I write a lot about the risks, but I'm also very interested in maximizing the upside, so I'll give just one example: protecting critical infrastructure and improving the cybersecurity of various parts of critical infrastructure in the US. Hospitals, for example, get attacked with ransomware all the time, and this causes real harm to patients because machines get bricked, essentially, and they have one or two people on the IT team, and they're kind of overwhelmed by these, not even always that sophisticated, but perhaps more-sophisticated hackers. That's a huge problem. It matters for national security in addition to patients' lives, and it matters for national security in the sense that this is something that China and Russia and others could hold at risk in the context of a war. They could threaten this critical infrastructure as part of a bargaining strategy.

And I don't think that there's that much interest among the Big Tech companies in helping hospitals have a better automated cybersecurity engineer helper — because there aren't that many hospital administrators. . . I'm not sure if it would meet the technical definition of market failure, but it's at least a national security failure in that it's a kind of fragmented market. There's a water plant here, a hospital administrator there.

I recently put out a report with the Institute for Progress arguing that philanthropists and government could put some additional gasoline in the tank of cybersecurity by incentivizing innovation that specifically helps these under-resourced defenders more so than the usual customers of cybersecurity companies like Fortune 500 companies.

I'm confident that companies and entrepreneurs will figure out how to extract value from AI and create new products and new services, barring any regulatory slowdowns. But since you mentioned low-hanging fruit, what are some examples of that?

I would say that transparency is one of the areas where a lot of AI policy experts seem to be in pretty strong agreement.
Obviously there is still some debate and disagreement about the details of what should be required, but just to give you some illustration, it is typical for the leading AI companies, sometimes called frontier AI companies, to put out some kind of documentation about the safety steps that they've taken. It's typical for them to say, here's our safety strategy and here's some evidence that we're following this strategy. This includes things like assessing whether their systems can be used for cyber-attacks, assessing whether they could be used to create biological weapons, or assessing the extent to which they make up facts and make mistakes, but state them very confidently in a way that could pose risks to users of the technology.

That tends to be totally voluntary, and there started to be some momentum as a result of various voluntary commitments that were made in recent years. But as the technology gets more high-stakes, and there's more cutthroat competition, and there are maybe more lawsuits where companies might be tempted to retreat a bit in terms of the information that they share, I think that things could kind of backslide, and at the very least not advance as far as I would like from the perspective of making sure that there's sharing of lessons learned from one company to another, as well as making sure that investors and users of the technology can make informed decisions about, okay, do I purchase the services of OpenAI, or Google, or Anthropic. Making these informed decisions and making informed capital investments seems to require transparency to some degree.

This is something that is actively being debated in a few contexts. For example, in California there's a bill, called SB-53, that has that and a few other things. But in general, we're at a bit of a fork in the road in terms of how certain regulations will be implemented, such as in the EU. Is it going to become actually an adaptive, nimble approach to risk mitigation, or is it going to become a compliance checklist that just kind of makes Big Four accounting firms richer? So there are those questions, and then there are just “does the law pass or not?” kind of questions here.

Recognizing the risks (13:23)

. . . I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing . . .

In my probably overly simplistic way of looking at it, I think of two buckets. In one, you have issues like: are these things biased? Are they giving misinformation? Are they interacting with young people in a way that's bad for their mental health? And I feel like we have a lot of rules and we have a huge legal system for liability that can probably handle those.

Then, in the other bucket, are what may, for the moment, be science-fictional kinds of existential risks, whether it's machines taking over or just being able to give humans the ability to do very bad things in a way we couldn't before. Within that second bucket, I think, it sort of needs to be flexible.
Right now, I'm pretty happy with voluntary standards, and market discipline, and maybe the government creating some benchmarks, but I can imagine the technology advancing to where the voluntary aspect seems less viable and there might need to be actual mandates about transparency, or testing, or red teaming, or whatever you want to call it.

I think that's a reasonable distinction, in the sense that there are risks at different scales. There are some that are these large-scale catastrophic risks that might have lower likelihood but higher magnitude of impact. And then there are things that are, I would say, literally happening millions of times a day, like ChatGPT making up citations to articles that don't exist, or Claude saying that it fixed your code when actually it didn't fix the code and the user's too lazy to notice, and so forth.

So there are these different kinds of risks. I personally don't make a super strong distinction between them in terms of different time horizons, precisely because I think things are going so quickly. I think science fiction is becoming science fact very much sooner than many people expected. But in any case, I think similar logic applies: let's make sure that there's transparency even if we don't know exactly what the right risk thresholds are, and we want to allow a fair degree of flexibility in what measures companies take.

It seems good that they share what they're doing and, in my opinion, ideally go another step further and allow third parties to audit their practices and make sure that if they say, “Well, we did a rigorous test for hallucination or something like that,” that that's actually true. And so that's what I would like to see for both what you might call the mundane and the more science fiction risks. But again, I think it's kind of hard to say how things will play out, and different people have different perspectives on these things. I happen to be on the more aggressive end of the spectrum.

I am worried about the spread of the apocalyptic, high-risk AI narrative that we heard so much about when ChatGPT first rolled out. That seems to have quieted, but I worry about it ramping up again and stifling innovation in an attempt to reduce risk.

These are very fair concerns, and I will say that there are lots of bills and laws out there that have, in fact, slowed down innovation in certain contexts. The EU, I think, has gone too far in some areas around social media platforms. I do think at least some of the state bills that have been floated would lead to a lot of red tape and burdens to small businesses. I personally think this is avoidable.

There are going to be mistakes. I don't want to be misleading about how high-quality policymakers' understanding of some of these issues is. There will be mistakes, even in cases where, for example, in California there was a kind of blue-ribbon commission of AI experts producing a report over several months, and then that directly informing legislation, and a lot of industry back and forth and negotiation over the details. I would say SB-53 is probably the high-water mark of fairly stakeholder/expert-informed legislation.
Even there, I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing, such as companies retrenching and holding back information that makes it hard for the field as a whole to tackle these issues.

I'll just make one more point, which is about adapting to the compliance capability of different companies (how rich they are, how expensive the models they're training are). I think that is a key factor in the legislation that I tend to be more sympathetic to. So just to make a contrast, there's a bill in Colorado that was kind of one-size-fits-all, regulate all the kinds of algorithms, and that, I think, is very burdensome to small businesses. Whereas something like SB-53 says, okay, if you can afford to train an AI system for $100 million, you can probably afford to put out a dozen pages about your safety and security practices.

Pacing true progress (19:04)

. . . some people . . . kind of wanted to say, “Well, things are slowing down.” But in my opinion, if you look at more objective measures of progress . . . there's quite rapid progress happening still.

Hopefully Grok did not create this tweet of yours, but if it did, well, there we go. You won't have to answer it, but I just want to understand what you meant by it: “A lot of AI safety people really, really want to find evidence that we have a lot of time for AGI.” What does that mean?

What I was trying to get at is that — and I guess this is not necessarily just AI safety people, but I sometimes kind of try to poke at people in my social network who I'm often on the same side of, but also try to be a friendly critic to, and that includes people who are working on AI safety. I think there's a common tendency to kind of grasp at what I would consider straws when reading papers and interpreting product launches in a way that kind of suggests, well, we've hit a wall, AI is slowing down, this was a flop, who cares?

I'm doing my kind of maybe uncharitable psychoanalysis. What I was getting at is that I think one reason why some people might be tempted to do that is that it makes things seem easier and less scary: “Well, we don't have to worry about really powerful AI-enabled cyber-attacks for another five years, or biological weapons for another two years, or whatever.” Maybe, maybe not.

I think the specific example that sparked that was GPT-5, where there were a lot of people who, in my opinion, were reading the tea leaves in a particular way and missing important parts of the context. For example, GPT-5 wasn't a much larger or more expensive-to-train model than GPT-4, which may be surprising given the name.
And I think OpenAI did kind of screw up the naming and gave people the wrong impression, but from my perspective, there was nothing particularly surprising. To some people, though, it was kind of a flop, and they kind of wanted to say, “Well, things are slowing down.” But in my opinion, if you look at more objective measures of progress, like scores on math and coding, and the reduction in the rate of hallucinations, and solving chemistry and biology problems, and designing new chips, and so forth, there's quite rapid progress happening still.

Considering national security (21:39)

I want to avoid a scenario like the Cuban Missile Crisis, or ways in which that could have been much worse than the actual Cuban Missile Crisis, happening as a result of AI and AGI.

I'm not sure if you're familiar with some of the work being done by former Google CEO Eric Schmidt, who's been doing a lot of work on national security and AI. His work doesn't use the word AGI, but it talks about AI certainly smart enough to have certain capabilities which our national security establishment should be aware of and should be planning for, and those capabilities, I think to most people, would seem sort of science fictional: being able to launch incredibly sophisticated cyber-attacks, or to improve itself, or to create some other sort of capabilities. And from that, I'm like, whether or not you think that's possible, to me, the odds of that being possible are not zero, and if they're not zero, some bit of the bandwidth of the Pentagon should be thinking about that. I mean, is that sensible?

Yeah, it's totally sensible. I'm not going to argue with you there. In fact, I've done some collaboration with the RAND Corporation, which has a pretty heavy investment in what they call the geopolitics of AGI, studying what the scenarios are, including AI and AGI being used to produce “wonder weapons” and super-weapons of some kind.

Basically, I think this is super important, and in fact, I have a paper coming out pretty soon that was written in collaboration with some folks there. I won't spoil all the details, but if you search “Miles Brundage US China,” you'll see some things that I've discussed there. And basically my perspective is we need to strike a balance between competing vigorously on the commercial side with countries like China and Russia on AI — more so China, Russia is less of a threat on the commercial side, at least — and also making sure that we're fielding national security applications of AI in a responsible way, but also recognizing that there are these ways in which things could spiral out of control in a scenario with totally unbridled competition. I want to avoid a scenario like the Cuban Missile Crisis, or ways in which that could have been much worse than the actual Cuban Missile Crisis, happening as a result of AI and AGI.

If you think that, again, the odds are not zero that a technology which is fast-evolving, that we have no previous experience with because it's fast-evolving, could create the kinds of doomsday scenarios that there are new books out about, that people are talking about.
And so if you think, okay, not a zero percent chance that could happen, but it is kind of a zero percent chance that we're going to stop AI, smash the GPUs. As someone who cares about policy, are you just hoping for the best, or are the kinds of things we've already talked about — transparency, testing, maybe that testing becoming mandatory at some point — enough?

It's hard to say what's enough, and I agree that . . . I don't know if I give it zero; maybe if there's some major pandemic caused by AI and then Xi Jinping and Trump get together and say, okay, this is getting out of control, maybe things could change. But yeah, it does seem like continued investment in, and large-scale deployment of, AI is the most likely scenario.

Generally, the way that I see this playing out is that there are kind of three pillars of a solution. There's some degree of safety and security standards. Maybe we won't agree on everything, but we should at least be able to agree that you don't want to lose control of your AI system, you don't want it to get stolen, you don't want a $10 billion AI system to be stolen by a $10 million-scale hacking effort. So I think there are sensible standards you can come up with around safety and security. I think you can have evidence produced or required that companies are following these things. That includes transparency.

It also includes, I would say, third-party auditing, where there are third parties checking the claims and making sure that these standards are being followed. And then you need some incentives to actually participate in this regime and follow it. And I think the incentives part is tricky, particularly at an international scale. What incentive does China have to play ball, other than that obviously they don't want their AI to kill them or overthrow their government or whatever? So where exactly are the interests aligned or not? Is there some kind of system of export control policies or sanctions or something that would drive compliance, or is there some other approach? I think that's the tricky part, but to me, those are kind of the rough outlines of a solution. Maybe that's enough, but I think right now it's not even really clear what the rough rules of the road are, or who's playing by the rules, and we're relying a lot on goodwill and voluntary reporting. I think we could do better, but is that enough? That's harder to say.

Grounds for optimism and pessimism (27:15)

. . . it seems to me like there is at least some room for learning from experience . . . So in that sense, I'm more optimistic. . . I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table.

Did your experience at OpenAI make you more optimistic or more worried that, when we look back 10 years from now, AI will have, overall on net, made the world a better place?

I am sorry to not give you a simpler answer here, and maybe I should sit on this one and come up with a clearer, more optimistic or more pessimistic answer, but I'll give you two updates in different directions, and I think they're not totally inconsistent.

I would say that I have gotten more optimistic about the solvability of the problem in the following sense.
I think that things were very fuzzy five, 10 years ago, and when I joined OpenAI almost seven years ago now, there was a lot of concern that it could kind of come about suddenly — that one day you don't have AI, the next day you have AGI, and then on the third day you have artificial superintelligence and so forth.

But we don't live to see the fourth day.

Exactly, and so it seems more gradual to me now, and I think that is a good thing. It also means that — and this is where I differ from some of the more extreme voices in terms of shutting it all down — it seems to me like there is at least some room for learning from experience, iterating, kind of taking the lessons from GPT-5 and translating them into GPT-6, rather than it being something that we have to get 100 percent right on the first shot with no room for error. So in that sense, I'm more optimistic.

I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table. It seems to me like, as I said, we're not on the Pareto frontier. It seems like there are pretty straightforward things that could be done for a very small fraction of, say, the US federal budget, or a very small fraction of billionaires' personal philanthropy, or whatever, that in my opinion would dramatically reduce the likelihood of an AI-enabled pandemic or various other issues, and would dramatically increase the benefits of AI.

It's been a bit sad to continuously see those opportunities being neglected. I hope that as AI becomes more of a salient issue to more people, and people start to appreciate, okay, this is a real thing, the benefits are real, the risks are real, there will be more of an efficient policy market and people will take those opportunities. But right now it seems pretty inefficient to me. That's where my pessimism comes from. It's not that it's unsolvable; it's just, okay, from a political economy and kind of public-choice perspective, are the policymakers going to make the right decisions?

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
In Empire of AI, Karen Hao traces the rise of OpenAI, the company behind ChatGPT. The author draws a comprehensive picture from the founding through 2025, portrays the people involved, and shows OpenAI's striking transformation from a non-profit into a for-profit company. Along the way, she touches on troubling aspects of the currently dominant approach to developing artificial intelligence: the concentration of power, the ideology, and, not least, the exploitation of natural and human resources.
Guest: Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society. She writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series. She is the author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.
This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com

John is a journalist, media consultant, old friend, and George W Bush's cousin. He's worked for NBC News as a political analyst and the Boston Globe as a columnist. In 2016, he launched a morning brief called “News Items” for News Corp, and later it became the Wall Street Journal CEO Council's morning newsletter. News Items jumped to Substack in 2019 (and Dishheads can subscribe now for 33% off). John also co-hosts two podcasts — one with Joe Klein (“Night Owls”) and the other with Richard Haass (“Alternate Shots”).

For two clips of our convo — on the nail-biting Bush-Gore race that John was involved in, and Trump's mental decline — head to our YouTube page.

Other topics: born and raised in Concord; his political awakening at 15 watching the whole '68 Dem convention with a fever in bed; his fascination with Nixon; the Southern Strategy; Garry Wills' book Nixon Agonistes; Kevin Phillips and populism; Nixon parallels with Trump — except shame; Roger Ailes starting Fox News; Matt Drudge; John's uncle HW Bush; HW as a person; the contrasts with his son Dubya; the trauma of 9/11; Iraq as a war of choice — the wrong one; Rumsfeld; Jeb Bush in 2016; the AI race; Geoffrey Hinton (“the godfather of AI”); John's optimism about China; tension with Taiwan; Israel's settlements; Bibi's humiliation of Obama; Huckabee as ambassador; the tariff case going to SCOTUS; the Senate caving to Trump; McConnell failing to bar Trump; the genius of his demagoguery; the Kirk assassination; Brexit; immigration under Boris; Reform's newfound dominance; the huge protest in London last week; Kirk's popularity in Europe; the AfD; Trump's war on speech; a Trump-Mamdani showdown; Epstein and Peter Mandelson; and grasping for reasons to be cheerful.

Browse the Dishcast archive for an episode you might enjoy. Coming up: Wesley Yang on the trans question, Michael Wolff on Epstein, Karen Hao on artificial intelligence, Katie Herzog on drinking your way sober, Michel Paradis on Ike, Charles Murray on finding religion, David Ignatius on the Trump effect globally, and Arthur Brooks on the science of happiness. As always, please send any guest recs, dissents, and other comments to dish@andrewsullivan.com.
For this week's very special episode on Rational Security's 10th anniversary, Scott sat down with a slew of co-hosts emeritus, each of whom brought their own topic to discuss.

Shane Harris flagged the connections between online radicalization and the young men behind many recent public acts of gun violence, including the alleged perpetrator of the Charlie Kirk killing;

Benjamin Wittes insisted we are STILL not talking enough about Russian drone incursions and other forms of gray zone warfare in Eastern Europe;

Quinta Jurecic brought some statistics about the success (or not) of the Trump administration's federalization of law enforcement in D.C. (and elsewhere) that are worth contemplating; and

Alan Rozenshtein asked how it can still be true that no one but him seems to care about the Trump administration blatantly disregarding the TikTok ban.

In object lessons, Shane is basking in the glory of HBO's “Somebody Somewhere”—partly because his buddy Jeff Hiller just won an Emmy for his role in it. Ben praises Adam Boehler—and even Donald Trump—for helping secure Elizabeth Tsurkov's release. Alan is definitely not using this opportunity to use Pacific Rim for his object lesson—really—instead recommending the fantasy mystery “The Tainted Cup,” by Robert Jackson Bennett. Scott takes us into orbit with Samantha Harvey's “Orbital,” a lyrical meditation on life and reflection in space. And Quinta dives into the future with “Empire of AI,” by Karen Hao, a deeply reported look at OpenAI and its role in shaping the technology's trajectory.

Last call to help us celebrate Lawfare's 15th anniversary! Get your tickets now to join us this Friday, 9/19, to hear from some of your favorite Lawfare people past and present, take a look back on the key moments that have shaped our first 15 years, and get a sneak peek into what's coming next.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare.
“My book is called Empire of AI because I'm trying to articulate this argument and illustrate that these companies operate exactly like empires of old. I highlight four features that essentially encapsulate the three things you read. However, I started talking about it in a different way after writing the book.

The four features are: they lay claim to resources that are not their own, which is the centralization of resources; they exploit an extraordinary amount of labor, both in the development of the technology and the fact that they're producing labor-automating technologies that then suppress workers' ability to bargain for better rights; they monopolize knowledge production, which comes when they centralize talent.”

In this episode of the Speaking Out of Place podcast, Professor David Palumbo-Liu talks with investigative journalist Karen Hao. She explains that OpenAI is anything but “open”—very early on, it left behind that marketing tag to become increasingly closed and elitist. Her massive study, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, had a rather different subtitle in its UK edition: Inside the Reckless Race for Total Domination. She fleshes out the overlap between these two points of emphasis. Hao argues that in general, the AI mission “centralizes talent around a grand ambition” and “centralizes capital and other resources while eliminating roadblocks, regulation, and dissent.” All the while, “the mission remains so vague that it can be interpreted and reinterpreted to direct the centralization of talent, capital, resources, however the centralizer wants.” Karen explains that she chose the word “empire” precisely to indicate the colonial nature of AI's domination: the tremendous damage this enterprise does to the poor, to racial and ethnic minorities, and to the Global South in general in terms of minds, bodies, the environment, natural resources, and any notion of democracy. This is a discussion everyone should be part of.

Karen Hao is a bestselling author and award-winning reporter covering the impacts of artificial intelligence on society. She was the first journalist to profile OpenAI and wrote a book, Empire of AI, about the company and its global implications, which became an instant New York Times bestseller. She writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series, a program that trains thousands of journalists worldwide on how to cover AI. She was formerly a reporter for the Wall Street Journal, covering American and Chinese tech companies, and a senior editor for AI at MIT Technology Review. Her work is regularly taught in universities and cited by governments. She has received numerous accolades for her coverage, including an American Humanist Media Award, an American National Magazine Award for Journalists Under 30, and the TIME100 AI. She received her Bachelor of Science in mechanical engineering from MIT.

www.palumbo-liu.com
https://speakingoutofplace.com
Bluesky: @palumboliu.bsky.social
Instagram: @speaking_out_of_place
This is the edited version of a livestream we held with Renata Gomes on August 29, 2025, about the book Empire of AI by Karen Hao (affiliate link), and about other topics related to the often negative impact of artificial intelligence on our lives.

Renata Gomes (also known as Renata Games; on social media: Bluesky and Instagram) is a professor at CECULT, the Center for Culture, Languages, and Applied Technologies of the Federal University of the Recôncavo of Bahia (UFRB), in Santo Amaro, Bahia. She holds a degree in Communication with a concentration in Journalism from the Federal University of Ceará, and a master's and a doctorate in Communication and Semiotics from PUC-SP. She serves as vice-coordinator of the Interdisciplinary Bachelor's Program in Culture, Languages, and Applied Technologies (link 1, link 2).

Lattes
LinkedIn
E-mail: renatafgomes@ufrb.edu.br
Blog
Bluesky
Instagram

Interviewers
Adolfo Neto is a professor at UTFPR Curitiba and coordinator of the Rede Emílias de Podcasts.
Andréia Vasconcelos Farias is a teacher in the Paraná state school network and a member of the Emílias Podcast team (Instagram).

Editor: Adolfo Neto
This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com

Jill is a writer and scholar. She's a professor of American history at Harvard, a professor of law at Harvard Law, and a staff writer at The New Yorker. She's also the host of the podcast “X-Man: The Elon Musk Origin Story.” Her many books include These Truths: A History of the United States (which I reviewed for the NYT in 2017) and her new one, We the People: A History of the U.S. Constitution — out in a few days; pre-order now.

For two clips of our convo — on FDR's efforts to bypass the Constitution, and the worst amendment we've had — head to our YouTube page.

Other topics: raised by public school teachers near Worcester; dad a WWII vet; her struggles with Catholicism as a teen (and my fundamentalism then); joining ROTC; the origins of the Constitution; the Enlightenment; Locke; Montesquieu; the lame Articles of Confederation; the 1776 declaration; Paine's Common Sense; Madison; Jefferson; Hamilton; Adams; New England town meetings; state constitutional conventions; little-known conventions by women and blacks; the big convention in Philly and its secrecy; the slave trade; the Three-Fifths Clause; amendment provisions; worries over mob rule; the Electoral College; jury duty; property requirements for voting; the Jacksonian Era; Tocqueville; the Civil War; Woodrow Wilson; the direct election of senators; James Montgomery Beck (“Mr Constitution”); FDR's court-packing plan; Eleanor's activism; Prohibition and its repeal; the Warren Court; Scalia; executive orders under Trump; and gauging the intent of the Founders.

Browse the Dishcast archive for an episode you might enjoy. Coming up: John Ellis on Trump's mental health, Michael Wolff on Epstein, Karen Hao on artificial intelligence, Katie Herzog on drinking your way sober, Michel Paradis on Eisenhower, Charles Murray on religion, David Ignatius on the Trump effect globally, and Arthur Brooks on the science of happiness. As always, please send any guest recs, dissents, and other comments to dish@andrewsullivan.com.
Hasan sits down with reporter Karen Hao to talk about her new book Empire of AI: how AI companies are taking over the world, what can be done about it, and when they might start bombing brown countries.
In this episode of Speaking Out of Place, investigative journalist Karen Hao explains that OpenAI is anything but “open”—very early on, it left behind that marketing tag to become increasingly closed and elitist. Her massive study, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, had a rather different subtitle in its UK edition: “Inside the Reckless Race for Total Domination.” In our conversation we flesh out the overlap between these two points of emphasis. Hao argues that in general the AI mission “centralizes talent around a grand ambition” and “centralizes capital and other resources while eliminating roadblocks, regulation, and dissent.” All the while, “the mission remains so vague that it can be interpreted and reinterpreted to direct the centralization of talent, capital, resources however the centralizer wants.” Karen explains that she chose the word “empire” precisely to indicate the colonial nature of AI's domination: the tremendous damage this enterprise does to the poor, to racial and ethnic minorities, and to the Global South in general in terms of minds, bodies, the environment, natural resources, and any notion of democracy. This is a discussion everyone should be part of.

Karen Hao is a bestselling author and award-winning reporter covering the impacts of artificial intelligence on society. She was the first journalist to profile OpenAI and wrote a book, EMPIRE OF AI, about the company and its global implications, which became an instant New York Times bestseller. She writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series, a program training thousands of journalists around the world on how to cover AI. She was formerly a reporter for the Wall Street Journal, covering American and Chinese tech companies, and a senior editor for AI at MIT Technology Review. Her work is regularly taught in universities and cited by governments. She has received numerous accolades for her coverage, including an American Humanist Media Award, an American National Magazine Award for Journalists Under 30, and the TIME100 AI. She received her Bachelor of Science in mechanical engineering from MIT.
This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com. Niall is one of my oldest and dearest friends, stretching back to when we were both history majors and renegade rightists at Magdalen, Oxford. He is the Milbank Family Senior Fellow at the Hoover Institution and a senior faculty fellow of the Belfer Center for Science and International Affairs at Harvard. He's also the founder and managing director of Greenmantle LLC, an advisory firm. He's written 16 books, including Kissinger, 1923-1968: The Idealist and Doom: The Politics of Catastrophe (which we discussed on the pod in 2021), and he writes a column for The Free Press. For two clips of our convo — a historical view of Trump's authoritarianism, and the weakness of Putin toward Ukraine — head to our YouTube page. Other topics: attending Niall's 60th birthday party in Wales with an all-male choir; Covid; Cold War II; China's surprisingly potent tech surge; the race for semiconductors and AI; Taiwan; global fertility; Brexit; the explosion of migrants under Boris and Biden; the collapse of the Tories; Reform rising; Yes Minister; assimilation in the UK; grooming gangs; the failure of “crushing” sanctions on Russia; the war's shift toward drones; Putin embraced by Xi and Modi; Trump's charade in Alaska; debating Israel and Gaza; the strike on Iran; the Abraham Accords; the settlements; America becoming less free; Trump's “emergencies”; National Guard in DC; the groveling of the Cabinet; the growth of executive power over many presidents; Trump's pardons; Kissinger; tariffs and McKinley; the coming showdown with SCOTUS; Jack Goldsmith's stellar work; Mamdani; Stephen Miller's fascism; the unseriousness of Hegseth; the gerrymandering crisis; the late republic in Rome; Tom Holland's Rubicon; Niall's X spat with Vance; Harvard's race discrimination; Biden re-electing Trump; wokeness; and South Park saving the republic. Browse the Dishcast archive for an episode you might enjoy. Coming up: Jill Lepore on the history of the Constitution, Karen Hao on artificial intelligence, Katie Herzog on drinking your way sober, Michel Paradis on Eisenhower, Charles Murray on religion, David Ignatius on the Trump effect globally, and Arthur Brooks on the science of happiness. As always, please send any guest recs, dissents, and other comments to dish@andrewsullivan.com.
Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation.
Karen Hao reveals OpenAI's secretive culture and early ambitions
OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power
Defining AGI: moving goalposts, internal rifts, and philosophy debates
OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives
Critiquing the AI industry's resource grabs and "AI imperialism"
How commercialization narrowed AI research and the dominance of transformers
China's AI threat as Silicon Valley's favorite justification, debunked
Karen Hao details her reporting process and the boardroom chaos at OpenAI
GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue
Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed
Harper Reed shares vibe coding workflows using Claude Code
AI commoditization — why all major models start to feel the same
Western vs. Chinese open-source models and global AI power shifts
Google antitrust ruling: AI's rise dissolves traditional search monopoly
"Algorithm movies" spark debate over art, entertainment, and AI's creative impact
Meta's AI talent grab backfires amid exits and cash-fueled drama
Anthropic's "historic" author settlement likely cements fair use for AI training
DIY facial recognition: citizen activists unmask ICE using AI tools
Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days"
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guest: Karen Hao
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: threatlocker.com/twit monarchmoney.com with code IM helixsleep.com/twit pantheon.io
Note to listeners: this episode first aired on June 20. The American journalist Karen Hao began her professional life at a Google start-up, but she quickly became a journalist specializing in artificial intelligence, covering the technology sector for the Wall Street Journal, MIT Technology Review, and the American magazine The Atlantic. She has just published “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI” (Penguin Press), the first major investigation into OpenAI, the nonprofit that gave rise to one of the most highly valued companies in the world. In a conversation with Sur le Fil, she recounts the behind-the-scenes story of that transformation and its very concrete consequences for the environment and for certain workers, notably those employed to train large AI models; the creation of OpenAI; and the merciless competition among tech giants to capture the grail, namely artificial general intelligence. Production: Michaëla Cancela-Kieffer. Voice-over: Catherine Triomphe. Sur le Fil is AFP's daily podcast. Have comments? Write to us at podcast@afp.com. You can also send us a voice note on WhatsApp at +33 6 79 77 38 45. If you enjoy the show, subscribe, tell others about us, and leave us plenty of stars on your favorite podcast platform to help more people discover the program. This podcast is subject to an opt-out clause: subject to the provisions of Article L.122-5-3.II of the French Intellectual Property Code, any access to or use (including, without limitation, reproduction, aggregation, and archiving) of the content of this podcast and its description, for any systematic or automated activity involving the retrieval, mining, extraction, aggregation, analysis, exploration, or collection of text, audio excerpts, and/or data, by means of any "robot," "bot," "spider," "scraper," or any other device, program, technique, tool, process, or method, for the purpose of creating, developing, training, testing, evaluating, or modifying software, algorithms, and machine learning/artificial intelligence models, and/or enabling their execution, or for any other purpose, without AFP's prior written authorization, is strictly prohibited. This provision of the general terms constitutes AFP's express objection within the meaning of Articles L.122-5-3.III and R.122-28 of the French Intellectual Property Code. Hosted by Acast. Visit acast.com/privacy for more information.
For months, we have been hearing from people all over the world who say they have been unfairly banned from their Facebook and Instagram accounts. Meta has, so far, said nothing publicly about what's going on. Plus, we chart the rise and fall of an American tech giant, and the author of Empire of AI, Karen Hao, tells us what she thinks the industry gets wrong.
Presenter: Shiona McCallum
Producer: Imran Rahman-Jones
Editor: Monica Soriano
(Image: An illustration of a mobile phone with barbed wire surrounding it. Credit: Getty Images)
In the space of a few short years, generative AI has exploded into our daily lives, impacting the way we learn, work and understand the world around us. OpenAI, the American artificial intelligence company cofounded by Sam Altman and Elon Musk in 2015, which runs ChatGPT, claims its non-profit "mission" is to ensure these systems "benefit all of humanity". And while the launch of ChatGPT has undoubtedly lightened the workload of many, engineer, journalist and AI expert Karen Hao says the AI race for world domination carries a huge human and environmental cost. In 2019, Hao spent three days embedded in the offices of OpenAI and discovered that this company, which claims to be transparent and to operate "for the good of humanity", was in fact highly secretive. In her bestselling book 'Empire of AI: Inside the reckless race for total domination', Hao warns that the world has entered a new and ominous age of empire, where a small handful of companies are writing the future of humanity. Today, on the In The News podcast: how the race for total AI domination is repeating the worst of human history. Karen Hao discusses the severe cost of the seemingly unstoppable growth of OpenAI. Presented by Sorcha Pollak. Produced by Aideen Finnegan. Hosted on Acast. See acast.com/privacy for more information.
Scripture Referenced
Genesis 1; Matthew 22:34-40; 1 Timothy 5:8; Ephesians 4
Books Referenced
The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip by Stephen Witt: https://www.amazon.com/Thinking-Machine-Jensen-Coveted-Microchip/dp/0593832698
The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future by Keach Hagey: https://www.amazon.com/Optimist-Altman-OpenAI-Invent-Future/dp/1324075961/
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao: https://www.amazon.com/Empire-AI-Dreams-Nightmares-Altmans/dp/0593657500
Links Referenced and Recommended
Episode 4 - Technology and a Human Future: https://www.gospelunderground.org/podcast/2017/9/29/episode-4-technology-and-our-human-future
Episode 165 - Just about a year ago - My Guy, AI: https://www.gospelunderground.org/podcast/2024/8/29/episode-165-my-guy-ai
Reasonable Faith Podcast, AI and the Argument from Contingency:
Part 1 - https://www.reasonablefaith.org/media/reasonable-faith-podcast/ai-and-the-argument-from-contingency-part-one
Part 2 - https://www.reasonablefaith.org/media/reasonable-faith-podcast/ai-and-the-argument-from-contingency-part-two
Alex O'Connor, 24-min video - I made ChatGPT believe in God (seriously): https://youtu.be/HdH8rNnvKT0?si=utyxZByjQe36WIt_
AI and Energy Usage - The Climate Footprint of Big Tech - MIT Technology Review: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
Nuclear Power Plants and AI: https://apnews.com/article/meta-facebook-constellation-energy-nuclear-ai-a2d5f60ee0ca9f44c183c58d1c05337c
Sam Altman and Cold Fusion: https://fortune.com/2025/05/07/nuclear-fusion-energy-ai-sam-altman-helion-pacific-commonwealth-timelines/
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task: https://arxiv.org/pdf/2506.08872v1
Knowing Facts? https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann
Relying on AI in Colonoscopies May Erode Clinicians' Skills - study this week on colonoscopy: https://www.medpagetoday.com/gastroenterology/coloncancer/116968
Jeff De Cagna is an association contrarian, foresight practitioner, futurist, and keynote speaker. He's the executive advisor for Foresight First LLC, whose governing orientation "prioritizes the future and challenges association and non-profit boards to question their orthodox beliefs." As of August 26, 2025, ChatGPT, a popular generative AI chatbot, has been out for 1,000 days. In those 1,000 days, the AI landscape has changed tremendously. While there have been positive developments as a result of the generative AI boom, Jeff and Dave take a closer look at how AI has impacted the world.
Links:
MIT AI Risk Repository: https://airisk.mit.edu/
Foresight First: https://foresightfirst.io/
Empire of AI by Karen Hao: https://www.penguinrandomhouse.com/books/743569
This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com. Scott is a war correspondent and author. His non-fiction books include Lawrence in Arabia, Fractured Lands, and The Quiet Americans, and his novels include Triage and Moonlight Hotel. He's also a contributing writer for the New York Times Magazine. His new book is King of Kings: The Iranian Revolution: A Story of Hubris, Delusion and Catastrophic Miscalculation. For two clips of our convo — on Jimmy Carter's debacle with the Shah, and the hero of the Iran hostage crisis — head to our YouTube page. Other topics: growing up in East Asia and traveling the world; his father the foreign service officer; their time in Iran not long before the revolution; Iran a “chew toy” between the British and Russian empires; the Shah's father's affinity for Nazi Germany; Mosaddegh's move to nationalize the oil; the 1953 coup; the police state under the Shah; having the world's 5th biggest military; the OPEC embargo; the rise of Khomeini and his exile; the missionary George Braswell and the mullahs; Carter's ambitious foreign policy; the US grossly overestimating the Shah; selling him arms; Kissinger; the cluelessness of the CIA; the prescience of Michael Metrinko; the Tabriz riots; students storming the US embassy; state murder under Khomeini dwarfing the Shah's; the bombing of Iran's nuke facilities; and Netanyahu playing into Hamas' hands. Browse the Dishcast archive for an episode you might enjoy. Coming up: a fun chat with Johann Hari, Jill Lepore on the history of the Constitution, Karen Hao on artificial intelligence, and Katie Herzog on drinking your way sober. Please send any guest recs, dissents, and other comments to dish@andrewsullivan.com.
Send in your questions for Actually's Ask Me Anything at this link: https://www.speakpipe.com/ActuallyPodcast In her recent book, Empire of AI, journalist Karen Hao describes OpenAI and the big players of modern AI by comparing them to the great empires of the past, which exploited people and resources from everywhere to enrich themselves. Riccardo discusses the book with Simone Pieranni, a Chora journalist and expert on the East and technology. From September 26 to 28 in Turin: Chora&Will Days, the first festival from Chora and Will. Find the program and how to take part at days.chorawill.com Sign the citizens' initiative bill calling for a law on voting away from one's place of residence: https://shor.by/GcvZ Learn more about your ad choices. Visit megaphone.fm/adchoices
Later this month, OpenAI is expected to release the latest version of ChatGPT, the groundbreaking AI chatbot that became the fastest-growing app in history when it was launched in 2022.
When Sam Altman first pitched an ambitious plan to develop artificial intelligence, he likened it to another world-changing, potentially world-destroying endeavor: the Manhattan Project, in which the U.S. raced to build an atomic bomb.
The sales pitch he made to Elon Musk worked. Altman was promised a billion dollars for the project, which was even given a name: OpenAI.
In a new book, “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI,” tech journalist Karen Hao chronicles the company's secretive and zealous pursuit of artificial general intelligence.
Today, Hao joins the show to pull back the curtain not only on the company's inner workings through its astronomical rise and very public controversies, but also on the very real human and environmental impacts it has had, all in the name of advancing its technology.
For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
I'm in Shaxi, a wonderful little town in the Dali Bai Autonomous Prefecture, and I was joined here by the Columbia economic historian Adam Tooze, who shared his thoughts on what he sees happening on the ground in China. Adam's been in China for the last month and reflects on his experiences learning about the country — and even attempting the language!
03:49 - The economic situation in China
10:42 - Patterns of consumption in China
14:38 - China's industrial policy and renewable energy
18:52 - China vs. the U.S. on renewables
26:15 - China's economic engagement with the Global South
33:13 - Beijing's strategic shift and Europe's rethinking
37:49 - The recent European Parliament paper
42:43 - Learning about China as an “Outsider”
51:31 - Adam's evolving views on China
59:30 - Paying it Forward
01:01:07 - Recommendations
Paying it Forward: Kyle Chan, Pekingology.
Recommendations: Adam: Caught by the Tide, Jia Zhangke (movie). Kaiser: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, Karen Hao (book); Vera, or Faith, Gary Shteyngart (book).
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode, Brynn is joined by court reporter and tech-savvy expert Matt Moss to explore one of the hottest topics in the legal world today—AI in court reporting. With widespread concern about artificial intelligence replacing human professionals, Matt brings clarity to the conversation, breaking down what's real, what's hype, and how court reporters can stay ahead of the curve.
You'll hear how Matt went from waiting tables to becoming a respected realtime reporter, how he relearned his theory mid-training, and why his curiosity for lifelong learning led him deep into the world of artificial intelligence. He also explains the crucial distinction between AI, AGI, and LLMs—and why understanding these terms matters.
This episode is essential for anyone who's ever wondered:
Will AI take over court reporting?
How do tools like ChatGPT actually work?
What's irreplaceable about a human court reporter?
Plus, Matt gives a sneak peek into his upcoming panel at the NCRA Convention and shares his favorite resources to become more tech-literate in today's fast-moving landscape.
It's hard to keep track of all the conversations about AI in the workplace. So don't feel bad if you find yourself getting overwhelmed or lost in it all.
I recently listened to an interview with Karen Hao, an award-winning journalist and author of the book Empire of AI. One thing she said that I wanted to share with you is:
“AI is such an interesting word because it's sort of like the word transportation in that you have bicycles, you have gas guzzling trucks, you have rocket ships. They're all forms of transportation, but they all serve different purposes and they have different cost-benefit trade-offs.”
It's a good reminder. When you're thinking about AI in the workplace, take a moment to clarify the conversation and what, specifically, people are talking about when they're talking about AI.
Listen to Karen Hao's interview on Spotify or Apple Podcasts.
What conversations are you having -- or not having -- about AI at work? Tell me in the comments, or message me on LI or my website.
**After the Episode**
Follow me on LinkedIn: https://www.linkedin.com/in/kimnicol/
Contact me for private coaching, and get support focused on who you are, your goals and your situation: https://kimnicol.com/
Join the waitlist for Communication Strategies: https://maven.com/kimnicol/communication-strategies
“What to the Slave Is the 4th of July?”: James Earl Jones Reads Frederick Douglass’s Historic Speech; “Empire of AI”: Karen Hao on How AI Is Threatening Democracy & Creating a New Colonial World; Journalist Karen Hao on Sam Altman, OpenAI & the “Quasi-Religious” Push for Artificial Intelligence
Silicon Valley has long proclaimed that its technologies would change the future. But to tech leaders like OpenAI executive Sam Altman, AI is not just a business; it's a matter of life and death. Where are the ethics of a company such as OpenAI as it transitions from nonprofit to for-profit? How have AI companies grown to wield such immense power and wealth? Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, joins David Rothkopf to discuss her recent book and the modern colonialism of AI tech giants. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S.. Additional information is available at the Department of Justice, Washington, DC. Looking for More from the DSR Network? Click Here: https://linktr.ee/deepstateradio Learn more about your ad choices. Visit megaphone.fm/adchoices
As AI begins to fundamentally alter our lives, we're in urgent need of expert voices who both understand how the technology works and are capable of scrutinising its effects. In her new book, Empire of AI, tech reporter and former Silicon Valley engineer Karen Hao debunks the myths that surround AI and exposes the true […]
Emily M. Bender & Alex Hanna share about their book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, on episode 576 of the Teaching in Higher Ed podcast.
Quotes from the episode:
“What's going on with the phrase artificial intelligence is not that it means something else than what we're using it to mean, it's that it doesn't have a proper referent in the world.” -Emily M. Bender
“There's a much broader range of people who can have opinions on AI.” -Alex Hanna
“The boosters say AI is a thing. It's inevitable, it's imminent, it's going to be super powerful, and it's going to solve all of our problems. And the doomers say AI is a thing, it's inevitable, it's imminent, it's going to be super powerful, and it's going to kill us all. And you can see that there's actually not a lot of daylight between those two positions, despite the discourse of saying these are two opposite ends of a spectrum.” -Emily M. Bender
“Teachers' working conditions are students' learning conditions.” -Alex Hanna
Resources:
The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by Emily M. Bender and Alex Hanna
Distributed AI Research Institute (DAIR)
The Princess Bride
Emily Tucker, Executive Director, Center on Privacy & Technology at Georgetown Law
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell
Emily M. Bender's website
How the right to education is undermined by AI, by Helen Beetham
How We are Not Using AI in the Classroom, by Sonja Drimmer & Christopher J. Nygren
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao
Paris Marx is joined by Karen Hao to discuss how Sam Altman's goal of scale at all costs has spawned a new empire founded on exploitation of people and the environment, resulting not only in the loss of valuable research into more inventive AI systems, but also in exacerbated data privacy issues, the erosion of intellectual property, and the perpetuation of surveillance capitalism.
Karen Hao is an award-winning journalist and the author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.
Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.
The podcast is made in partnership with The Nation. Production is by Kyla Hewson.
Also mentioned in this episode:
Karen was the first journalist to profile OpenAI.
Karen has reported on the environmental impacts and human costs of AI.
The New York Times reported on Why We're Unlikely to Get Artificial General Intelligence Anytime Soon.
Support the show
John is joined by journalist Karen Hao to discuss her new book, “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI,” and both the promise and the perils of the coming age of artificial intelligence. Hao explains how OpenAI went from being an altruistic nonprofit dedicated to ensuring that A.I. would “benefit all of humanity” to a burgeoning commercial colossus valued at north of $300 billion; how Altman wrested control of the company from his co-founder Elon Musk; why skepticism is warranted regarding the claims that superhuman A.I. is inevitable; and how that narrative, true or not, serves the economic and political interests of the cabal of tech bros who are A.I.'s most fervent boosters. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
National Guard units that arrived earlier in Los Angeles have engaged with protesters opposed to Donald Trump's immigration policies. They fired tear gas to try to force back the demonstrators. President Trump ordered their deployment after two days of confrontations between residents of a predominantly Hispanic area and local police.
Also in the programme: Inside the reckless race for total domination - tech journalist Karen Hao on her new book 'Empire of AI'; and Carlos Alcaraz has pulled off a triumphant comeback from two sets down to win the longest ever French Open tennis final in Paris against the world number one, Jannik Sinner of Italy.
(Photo: National Guard deployed by President Trump as anti-ICE protests continue in Los Angeles, USA - 08 Jun 2025. Credit: Caroline Brehman/EPA-EFE/Shutterstock)
OpenAI, the company behind ChatGPT, is one of the most famous and secretive companies in the world working to develop artificial general intelligence that would match or surpass the cognitive abilities of humans across every task. Investigative journalist Karen Hao joins Ali Rogin to discuss her new book, “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” which delves into the company. PBS News is supported by - https://www.pbs.org/newshour/about/funders
Extended interview with Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. The book documents the rise of OpenAI and how the AI industry is leading to a new form of colonialism.
Headlines for June 04, 2025; “Death Traps”: U.S.-Israeli Aid Scheme Paused in Gaza After 100+ Palestinians Killed While Waiting for Food; Greta Thunberg Speaks from Aid Ship Heading to Gaza Despite Israeli Threats: It’s My Moral Obligation; “Empire of AI”: Karen Hao on How AI Is Threatening Democracy & Creating a New Colonial World
There's a good chance that before November of 2022, you hadn't heard of tech nonprofit OpenAI or cofounder Sam Altman. But over the last few years, they've become household names with the explosive growth of the generative AI tool called ChatGPT. What's been going on behind the scenes at one of the most influential companies in history and what effect has this had on so many facets of our lives? Karen Hao is an award-winning journalist and the author of “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI” and has covered the impacts of artificial intelligence on society. She joins WITHpod to discuss the trajectory AI has been on, economic effects, whether or not she thinks the AI bubble will pop and more.
EXCLUSIVE NordVPN Deal ➼ https://nordvpn.com/adamconover Try it risk-free now with a 30-day money-back guarantee!
Silicon Valley has started treating AI like a religion. Literally. This week, Adam sits down with Karen Hao, author of EMPIRE OF AI: Dreams and Nightmares in Sam Altman's OpenAI, to talk about what it means for all of us when tech bros with infinite money think they're inventing god. Find Karen's book at factuallypod.com/books
--
SUPPORT THE SHOW ON PATREON: https://www.patreon.com/adamconover
SEE ADAM ON TOUR: https://www.adamconover.net/tourdates/
SUBSCRIBE to and RATE Factually! on:
» Apple Podcasts: https://podcasts.apple.com/us/podcast/factually-with-adam-conover/id1463460577
» Spotify: https://open.spotify.com/show/0fK8WJw4ffMc2NWydBlDyJ
About Headgum: Headgum is an LA & NY-based podcast network creating premium podcasts with the funniest, most engaging voices in comedy to achieve one goal: Making our audience and ourselves laugh. Listen to our shows at https://www.headgum.com.
» SUBSCRIBE to Headgum: https://www.youtube.com/c/HeadGum?sub_confirmation=1
» FOLLOW us on Twitter: http://twitter.com/headgum
» FOLLOW us on Instagram: https://instagram.com/headgum/
» FOLLOW us on TikTok: https://www.tiktok.com/@headgum
» Advertise on Factually! via Gumball.fm
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.