President Trump's speech before the General Assembly has sparked debate over its style and substance, raising questions about UN organizations that do not serve American interests. As we continue to foot its ever-growing bill, the United Nations system appears to be failing in peacekeeping and security. How did Trump's speech signal a shift in our relationship with the international organization? When will the 180-day review be released? And what should it say about long-awaited UN reform?
Brett D. Schaefer is a senior fellow at the American Enterprise Institute (AEI), where he focuses on multilateral treaties, peacekeeping, and the United Nations and international organizations. Before joining AEI, Mr. Schaefer was the Jay Kingham Senior Research Fellow in International Regulatory Affairs at the Heritage Foundation. Previously, he was a member of the United Nations Committee on Contributions and an expert on the UN Task Force for the United States Institute of Peace.
Read the transcript here.
Subscribe to our Substack here.
Today on Political Economy, I'm talking with Tobias Peter about housing: From homeownership rates to construction types, we go over the many factors that play into a healthy housing market and explore what is holding back US homeowners.Tobias is the codirector of the Housing Center at AEI. As a senior fellow, his research focuses on housing risk and mortgage markets. Tobias has testified before Congress and has contributed to major publications from the Wall Street Journal to Business Insider.
I'm thrilled to welcome Thomas Chatterton Williams to the podcast this week. Williams is a colleague of mine at AEI, a staff writer at The Atlantic, and the author of the provocative new book, Summer of Our Discontent: The Age of Certainty and the Demise of Discourse, which examines how the year 2020 broke American politics:
Taking aim at the ideology of critical race theory, the rise of an oppressive social media, the fall from Obama to Trump, and the twinned crises of COVID-19 and the murder of George Floyd, Williams documents the extent to which this transition has altered media, artistic creativity, education, employment, policing, and, most profoundly, the ambient language and culture we use to make sense of our lives.
Williams also decries how liberalism—the very foundation of an open and vibrant society—is in existential crisis, under assault from both the right and the left, especially in our predominantly networked, Internet-driven monoculture.
Please listen in and check out Williams's new book!
A transcript of this podcast is available on the post page on our website. Get full access to The Liberal Patriot at www.liberalpatriot.com/subscribe
My fellow pro-growth/progress/abundance Up Wingers,
Artificial intelligence may prove to be one of the most transformative technologies in history, but like any tool, its immense power for good comes with a unique array of risks, both large and small.
Today on Faster, Please! — The Podcast, I chat with Miles Brundage about extracting the most out of AI's potential while mitigating harms. We discuss the evolving expectations for AI development and how to reconcile with the technology's most daunting challenges.
Brundage is an AI policy researcher. He is a non-resident fellow at the Institute for Progress, and formerly held a number of senior roles at OpenAI. He is also the author of his own Substack.
In This Episode
* Setting expectations (1:18)
* Maximizing the benefits (7:21)
* Recognizing the risks (13:23)
* Pacing true progress (19:04)
* Considering national security (21:39)
* Grounds for optimism and pessimism (27:15)
Below is a lightly edited transcript of our conversation.
Setting expectations (1:18)
It seems to me like there are multiple vibe shifts happening at different cadences and in different directions.
Pethokoukis: Earlier this year I was moderating a discussion between an economist here at AEI and a CEO of a leading AI company, and when I asked each of them how AI might impact our lives, our economist said, "Well, I could imagine, for instance, a doctor's productivity increasing because AI could accurately and deeply translate and transcribe an appointment with a patient in a way that's far better than what's currently available." So that was his scenario. And then I asked the same question of the AI company CEO, who said, by contrast, "Well, I think within a decade, all human death will be optional thanks to AI-driven medical advances." On that rather broad spectrum — more efficient doctor appointments and immortality — how do you see the potential of this technology?
Brundage: It's a good question. I don't think those are necessarily mutually exclusive. I think, in general, AI can both augment productivity and substitute for human labor, and the ratio of those things is kind of hard to predict and might be very policy dependent and social-norm dependent. What I will say is that, in general, it seems to me like the pace of progress is very fast, and so both augmentation and substitution seem to be picking up steam.
It's kind of interesting watching the debate between AI researchers and economists, and I have a colleague who has said that the AI researchers sometimes underestimate the practical challenges in deployment at scale. Conversely, the economists sometimes underestimate just how quickly the technology is advancing. I think there's maybe some happy middle to be found, or perhaps one of the more extreme perspectives is true. Personally, I am not an economist, and I can't really speak to all of the details of substitution, and augmentation, and all the policy variables here, but what I will say is that at least the technical potential for very significant amounts of augmentation of human labor, as well as substitution for human labor, seems pretty likely in well under 10 years — but certainly within 10 years things will change a lot.
It seems to me that the vibe has shifted a bit. When I talk to people from the Bay Area and I give them the Washington or Wall Street economist view, to them I sound unbelievably gloomy and cautious.
But it seems the vibe has shifted, at least recently, to where a lot of people think that major advancements like superintelligence are further out than they previously thought — like we should be viewing AI as an important technology, but more like what we've seen before with the Internet and the PC.
It's hard for me to comment. It seems to me like there are multiple vibe shifts happening at different cadences and in different directions. It seems like several years ago there was more of a consensus that what people today would call AGI was decades away or more, and it does seem like that kind of timeframe has shifted closer to the present. There's still debate between the "next few years" crowd versus the "more like 10 years" crowd, but that is a much narrower range than we saw several years ago when there was a wider range of expert opinions. People who used to be seen as on one end of the spectrum, for example, Gary Marcus and François Chollet, who were seen as kind of the skeptics of AI progress, even they now are saying, "Oh, it's like maybe 10 years or so, maybe five years for very high levels of capability." So I think there's been some compression in that respect. That's one thing that's going on.
There's also a way in which people are starting to think less abstractly and more concretely about the applications of AI, seeing it less as this kind of mysterious thing that might happen suddenly and thinking of it more as incremental, more as something that requires some work to apply in various parts of the economy and that has some friction associated with it.
Both of these aren't inconsistent, they're just kind of different vibe shifts that are happening. So getting back to the question of whether this is just a normal technology, I would say that, at the very least, it does seem faster in some respects than some other technological changes that we've seen. I think ChatGPT's adoption going from zero to double-digit percentages of use across many professions in the US in a matter of months to a few years is quite stark.
Would you be surprised if, five years from now, we viewed AI as something much more important than just another incremental technological advance, something far more transformative than technologies that have come before?
No, I wouldn't be surprised by that at all. If I understand your question correctly, my baseline expectation is that it will be seen as one of the most important technologies ever. I'm not sure that there's a standard consensus on how to rate the internet versus electricity, et cetera, but it does seem to me like it's of the same caliber as electricity in the sense of essentially converting one kind of energy into various kinds of useful economic work.
Similarly, AI is converting various types of electricity into cognitive work, and I think that's a huge deal.
Maximizing the benefits (7:21)
There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications.
However you want to define society or the aspect of society that you focus on — government, businesses, individuals — are we collectively doing what we need to do to fully exploit the upsides of this technology over the next half-decade to decade, as well as minimizing potential downsides?
I think we are not, and something I sometimes find frustrating about the way the debate plays out is that there's sometimes this zero-sum mentality of doomers versus boomers — a term that Karen Hao uses — and this idea that there's this inherent tension between mitigating the risks and maximizing the benefits. There are some tensions, but I don't think that we are on the Pareto frontier, so to speak, of those issues.
Right now, I think there's a lot of value being left on the table in terms of fairly low-cost risk mitigations. There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications. I'll give just one example, because I write a lot about the risks, but I am also very interested in maximizing the upside: protecting critical infrastructure and improving the cybersecurity of various parts of critical infrastructure in the US. Hospitals, for example, get attacked with ransomware all the time, and this causes real harm to patients because machines get bricked, essentially, and they have one or two people on the IT team, and they're kind of overwhelmed by these hackers, who are not even always that sophisticated, but are perhaps more sophisticated than they are. That's a huge problem. It matters for patients' lives, and it matters for national security in the sense that this is something that China and Russia and others could hold at risk in the context of a war. They could threaten this critical infrastructure as part of a bargaining strategy.
And I don't think that there's that much interest among the Big Tech companies in helping hospitals have a better automated cybersecurity engineer helper — because there aren't that many hospital administrators . . . I'm not sure if it would meet the technical definition of market failure, but it's at least a national security failure in that it's a kind of fragmented market. There's a water plant here, a hospital administrator there.
I recently put out a report with the Institute for Progress arguing that philanthropists and government could put some additional gasoline in the tank of cybersecurity by incentivizing innovation that specifically helps these under-resourced defenders more so than the usual customers of cybersecurity companies like Fortune 500 companies.
I'm confident that companies and entrepreneurs will figure out how to extract value from AI and create new products and new services, barring any regulatory slowdowns. But since you mentioned low-hanging fruit, what are some examples of that?
I would say that transparency is one of the areas where a lot of AI policy experts seem to be in pretty strong agreement.
Obviously there is still some debate and disagreement about the details of what should be required, but just to give you some illustration, it is typical for the leading AI companies, sometimes called frontier AI companies, to put out some kind of documentation about the safety steps that they've taken. It's typical for them to say, here's our safety strategy, and here's some evidence that we're following this strategy. This includes things like assessing whether their systems can be used for cyber-attacks, assessing whether they could be used to create biological weapons, or assessing the extent to which they make up facts and make mistakes but state them very confidently in a way that could pose risks to users of the technology.
That tends to be totally voluntary, and there started to be some momentum as a result of various voluntary commitments that were made in recent years. But as the technology gets more high-stakes, and there's more cutthroat competition, and there are maybe more lawsuits, companies might be tempted to retreat a bit in terms of the information that they share. I think that things could kind of backslide, and at the very least not advance as far as I would like, from the perspective of making sure that there's sharing of lessons learned from one company to another, as well as making sure that investors and users of the technology can make informed decisions about, okay, do I purchase the services of OpenAI, or Google, or Anthropic. Making these informed decisions, and making informed capital investments, seems to require transparency to some degree.
This is something that is actively being debated in a few contexts. For example, in California there's a bill, SB-53, that has that and a few other things. But in general, we're at a bit of a fork in the road in terms of how certain regulations will be implemented, such as in the EU. Is it actually going to become an adaptive, nimble approach to risk mitigation, or is it going to become a compliance checklist that just kind of makes Big Four accounting firms richer? So there are those implementation questions, and then there are just "does the law pass or not?" kinds of questions here.
Recognizing the risks (13:23)
. . . I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kinds of market failures and incentive problems that are going to arise if we do nothing . . .
In my probably overly simplistic way of looking at it, I think of two buckets. In one bucket, you have issues like, are these things biased? Are they giving misinformation? Are they interacting with young people in a way that's bad for their mental health? And I feel like we have a lot of rules and we have a huge legal system for liability that can probably handle those.
Then, in the other bucket, are what may, for the moment, be science-fictional kinds of existential risks, whether it's machines taking over or just giving humans the ability to do very bad things in a way we couldn't before. Within that second bucket, I think, it sort of needs to be flexible.
Right now, I'm pretty happy with voluntary standards, and market discipline, and maybe the government creating some benchmarks, but I can imagine the technology advancing to where the voluntary aspect seems less viable and there might need to be actual mandates about transparency, or testing, or red teaming, or whatever you want to call it.
I think that's a reasonable distinction, in the sense that there are risks at different scales: there are some that are kind of these large-scale catastrophic risks that might have lower likelihood but higher magnitude of impact, and then there are things that are, I would say, literally happening millions of times a day, like ChatGPT making up citations to articles that don't exist, or Claude saying that it fixed your code when it actually didn't and the user's too lazy to notice, and so forth.
So there are these different kinds of risks. I personally don't make a super strong distinction between them in terms of different time horizons, precisely because I think things are going so quickly. I think science fiction is becoming science fact very much sooner than many people expected. But in any case, I think similar logic applies: let's make sure that there's transparency, even if we don't know exactly what the right risk thresholds are, and we want to allow a fair degree of flexibility in what measures companies take.
It seems good that they share what they're doing and, in my opinion, ideally go another step further and allow third parties to audit their practices and make sure that if they say, "Well, we did a rigorous test for hallucination or something like that," that that's actually true. And so that's what I would like to see for both what you might call the mundane and the more science fiction risks. But again, I think it's kind of hard to say how things will play out, and different people have different perspectives on these things. I happen to be on the more aggressive end of the spectrum.
I am worried about the spread of the apocalyptic, high-risk AI narrative that we heard so much about when ChatGPT first rolled out. That seems to have quieted, but I worry about it ramping up again and stifling innovation in an attempt to reduce risk.
These are very fair concerns, and I will say that there are lots of bills and laws out there that have, in fact, slowed down innovation in certain contexts. The EU, I think, has gone too far in some areas around social media platforms. I do think at least some of the state bills that have been floated would lead to a lot of red tape and burdens for small businesses. I personally think this is avoidable.
There are going to be mistakes. I don't want to be misleading about how high quality policymakers' understanding of some of these issues is. There will be mistakes, even in cases where, for example, in California there was a kind of blue-ribbon commission of AI experts producing a report over several months, and then that directly informing legislation, and a lot of industry back-and-forth and negotiation over the details. I would say that SB-53 is probably the high-water mark of fairly stakeholder- and expert-informed legislation.
Even there, I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kinds of market failures and incentive problems that are going to arise if we do nothing, such as companies retrenching and holding back information in a way that makes it hard for the field as a whole to tackle these issues.
I'll just make one more point, which is adapting to the compliance capability of different companies: How rich are they? How expensive are the models they're training? That, I think, is a key factor in the legislation that I tend to be more sympathetic to. So just to make a contrast, there's a bill in Colorado that was kind of one-size-fits-all, regulating all kinds of algorithms, and that, I think, is very burdensome to small businesses. I prefer something like SB-53, where it says, okay, if you can afford to train an AI system for $100 million, you can probably afford to put out a dozen pages about your safety and security practices.
Pacing true progress (19:04)
. . . some people . . . kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress . . . there's quite rapid progress happening still.
Hopefully Grok did not create this tweet of yours, but if it did, well, there we go. You won't have to answer it, but I just want to understand what you meant by it: "A lot of AI safety people really, really want to find evidence that we have a lot of time for AGI." What does that mean?
What I was trying to get at is that — and I guess this is not necessarily just AI safety people, but I sometimes kind of try to poke at people in my social network who I'm often on the same side of, but also try to be a friendly critic to, and that includes people who are working on AI safety. I think there's a common tendency to kind of grasp at what I would consider straws when reading papers and interpreting product launches in a way that kind of suggests, well, we've hit a wall, AI is slowing down, this was a flop, who cares?
I'm doing my kind of maybe uncharitable psychoanalysis. What I was getting at is that I think one reason why some people might be tempted to do that is that it makes things seem easier and less scary: "Well, we don't have to worry about really powerful AI-enabled cyber-attacks for another five years, or biological weapons for another two years, or whatever." Maybe, maybe not.
I think the specific example that sparked that was GPT-5, where there were a lot of people who, in my opinion, were reading the tea leaves in a particular way and missing important parts of the context. For example, GPT-5 wasn't a much larger or more expensive-to-train model than GPT-4, which may be surprising given the name.
And I think OpenAI did kind of screw up the naming and gave people the wrong impression, but from my perspective, there was nothing particularly surprising. To some people, though, it was kind of a flop, and they kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress like scores on math, and coding, and the reduction in the rate of hallucinations, and solving chemistry and biology problems, and designing new chips, and so forth, there's quite rapid progress happening still.
Considering national security (21:39)
I want to avoid a scenario like the Cuban Missile Crisis, or ways in which that could have been much worse than the actual Cuban Missile Crisis, happening as a result of AI and AGI.
I'm not sure if you're familiar with some of the work being done by former Google CEO Eric Schmidt, who's been doing a lot of work on national security and AI. His work doesn't use the word AGI, but it talks about AI certainly smart enough to have certain capabilities which our national security establishment should be aware of and should be planning for, and those capabilities, I think to most people, would seem sort of science fictional: being able to launch incredibly sophisticated cyber-attacks, or to improve itself, or to create some other sort of capabilities. And from that, I'm like, whether or not you think that's possible, to me, the odds of that being possible are not zero, and if they're not zero, some bit of the bandwidth of the Pentagon should be thinking about that. I mean, is that sensible?
Yeah, it's totally sensible. I'm not going to argue with you there. In fact, I've done some collaboration with the RAND Corporation, which has a pretty heavy investment in what they call the geopolitics of AGI and kind of studying what the scenarios are, including AI and AGI being used to produce "wonder weapons" and super-weapons of some kind.
Basically, I think this is super important, and in fact, I have a paper coming out pretty soon that was done in collaboration with some folks there. I won't spoil all the details, but if you search "Miles Brundage US China," you'll see some things that I've discussed there. And basically my perspective is we need to strike a balance between competing vigorously on the commercial side with countries like China and Russia on AI — more so China; Russia is less of a threat on the commercial side, at least — and also making sure that we're fielding national security applications of AI in a responsible way, but also recognizing that there are these ways in which things could spiral out of control in a scenario with totally unbridled competition. I want to avoid a scenario like the Cuban Missile Crisis, or ways in which that could have been much worse than the actual Cuban Missile Crisis, happening as a result of AI and AGI.
If you think, again, the odds are not zero that a technology which is fast-evolving, and that we have no previous experience with because it's fast-evolving, could create the kinds of doomsday scenarios that there are new books out about and that people are talking about.
And so if you think, okay, there's not a zero percent chance that could happen, but there is kind of a zero percent chance that we're going to stop AI and smash the GPUs, then as someone who cares about policy, are you just hoping for the best? Or are the kinds of things we've already talked about — transparency, testing, maybe that testing becoming mandatory at some point — enough?
It's hard to say what's enough, and I agree that . . . I don't know if I give it zero; maybe if there's some major pandemic caused by AI, and then Xi Jinping and Trump get together and say, okay, this is getting out of control, maybe things could change. But yeah, it does seem like continued investment in and large-scale deployment of AI is the most likely scenario.
Generally, the way that I see this playing out is that there are kind of three pillars of a solution. There's some degree of safety and security standards. Maybe we won't agree on everything, but we should at least be able to agree that you don't want to lose control of your AI system, you don't want it to get stolen, you don't want a $10 billion AI system to be stolen by a $10 million-scale hacking effort. So I think there are sensible standards you can come up with around safety and security. I think you can have evidence produced or required that companies are following these things. That includes transparency.
It also includes, I would say, third-party auditing, where there are third parties checking the claims and making sure that these standards are being followed. And then you need some incentives to actually participate in this regime and follow it. I think the incentives part is tricky, particularly at an international scale. What incentive does China have to play ball, other than that obviously they don't want to have their AI kill them or overthrow their government or whatever? So where exactly are the interests aligned or not? Is there some kind of system of export control policies or sanctions or something that would drive compliance, or is there some other approach? I think that's the tricky part, but to me, those are kind of the rough outlines of a solution. Maybe that's enough, but I think right now it's not even really clear what the rough rules of the road are or who's playing by the rules, and we're relying a lot on goodwill and voluntary reporting. I think we could do better, but is that enough? That's harder to say.
Grounds for optimism and pessimism (27:15)
. . . it seems to me like there is at least some room for learning from experience . . . So in that sense, I'm more optimistic. . . . I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table.
Did your experience at OpenAI make you more optimistic or more worried that, when we look back 10 years from now, AI will have, overall on net, made the world a better place?
I am sorry to not give you a simpler answer here, and maybe I should sit on this one and come up with a kind of clearer, more optimistic or more pessimistic answer, but I'll give you two updates in different directions, and I think they're not totally inconsistent.
I would say that I have gotten more optimistic about the solvability of the problem in the following sense.
I think that things were very fuzzy five, 10 years ago, and when I joined OpenAI almost seven years ago now, there was a lot of concern that it could kind of come about suddenly — that one day you don't have AI, the next day you have AGI, and then on the third day you have artificial superintelligence and so forth.
But we don't live to see the fourth day.
Exactly, and so it seems more gradual to me now, and I think that is a good thing. It also means that — and this is where I differ from some of the more extreme voices in terms of shutting it all down — it seems to me like there is at least some room for learning from experience, iterating, kind of taking the lessons from GPT-5 and translating them into GPT-6, rather than it being something that we have to get 100 percent right on the first shot with no room for error. So in that sense, I'm more optimistic.
I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table. It seems to me like, as I said, we're not on the Pareto frontier. It seems like there are pretty straightforward things that could be done for a very small fraction of, say, the US federal budget, or a very small fraction of billionaires' personal philanthropy or whatever, that, in my opinion, would dramatically reduce the likelihood of an AI-enabled pandemic or various other issues, and would dramatically increase the benefits of AI.
It's been a bit sad to continuously see those opportunities being neglected. I hope that as AI becomes more of a salient issue to more people, and people start to appreciate, okay, this is a real thing, the benefits are real, the risks are real, there will be more of a kind of efficient policy market and people will take those opportunities, but right now it seems pretty inefficient to me. That's where my pessimism comes from. It's not that it's unsolvable; it's just, okay, from a political economy and kind of public-choice perspective, are the policymakers going to make the right decisions?
On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
0:00 - Remembering Charlie Kirk
36:58 - James A. Gagliano, retired FBI supervisory special agent and a doctoral candidate in homeland security at St. John's University, looks at culture issues on college campuses - "They claim to be liberals but they don't want to hear your side"
57:38 - Reaction from the Left
01:18:25 - Patrick Maloney, Chicago FD Special Operations Chief – retired – shares his experience as one of the many CFD members who deployed to Ground Zero on September 12, 2001
01:31:11 - "There was no security"
01:35:40 - Ian Rowe, founder of Vertex Partnership Academies and senior fellow at AEI, on the dehumanizing narratives in schools that brand dissenting views as evil. Ian is also the author of Agency: The Four Point Plan (F.R.E.E.) for ALL Children to Overcome the Victimhood Narrative and Discover Their Pathway to Power
01:54:42 - RealClearPolitics' Susan Crabtree on security for the president, vice president, and conservative figures — and whether the Secret Service is up to the task. Susan is also the author of Fool's Gold: The Radicals, Con Artists, and Traitors Who Killed the California Dream and Now Threaten Us All
02:13:00 - Retired FBI Special Agent & Criminal Profiler from the Unabomber case, James Fitzgerald, breaks down the FBI investigation into Charlie Kirk's shooter. James is also the author of the book series A Journey to the Center of the Mind
02:25:22 - Sage Steele remembers Kirk
See omnystudio.com/listener for privacy information.
A lot has happened in education over the last couple of months. A new school year started for students across the country. State governors began announcing whether they would be opting in to the new federal tax credit scholarship program. Penny Schwinn, former Education Commissioner of Tennessee, withdrew her nomination to be Linda McMahon's number two at the Department of Education. A federal judge ruled that the Trump administration's shutdown of the Comprehensive Centers and Regional Educational Laboratories was unlawful. And the Trump administration continued waging its battles with elite universities.
On this episode of The Report Card, Nat Malkus discusses these developments, and more, with Andy Rotherham and Rick Hess.
Andrew J. Rotherham is a co-founder and senior partner at Bellwether and the author of the Eduwonk blog.
Frederick M. Hess is a senior fellow and the director of education policy studies at AEI.
Show Notes:
Why Did Penny Schwinn Withdraw Her Bid to Be No. 2 in Trump's Ed. Dept.?
The Greatest Trick Randi Weingarten Ever Pulled. Plus, What's the Freezing Temperature in Trump World? A Penny for Your Thoughts. Dems in Voucher Disarray.
Everyone's a Hypocrite
Restoring Free Inquiry on Campus
Tear Down This Wall: The Case for a Radical Overhaul of Teacher Certification
Breaking Down The New Federal School Choice Program With Shaka Mitchell
Commentary: Virginia Students Deserve Better. Close the 'Honesty Gap'
PragerU Teacher Qualification Test
Is Online Sports Betting a Risk to Public Health?
What's lost when we opt for the convenience of technology over the difficult, awkward, thrilling realities of human interaction? With so much tech to reach for, when do we lose the ability to interact with each other – or even understand ourselves? And with the AI revolution already afoot, is humanity just f*cked? Vanessa's back from mat leave and ready to dive into our tech-saturated, under-socialized world with Christine Rosen — senior fellow at AEI, co-host of the Commentary podcast, and author of The Extinction of Experience: Being Human in a Disembodied World.
On the agenda:
- Happy returns and the pleasure of ambiguity [0:00-5:55]
- Information isn't knowledge [5:56-9:03]
- The pleasure of ambiguity and the value of discomfort [9:04-20:47]
- How tech mediates and impairs us [20:48-47:38]
- Humanity in the age of AI [47:39-1:24:18]
Mentioned in this episode:
* Panic Porn and Trauma Creep (w/ Christine Rosen)
* The Extinction of Experience: Being Human in a Disembodied World
Uncertain Things is hosted and produced by Adaam James Levin-Areddy and Vanessa M. Quirk. For more doomsday thoughts, subscribe to: http://uncertain.substack.com. Get full access to Uncertain Things at uncertain.substack.com/subscribe
PREVIEW: MODI AND XI: Colleague Sadanand Dhume of AEI and WSJ comments on the long-standing distrust between India and China -- unlikely to be solved by photos of Modi with Xi and Putin. More. 1922 BOMBAY
The topic of this episode is, "Was James Madison the first majority leader?"
Both the Senate and the House of Representatives have a majority leader. At the time of recording this podcast, Republican John Thune of South Dakota is the Senate majority leader, and Republican Steve Scalise of Louisiana is the House majority leader.
Now, congressional scholars tend to argue that the majority leader emerged as a position in each chamber in 1899. Democrat Arthur P. Gorman of Maryland was the first Senate majority leader, and Republican Sereno Elisha Payne of New York was the first House majority leader.
My AEI colleague Jay Cost has a different view. He thinks the first majority leader appeared on Capitol Hill far earlier, and it was Virginia's James Madison. So, we're going to discuss that claim, which you can find in his recent piece, "Icons of Congress: James Madison — The First Majority Leader."
Dr. Jay Cost is the Gerald R. Ford nonresident senior fellow at AEI and the author of the superb book, James Madison: America's First Politician (2021), and other fine volumes on politics and history. Regular readers of UnderstandingCongress.org no doubt have seen Jay's various reports and essays, and if you have not seen them, do have a look.
Click here to read the full transcript.
Chris is joined by his AEI colleague Thomas Chatterton Williams, whose latest book Summer of Our Discontent: The Age of Certainty and the Demise of Discourse was published earlier this month. The two discuss Thomas's analysis of the events and ideas that led to the protests, riots, and all-around madness of the summer of 2020; […]
Preview: Delhi-DC: Colleague Sadanand Dhume of AEI outlines a remedy for the present friction between PM Modi and POTUS Trump. More. 1865 KOLKATA
0:30 - Rhode Island Assistant AG Devon Hogan Flanagan arrested for trespassing
13:56 - Brian Glenn (Real America's Voice and MTG beau), Trump on Zelensky suit
35:47 - Border/migrants/deportations
59:52 - Justin Logan, director of defense and foreign policy studies at the Cato Institute, says it's time to come to the realization that one more round of sanctions isn't going to do anything - "it is very difficult to bring Russia to its knees" Follow Justin on X @JustinTLogan
01:14:13 - In-depth History with Frank from Arlington Heights
01:17:30 - Christina Bobb, former Marine and Trump attorney - now takes on government corruption as an attorney with Judicial Watch, shares details from her new book Defiant: Inside the Mar-a-Lago Raid and the Left's Ongoing Lawfare. Defiant is available Sept 9 - preorder your copy today
01:36:21 - Founder and Executive Editor of Wirepoints, Mark Glennon, reacts to the required mental health screenings coming to Illinois schools: "stop telling our kids they're mentally ill - teach them to read and write, that's what we need" Get Mark's latest at wirepoints.org
01:52:25 - Trump on getting rid of mail-in ballots
02:12:26 - Benjamin Zycher, senior fellow at AEI, shrugs off the rise in "climate lawfare" — noting nearly every case has already lost in court.
See omnystudio.com/listener for privacy information.
Today on Political Economy, I talk with Mackenzie Eaglen about the Pentagon's evolving strategy to confront today's national defense challenges. Mackenzie and I take a look at the military doctrine of recent administrations compared to that of today. We discuss America's state of preparedness, the changing defense-industrial base, and the role of automation.Eaglen is a senior fellow here at AEI where her research focuses on defense strategy, budgets, and readiness. She is a member of the Commission on the Future of the Navy and is one of 12 members of the US Army War College Board of Visitors. She serves on the US Army Science Board, and was a staff member on both the National Defense Strategy Commission and the National Defense Panel.
Ask Me How I Know: Multifamily Investor Stories of Struggle to Success
Rest isn't a luxury you earn — it's the rhythm you were made for. In this episode, we dismantle hustle culture and reclaim rest as a spiritual, identity-rooted recalibration for High Capacity Humans.
If rest feels unfamiliar, unsafe, or like something you have to earn — this episode is for you.
In today's recalibration, Julie Holly speaks to the High Capacity Human who's learned to associate rest with weakness, laziness, or falling behind. If your worth has ever been tied to your work, rest probably hasn't felt like safety. But what if it's actually where identity gets restored?
Through client stories and the powerful example of Arthur Brooks — former AEI president turned happiness scholar — we'll explore why real rest is not passive, but powerful. It's not what you earn after performance — it's what you return to when you remember who you are.
In this episode, you'll learn:
Why rest often feels unsafe for high achievers
How hustle culture rewires our nervous system
What Arthur Brooks' life shift teaches us about identity over legacy
A practical recalibration to begin trusting stillness again
Kicking off our annual What the Hell's summer book series, Zack Cooper discusses his new book, Tides of Fortune: The Rise and Decline of Great Militaries (Yale University Press, 2025). How will the United States and China evolve militarily in the years ahead? Many experts believe the answer to this question is largely unknowable. But in his book, Zack Cooper argues that the American and Chinese militaries are following a well-trodden path. For centuries, the world's most powerful militaries have adhered to a remarkably consistent pattern of behavior, determined largely by their leaders' perceptions of relative power shifts. WTH is China on this path? And importantly, WTH is the US?
Zack Cooper is a senior fellow at the American Enterprise Institute, where he studies US strategy in Asia, including alliance dynamics and US-China competition. He also teaches at Princeton University and serves as chair of the board of the Open Technology Fund. Before joining AEI, Dr. Cooper was the senior fellow for Asian security at the Center for Strategic and International Studies (CSIS).
Find Tides of Fortune: The Rise and Decline of Great Militaries here.
Find the transcript here.
AEI's Sadanand Dhume joins the podcast to discuss Zohran Mamdani's ideological origins, why government stores are not a fresh, new idea, Indian democracy, poverty, capitalism, and how Bangladesh went its own way.
The topic of this episode is, "Does Congress's power to declare war mean anything?"
In June of 2025, President Donald J. Trump directed US aircraft to drop 30,000-pound bombs on nuclear facilities in Iran. Some legislators in Congress and some media complained that this was a violation of the US Constitution. They note that Article I, Section 8 declares, "Congress shall have the power to declare war." That same article of the Constitution also empowers the legislature to "provide for the common defense."
So, was the President's action constitutional or not? And does Congress's power to declare war mean anything?
To help us think through these questions I have with me my AEI colleague, Gary Schmitt. He is the author of many books and articles on American government and he has written extensively on legislative and presidential war-making.
Yascha Mounk and Thomas Chatterton Williams explore what the summer of 2020 showed about America. Thomas Chatterton Williams is a staff writer at The Atlantic and the author of Losing My Cool, Self-Portrait in Black and White, and Summer of Our Discontent. He is a visiting professor of humanities and senior fellow at the Hannah Arendt Center at Bard College, a 2022 Guggenheim fellow, and a visiting fellow at AEI. In this week's conversation, Yascha Mounk and Thomas Chatterton Williams discuss why the summer of 2020 played out as it did, the subsequent backlash, and why ideas core to the 2020 protests have now been quietly abandoned. Podcast production by Jack Shields and Leonora Barclay. Connect with us! Spotify | Apple | Google X: @Yascha_Mounk & @JoinPersuasion YouTube: Yascha Mounk, Persuasion LinkedIn: Persuasion Community Learn more about your ad choices. Visit megaphone.fm/adchoices
This week on Breaking Battlegrounds, Chuck and Sam are joined by Congressman Dusty Johnson of South Dakota to discuss his latest bills, including the No DOT Funds for Sanctuary Cities Act, the FASTER Act, and legislation to protect women's sports at military academies. Johnson also shares insights from his committee work on Agriculture, Infrastructure, and the China Select Committee and explains why he's running for governor. Next, AEI's Daniel Buck dives into the broken world of American education, from Harvard's "Queering Education" course to why no one actually likes high expectations, laying out how ideology has replaced academics and what real reform could look like. Finally, Congressman Andy Biggs joins us in studio to talk about federalism in the Trump era, what Congress must prioritize before year's end, and why he's running to replace Katie Hobbs as Arizona's governor. And as always, stay tuned for Kiley's Corner, where she discusses the Devil's Den double homicide and what one furious woman did after catching her boyfriend cheating. Don't miss this packed episode!
Subscribe at BreakingBattlegrounds.Vote or wherever you get your podcasts to never miss an episode.
www.breakingbattlegrounds.vote
Twitter: www.twitter.com/Breaking_Battle
Facebook: www.facebook.com/breakingbattlegrounds
Instagram: www.instagram.com/breakingbattlegrounds
LinkedIn: www.linkedin.com/company/breakingbattlegrounds
Truth Social: https://truthsocial.com/@breakingbattlegrounds
Show sponsors:
Santa Has A Podcast - This episode of Breaking Battlegrounds is brought to you by Santa Has a Podcast — a show for the whole family filled with kindness challenges, North Pole stories, elf updates, and a sprinkle of Christmas magic all year long. Listen now at SantaHasAPodcast.com.
Invest Yrefy - investyrefy.com
Old Glory Depot - Support American jobs while standing up for your values. OldGloryDepot.com brings you conservative pride on premium, made-in-USA gear. Don't settle—wear your patriotism proudly. Learn more at: OldGloryDepot.com
Dot Vote - With a .VOTE website, you ensure your political campaign stands out among the competition while simplifying how you reach voters. Learn more at: dotvote.vote
4Freedom Mobile - Experience true freedom with 4Freedom Mobile, the exclusive provider offering nationwide coverage on all three major US networks (Verizon, AT&T, and T-Mobile) with just one SIM card. Our service not only connects you but also shields you from data collection by network operators, social media platforms, government agencies, and more. Use code 'Battleground' to get your first month for $9 and save $10 a month every month after. Learn more at: 4FreedomMobile.com
About our guests:
Dusty Johnson brings an energetic and optimistic style to Washington as South Dakota's lone voice in the U.S. House of Representatives. An outspoken leader on issues related to border security, countering China, and welfare reform, he serves on the Select Committee on China, Agriculture Committee, and Transportation and Infrastructure Committee. He also chairs the Republican Main Street Caucus, a group of 80 solutions-focused conservatives. Prior to being elected to Congress, he served as chief of staff to the Governor and as vice president of an engineering firm specializing in rural telecommunications. Dusty lives in Mitchell with his wife and three sons.
Daniel Buck is a research fellow at the American Enterprise Institute (AEI), director of the Conservative Education Reform Network (CERN), and an affiliate of AEI's James Q. Wilson Program in K–12 Education Studies, where his work focuses on K–12 education, charter schooling, curriculum reform, and school safety and discipline. Before joining AEI, Mr. Buck was a senior fellow at the Thomas B. Fordham Institute, an assistant principal at Lake County Classical Academy, and a classroom teacher at Hope Christian Schools, Holy Spirit Middle School, and Green Bay Area Public Schools. His work has appeared in the popular press, including The Wall Street Journal, National Affairs, and National Review. Mr. Buck is the author of What Is Wrong with Our Schools? (2022). He has a master's degree and a bachelor's degree from the University of Wisconsin–Madison. You can follow him on X @MrDanielBuck.
Congressman Andy Biggs is an Arizona native and is currently serving his third term in the U.S. House of Representatives, representing Arizona's Fifth District. He lives in Gilbert with his wife of 40 years, Cindy. They have six children and seven grandchildren. Congressman Biggs received his bachelor's degree in Asian Studies from Brigham Young University; his M.A. in Political Science from Arizona State University; and his J.D. degree from the University of Arizona. He is a retired attorney who has been licensed to practice law in Arizona, Washington, and New Mexico. Before being elected to Congress, Congressman Biggs served in the Arizona Legislature for 14 years – the last four as the Arizona Senate President. Congressman Biggs is a member of the House Judiciary and Oversight and Reform committees. He is chairman of the House Freedom Caucus, co-chair of the Border Security Caucus, co-chair of the War Powers Caucus, and Chief Regulatory Reform Officer of the Western Caucus. Congressman Biggs has a lifetime rating of 100% with the Club for Growth, a 98% lifetime score with FreedomWorks, a 95% lifetime score with Heritage Action, a 100% rating in the 116th Congress from National Right to Life, and a 99% career grade from NumbersUSA.
The Arizona Republic named Congressman Biggs as one of its "10 Arizona people you'll want to watch in 2019," arguing that "Biggs makes the public case for the conservative position and often in defense of the Trump administration. He's very good at it. His advocacy tends to be well-reasoned and persuasive, not inflammatory...To keep an eye on what congressional conservatives are thinking and advocating, Biggs is increasingly one to watch."
biggsforarizona.com
Get full access to Breaking Battlegrounds at breakingbattlegrounds.substack.com/subscribe
Seattle's low-rise multifamily zones have produced more than 20,000 townhomes over the past 30 years. Tobias Peter discusses the impacts on affordability, homeownership, and more — including lessons for other cities.
Show notes:
Peter, T., Pinto, E., & Tracy, J. (2025). Low-Rise Multifamily and Housing Supply: A Case Study of Seattle. Journal of Housing Economics, 102082.
The full catalog of AEI Housing Supply Case Studies.
The Urban Institute study on upzoning effectiveness: Stacy, C., Davis, C., Freemark, Y. S., Lo, L., MacDonald, G., Zheng, V., & Pendall, R. (2023). Land-use reforms and housing costs: Does allowing for increased density lead to greater affordability? Urban Studies, 60(14), 2919-2940.
AEI's review and critique of the Urban Institute study: Peter, T., Tracy, J., & Pinto, E. (2024). Exposing Severe Methodological Gaps: A Critique of the Urban Institute's Panel Study on Land Use Reforms. American Enterprise Institute.
Episode 77 of UCLA Housing Voice: Upzoning with Strings Attached with Jacob Krimmel and Maxence Valentin.
Join Matt Lewis and AEI senior fellow Christopher Scalia as they dive into the new William F. Buckley biography, Brian Wilson's musical legacy, and Scalia's book, '13 Novels Conservatives Will Love (but Probably Haven't Read).' https://www.amazon.com/Novels-Conservatives-Will-Probably-Havent/dp/1510782397
Discover insights on conservatism, culture, and literature in this engaging podcast. Perfect for fans of political history and literary fiction.
Support "Matt Lewis & The News" at Patreon: https://www.patreon.com/mattlewis
Follow Matt Lewis & Cut Through the Noise:
Facebook: https://www.facebook.com/MattLewisDC
Twitter: https://twitter.com/mattklewis
Instagram: https://www.instagram.com/mattklewis/
YouTube: https://www.youtube.com/channel/UCVhSMpjOzydlnxm5TDcYn0A
– Who is Matt Lewis? –
Matt K. Lewis is a political commentator and the author of Filthy Rich Politicians.
Buy Matt's book: https://www.amazon.com/Filthy-Rich-Politicians-Creatures-Ruling-Class/dp/1546004416
Copyright © 2025, BBL & BWL, LLC
Stories are the way we communicate our values, explore complex ideas, and learn to empathize with those who fundamentally differ from ourselves.Christopher Scalia's most recent book, 13 Novels Conservatives Will Love (but Probably Haven't Read), delves into the particular benefit conservatives may find in literature they likely hadn't considered.Today on Political Economy, I talk with Chris about the unique role of novels in the development of strong morals, leadership, and sense of self.Chris is a senior fellow in the Social, Cultural, and Constitutional Studies department here at AEI. He previously served as director of AEI's Academic Programs department. Chris is a former professor of 18th- and early 19th-century British literature at the University of Virginia's College at Wise. He is the coeditor of On Faith: Lessons from an American Believer, and Scalia Speaks: Reflections on Law, Faith, and Life Well Lived.
A lot has happened in education over the last few weeks. Among other things, Congress passed a national school choice program and reshaped the student loan system. The Justice Department pressured the University of Virginia's president to step down. And the Trump administration began withholding nearly seven billion dollars in education funds that were set to go out by the beginning of July.
On this episode of The Report Card, Nat Malkus discusses these developments, and more, with Andy Rotherham and Rick Hess.
Note: Since this episode was recorded, twenty-four states have sued the Trump administration for withholding education funds, and the Supreme Court blocked a May order ruling that the Department of Education must reinstate over one thousand employees who were fired earlier in the year.
Andrew J. Rotherham is a co-founder and senior partner at Bellwether and the author of the Eduwonk blog.
Frederick M. Hess is a senior fellow and the director of education policy studies at AEI.
Show Notes:
The Impoundment Wars, Begun They Have. Plus, Wait, What Just Happened at UVA?
Elon Musk's embrace of President Trump and his campaign marked a pivotal moment in the 2024 presidential election. Musk was eventually appointed to head the newly established Department of Government Efficiency (DOGE), where he was tasked with cutting federal spending and reducing the national debt. DOGE moved quickly and decisively, triggering lawsuits and further enraging Trump's critics. Although Musk has since left the Trump administration and experienced a very public fallout with President Trump, DOGE continues to operate and make an impact. Matthew Continetti, Director of Domestic Policy Studies at the American Enterprise Institute, joined FOX News Rundown host Jessica Rosenthal to discuss DOGE, highlighting where it was effective in cutting waste, fraud, and abuse, and where it fell short of the expectations set by Musk and the administration. Continetti, who is featured in FOX Nation's new documentary "DOGE vs. DC," also weighs in on the public spat between Musk and the President, as well as the challenges politicians face when addressing America's debt seriously. We often have to cut interviews short during the week, but we thought you might like to hear the full interview. Today on Fox News Rundown Extra, we will share our entire interview with AEI's Matthew Continetti on the legacy of DOGE. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Today on Political Economy, I'm talking with Edward Glaeser about the problem with American housing supply and the many hurdles to building affordable homes. Ed and I look at the past century of urban and suburban construction and the attitudes and policies that have held back the US housing market. Ed is the chair of the economics department at Harvard University, where he has been a professor since 1992. He is also a visiting senior fellow here at AEI, where his research focuses on urban economic policy. His most recent co-authored paper, “America's Housing Supply Problem: The Closing of the Suburban Frontier?” was published by the National Bureau of Economic Research.
We're on theme this week! As Americans head toward the celebration of another free and independent trip around the sun, Henry sits down with AEI's Karlyn Bowman to discuss the latest findings on how citizens of different stripes feel about their country and their sentiments about being Americans, along with their handling of flags and familiarity […]
On this episode of Future of Freedom, host Scot Bertram is joined by two guests with different viewpoints concerning when the President needs the approval of Congress to engage in military action. First on the show is John Yoo, the Emanuel Heller Professor of Law at the University of California at Berkeley, Senior Research Fellow at the Civitas Institute, University of Texas at Austin, and a Nonresident Senior Fellow at the American Enterprise Institute. Later, we hear from Charles C. W. Cooke, Senior Writer at National Review and host of The Charles C. W. Cooke Podcast. You can find AEI on X @AEI and Charles at @CharlesCWCooke.
Disclaimer: Portions of this episode experienced audio challenges and are of varying quality. Unintelligible sections were edited out. In this episode of No Brainer, Geoff Livingston and Greg Verdino discuss the impact of AI on workforce displacement with special guest Brent Orrell, senior fellow at the American Enterprise Institute. They explore the challenges and opportunities posed by AI, how it affects different sectors, and the need for policy planning to support displaced workers. Brent, Greg, and Geoff weigh the validity of headline-grabbing outlier statements about dramatic AI workforce impacts. They then discuss Brent's upcoming paper on AI's impact on the broader workforce, “Deskilling the Knowledge Economy,” which will be released this week, including its potential policy recommendations. Finally, the three conclude with a conversation about the challenges facing the AI market.
Chapters
00:00 Intro
02:15 AI and Workforce Impacts
05:57 Upskilling and Personal Responsibility
08:49 Future of Jobs and AI
12:40 Policy and Economic Implications
22:42 Challenges in AI Adoption
About Brent
Brent on AEI.org - https://www.aei.org/profile/brent-orrell/
Brent on LinkedIn - https://www.linkedin.com/in/brent-orrell-b503617/
Brent Orrell is a senior fellow at the American Enterprise Institute (AEI), specializing in job training and workforce development with a special focus on disconnected and disadvantaged populations, including youth, justice-involved individuals, veterans, and neurodivergent persons. His recent work has focused on the workforce opportunities and challenges resulting from generative AI and automation, as well as strategies for improving economic mobility in rural, redeveloping, and non-metropolitan areas throughout America. Brent has spearheaded AEI's involvement with the Workforce Futures Initiative, in collaboration with the Brookings Institution and the Harvard Kennedy School, which has produced multiple reports, working group sessions, and interest from communities across the US. He has written, coauthored, and edited multiple reports for AEI, and he frequently contributes to the popular press, including The Bulwark, Deseret News, The Dispatch, Law and Liberty, The Hill, and RealClearPolicy.
About AEI
The American Enterprise Institute is a public policy think tank dedicated to defending human dignity, expanding human potential, and building a freer and safer world. The work of its scholars and staff advances ideas rooted in the belief in democracy, free enterprise, American strength and global leadership, solidarity with those at the periphery of our society, and a pluralistic, entrepreneurial culture. Learn more about your ad choices. Visit megaphone.fm/adchoices
About one month ago, the House passed the One Big Beautiful Bill Act, a massive bill aimed at advancing President Trump's domestic policy agenda. Now, the bill is with the Senate. Included in the bill are huge changes to student lending. In particular, the One Big Beautiful Bill Act would make drastic changes to loan limits, repayment plans, and the rules for which programs are eligible to participate in the student loan program. What is the rationale behind these changes? How would these changes affect students and schools? And will the One Big Beautiful Bill Act become law? On this episode of The Report Card, Nat Malkus discusses these questions, and more, with Preston Cooper. Preston Cooper is a senior fellow at AEI, where he studies higher education policy. He also serves on the Board of Visitors for George Mason University.
Show Notes:
Senate Embraces “Do No Harm” for Higher Education
The Senate's Higher Education Reforms Are Strong (But Could Be Stronger)
How The “One Big Beautiful Bill Act” Would Hold Colleges Accountable For Outcomes
Of course this weekend’s Big Weekend Pod is all about Israel’s strikes on Iran and whether President Trump should direct the American military to join in the attempt to smash Iran’s nuclear weapons and ballistic missile programs. Hugh’s guests include Jim Geraghty of National Review, AEI’s Matt Continetti (who is also with Commentary and The Free Press), Ben Domenech of the Spectator and Fox News, and Eli Lake of The Free Press – Eli’s new “Breaking History” podcast episode on the Iranian nuclear program is not to be missed.See omnystudio.com/listener for privacy information.
You remember your fourth grade history textbook: The British Empire unfairly taxed the American colonies. Tea was dumped in Boston Harbor. Colonists refused taxation without representation. Therefore, the American Revolution was driven by economics, right? Well, maybe not. Today on Political Economy, I'm talking with Deirdre McCloskey about the core ideas that drove the Revolution. We explore American capitalism and the idea of equal opportunity as America grows closer to its 250th birthday. Deirdre is a senior fellow at the Cato Institute. She is also a distinguished professor emerita of economics and history at the University of Illinois at Chicago, as well as a professor emerita of English and communication. She is the author of some two dozen books, including the Bourgeois trilogy, and has a wonderful article, “Economic Causes and Consequences of the American Revolution,” published in AEI's recent book, Capitalism and the American Revolution, part of our America at 250 series.
On this episode of Future of Freedom, host Scot Bertram is joined by two guests with different viewpoints about zoning laws and America's housing supply. First on the show is Tobias Peter, a senior fellow at AEI and the codirector of the American Enterprise Institute's Housing Center. Later, we hear from Judge Glock, director of research and a senior fellow at the Manhattan Institute and a contributing editor at City Journal. You can find Tobias on X @TobiasPeterAEI and Judge at @JudgeGlock.
Today on Political Economy, I'm talking with Andrew Biggs about why policymakers, the media, and most Americans are convinced of a retirement crisis that Biggs argues . . . doesn't exist. Andrew and I discuss why this misperception persists and where the real flaws are in the American retirement system. Andrew is a senior fellow here at AEI, where he researches Social Security reform, public and private sector compensation, and state and local government pensions. Prior to AEI, Biggs was principal deputy commissioner of the Social Security Administration. In 2005, he served as the associate director of the White House National Economic Council. He is also the author of the new book, The Real Retirement Crisis: Why (Almost) Everything You Know About the US Retirement System Is Wrong.
The Kremlin has been using freelancers to carry out dirty deeds across Europe with increasing frequency — and those freelancers can be anyone. The strategy is as sinister as it is effective. It's also a law enforcement nightmare. But do our governments have the will to tackle the issue, and the leadership qualities that will be required to fully mobilise resources and be frank with electorates?
----------
Elisabeth Braw is a senior fellow at the Atlantic Council. She is also a columnist with Foreign Policy, where she writes on national security and the globalised economy. Before joining AEI, Elisabeth was a Senior Research Fellow at RUSI, where she led the Modern Deterrence project. She is published in a wide range of publications, including Politico, The Times, and the Wall Street Journal. Elisabeth is also the author of highly regarded books, including Goodbye Globalization: The Return of a Divided World.
----------
LINKS:
https://twitter.com/elisabethbraw
https://www.linkedin.com/in/elisabethbraw/
https://rusi.org/people/braw
https://www.aei.org/profile/elisabeth-braw/
https://www.europeanleadershipnetwork.org/person/elisabeth-braw/
https://foreignpolicy.com/author/elisabeth-braw/
https://reutersinstitute.politics.ox.ac.uk/people/elisabeth-braw
https://cepa.org/author/elisabeth-braw/
----------
ARTICLES:
https://www.politico.eu/article/gig-model-russian-subversion-nightmare-western-intelligence-shopping/
https://www.politico.com/news/magazine/2022/01/16/russia-ukraine-gray-zone-warfare-autocrats-democracy-527022
https://www.ft.com/content/0ac9e1a9-2aad-47d9-83fb-4839e9b31b33
https://www.thetimes.co.uk/article/china-is-master-of-grey-zone-aggression-t6z2khp69
https://www.prospectmagazine.co.uk/politics/60291/create-a-psychological-defence-agency-to-prebunk-fake-news
https://www.aei.org/podcast/elisabeth-braw-on-gray-zone-warfare/
----------
BOOKS:
God's Spies: The Stasi's Cold War Espionage (2019)
The Defender's Dilemma: Identifying and Deterring Gray-zone Aggression (2022)
Goodbye Globalization: The Return of a Divided World (2024)
----------
SUMMER FUNDRAISERS
NAFO & Silicon Curtain community - Let's help 5th SAB together: https://www.help99.co/patches/nafo-silicon-curtain-community
We are teaming up with the NAFO 69th Sniffing Brigade to provide the 2nd Assault Battalion of 5th SAB with a pickup truck that they need for their missions. With your donation, you're not just sending a truck — you're standing with Ukraine. https://www.help99.co/patches/nafo-silicon-curtain-community
Why NAFO Trucks Matter: Ukrainian soldiers know the immense value of our NAFO trucks and buses. These vehicles are carefully selected, produced between 2010 and 2017, ensuring reliability for harsh frontline terrain. Each truck is capable of driving at least 20,000 km (12,500 miles) without major technical issues, making them a lifeline for soldiers in combat zones.
In total, we are looking to raise an initial 19,500 EUR in order to buy 1 x NAFO truck 2.0. Who is getting the aid? 5 SAB, 2nd Assault Battalion, UAV operators. https://www.help99.co/patches/nafo-silicon-curtain-community
----------
Car for Ukraine has once again joined forces with a group of influencers, creators, and news observers during this summer. Sunshine here serves as a metaphor: the trucks are sunshine for our warriors, bringing them to where they need to be and out from the places they shouldn't be.
https://car4ukraine.com/campaigns/summer-sunshine-silicon-curtain
This time, we focus on the 6th Detachment of HUR, 93rd Alcatraz, 3rd Assault Brigade, MLRS systems, and more.
https://car4ukraine.com/campaigns/summer-sunshine-silicon-curtain
- bring soldiers to the positions
- protect them with armor
- deploy troops with drones to the positions
----------
In this special episode, the poet and critic Dana Gioia delivers a talk titled “Conservatives and Culture: A Failure of Imagination.” Recorded as a part of AEI's American Dream Lecture Series, Gioia's talk is an important assessment of why the right abdicated the arts, the disastrous consequences of that withdrawal—and how conservatives can reclaim the […]
On the sixty-second episode of the Constitutionalist, Ben, Shane, and Matthew discuss the Mayflower Compact and its implications for American political life as one of the nation's earliest constitutional compacts. We want to hear from you! Constitutionalistpod@gmail.com The Constitutionalist is proud to be sponsored by the Jack Miller Center for Teaching America's Founding Principles and History. For the last twenty years, JMC has been working to preserve and promote that tradition through a variety of programs at the college and K-12 levels. Through their American Political Tradition Project, JMC has partnered with more than 1,000 scholars at over 300 college campuses across the country, especially through their annual Summer Institutes for graduate students and recent PhDs. The Jack Miller Center is also working with thousands of K-12 educators across the country to help them better understand America's founding principles and history and teach them effectively, to better educate the next generation of citizens. JMC has provided thousands of hours of professional development for teachers all over the country, reaching millions of students with improved civic learning. If you care about American education and civic responsibility, you'll want to check out their work, which focuses on reorienting our institutions of learning around America's founding principles. To learn more or get involved, visit jackmillercenter.org. The Constitutionalist is a podcast co-hosted by Professor Benjamin Kleinerman, the RW Morrison Professor of Political Science at Baylor University and Founder and Editor of The Constitutionalist Blog, Shane Leary, a graduate student at Baylor University, and Dr. Matthew Reising, a John and Daria Barry Postdoctoral Research Fellow at Princeton University. Each week, they discuss political news in light of its constitutional implications and explore a unique constitutional topic, ranging from the thoughts and experiences of America's founders and statesmen to historical episodes and the broader philosophic ideas that influence the American experiment in government.
Marc Thiessen, columnist at The Washington Post, Fox News contributor, AEI fellow, and former chief speechwriter to President George W. Bush, joined The Guy Benson Show today to unpack the explosive rift between Donald Trump and Elon Musk, including Musk's wild claim that Trump is hiding something about the Epstein list. Thiessen explained why he believes GOP voters will continue to stand firmly behind Trump despite the DOGE fallout. Guy and Thiessen also reacted to Karine Jean-Pierre's departure from the Democratic Party and her upcoming tell-all memoir, and blasted the sudden media pivot on Biden's mental decline. Listen to the full interview below! Learn more about your ad choices. Visit podcastchoices.com/adchoices
Today on the show, former Israeli Prime Minister Ehud Olmert speaks with Fareed about his op-ed in the Israeli newspaper Haaretz this week, in which he accuses Israel of committing war crimes in Gaza. Then, Financial Times US national editor Edward Luce and AEI senior fellow Kori Schake join the show to discuss the latest developments in President Trump's tariff war, and Russia's renewed offensive in Ukraine. Finally, former CNN correspondent and founder of the charity organization INARA Arwa Damon speaks with Fareed about the extent of the humanitarian catastrophe in Gaza. She says that if the Western press were allowed in to witness the devastation, the war would end tomorrow. GUESTS: Ehud Olmert, Edward Luce (@EdwardGLuce), Kori Schake, Arwa Damon (@IamArwaDamon) Learn more about your ad choices. Visit podcastchoices.com/adchoices
Preview: Colleague Sadanand Dhume of AEI and WSJ reports that the PRC leaned on Pakistan to end the combat exchanges. More later. 1900 KARACHI
#KASHMIR: ESCALATORY PATH. SADANAND DHUME, AEI, WSJ. 1947 MOUNTBATTEN
Good evening: The show begins in downtown Las Vegas. JANUARY 1930. CBS EYE ON THE WORLD WITH JOHN BATCHELOR
FIRST HOUR
9:00-9:15 #PACIFICWATCH: #VEGASREPORT: NICK AND DIME STRIP, DOWNTOWN BOOM. @JCBLISS
9:15-9:30 #LANCASTER REPORT: JOBS FAIR SUCCESS FOR MANUFACTURERS. JIM MCTAGUE, FORMER WASHINGTON EDITOR, BARRONS. @MCTAGUEJ. AUTHOR OF THE "MARTIN AND TWYLA BOUNDARY SERIES." #FRIENDSOFHISTORYDEBATINGSOCIETY
9:30-9:45 #SMALLBUSINESSAMERICA: TARIFF WORRIES ON THE WEST COAST CONTAINERS. @GENEMARKS @GUARDIAN @PHILLYINQUIRER
9:45-10:00 #SMALLBUSINESSAMERICA: AI AND FRONT EDGE EXPERIMENT. @GENEMARKS @GUARDIAN @PHILLYINQUIRER
SECOND HOUR
10:00-10:15 #KEYSTONEREPORT: JOHN FETTERMAN ICONOCLAST DEMOCRAT. SALENA ZITO, MIDDLE OF SOMEWHERE, @DCEXAMINER PITTSBURGH POST-GAZETTE, NEW YORK POST, SALENAZITO.COM
10:15-10:30 #PRC: CHINESE AIR TO AIR MISSILE OVER KASHMIR. JIM FANELL, AUTHOR "EMBRACING COMMUNIST CHINA." @GORDONGCHANG, GATESTONE, NEWSWEEK, THE HILL
10:30-10:45 #SPACEX: FAA COOPERATION. BOB ZIMMERMAN BEHINDTHEBLACK.COM
10:45-11:00 #SUNSPOTS: MAXIMUM. BOB ZIMMERMAN BEHINDTHEBLACK.COM
THIRD HOUR
11:00-11:15 #KASHMIR: ESCALATORY PATH. SADANAND DHUME, AEI, WSJ.
11:15-11:30 #ITALY: WHITE SMOKE WITH AN ITALIAN GRANDFATHER. LORENZO FIORI
11:30-11:45 1/2: #USA: ROSY IN COMPARISON TO THE GLOBAL NEIGHBORS. JOEL KOTKIN, CIVITAS INSTITUTE
11:45-12:00 2/2: #USA: ROSY IN COMPARISON TO THE GLOBAL NEIGHBORS. JOEL KOTKIN, CIVITAS INSTITUTE
FOURTH HOUR
12:00-12:15 #IRAN: WHAT GETS 67 VOTES IN THE US SENATE. HENRY SOKOLSKI NPEC
12:15-12:30 #POTUS: WHAT IS THE GOLDEN DOME. HENRY SOKOLSKI NPEC
12:30-12:45 #POTUS: SKINNY BUDGET AND DISCONTENT. RICHARD EPSTEIN, CIVITAS INSTITUTE
12:45-1:00 AM #ANTISEMITISM: COLUMBIA ATTACKED AGAIN. RICHARD EPSTEIN, CIVITAS INSTITUTE
This week, we say hello to a new pope and goodbye to Nate Moore—Chris's AEI research assistant, but more importantly, a fellow wretch who's been a big part of the podcast. We also talk about the Pulitzer winners and the media's coverage of the declining health of John Fetterman and Joe Biden. Wretch on!
Time Stamps:
Front Page: 02:22
Obsessions: 27:43
Reader Mail: 36:10
Favorite Items: 38:53
Show Notes:
New York Magazine: The Hidden Struggle of John Fetterman
AP News: Sen. John Fetterman raises alarms with outburst at meeting with union officials, AP sources say
BBC: Five takeaways from Biden's BBC interview
The Pulitzer Prizes: 2025 Pulitzer Prizes
Herald Leader: The favorite didn't win the KY Derby again. Here's what happened behind Sovereignty
The Washington Free Beacon: Exclusive Analysis: Kamala's Stepdaughter, ‘Textile Artist' Ella Emhoff, Skips Leg Day as Often as She Shaves Her Armpits (Never)
Politico: Biden enlists veteran Democratic operative to help defend his reputation
AEI senior fellow Christine Emba joins Jonah Goldberg to discuss degenerating dating dynamics, the gender divide and its effect on politics, and the roots of the late-stage culture war.
Show Notes:
—Christine's AEI page
—Order Rethinking Sex: A Provocation
The Remnant is a production of The Dispatch, a digital media company covering politics, policy, and culture from a non-partisan, conservative perspective. To access all of The Dispatch's offerings—including Jonah's G-File newsletter, regular livestreams, and other members-only content—click here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Preview: Colleague Sadanand Dhume of AEI and WSJ reports small steps toward a mending of dialogue between the two giants of Eurasia, India and the PRC — prior to the Kashmir crisis. More. 1850 DELHI
Marc Thiessen, Washington Post columnist, Fox News contributor, AEI fellow, and co-host of the podcast What the Hell Is Going On, joined The Guy Benson Show to assess the current state of the Democratic Party as AOC seems to eye a possible presidential run. Thiessen warned that the party's continued shift left, including elevating David Hogg's PAC to primary incumbent Democrats, only highlights why Trump is back in office. Thiessen and Guy also detailed the newly uncovered internal friction between Biden, Obama, and Harris, and called out Democrats for their newfound concern over the rule of law amid the ongoing immigration crisis. Listen to the full interview below! Learn more about your ad choices. Visit podcastchoices.com/adchoices
2/2: #TRADE: AND CONGRESS FOR JEFFERSON, MADISON, HOOVER, ROOSEVELT, KENNEDY, NIXON AND TRUMP. PHILIP WALLACH, AEI, CIVITAS INSTITUTE. 1920 TRADE HIGH END
1/2: #TRADE: AND CONGRESS FOR JEFFERSON, MADISON, HOOVER, ROOSEVELT, KENNEDY, NIXON AND TRUMP. PHILIP WALLACH, AEI, CIVITAS INSTITUTE. 1929 HOOVER
Part of the reason for the market bloodbath is that the finance whizzes didn't factor in that Trump would actually do the truly moronic thing he kept saying he would. Their shock over his recklessness is intensifying the crash. Meanwhile, a trio of administration fools trying to defend the tariffs—Lutnick, Bessent, and Hassett—showed there is no grand design to the trade war, White House infighting is getting hot enough that even Elon is subtweeting Trump, and the folks we elected over on the Hill could actually do something to try to stop the market carnage. Plus, new reporting on our government's kidnapping of migrants, Republicans in North Carolina are trying to steal a supreme court seat, and where is JD Vance? Bill Kristol joins Tim Miller for the weekend pod.
show notes
JVL on the end of the American Age
Lauren on the backlash against Dems in major law firms who are bending the knee
60 Minutes segment on migrants sent to the Salvadoran penal colony
Tim's 'Bulwark Take' responding to the 60 Minutes report
Tim talking with AEI's Stan Veuger about Trump's terrible tariff math
The book, "The Captive Mind" by Polish poet Czeslaw Milosz