The Dynamist, a podcast by the Foundation for American Innovation, brings together the most important thinkers and doers to discuss the future of technology, governance, and innovation. The Dynamist is hosted by Evan Swarztrauber, former Policy Advisor at the Federal Communications Commission. Subscribe now!
Most American parents say technology makes it harder to raise kids than it was in the pre-social media era. And while social scientists debate the exact impact of ubiquitous Internet access on children, policymakers are increasingly responding to parents' concerns. The Kids Online Safety Act, which aims to address the addictive features of social media that hook kids, was recently reintroduced by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT). The legislation would also require tech platforms to take steps to prevent and mitigate specific dangers to minors, including the promotion of suicide, eating disorders, drug abuse, and sexual exploitation. Senator Mike Lee (R-UT) and Rep. John James (R-MI) are promoting the App Store Accountability Act, which would require Google and Apple to verify users' ages before they download apps. And Senators Ted Cruz (R-TX) and Brian Schatz (D-HI) propose banning kids from using social media altogether.

There is clearly a lot of interest from parents and policymakers in addressing these concerns over the impact of technology on children. But there is also a robust and ongoing debate about the actual harm to kids, and whether the concerns are well founded or overblown. Jonathan Haidt's book The Anxious Generation made quite a splash, but many social psychologists have pushed back on his findings. And while the surgeon general under President Biden advocated a warning label for social media, a recent study by researchers at the University of South Florida found that kids with smartphones were better off than those without, while acknowledging harms from cyberbullying and elsewhere.

The fundamental question seems to be: Is this just another moral panic, or are we letting Big Tech conduct a massive unregulated experiment on our children's brains?

Evan is joined by Clare Morell, Director of the Technology and Human Flourishing Project at the Ethics and Public Policy Center.
She is the author of The Tech Exit: A Practical Guide to Freeing Kids and Teens from Smartphones, and her work has appeared in The New York Times, Wall Street Journal, and Fox News.
America's infrastructure future isn't being decided in Washington—it's being fought permit by permit in state capitals across the country. While politicians talk about building more, the real bottlenecks arise where the rubber meets the bureaucratic road.

From Donald Trump to Pete Buttigieg, everyone agrees: America has forgotten how to build things. But even if Washington cleared every federal rule tomorrow, states would still hold the keys to actually breaking ground. Whether it's Clean Air Act permits, water discharge approvals, or the maze of mini-NEPAs and local reviews, states issue most of the paperwork that determines whether your project lives or dies.

This isn't just red tape—it can be a competitive advantage. States that master the art of streamlined permitting without sacrificing environmental standards can capture billions in reshoring investment. Digital dashboards, consolidated reviews, shot clocks with automatic approvals—these bureaucratic innovations are becoming economic development superpowers.

Federal dollars from the infrastructure, CHIPS, and climate bills are queued up, but shovels aren't hitting the ground. From geothermal in California to advanced nuclear in Montana, nearly every clean technology faces its first real test at the state level. Joining us are Emmet Penney, Senior Fellow at FAI focusing on infrastructure and energy, and Thomas Hochman, Director of Infrastructure Policy at FAI. For more on what's working and what's not, check out their State Permitting Playbook and the new State Permitting Scorecard.
In an era where government tech projects often end in billion-dollar failures and privacy nightmares, there's a tiny Baltic nation that has quietly revolutionized what's possible. Estonia—a country of just 1.3 million people—has built what might be the world's most efficient digital government. Every public service is online. Digital signatures save 2% of GDP annually. And in a twist that should intrigue American conservatives, they've done it with smaller government, not bigger.

How did a former Soviet republic become a model of lean digital governance? What's their secret for avoiding the "big-bang IT project" disasters that plague Washington? And most importantly—can America's divided political system learn anything from Estonia's success?

Joining for this episode are two experts who've studied Estonia's digital miracle up close. Dr. Keegan McBride is Senior Policy Advisor in Emerging Technology and Geopolitics at the Tony Blair Institute. He's also a Non-Resident Senior Fellow at the Foundation for American Innovation. Joel Burke is the author of Rebooting a Nation: The Incredible Rise of Estonia, E-Government, and the Startup Revolution, and Senior Public Policy Analyst at Mozilla.
Mark Meador is the newest commissioner at the Federal Trade Commission, which plays a dual role: enforcing both antitrust and consumer protection laws. It also serves as America's de facto technology regulator, including overseeing digital privacy and cybersecurity issues.

Commissioner Meador embodies the political realignment reshaping conservative views on big business, capitalism, and free trade. The Trump Administration's antitrust cases against Big Tech represent arguably the clearest expression of this shift. While the Biden administration aggressively targeted mergers and acquisitions—Wall Street's bread and butter—many financial elites hoped Donald Trump's return would restore a laissez-faire approach to antitrust. They've been in for disappointment.

A recent speech by Meador laid out a conservative vision for antitrust, challenging long-held Republican Party orthodoxies and sparking backlash from libertarians. He joins Evan to discuss the tensions at the heart of this realignment: how free-market principles can coexist with robust antitrust enforcement; how skeptics of big government find common cause with critics of big business; and how conservatives are crafting their own distinctive approach to antitrust while embracing the bipartisan consensus that has emerged over the past eight years.
For decades, conservatives treated unions like an economic flu—tolerable in small doses, but best avoided altogether. But starting with Trump's election in 2016, that narrative began to unravel, with prominent Republicans increasingly taking pro-union positions. Perhaps the most striking example was Teamsters President Sean O'Brien speaking at the 2024 Republican National Convention. Despite both parties courting working-class voters, union membership has cratered to just 10%, down from over 20% in the early '80s.

This puts the Trump administration in an interesting position. The old conservative playbook misses that many of the workers fueling this movement are now Republican voters. The question isn't just whether conservatives should oppose unions, but whether they can afford to.

Joining today is Liya Palagashvili, Senior Research Fellow at the Mercatus Center, whose new paper "Do More Powerful Unions Generate Better Pro-Worker Outcomes?" examines these questions and argues for a moderate stance on unions.
The race to harness AI for scientific discovery may be the most consequential technological competition of our time—yet it's happening largely out of public view. While many AI headlines focus on chatbots writing essays and tech giants battling over billion-dollar models, a quiet revolution is brewing in America's laboratories.

AI systems like AlphaFold (which recently won a Nobel Prize for protein structure prediction) are solving scientific problems that stumped humans for decades. A bipartisan coalition in Congress is now championing what they call the "American Science Acceleration Project," or ASAP—an audacious plan to make U.S. scientific research "ten times faster by 2030" through strategic deployment of AI. But as federal science funding faces pressure and international competition heats up, can America build the AI-powered scientific infrastructure we need? Will the benefits reach beyond elite coastal institutions to communities nationwide? And how do we ensure that as AI transforms scientific discovery, it creates opportunities instead of new divides?

Joining us is Austin Carson, Founder and President of SeedAI, a nonprofit dedicated to expanding AI access and opportunity across America. Before launching SeedAI, Carson led government affairs at NVIDIA and served as Legislative Director for Rep. Michael McCaul. He's been deep in AI policy since 2016—ancient history in this rapidly evolving field—and recently organized the first-ever generative AI red-teaming event at DEF CON, collaborating with the White House to engage hundreds of college students in identifying AI vulnerabilities.
It's easy to take for granted how much social media pervades our lives. Depending on the survey, upwards of 75-80 percent of Americans use it daily—not to mention billions of people around the world. And over the past decade, we've seen a major backlash over the various failings of Big Tech. Much of policymakers' ire has focused on content moderation choices—what content gets left up or taken down. But arguably there hasn't been much focus on the underlying design of social media platforms.

What are the default settings? How are the interfaces set up? How do the recommendation algorithms work? And what about transparency? What should the companies disclose to the public and to researchers? Are they hiding the ball?

In recent years, policymakers have started to take these issues head on. In the U.S., more than 75 bills targeting the design and operation of algorithms have been introduced at the state and federal level since 2023, and more than a dozen have been passed into law. Last year, New York and California passed laws attempting to keep children away from "addictive feeds," and other states have introduced similar bills in 2025. And 42 attorneys general are suing Meta over its design choices. While Congress hasn't done much, if anything, to regulate social media, states are clearly filling that void—or at least trying to.

So what would make social media better, or better for you? Recently, a group of academic researchers organized by the Knight-Georgetown Institute put out a paper called Better Feeds: Algorithms That Put People First. They outline a series of recommendations that they argue would lead to better outcomes. Evan is joined by Alissa Cooper, co-author of the paper and Executive Director of the Knight-Georgetown Institute. She previously spent over a decade at Cisco Systems, including in engineering roles.
Her work at KGI has focused on how platforms can design algorithms that prioritize long-term user value rather than short-term engagement metrics.
When it comes to AI policy and AI governance, Washington is arguably sending mixed signals. Overregulation is a concern—but so is underregulation. Stakeholders across the political spectrum and the business world hold conflicting views: more export controls on AI chips, or fewer? More energy production, but what about the climate? Less liability, or more? Safety testing, or not? "Prevent catastrophic risks," or "don't focus on unlikely doom scenarios"? While Washington looks unlikely to pass comprehensive AI legislation, states have tried, and failed. In a prior episode, we talked about SB 1047, California's ill-fated effort. Colorado recently saw its Democratic governor take the unusual step of delaying implementation of a new AI bill in his signing letter, citing concerns it would stifle the innovation the state wants to attract.

But are we even asking the right questions? What problem are we trying to solve? Should we be less focused on whether AI will make a bioweapon, and more focused on how to make life easier and better for people in a world that looks very different from the one we inhabit today? Is safety versus innovation a distraction, a false binary? Is there a third option, a different way of thinking about how to govern AI? And if today's governments aren't fit to regulate AI, is private governance the way forward?

Evan is joined by Andrew Freedman, co-founder and Chief Strategy Officer of Fathom, a nonprofit building the solutions society needs to thrive in an AI-driven world. Prior to Fathom, Andrew served as Colorado's first Director of Marijuana Coordination, often referred to as the state's "Cannabis Czar." You can read Fathom's proposal for AI governance here, and former FAI fellow Dean Ball's writing on the topic here.
In this week's episode of The Dynamist, guest host Jon Askonas is joined by Katherine Boyle (General Partner at a16z) and Neil Chilson (AI Policy at the Abundance Institute) to tackle a critical yet often overlooked question: How is technology reshaping the American family? As tech giants like TikTok and Instagram come under scrutiny for their effects on children's mental health, and remote work continues to redefine domestic life, the conversation around technology's role in family dynamics has never been more urgent.

Katherine shares insights from her recent keynote at the American Enterprise Institute, highlighting how the core objective of technological innovation, which she calls "American Dynamism," should be empowering the family rather than centralizing state control. Neil provides a fresh perspective on how decentralized systems and emergent technologies can enhance—not hinder—family autonomy and resilience. Amid rising debates about homeschooling, screen time, and the shift toward a remote-first lifestyle, the guests discuss whether tech-driven changes ultimately strengthen or undermine families as society's fundamental institution. Together, they explore the possibility of a new era in which technology revitalizes family autonomy, reshapes education, and reignites productive home economies.
During the Biden Administration, few figures in Washington sparked as much debate and spilled as much ink as Lina Khan. The Wall Street Journal published over 80 editorials criticizing her approach, while politically opposed tech titans like LinkedIn's Reid Hoffman and Tesla's Elon Musk called for her firing. Meanwhile, an unlikely coalition of progressive Democrats like Elizabeth Warren and populist Republicans like JD Vance rallied behind her vision of more aggressive antitrust enforcement.

For many, her ambitious cases against Microsoft, Amazon, and Meta weren't merely legal challenges. They represented a fundamental break from the antitrust philosophy that had dominated for decades across administrations. These cases now transfer to Trump's FTC, creating a test of regulatory continuity at a time when Big Tech CEOs are looking to curry favor with the White House.

In this conversation, Khan reflects on her legacy, discusses what critics may have misunderstood about her approach, and explores how the movement she catalyzed might evolve.
Bluesky was once a research initiative within Jack Dorsey's Twitter aimed at decentralizing the architecture of the platform, and of social media writ large. Today, Bluesky is an independent platform with remarkable momentum. Following Elon Musk's acquisition of Twitter and his subsequent policy shifts, Bluesky has experienced unprecedented growth, expanding from 3 million to 30 million users since February 2024.

That "X-odus" of frustrated progressives to Bluesky has perhaps inadvertently shaped public perception of it as "Lib Twitter"—a characterization reinforced by its prominent progressive voices and more restrictive community moderation tools. However, this political framing obscures Bluesky's fundamental innovation: the AT Protocol, which reimagines social media as a decentralized ecosystem rather than a platform controlled by a master algorithm and ruled by a CEO.

Unlike conventional social networks, Bluesky's architectural philosophy challenges the centralized control model by introducing a "marketplace of algorithms" in which users select or create their own content curation systems. Imagine a feed that skews left, one that skews right, or one that avoids politics altogether.

This "algorithmic choice" approach could represent the biggest challenge yet to the centralized engagement machines that have dominated—and arguably degraded—our digital discourse. But can Bluesky outgrow its political bubbles and fulfill its techno-utopian promise? Or will it remain just another partisan bunker in our increasingly fragmented online world?

Evan and Luke are joined by Jay Graber, CEO of Bluesky.
AI has emerged as a critical geopolitical battleground where Washington and Beijing are racing not just for economic advantage, but for military dominance. Despite these high stakes, there's surprisingly little consensus on how—or whether—to respond to frontier AI development.

The polarized landscape features techno-optimists battling AI safety advocates, with the former dismissing the latter as "doomers" who exaggerate existential risks. Meanwhile, AI business leaders face criticism for potentially overstating their companies' capabilities to attract investors and secure favorable regulations that protect their market positions. Democrats and civil rights advocates warn that focusing solely on catastrophic risks versus economic prosperity distracts from immediate harms like misinformation, algorithmic discrimination, and synthetic media abuse.

U.S. regulatory efforts have struggled, with California's SB 1047 failing last year and Trump repealing Biden's AI Executive Order on inauguration day. Even the future of the U.S. government's AI Safety Institute remains uncertain under the new administration. Important questions linger: How should government approach AI's national security implications? Can corporate profit motives align with safer outcomes? And if the U.S. and China are locked in an AI arms race, is de-escalation possible, or are we heading toward a digital version of Mutually Assured Destruction?

Joining Evan to explore these questions are Dan Hendrycks, AI researcher, Director of the Center for AI Safety, and co-author of "Superintelligence Strategy," a framework for navigating advanced AI from a national security and geopolitical perspective, and FAI's own Sam Hammond, Senior Economist and AI policy expert.
It's an understatement to say that U.S.-China relations have been tense in recent years. Policymakers and industry leaders have elevated concerns around China's trade practices, including currency manipulation, intellectual property theft, and allegations that China is directing or enabling fentanyl flooding into the U.S.

Trade and public health are increasingly linked, as COVID revealed the vulnerability of medical supply chains when U.S. overreliance on China led to delays and shortages of masks and personal protective equipment. Another issue getting more attention from lawmakers and parents is the prevalence of Chinese-made, counterfeit electronic cigarettes, or "vapes," throughout the U.S. Politicians from Senator Ashley Moody (R-FL) to President Trump himself have raised the alarm. At the same time, American manufacturers have bemoaned the slow and stringent regulatory process they face at the FDA, which they say has enabled China to flood the market with cheap, sketchy alternatives.

With a new FDA administrator set to take the helm, key questions remain. How did we end up in this situation, and what are the lessons not just for public health, but for other areas where the U.S. is looking to tighten up its trade policy? Is it possible for the U.S. to maintain the ideal of a relatively free market without adversaries exploiting that freedom?

Evan is joined by Joel Thayer, President of the Digital Progress Institute. You can read his op-ed on illicit vapes, the Bloomberg report we discuss in the episode, as well as Aiden Buzzetti's op-ed in CommonPlace.
Since President Trump returned to office, tariffs have once again dominated economic policy discussions. Recent headlines have highlighted escalating trade tensions with China, renewed disputes with Canada and Mexico, and the ongoing controversy surrounding Trump's proposal to repeal the CHIPS Act—a $52 billion semiconductor initiative that enjoys wide support in Congress as essential for U.S. technological independence.

But while tariffs capture public attention, beneath these headlines is a much broader debate over America's industrial strategy—how the nation can rebuild its manufacturing base, ensure economic prosperity, and strengthen national security in an increasingly competitive global environment. Critics argue that the shortcomings of recent attempts at industrial policy, such as the CHIPS Act, prove why government can't outperform free markets.

Our guests today have a different view. Marc Fasteau and Ian Fletcher of the Coalition for a Prosperous America authored a new book, Industrial Policy for the United States: Winning the Competition for Good Jobs and High-Value Industries. They argue that a bold, comprehensive industrial strategy is not only achievable but essential—calling for targeted tariffs, strategic currency management, and coordinated investments to rejuvenate American industry and secure the nation's future. But will their approach overcome the challenges of bureaucracy, political division, and international backlash? And can industrial policy truly deliver on its promise of economic renewal?
Everyone wants government to work better, and part of that is updating outdated systems and embracing modern technology. The problem? Our federal government faces a critical tech talent crisis. Only 6.3% of federal software engineers are under the age of 30, a lower share than for the federal workforce overall, meaning federal tech talent skews even older than the government's lawyers, economists, and other professionals. Not to mention, Silicon Valley pays 2-3x more than the feds, which makes it hard to attract computer science majors into government. The shortage threatens America's ability to navigate an era of technological disruption across AI, quantum computing, defense tech, and semiconductors.

While recent initiatives like Elon Musk's temporary team of young engineers and the $500 billion Stargate program highlight the urgency, they don't solve the fundamental problem: creating a sustainable pipeline of technical talent willing to take a pay cut for public service. This talent gap could hamper innovation despite the current AI boom, which is receiving 60% of venture funding. How can the private sector and federal government work together to bridge this gap?

Evan is joined by Arun Gupta, who pivoted from 18 years as a Partner at Columbia Capital, investing in cybersecurity and AI startups, to leading the NobleReach Foundation, which works to bring some of the best assets of the private sector into public service. They explore how to bridge the gap between Silicon Valley and government service to ensure America can effectively regulate, adopt, and leverage emerging technologies for the national interest.
Fusion energy, potentially a fuel source that could last a thousand years, is transitioning from science fiction to business reality. Helion Energy recently signed the first fusion power purchase agreement with Microsoft, promising 50 megawatts by 2028. But the story isn't just about the physics breakthroughs that make fusion possible. The U.S. and China are tussling for global leadership in fusion, as is the case in so many fields. And as China outspends the U.S. on fusion research by about $1.5 billion annually, concerns mount that it could seriously challenge America's lead. After all, while the U.S. pioneered advances in clean energy technologies like solar panels and EVs, America ultimately lost manufacturing leadership to China.

With fusion, the stakes could be much higher, given that fusion has the potential to be the world's "last energy source," with significant economic and national security implications. Evan is joined by Sachin Desai, General Counsel at Helion Energy and a former Nuclear Regulatory Commission official, and Thomas Hochman, Director of Infrastructure Policy at FAI. They discuss the technical, regulatory, and geopolitical dimensions of what could be this decade's most consequential technology race.
Mark Zuckerberg sent shockwaves around the world when Meta announced the end of its U.S. fact-checking program on its platforms Facebook, Instagram, and Threads. Critics lamented the potential for more mis- and disinformation online, while proponents (especially conservatives) rejoiced, seeing the decision as a rollback of political censorship and viewpoint discrimination.

Beneath the hot takes lie bigger questions about who should control what we see online. Should critical content moderation decisions that affect billions of users be left to the whims of Big Tech CEOs? If not, is government intervention any better—and could it even clear First Amendment hurdles? What if there were a third option between CEO decrees and government intrusion?

Enter middleware: third-party software that sits between users and platforms, potentially offering a "third way" out of what otherwise appears to be a binary choice. Middleware holds the potential to let users select different forms of curation on social media by third parties—anyone from your local church to news outlets to political organizations. Could this technology put power back in the hands of users while addressing concerns about bias, misinformation, harassment, hate speech, and polarization?

Joining us are Luke Hogg, Director of Technology Policy at FAI, and Renee DiResta, Georgetown University professor and author of "Invisible Rulers: The People Who Turned Lies Into Reality." They break down their new paper, "Shaping the Future of Social Media with Middleware," and explore whether this emerging technology could reshape our social media landscape for the better.
During the pandemic, from 2020 to 2021, Congress dropped $190 billion to help reopen schools, provide tutoring, and assist with remote learning. The results? Fourth graders' math scores plummeted 18 points from 2019 to 2023, and eighth graders' scores dropped 27 points, the worst decline since testing began in 1995. Adult literacy is deteriorating too, with the share of Americans in the lowest literacy tier jumping from 19% to 28% in just six years. Are we watching the collapse of academic achievement in real time?

In this episode, education policy veteran Chester Finn joins us to examine this crisis and potential solutions. Drawing on his experience as a Reagan administration official and decades of education reform work, Finn discusses why accountability measures have broken down, whether school choice can right the ship, and whether the federal government's education R&D enterprise is fixable. Joining the conversation are FAI's Dan Lips and Robert Bellafiore, who recently authored new work on leveraging education R&D to help address America's learning challenges.

This is part one of a two-part series examining the state of American education and exploring paths forward as a new administration takes office with ambitious, and controversial, plans for reform.
During the pandemic, Congress spent an unprecedented $190 billion to help reopen schools and address learning loss. But new test scores show the investment isn't paying off: fourth and eighth grade reading levels have hit record lows, worse even than during COVID's peak. As the Trump administration signals dramatic changes to federal education policy, from eliminating the Department of Education to expanding school choice, questions about federal involvement in education are moving from abstract policy debates to urgent national security concerns.

In part two of our series on education R&D, we explore these developments with Sarah Schapiro and Melissa Moritz of the Alliance for Learning Innovation, a coalition working to build better research infrastructure in education. Drawing on their extensive experience, from PBS Education to the Department of Education's STEM initiatives, they examine how shifting federal policy could reshape educational innovation and America's global competitiveness. Can a state-centered approach maintain our edge in the talent race? What role should the private sector play? And how can evidence-based practices help reverse these troubling trends in student achievement?

Joining them are FAI's Dan Lips and Robert Bellafiore, who bring fresh analysis on reforming federal education R&D to drive better outcomes for American students. This wide-ranging discussion tackles the intersection of education, national security, workforce development, and technological innovation at a pivotal moment for American education policy.
The newly established Department of Government Efficiency (DOGE) has put state capacity back in the spotlight, reigniting debates over whether the federal government is fundamentally broken or just mismanaged. With Elon Musk at the helm, DOGE has already taken drastic actions, from shutting down USAID to slashing bureaucratic redundancies. Supporters argue this is the disruption Washington needs; critics warn it's a reckless power grab that could erode public accountability. But regardless of where you stand, one thing is clear: the ability of the U.S. government to execute policy is now under scrutiny like never before.

That's exactly the question at the heart of this week's episode. From the Navy's struggles to build ships to the Department of Education's FAFSA disaster, our conversation lays out why the government seems incapable of delivering even on its own priorities. It's not just about money or political will—it's about outdated hiring rules, a culture of proceduralism over action, and a bureaucracy designed to say "no" instead of "go." These failures aren't accidental; they're baked into how the system currently operates. Jennifer Pahlka, former U.S. Deputy Chief Technology Officer under President Obama and Senior Fellow at the Niskanen Center, and Andrew Greenway, co-founder of Public Digital, join the conversation.

The solution? A fundamental shift in how government works—not just at the leadership level, but deep within agencies themselves. Pahlka advocates for cutting procedural bloat, giving civil servants the authority to make real decisions, and modernizing digital infrastructure to allow for rapid adaptation. Reform, she argues, isn't about breaking government down; it's about making it function like a system designed for the 21st century. Whether DOGE is a step in that direction or a warning sign of what happens when frustration meets executive power remains to be seen.
At Trump's second inauguration, one of the biggest stories, if not the biggest, was the front-row presence of Big Tech CEOs like Google's Sundar Pichai and Meta's Mark Zuckerberg—seated even ahead of Cabinet members. The plum seating signaled a striking shift in Silicon Valley's relationship with Washington, and just 24 hours later the administration announced Stargate, a $500 billion partnership with OpenAI, Oracle, and other tech giants to build AI infrastructure across America.

But beneath the spectacle of billionaire CEOs at state functions lies a deeper question about the "Little Tech" movement—startups and smaller companies pushing for open standards, fair competition rules, and the right to innovate without being crushed by either regulatory costs or Big Tech copycats. As China pours resources into AI and semiconductors, American tech policy faces competing pressures: Trump promises business-friendly deregulation while potentially expanding export controls and antitrust enforcement against the very tech giants courting his favor.

To explore this complex new paradigm, Evan and FAI Senior Fellow Jon Askonas are joined by Garry Tan, CEO of Y Combinator, the startup accelerator behind Airbnb, DoorDash, and many other successful companies. As both a successful founder and a venture capitalist, Tan discusses what policies could help startups thrive without tipping into overregulation, and whether Silicon Valley's traditionally progressive culture can adapt to Trump's tech alliances. You can read more about YC's engagement with Washington, DC here.
Chinese AI startup DeepSeek's release of AI reasoning model R1 sent NVIDIA and other tech stocks tumbling yesterday as investors questioned whether U.S. companies were spending too much on AI development. That's because DeepSeek claims it made this model for only $6 million, a fraction of the hundreds of millions that OpenAI spent making o1, its nearest competitor. Any news coming out of China should be viewed with appropriate skepticism, but R1 nonetheless challenges the conventional American wisdom about AI development—that massive computing power and unprecedented investment will maintain U.S. AI supremacy. The timing couldn't be more relevant. Just last week, President Trump unveiled Stargate, a $500 billion public-private partnership with OpenAI, Oracle, SoftBank, and Emirati investment firm MGX aimed at building AI infrastructure across America. Meanwhile, U.S. efforts to preserve its technological advantage through export controls face mounting challenges and skepticism. If Chinese companies can innovate despite restrictions on advanced AI chips, should the U.S. rethink its approach? To make sense of these developments and their implications for U.S. technological leadership, Evan is joined by Tim Fist, Senior Technology Fellow at the Institute for Progress, a think tank focused on accelerating scientific, technological, and industrial progress, and FAI Senior Economist Sam Hammond.
As revelations about Meta's use of pirated books for AI training send shockwaves through the tech industry, the battle over copyright and AI reaches a critical juncture. In this final episode of The Dynamist's series on AI and copyright, Evan is joined by FAI's Senior Fellow Tim Hwang and Tech Policy Manager Joshua Levine to discuss how these legal battles could determine whether world-leading AI development happens in Silicon Valley or Shenzhen. The conversation examines the implications of Meta's recently exposed use of Library Genesis—a shadow library of pirated books—to train its LLaMA models, highlighting the desperate measures even tech giants will take to source training data. This scandal crystallizes a core tension: U.S. companies face mounting copyright challenges while Chinese competitors can freely use these same materials without fear of legal repercussions. The discussion delves into potential policy solutions, from expanding fair use doctrine to creating new statutory licensing frameworks, that could help American AI development remain competitive while respecting creator rights. Drawing on historical parallels from past technological disruptions like Napster and Google Books, the guests explore how market-based solutions and policy innovation could help resolve these conflicts. As courts weigh major decisions in cases involving OpenAI, Anthropic, and others in 2024, the episode frames copyright not just as a domestic policy issue, but as a key factor in national technological competitiveness. What's at stake isn't just compensation for creators, but whether IP disputes could cede AI leadership to nations with fewer or no constraints on training data.
In the third installment of The Dynamist's series exploring AI and copyright, FAI Senior Fellow Tim Hwang leads a forward-looking discussion about how market dynamics, technological solutions, and geopolitics could reshape today's contentious battles over AI training data. Joined by Jason Zhao, co-founder of Story AI, and Jamil Jaffer, Executive Director of the National Security Institute at George Mason University, the conversation moves beyond current lawsuits to examine practical paths forward. The discussion challenges assumptions about who really stands to gain or lose in the AI copyright debate. Rather than a simple creator-versus-tech narrative, Zhao highlights how some creators and talent have embraced AI while others have shown resistance and skepticism. Through Story's blockchain-based marketplace, he envisions a world where creators can directly monetize their IP for AI training without going through traditional gatekeepers. Jaffer brings a crucial national security perspective, emphasizing how over-regulation of AI training data could threaten American technological leadership—particularly as the EU prepares to implement strict new AI rules that could effectively set global standards. Looking ahead to 2025, both guests express optimism about market-based and technological solutions winning out over heavy-handed regulation. They draw parallels to how innovations like Spotify and YouTube's Content ID ultimately resolved earlier digital disruptions. However, they warn that the US must carefully balance innovation and IP protection to maintain its AI edge, especially as competitors like China take a more permissive approach to training data. The episode frames copyright not just as a domestic policy issue, but as a key factor in national competitiveness and security in the AI era.
From the SAG-AFTRA picket lines to the New York Times lawsuit against OpenAI, the battle over AI's role in creative industries is heating up. In this second episode of The Dynamist's series on AI and copyright, we dive into the messy reality of how artists, creators, and tech companies are navigating this rapidly evolving landscape. Our guests bring unique perspectives to this complex debate: Mike Masnick, CEO of Techdirt, who's been chronicling the intersection of tech and copyright for decades; Alex Winter, the filmmaker and actor known for Bill & Ted's Excellent Adventure, who offers boots-on-the-ground insight from his involvement in recent Hollywood labor negotiations; and Tim Hwang, Senior Fellow at FAI, who explores how current legal battles could shape AI's future. The conversation covers everything from "shakedown" licensing deals between AI companies and media outlets to existential questions about artistic value in an AI age. While the guests acknowledge valid concerns about worker protection and fair compensation, they challenge the notion that restricting AI development through copyright law is either practical or beneficial. Drawing parallels to past technological disruptions like Napster, they explore how industries might adapt rather than resist change while still protecting creators' interests.
Copyright law and artificial intelligence are on a collision course, with major implications for the future of AI development, research, and innovation. In this first episode of The Dynamist's four-part series exploring AI and copyright, we're joined by Professor Pamela Samuelson of Berkeley Law, a pioneering scholar in intellectual property law and a leading voice on copyright in the digital age. FAI Senior Fellow Tim Hwang guest hosts. The conversation covers the wave of recent lawsuits against AI companies, including The New York Times suit against OpenAI and litigation facing Anthropic, NVIDIA, Microsoft, and others. These cases center on two key issues: the legality of using copyrighted materials as training data and the potential for AI models to reproduce copyrighted content. Professor Samuelson breaks down the complex legal landscape, explaining how different types of media (books, music, software) might fare differently under copyright law due to industry structure and existing precedent. Drawing on historical parallels from photocopying to the Betamax case, Professor Samuelson provides crucial context for understanding today's AI copyright battles. She discusses how courts have historically balanced innovation with copyright protection, and what that might mean for AI's future. With several major decisions expected in the coming months, including potential summary judgments, these cases could reshape the AI landscape—particularly for startups and research institutions that lack the resources of major tech companies.
As the war in Ukraine approaches the three-year mark and conflict continues to rage in the Middle East, technology has played a key role in both arenas—from cyberattacks and drones to propaganda efforts over social media. In Ukraine, SpaceX's Starlink has blurred the lines between commercial and military communications, with the satellite broadband service supporting the Ukrainian army while becoming a target for signal jamming by Russia. What can we learn from these conflicts in Europe and the Middle East? What role will cyber and disinformation operations play in future wars? What has Ukraine taught us about the U.S. defense industrial base and defense technology? As China increases its aggression toward Taiwan and elsewhere in the Indo-Pacific, how will technology play a role in either deterring a conflict or deciding its outcome? Evan is joined by Kevin B. Kennedy, a recently retired United States Air Force lieutenant general who last served as commander of the Sixteenth Air Force. He previously served as Director for Operations at U.S. Cyber Command.
2024 has been a whirlwind year for tech policy, filled with landmark moments that could shape the industry for years to come. From the high-profile antitrust lawsuits aimed at Big Tech to intense discussions around data privacy and online safety for kids, the spotlight on how technology impacts our daily lives has never been brighter. Across the Atlantic, Europe continued its aggressive regulatory push, rolling out new frameworks with global implications. Meanwhile, back in the U.S., all eyes are on what changes might come to tech regulation after the election. With all this upheaval, one thing remains constant: people love posting their Spotify Wrapped playlists at the end of the year. It's a fun way to reflect on the hits (and maybe a few misses) of the past twelve months, so we thought, why not take a similar approach to tech policy? In this episode of The Dynamist, Evan is joined by Luke Hogg, FAI's Director of Policy and Outreach, and Josh Levine, FAI's Tech Policy Manager, for a lively conversation breaking down the year's biggest stories. Together, they revisit the key moments that defined 2024, from courtroom dramas to legislative battles, and share their thoughts on what's next for 2025. Will AI regulations dominate the agenda? Could new leaders at U.S. agencies take tech in a bold new direction? Tune in to hear their reflections, predictions, and maybe even a few hot takes as they wrap up 2024 in tech policy.
There is growing concern among parents and policymakers over the Internet's harms to children—from online pornography to social media. Despite that, Congress hasn't passed any legislation on children's online safety in decades. And while psychologists continue to debate whether and to what extent certain Internet content harms children, several states have stepped into the fray, passing legislation aimed at protecting kids in the digital age. One such state is Texas, where Governor Greg Abbott signed HB 1181 in June of 2023. The bill requires online pornography websites to verify the age of users to prevent those under 18 from accessing the sites. A group representing online porn sites sued, and the bill was enjoined by a district court, then partially upheld by the Fifth Circuit, and will now be heard by the Supreme Court in Free Speech Coalition v. Paxton, with oral arguments scheduled for January 15. The ruling in this case could have major implications for efforts to regulate the online world at both the state and federal level—not just for porn but for other online content, like social media. On today's show, Evan moderates a debate on the following resolution: Texas's Age Verification (AV) Law is Constitutional and AV laws are an effective means of protecting children from online harms. Arguing for the resolution is Adam Candeub, senior fellow at the Center for Renewing America, professor of law at Michigan State University, and formerly acting assistant secretary of commerce for telecommunications and information under President Trump. Arguing against the resolution is Robert Corn-Revere, chief counsel at the Foundation for Individual Rights and Expression (FIRE). Before that, he was a partner at the Davis Wright Tremaine law firm for 20 years and served in government as chief counsel to former Federal Communications Commission Chairman James Quello. You can read FIRE's brief in the case here.
Is Medicare a valley of death for medical innovation? While the U.S. is seen as a global leader in medical device innovation, the $800+ billion program that covers healthcare costs for senior citizens has been slow to reimburse certain medical devices, even when those devices are approved by the Food and Drug Administration. On average, it takes Medicare 4.5 years to cover a new FDA-approved medical device. This length of time has been dubbed the “Valley of Death,” referring to the human cost of delay. While members of Congress and advocates in the med tech industry are pushing Medicare to streamline its process, CMS, the Centers for Medicare & Medicaid Services, has sounded a note of caution, warning that moving too quickly fails to account for the unique needs and considerations of the Medicare population, Americans over 65 years old. Is this simply bureaucratic foot-dragging, or are there legitimate safety and health risks with Medicare giving its blessing to new technologies and treatments? Is there a policy balance to be struck, where government health officials give seniors the unique consideration they need without denying them access to potentially life-saving treatments and devices? Evan is joined by Katie Meyer, Vice President of Public Affairs at Novocure, a global oncology company working to extend survival in some of the most aggressive forms of cancer. Prior to that, she served in various roles in Congress, including as Deputy Health Policy Director at the Senate Finance Committee.
The Federal Trade Commission (FTC) is a once-sleepy, three-letter agency in Washington that serves as the nation's general purpose consumer protection regulator—dealing with everything from deceptive advertising to fraud. In recent years, however, the FTC has become somewhat of a household name thanks to current chair Lina Khan and high-profile cases against tech giants Microsoft, Meta, and Amazon. While some populists on the right and left have praised the agency for taking on big business, others, particularly in the business community, have railed against the agency for an anti-business stance and preventing legitimate mergers and acquisitions. Conservatives and Republicans have generally been skeptical of antitrust enforcement and government regulation, but in recent years they have been rethinking how to apply their philosophy in an era when trillion-dollar tech behemoths could be threats to online free speech. And as concerns around other tech issues like data privacy and children's online safety continue to persist, the FTC sits at the center of it all as the nation's de facto tech regulator. Is there a balance to be struck between Khan's aggressive enforcement and the lax treatment preferred by the business world? And how should the agency tackle challenges like artificial intelligence? Who better to help answer these questions than one of the agency's five commissioners? Evan is joined by Andrew Ferguson, one of two Republican commissioners at the FTC. Prior to that, he was the solicitor general of Virginia and chief counsel to Senate Republican leader Mitch McConnell.
President-elect Trump recently announced that entrepreneurs Elon Musk and Vivek Ramaswamy will lead the Department of Government Efficiency. Musk had forecast the idea at the tail end of the presidential election, championing a commission focused on cutting government spending and regulation. In a statement posted to Truth Social, the president-elect said DOGE would “pave the way for my administration to dismantle government bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure federal agencies.” For his part, Musk said “this will send shockwaves through the system, and anyone involved in government waste, which is a lot of people.” Government waste has long been a focus for Republicans in Washington. The phrase “waste, fraud, and abuse” often generates a chuckle in DC circles, given how much the federal bureaucracy, government spending, and the national debt have grown despite decades of professed fiscal hawkishness. While critics of Trump and Musk are rolling their eyes at what they perceive as a toothless commission, proponents welcome the focus on government efficiency from the president-elect and the world's richest man, and are optimistic that Musk and Ramaswamy's expertise in the business world will bring much-needed outside perspectives on how to optimize the federal government. The Foundation for American Innovation has operated a project on government efficiency and tech modernization since 2019. FAI fellows just published a new paper on the topic, “An Efficiency Agenda for the Executive Branch.” To discuss DOGE, the challenges of streamlining bureaucracy, how AI might play a role in the efforts, and what Congress can do to help make DOGE a success, Evan is joined by Sam Hammond, Senior Economist at FAI, and Dan Lips, Head of Policy at FAI. For a quick take on FAI's recommendations, check out Dan's op-ed in The Hill linked here.
Donald Trump won the 2024 presidential election, Republicans won control of the Senate, and the GOP is slated to maintain control of the House. If you turn on cable news, you will see many pundits playing Monday morning quarterback in the wake of this Republican trifecta, arguing about the merits of how people voted, speculating on cabinet secretaries, and pointing fingers over who deserves blame, or credit, for the results. But this is The Dynamist, not CNN. In today's show, we focus on what the results mean for tech policy and tech politics. There are ongoing antitrust cases against Meta, Google, Apple, and Amazon. Investigations into Microsoft, OpenAI, and Nvidia. How might the new president impact those cases? Congress is considering legislation to protect children from the harms of social media. Will we see action in the lame duck session or will the issue get kicked to January when the new Congress settles in? What about AI? Trump has vowed to repeal Biden's Executive Order on artificial intelligence. What, if anything, might replace it? And for those in Silicon Valley who supported Trump, from Elon Musk to Peter Thiel, how might they wield influence in the new administration? Evan is joined by Nathan Leamer, CEO of Fixed Gear Strategies and Executive Director of Digital First Project, and Ellen Satterwhite, Senior Director at Invariant, a government relations and strategic communications firm in DC. Both Nathan and Ellen previously served in government at the Federal Communications Commission—Nathan under President Trump and Ellen under President Obama.
When people hear 'quantum physics,' they often think of sci-fi movies using terms like 'quantum realm' to explain away the impossible. But today we're talking about quantum computing, which has moved beyond science fiction into reality. Companies like IBM and Google are racing to build machines that could transform medicine, energy storage, and our understanding of the universe. But there's a catch: these same computers could potentially break most of the security protecting our digital lives, from WhatsApp messages to bank transfers to military secrets. To address this threat, the National Institute of Standards and Technology recently released quantum-safe cryptography standards, while new government mandates are pushing federal agencies to upgrade their security before quantum systems become cryptographically relevant—in other words, capable of breaking today's encryption. To help us understand both the promise and peril of quantum computing, we're joined by Travis Scholten, Technical Lead in the Public Sector at IBM and former quantum computing researcher at the company. He's also a former policy hacker at FAI, author of the Quantum Stack newsletter, and co-author of a white paper on the benefits and risks of quantum computers.
When voters head to the polls next week, tech policy won't be top of mind—polling shows immigration, the economy, abortion, and democracy are the primary concerns. Yet Silicon Valley's billionaire class is playing an outsized role in this election, throwing millions at candidates and super PACs while offering competing visions for America's technological future. The tech industry is in a much different place in 2024 than in past elections. Big Tech firms, who once enjoyed minimal government oversight, now face a gauntlet of regulatory challenges—from data privacy laws to antitrust lawsuits. While some tech leaders are hedging their bets between candidates, others are going all in for Harris or Trump—candidates who offer different, if not fully developed, approaches to regulation and innovation. Trump's vision emphasizes a return to American technological greatness with minimal government interference, attracting support from figures like Elon Musk and Marc Andreessen despite Silicon Valley's traditionally Democratic lean. Harris presents a more managed approach, a generally pro-innovation stance tempered by a desire for government to help shape AI and other tech outcomes. Democratic donors like Mark Cuban and Reid Hoffman are backing Harris while hoping she'll soften Biden's tough antitrust stance. Meanwhile, crypto billionaires are flexing their political muscle, working to unseat skeptics in Congress after years of scrutiny under Biden's financial regulators. What are these competing visions for technology, and how would each candidate approach tech policy if elected? Will 2024 reshape the relationship between Silicon Valley and Washington? Evan is joined by Derek Robertson, a veteran tech policy writer who authors the Digital Future Daily newsletter for Politico. *Correction: The audio clip of Trump was incorrectly attributed to his appearance on the Joe Rogan Experience. The audio is from Trump's appearance on the Hugh Hewitt Show.
Over the past few years, Elon Musk's political evolution has been arguably as rapid and disruptive as one of his tech ventures. He has transformed from a political moderate to a vocal proponent of Donald Trump and the MAGA movement, and his outspokenness on issues like illegal immigration makes him an outlier among tech entrepreneurs and CEOs. Musk's increasing political involvement has added a layer of scrutiny to his businesses, particularly as SpaceX aims to secure more contracts and regulatory permissions. Labor tensions also loom, with Tesla facing unionization efforts and accusations of unfair labor practices, adding a wrinkle to an election where both presidential candidates are vying for the labor vote in the midst of several high-profile strikes this year. Through all this, Musk's companies—SpaceX, Tesla, and X—are pressing forward, but the stakes have arguably never been higher with regulatory bodies and the court of public opinion keeping a close watch. Many conservatives have embraced Musk as a Randian hero of sorts, a champion of free speech and innovation. Others sound a note of caution, warning that his emphasis on “efficiency” could undermine certain conservative values, and question whether his record on labor and China is worth celebrating. So, should conservatives embrace, or resist, Musk-ification? Evan is joined by Chris Griswold, Policy Director at American Compass, a New Right think tank based in DC. Check out his recent piece, “Conservatives Must Resist Musk-ification.” Previously, he served as an advisor to U.S. Senator Marco Rubio, where he focused on innovation, small business, and entrepreneurship.
Have tech companies become more powerful than governments? As the size and reach of firms like Google and Apple have increased, there is growing concern that these multi-trillion dollar companies are too powerful and have started replacing important government functions. The products and services of these tech giants are ubiquitous and pillars of modern life. Governments and businesses increasingly rely on cloud services like Microsoft Azure and Amazon Web Services to function. Elon Musk's Starlink has provided internet access in the flood zones of North Carolina and the battlefields of Ukraine. Firms like Palantir are integrating cutting-edge AI into national defense systems. In response to these rapid changes, and resulting concerns, regulators in Europe and the U.S. have proposed various measures—from antitrust actions to new legislation like the EU's AI Act. Critics warn that overzealous regulation could stifle the very innovation that has driven economic growth and technological advancement, potentially ceding Western tech leadership to China. Others, like our guest, argue that these actions to rein in tech don't go nearly far enough, and that governments must do more to take back the power that she says tech companies have taken from nation-states. Evan and Luke are joined by Marietje Schaake, a former member of the European Parliament and current fellow at Stanford's Cyber Policy Center. She is the author of The Tech Coup: How to Save Democracy from Silicon Valley. You can read her op-ed in Foreign Affairs summarizing the book.
On September 29th, Governor Newsom vetoed SB 1047, a controversial bill aimed at heading off catastrophic risks of large AI models. We previously covered the bill on The Dynamist in episode 64. In a statement, Newsom said the bill applies “stringent standards to even the most basic functions” and that he does “not believe this is the best approach to protecting the public from real threats posed by the technology.” Senator Scott Wiener, the bill's author, responded, “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers[.]” The bill had passed the California Senate back in August by a vote of 30-9, having been the subject of fierce debate between AI companies big and small, and researchers and advocates who fear a catastrophic AI event. Proponents want to get ahead of AI cyberattacks, AI weapons development, or doomsday scenarios by requiring developers to implement safety protocols and holding them liable when they don't. Opponents argue that the bill will stifle innovation in California, calling it an “assault on open source” and a “harm to the budding AI ecosystem.” Aside from the merits of the legislation, it is arguably the first major political fight over AI in the U.S. where competing interests fought all the way to the governor's desk, attempting to sway the pen of Governor Newsom. The story featured a cast of characters from California Democrats like Nancy Pelosi to billionaires like Elon Musk to major companies like Google and OpenAI. What does this battle say about who holds sway in emerging AI politics? What are the factions and alignments? And what does this all mean for next year in California and beyond? Evan is joined by Sam Hammond, Senior Economist at FAI and author of the Substack Second Best, and Dean Ball, a research fellow at the Mercatus Center, author of the Substack Hyperdimensional, and a non-resident fellow at FAI.
Since the advent of platforms like Uber, Instacart, and DoorDash, the so-called gig economy has been intertwined with technology. While the apps no doubt created loads of opportunity for people seeking flexible work on their own schedules, they have been lambasted by critics who say they don't provide drivers and grocery shoppers with a minimum wage and health benefits. This tech-labor debate has largely played out in state legislatures and in the courts. Voters have weighed in as well, with gig companies like DoorDash and Lyft spending some $200 million to win the Prop 22 ballot initiative in California that exempted their workers from new labor laws. Should Uber be forced to provide benefits to employees? Should the government stay out and let these markets continue to operate? As labor leaders and progressive lawmakers continue to battle with the companies, and governments, companies, and unions struggle to apply old principles to an increasingly digital economy, some argue for a third way, including our guest today. Wingham Rowan is the founder and managing director of Modern Markets for All, a non-profit that develops infrastructure for people working outside of traditional 9-5 jobs. Prior to that, he was a TV host and producer at the BBC. Read more about his work at PeoplesCapitalism.org.
When the average person thinks of nuclear energy, there's a good chance they're thinking in terms influenced by pop culture—Homer Simpson's union job at the Springfield plant, or the HBO miniseries Chernobyl, which dramatized the world's worst nuclear meltdown. For all its promise in the mid-20th century, U.S. nuclear energy largely stalled in the 1970s and 80s. While public anxiety over its safety played a role, experts have pointed to the hefty cost of building plants and poor regulatory and policy decisions as having more impact. But in recent years, as demand for low-carbon energy surges and companies like OpenAI, Microsoft, and Google are burning through energy to train artificial intelligence, there is a renewed interest in making nuclear work in this century. But concerns over cost and safety remain, and even among proponents of nuclear energy, there is a robust debate about exactly how to approach future builds: whether to rely on conventional methods or hold off until new research potentially yields a smaller, more cost-effective method of unlocking atomic energy. What is the state of nuclear power in the U.S. and around the world today? What policies could shape its future? And how might AI, other market dynamics, geopolitics, and national security concerns impact the debate and its outcomes? Evan is joined by Emmet Penney, the creator of Nuclear Barbarians, a newsletter and podcast about industrial history and energy politics, and a contributing editor at COMPACT magazine. Thomas Hochman, Policy Manager at FAI, also joins. You can read Emmet's recent piece on why nuclear energy is a winning issue for the populist GOP here. You can read Thomas's piece for The New Atlantis on “nuclear renaissance” here, and his writeup of the ADVANCE Act here.
The recent riots in the United Kingdom raise new questions about online free speech and misinformation. Following the murder of three children in Southport, England, false rumors spread across social media about the killer's identity and religion, igniting simmering resentment over the British government's handling of immigration in recent years. X, formerly Twitter, has come under fire for allowing the rumors to spread, and the company's owner Elon Musk has publicly sparred with British politicians and European Union regulators over the issue. The incident is the latest in an ongoing debate abroad and in the U.S. about free speech and the real-world impact of online misinformation. In the U.S., politicians have griped for years about the content policies of major platforms like YouTube and Facebook—generally with conservatives complaining the companies are too censorious and liberals bemoaning that they don't take down enough misinformation and hate speech. Where should the line be? Is it possible for platforms to respect free expression while removing “harmful content” and misinformation? Who gets to decide what is true and false, and what role, if any, should the government play? Evan is joined by Renée DiResta, who studies and writes about adversarial abuse online. Previously, she was a research manager at the Stanford Internet Observatory, where she researched and investigated online political speech and foreign influence campaigns. She is the author of Invisible Rulers: The People Who Turn Lies into Reality. Read her recent op-ed in the New York Times here.
Minnesota Governor Tim Walz has made headlines for being picked as Vice President Kamala Harris's running mate. One underreported aspect of his record is signing Minnesota's first “right to repair” law last year. The bill took effect last month. The concept sounds simple enough: if you buy something like a phone or a car, you should have the right to fix it. But as our world becomes more digitized, doing it yourself, or having your devices repaired by third-party mechanics or cell phone shops, can be complicated. Everything from opening a car door to adjusting your refrigerator can now involve complex computer code, giving manufacturers more control over whether, and how, devices can be repaired. Frustrations over this dynamic sparked the “right to repair” movement, which advocates for legislation to require manufacturers to provide parts, tools, and guides to consumers and third parties. While powerful companies like John Deere and Apple have cited cybersecurity and safety concerns with farmers and iPhone users tinkering with their devices, right-to-repair advocates say irreparability undermines consumer rights, leads to higher prices and worse quality, and harms small businesses that provide third-party repair services. As more states continue to adopt and debate these laws, which industries will be impacted? And will the federal government consider imposing the policy nationwide? Evan and Luke are joined by Kyle Wiens, perhaps the most vocal proponent of the right to repair in the U.S. Wiens is the co-founder and CEO of iFixit, which sells repair parts and tools and provides free how-to guides online. Read Kyle's writing on repair rights and copyright in Wired and his article in The Atlantic on how his grandfather helped influence his thinking. See Luke's piece in Reason on how the debate impacts agriculture.
OpenAI unleashed a controversy when the famed maker of ChatGPT debuted its new voice assistant Sky. The problem? For many, her voice sounded eerily similar to that of Scarlett Johansson, who had ironically starred in the dystopian movie Her, about a man, played by Joaquin Phoenix, who develops a romantic relationship with a virtual assistant. While OpenAI claimed that Sky's voice belonged to a different actress, the company pulled it down shortly after the launch given the furor from Johansson and the creative community. But a flame had already been lit in the halls of Congress, as the controversy has inspired multiple pieces of legislation dealing with serious questions raised by generative AI. Should AI companies be allowed to train their models without compensating artists? What exactly is “fair use” when it comes to AI training and copyright? What are the moral and ethical implications of training AI products on human-created works when those products could compete with, or replace, those same humans? What are the potential consequences of regulation in this area, especially as the U.S. government wants to beat out China in the race for global AI supremacy? Evan is joined by Josh Levine, Tech Policy Manager at FAI, and Luke Hogg, Director of Policy and Outreach at FAI. Read Josh's piece on the COPIED Act here, and Luke's piece on the NO AI FRAUD Act here.
Trump's pick of J.D. Vance as his running mate is seen by many as the culmination of a years-long realignment of Republican and conservative politics—away from trickle-down economics toward a more populist, worker-oriented direction. While the pick ushered in a flood of reactions and think pieces, it's unclear at this stage what Vance's impact would truly be in a Trump second term. Will Vance be able to overcome some of Trump's more establishment-friendly positions on taxes and regulation? Will he advocate that Trump continue some of Biden's policies on tech policy, particularly the administration's actions against companies like Google, Amazon, and Apple? How might Vance influence policies on high-tech manufacturing, defense technology, and artificial intelligence? Evan is joined by Oren Cass, Chief Economist and Founder of American Compass and the author of The Once and Future Worker: A Vision for the Renewal of Work in America. Read his recent op-ed in the New York Times on populism and his recent piece in the Financial Times on Vance. Subscribe to his Substack, “Understanding America.” Evan is also joined by Marshall Kosloff, co-host of The Realignment podcast, sponsored by FAI, which has been chronicling the shifting politics of the U.S. for several years, as well as by Jon Askonas, professor of politics at Catholic University and senior fellow at FAI.
On July 1, the Supreme Court issued a 9-0 ruling in Moody v. NetChoice, a case on Florida and Texas's social media laws aimed at preventing companies like Facebook and YouTube from discriminating against users based on their political beliefs. The court essentially kicked the cases back down to the lower courts, the Fifth and Eleventh Circuits, because they hadn't fully explored the First Amendment implications of the laws, including how they might affect direct messages or services like Venmo and Uber. While both sides declared victory, the laws remain enjoined until the lower courts complete their review on remand, and a majority of justices seemed skeptical that states could regulate the news feeds and content algorithms of social media companies without violating the firms' First Amendment rights. Other justices, like Samuel Alito, argued the ruling is narrow and leaves the door open for states to try to regulate content moderation. So how will the lower courts proceed? Will any parts of the Florida and Texas laws stand? What will it mean for the future of social media regulation? And could the ruling have spillover effects in other areas of tech regulation, such as efforts to restrict social media for children or impose privacy regulations? Evan and Luke are joined by Daphne Keller, Platform Regulation Director at Stanford's Cyber Policy Center. Previously, she was Associate General Counsel at Google, where she led work on web search and other products. You can read her Wall Street Journal op-ed on the case here and her Lawfare piece here.
It's time for American industry's Lazarus moment. At least, that's what a growing coalition of contrarian builders, investors, technologists, and policymakers has asserted over the past several years. American might was built on our industrial base. As scholars like Arthur Herman detail in Freedom's Forge, the United States won World War II with industrial acumen and might. We built the broadest middle class in the history of the world, put men on the moon, and midwifed the jet age, the Internet, semiconductors, green energy, revolutionary medical treatments, and more in less than a century. But the optimism that powered this growth is fading, and our public policy ecosystem has systematically deprioritized American industry in favor of quick returns and cheap goods from our strategic competitors. Is there a way to restore our domestic industry? What does movement-building in this space look like? We're joined by Austin Bishop, a partner at Tamarack Global, co-founder of Atomic Industries, and co-organizer of REINDUSTRIALIZE, and Jon Askonas, Senior Fellow with FAI and Professor of Politics at the Catholic University of America. You can follow Austin on X here and Jon here. Read more about REINDUSTRIALIZE and the New American Industrial Alliance here and check out some of Jon's research on technological stagnation for American Affairs here.
For this special edition episode, FAI Senior Fellow Jon Askonas flew down to Palm Bay, FL to mix and mingle with the brightest minds in aerospace, manufacturing, and defense at the Space Coast Hard Tech Hackathon, organized by stealth founder Spencer Macdonald (also an FAI advisor). Jon sits down with friend of the show and Hyperstition founder Andrew Côté for a wide-ranging conversation on the space tech revolution, the “vibe shift” toward open dialogue, AI's role in shaping reality, and the challenges Silicon Valley faces in fostering new innovation. They critique regulatory moats that hamper entrepreneurship and safetyism's risk to progress, and explore the concept of “neural capitalism,” where AI enhances decentralized decision-making. You can follow Jon at @jonaskonas and Andrew at @andercot. Andrew recently hosted Deep Tech Week in San Francisco, and he's gearing up to host the next one in New York City.
Silicon Valley was once idolized for creating innovations that seemed like modern miracles. But the reputations of tech entrepreneurs have been trending downward of late, as Big Tech companies are blamed for any number of societal ills, from violating users' privacy and eroding teenagers' mental health, to spreading misinformation and undermining democracy. As the media and lawmakers focus on modern gripes with Big Tech, the origin stories of companies like Meta and Google feel like ancient history, almost forgotten. Our guest today argues that these stories, filled with youthful ambitions and moral tradeoffs—even “original sins”—help explain how the companies came to be, amassed profits, and wield power. And the lessons learned could provide a path for more responsible innovation, especially as the gold rush for artificial intelligence heats up. Evan is joined by Rob Lalka, Professor at Tulane University's Freeman School of Business and Executive Director of the Albert Lepage Center for Entrepreneurship and Innovation. He is the author of a new book, The Venture Alchemists: How Big Tech Turned Profits Into Power. Previously, he served in the U.S. State Department.
Big, established companies are at risk of getting disrupted as they get set in their ways; their internal bureaucracies grow too large, and they lose their nimbleness and take fewer risks. The pressure from upstarts forces larger firms to innovate; otherwise, they lose market share and may even fold. That, at least, is how many assume the tech economy is supposed to work. But is that how it works in practice? An increasing share of policymakers believe Big Tech giants don't face meaningful competition because their would-be competitors get bought, copied, or co-opted by essentially the same five companies: Google, Amazon, Apple, Meta, and Microsoft. While antitrust regulators have focused a lot on what they believe are “killer acquisitions,” such as then-Facebook buying Instagram, there seems to be less focus on what some experts call “co-opting disruption,” where large firms seek to influence startups and steer them away from potentially disruptive innovations. So what does that look like in practice? And is this a fair characterization of how the tech market works? Evan is joined by Adam Rogers, senior tech correspondent at Business Insider. Prior to that, he was a longtime editor and writer at Wired Magazine. You can read his article on co-opting disruption, “Big Tech's Inside Job,” here. He is also the author of Full Spectrum: How the Science of Color Made Us Modern.
Tornado Cash is a decentralized cryptocurrency mixing service built on Ethereum. Its open-source protocol allows users to obscure the trail of their cryptocurrency transactions by pooling funds together, making it difficult to trace the origin and destination of any given transfer. In August 2022, the U.S. Treasury Department took the unprecedented step of sanctioning Tornado Cash, effectively criminalizing its use by American citizens and businesses. Authorities accused the service of facilitating money laundering, including processing hundreds of millions in stolen funds linked to North Korean hackers. In the wake of the sanctions, Tornado Cash's website was taken down, its GitHub repository removed, and one of its developers arrested in Amsterdam. The crackdown has sent shockwaves through the crypto and privacy advocacy communities. Proponents argue that Tornado Cash is a neutral tool, akin to VPNs or Tor, with many legitimate uses beyond illicit finance. They warn that banning a piece of code sets a dangerous precedent and undermines fundamental rights to privacy and free speech. On the other hand, regulators contend that mixers like Tornado Cash have become a haven for cybercriminals and rogue state actors, necessitating more aggressive enforcement. As the legal and political battle unfolds, Coin Center, a leading crypto policy think tank, has taken up the mantle of defending Tornado Cash and its users. Director of Research Peter Van Valkenburgh, who also serves as a board member for Zcash, joins The Dynamist to walk through the crackdown and its implications for decentralized finance and open-source software. Luke Hogg, Director of Policy and Outreach, guest hosts this episode. You can read more from Peter on this issue here.
Social media undermines democracy. Small businesses are more innovative than big ones. Corporate profits are at all-time highs. America's secret weapon is laissez-faire capitalism. These are widely held beliefs, but are they true? Our guest today argues that these statements aren't just wrong, but that they're holding America back—discouraging talented people from entering the technology field and making companies too cautious and wary of regulators. Is America losing its faith in innovation? If so, what can companies and governments do to turn the tide? Has America's “free market” really been as free as we think, and what can policymakers learn from Alexander Hamilton when it comes to industrial policy? Evan is joined by Robert Atkinson, Founder and President of the Information Technology and Innovation Foundation, an independent, nonpartisan research and educational institute, often recognized as the world's leading think tank on science and tech policy. He is also the co-author of Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today's Innovation Economy. Read his article on Hamiltonian industrial policy here.