Concerns about AI chatbots delivering harmful, even profoundly dangerous advice or instructions to users are growing. There is deep concern over the effects of these interactions on children, and a growing number of stories—and lawsuits—about when things go wrong, particularly for teens. In this conversation, Justin Hendrix is joined by three legal experts who are thinking deeply about how to address questions related to chatbots, and about the need for substantially more research on human-AI interaction: Clare Huntington, Barbara Aronstein Black Professor of Law at Columbia Law School; Meetali Jain, founder and director of the Tech Justice Law Project; and Robert Mahari, associate director of Stanford's CodeX Center.
In this episode, Justin Hendrix speaks with Nerima Wako-Ojiwa, director of Siasa Place, and Odanga Madung, a tech and society researcher and journalist, about the intersection of technology, labor rights, and political power in Kenya and across Africa. The conversation explores the ongoing struggles of content moderators and AI data annotators, who face exploitative working conditions while performing essential labor for major tech companies; the failure of platforms to address harmful biases and disinformation that particularly affect African contexts; the ways in which governments increasingly use platform failures as justification for internet censorship and surveillance; and the promise of youth and labor movements that point to a more just and democratic future.
Earlier this year, an entity called the Observatory on Information and Democracy released a major report called INFORMATION ECOSYSTEMS AND TROUBLED DEMOCRACY: A Global Synthesis of the State of Knowledge on News Media, AI and Data Governance. The report is the product of three research assessment panels comprising over 60 volunteer researchers, coordinated by six rapporteurs and led by a scientific director, that together considered over 1,600 sources on topics at the intersection of technology, media, and democracy, ranging from trust in news to the links between mis- and disinformation and societal and political polarization. Justin Hendrix spoke to that scientific director, Robin Mansell, and one of the other individuals involved in the project as chair of its steering committee, Courtney Radsch, who is also on the board of Tech Policy Press.
Emily M. Bender and Alex Hanna are the authors of a new book that The Guardian calls “refreshingly sarcastic” and Business Insider calls a “funny and irreverent deconstruction of AI.” They are also occasional contributors to Tech Policy Press. Justin Hendrix spoke to them about their new book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, just out from HarperCollins.
On May 29, the Center for Civil Rights and Technology at The Leadership Conference on Civil and Human Rights released its Innovation Framework, which it calls a “new guiding document for companies that invest in, create, and use artificial intelligence (AI), to ensure that their AI systems protect and promote civil rights and are fair, trusted, and safe for all of us, especially communities historically pushed to the margins.” Justin Hendrix spoke to the Center's senior policy advisor on Civil Rights and Technology, Frank Torres, about the framework, the ideas that informed it, and the Center's interactions with industry.
In February, California Governor Gavin Newsom appointed Vera Zakem as California's State Chief Technology Innovation Officer at the California Department of Technology. Zakem brings deep experience from national security, democracy and human rights, and technology policy. Most recently, under former President Joe Biden, she served as the Chief Digital Democracy and Rights Officer at USAID, where she led global efforts to align emerging technologies with democratic values. Zakem assumes the role as California, like many governments, is accelerating its embrace of artificial intelligence. Justin Hendrix spoke with Zakem about the promise of state-led innovation and how to avoid its perils, what responsible AI governance might mean in practice, and how California might chart a course that's both ambitious and accountable to its citizens.
On Thursday, May 22, the United States House of Representatives narrowly advanced a budget bill that included the "Artificial Intelligence and Information Technology Modernization Initiative," which includes a 10-year moratorium on the enforcement of state AI laws. Tech Policy Press editor Justin Hendrix and associate editor Cristiano Lima-Strong discussed the moratorium, the contours of the debate around it, and its prospects in the Senate.
In his New York Times review of the book, Columbia Law School professor and former White House official Tim Wu calls journalist Karen Hao's new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, “a corrective to tech journalism that rarely leaves Silicon Valley.” Hao has appeared on this podcast before, to help us understand how the business model of social media platforms incentivizes the deterioration of information ecosystems, the series of events around OpenAI CEO Sam Altman's abrupt firing in 2023, and the furor around the launch of DeepSeek last year. This week, Justin Hendrix spoke with Hao about the book, and what she imagines for the future.
Last year, Elon Musk's xAI set up its "Colossus" supercomputer in an old Electrolux manufacturing facility in Memphis, Tennessee. Now, the residents of nearby neighborhoods are pushing for facts and fair treatment as the company looks to expand its footprint amid questions about its environmental impact. Justin Hendrix considers the state of play with Dara Kerr, a reporter for The Guardian; Amber Sherman, a Memphis activist; and artifacts from local media reporting over the past year.
Catherine Bracy is a civic technologist and community organizer whose work focuses on the intersection of technology and political and economic inequality. Justin Hendrix spoke with her about her new book, World Eaters: How Venture Capital is Cannibalizing the Economy. In it, she argues that the venture capital industry must be reformed to deliver true innovation that advances society rather than merely outsized returns for an increasingly monolithic set of investors.
From visions of AI paradise to the project to defeat death, many dangerous and unscientific ideas are driving Silicon Valley leaders. Justin Hendrix spoke to Adam Becker, a science journalist and author of MORE EVERYTHING FOREVER: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, just out from Basic Books.
In early April 2025, the White House Office of Management and Budget (OMB) released two major policies on Federal Agency Use of AI and Federal Procurement of AI - OMB memos M-25-21 and M-25-22, respectively. These memos were revised at the direction of President Trump's January 2025 executive order, “Removing Barriers to American Leadership in Artificial Intelligence,” and replaced the Biden-era guidance. Under the direction of the same executive order, the Department of Energy (DOE) also put out a request for information on AI infrastructure on DOE lands, following the announcement of the $500 billion Stargate project that aims to rapidly build new data centers and AI infrastructure throughout the United States. As the Trump administration is poised to unveil its AI Action Plan in the near future, the broader contours of its strategy for AI adoption and acceleration already seem to be falling into place. Is a distinct Trump strategy for AI beginning to emerge, and what will that mean for the United States and the rest of the world?

Show Notes:
Joshua Geltzer
Brianna Rosen
Just Security series, Tech Policy Under Trump 2.0
Clara Apt and Brianna Rosen's article "Shaping the AI Action Plan: Responses to the White House's Request for Information" (Mar. 18, 2025)
Justin Hendrix's article "What Just Happened: Trump's Announcement of the Stargate AI Infrastructure Project" (Jan. 22, 2025)
Sam Winter-Levy's article "The Future of the AI Diffusion Framework" (Jan. 21, 2025)
Clara Apt and Brianna Rosen's article "Unpacking the Biden Administration's Executive Order on AI Infrastructure" (Jan. 16, 2025)
Just Security's Artificial Intelligence Archive

Music: “Broken” by David Bullard from Uppbeat: https://uppbeat.io/t/david-bullard/broken (License code: OSC7K3LCPSGXISVI)
Last month, a group of researchers published a letter “Affirming the Scientific Consensus on Bias and Discrimination in AI.” The letter, published at a time when the Trump administration is rolling back policies and threatening research aimed at protecting people from bias and discrimination in AI, carries the signatures of more than 200 experts. To learn more about their goals, Justin Hendrix spoke to three of the signatories: J. Nathan Matias, an Assistant Professor in the Department of Communication and Information Science at Cornell University; Emma Pierson, an Assistant Professor of Computer Science at the University of California, Berkeley; and Suresh Venkatasubramanian, a Professor of Computer Science and Data Science at Brown University.
Across the United States and in some cities abroad yesterday, protestors took to the streets to resist the policies of US President Donald Trump. Dubbed the "Hands Off" protests, over 1,400 events took place, including in New York City, where protestors called for billionaire Elon Musk to be ousted from his role in government and for an end to the Department of Government Efficiency (DOGE), which has gutted government agencies and programs and sought to install artificial intelligence systems to purportedly identify wasteful spending and reduce the federal workforce. In this conversation, Justin Hendrix is joined by four individuals who are following DOGE closely. The conversation touches on the broader context and history of attempts to use technology to streamline and improve government services, the apparent ideology behind DOGE and its conception of AI, and what the future may look like after DOGE. Guests include: Eryk Salvaggio, a visiting professor at the Rochester Institute of Technology and a fellow at Tech Policy Press; Rebecca Williams, a senior strategist in the Privacy and Data Governance Unit at the ACLU; Emily Tavoulareas, who teaches and conducts research at Georgetown's McCourt School for Public Policy and is leading a project to document the founding of the US Digital Service; and Matthew Kirschenbaum, Distinguished University Professor in the Department of English at the University of Maryland.
Every now and again, a story that has a significant technology element really breaks through and drives the news cycle. This week, the Trump administration is reeling after The Atlantic magazine's Jeffrey Goldberg revealed that he was on the receiving end of Yemen strike plans in a Signal group chat between US Secretary of Defense Pete Hegseth and other top US national security officials. User behavior, a common failure point, appears to be to blame in this scenario. But what are the broader contours and questions that emerge from this scandal? To learn more, Justin Hendrix spoke to: Ryan Goodman, the Anne and Joel Ehrenkranz Professor of Law at New York University School of Law and co-editor-in-chief of Just Security, who served as special counsel to the general counsel of the Department of Defense (2015-16); and Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation (EFF), who has worked on projects including Privacy Badger, Canary Watch, and analysis of state-sponsored malware campaigns such as Dark Caracal.
Dr. Alondra Nelson holds the Harold F. Linder Chair and leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study, where she has served on the faculty since 2019. From 2021 to 2023, she was deputy assistant to President Joe Biden and acting director and principal deputy director for science and society of the White House Office of Science and Technology Policy. She was deeply involved in the Biden administration's approach to artificial intelligence. She led the development of the White House “Blueprint for an AI Bill of Rights,” which informed President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. To say the Trump administration has taken a different approach to AI and how to think about its role in government and in society would be an understatement. President Trump rescinded President Biden's executive order and is at work developing a new approach to AI policy. At the Paris AI Action Summit in February, Vice President JD Vance promoted a vision of American dominance and challenged other nations that would seek to regulate American AI firms. And then there is DOGE, which is at work gutting federal agencies with the stated intent of replacing key government functions with AI systems and using AI to root out supposed fraud and waste. This week, Justin Hendrix had the chance to speak with Dr. Nelson about how she's thinking about these phenomena and the work to be done in the years ahead to secure a more just, democratic, and sustainable future.
This week, RightsCon, which bills itself as "the world's leading summit on human rights in the digital age," descends on Taipei. To better understand the dynamics in the civil society community working on digital rights and tech policy matters in Taiwan, Justin Hendrix spoke to three experts: Liu I-Chen (劉以正), Asia Program Officer at ARTICLE 19; Kuan-Ju Chou (周冠汝), Deputy Secretary-General of the Taiwan Association for Human Rights; and Grace Huang (黃寬心), Director for Global Justice and Digital Freedom at the Judicial Reform Foundation.
At the Paris AI Action Summit on February 10-11, remarks by EU and US leaders indicated significant divergence on how to think about AI. But on balance, nations are moving decisively toward innovation and exploitation of this technology and away from containing it or restricting it. In this episode, Justin Hendrix surfaces voices from the Summit, as well as reactions and discussion on these matters at this year's State of the Net conference on February 11 in Washington, DC, including comments by Center for Democracy & Technology vice president for policy Samir Jain, Abundance Institute head of AI policy Neil Chilson, and former Biden administration assistant director for AI policy Olivia Zhu.
As Donald Trump's second presidency enters its third week, Elon Musk is center stage as the Department of Government Efficiency moves to gut federal agencies. In this episode, Justin Hendrix speaks with two experts who are following these events closely and thinking about what they tell us about the relationship between technology and power: David Kaye, a professor of law at the University of California Irvine and formerly the UN Special Rapporteur on Freedom of Expression, and Yaël Eisenstat, director of policy impact at Cybersecurity for Democracy at New York University.
Justin Hendrix speaks with Jathan Sadowski, a senior lecturer in the Faculty of Information Technology at Monash University in Melbourne, Australia; co-host of This Machine Kills, a weekly podcast on technology and political economy; and author of the new book The Mechanic and the Luddite: A Ruthless Criticism of Technology and Capitalism from the University of California Press.
If Chinese AI startup DeepSeek's efficiency and performance achievements stand up to scrutiny, it could have big implications for the AI race. It could call into question the strategic approach that the biggest US firms appear to be taking and the wisdom of the current American policy approach to AI. To discuss these issues, Justin Hendrix spoke to Karen Hao, a reporter who covers AI. In recent years, she's reported on China and tech for the Wall Street Journal, written about AI for The Atlantic, and run a program for the Pulitzer Center to teach other journalists how to report on AI. Hao has a book about OpenAI, the AI industry, and its global impacts that will be released later this year.
From Executive Orders on AI and cryptocurrency to "ending federal censorship," President Donald Trump had a busy first week in the White House. Justin Hendrix discussed the news with Damon Beres, a senior editor at The Atlantic, where he oversees the technology section. Beres wrote a piece reflecting on Trump's inauguration titled "Billions of People in the Palm of Trump's Hand."
Silicon Valley's biggest power players traded in their hoodies for suits and ties this week as they sat front and center to watch Donald Trump take the oath of office again. Seated in front of the incoming cabinet were Meta's Mark Zuckerberg, Google's Sundar Pichai, Amazon's Jeff Bezos, and Trump confidant and leader of the so-called Department of Government Efficiency, Elon Musk. Apple CEO Tim Cook, Sam Altman from OpenAI, and TikTok CEO Shou Zi Chew also looked on. For an industry once skeptical of Trump, this dramatic transformation in political allegiance portends changes for the country — and the world. From the relaxing of hate speech rules on Meta platforms to the mere hourslong ban of TikTok to the billions of government dollars being pledged to build data centers to power AI, it is still only the beginning of this realignment. On this week's episode of The Intercept Briefing, Justin Hendrix, the CEO and editor of Tech Policy Press, and Intercept political reporter Jessica Washington dissect this shift. “Three of the individuals seated in front of the Cabinet are estimated by Oxfam, in its latest report on wealth inequality, to be on track to potentially become trillionaires in just the next handful of years: Mark Zuckerberg, Jeff Bezos, Elon Musk,” says Hendrix. “Musk is estimated to be the first trillionaire on the planet, possibly as early as 2027.” Washington says there's more at stake than just personal wealth. “These are people who view themselves as world-shapers, as people who create reality in a lot of ways. Aligning themselves with Trump and with power in this way is not just about their financial interests, it's about pushing their vision of the world.” To hear more of this conversation, check out this week's episode of The Intercept Briefing.
Today, Friday, January 17, 2025, the US Supreme Court delivered its order upholding the constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act, a law passed by Congress and signed by President Joe Biden in April 2024. The Court found that the Act, which effectively bans TikTok in the US unless its Chinese parent company, ByteDance, sells it, does not violate the First Amendment rights of TikTok, its users, or creators. The decision clears the way for a ban to go into effect on January 19, 2025. Late this evening, TikTok issued a statement saying that “Unless the Biden Administration immediately provides a definitive statement to satisfy the most critical service providers assuring non-enforcement, unfortunately TikTok will be forced to go dark on January 19.” The White House had previously announced it would not enforce the ban before President Biden leaves office on Monday. Unless Biden takes action, this may set President-elect Donald Trump up to somehow come to TikTok's rescue. To learn more about the ruling and what may happen next, Justin Hendrix spoke to Kate Klonick, an associate professor of law at St. John's University and a fellow at Brookings, Harvard's Berkman Klein Center, and the Yale Information Society Project. The conversation also touches on recent moves by Meta's founder and CEO, Mark Zuckerberg, to ingratiate himself to the incoming Trump administration.
Even as the new year ushers in a new administration and Congress in the US at the federal level, dozens of states are kicking off new legislative sessions and are expected to pursue various tech policy goals. Justin Hendrix spoke to three experts to get a sense of the trends unfolding across the states on the regulation of AI, privacy, child online safety, and related issues: Keir Lamont, senior director at the Future of Privacy Forum (FPF) and author of The Patchwork Dispatch, a newsletter on state tech policy issues; Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center (EPIC), which runs a state privacy policy project and scores AI legislation; and Scott Babwah Brennen, director of the Center on Technology Policy at New York University and an author of a recent report on trends in state tech policy.
This week's guest is Dr. Ruha Benjamin, Alexander Stewart 1886 Professor of African American Studies at Princeton University and Founding Director of the Ida B. Wells Just Data Lab. Benjamin was recently named a 2024 MacArthur Fellow, and she's written and edited multiple books, including 2019's Race After Technology and 2022's Viral Justice. Last week she joined Justin Hendrix to discuss her latest book, Imagination: A Manifesto, published this year by WW Norton & Company.
This close to the end of 2024, it's clear that one of the most significant tech stories of the year was the outcome of the Google search antitrust case. It will also make headlines next year and beyond as the remedies phase gets worked out in the courts. For this episode, Justin Hendrix turns the host duties over to someone who has looked closely at this issue: Alissa Cooper, the Executive Director of the Knight-Georgetown Institute (KGI). Alissa hosted a conversation with three individuals who are following the remedies phase with an expert eye: Cristina Caffarra, a competition economist, honorary professor at University College London, and cofounder of the Competition Research Policy Network at CEPR (Centre for Economic Policy Research), London; Kate Brennan, associate director at the AI Now Institute; and David Dinielli, an attorney and a visiting clinical lecturer and senior research scholar at Yale Law School.
Kate Starbird is a professor in the Department of Human Centered Design & Engineering and director of the Emerging Capacities of Mass Participation Laboratory at the University of Washington, and co-founder of the University of Washington's Center for an Informed Public. Justin Hendrix interviewed her about her team's ongoing efforts to study online rumors, including during the 2024 US election; the differences between the left and right media ecosystems in the US; and how she believes the research field is changing.
At its November 21st "Summit of the Future of the Internet," billionaire Frank McCourt's Project Liberty hosted a panel discussion featuring Congresswoman Nancy Mace, a Republican from South Carolina, on a panel with Congressman Ro Khanna, a Democrat from California, moderated by the media personality Charlamagne tha God. Last month, Congresswoman Mace led an effort to ban transgender women from using female bathrooms at the US Capitol in response to the election of Sarah McBride, who is set to be the first openly transgender person in Congress, representing voters in Delaware. Evan Greer, director of Fight for the Future, a tech advocacy organization, took the opportunity to confront Congresswoman Mace's bigotry during the Project Liberty conference. Justin Hendrix spoke to Evan last week about the incident, where she believes the tech accountability and digital rights movement should draw the line when it comes to engaging with far-right politicians, and how we can go about building spaces where we can imagine a different future that is truly just and liberatory.
If you're trying to game out the potential role of technology in the post-election period in the US, there is a significant "X" factor. When he purchased the social media platform formerly known as Twitter, “Elon Musk didn't just get a social network—he got a political weapon.” So says today's guest, a journalist who is one of the keenest observers of phenomena on the internet: Charlie Warzel, a staff writer at The Atlantic and the author of its newsletter Galaxy Brain. Justin Hendrix caught up with him about what to make of Musk and the broader health of the information environment.
On Tuesday, November 5th, the final ballots will be cast in the 2024 US presidential election. But the process is far from over. How prepared are social media platforms for the post-election period? What should we make of characters like Elon Musk, who is actively advancing conspiracy theories and false claims about the integrity of the election? And what can we do going forward to support election workers and administrators on the frontlines facing threats and disinformation? To help answer these questions, Justin Hendrix spoke with three experts: Katie Harbath, CEO of Anchor Change and chief global affairs officer at Duco Experts; Nicole Schneidman, technology policy strategist at Protect Democracy; and Dean Jackson, principal of Public Circle LLC and a reporting fellow at Tech Policy Press.
Martin Husovec is an associate law professor at the London School of Economics and Political Science (LSE). He works on questions at the intersection of technology and digital liberties, particularly platform regulation, intellectual property and freedom of expression. He's the author of Principles of the Digital Services Act, just out from Oxford University Press. Justin Hendrix spoke to him about the rollout of the DSA, what to make of progress on trusted flaggers and out-of-court dispute resolution bodies, how transparency and reporting on things like 'systemic risk' is playing out, and whether the DSA is up to the ambitious goals policymakers set for it.
In this episode, Justin Hendrix speaks with three researchers who recently published projects looking at the intersection of generative AI with elections around the world, including: Samuel Woolley, Dietrich Chair of Disinformation Studies at the University of Pittsburgh and one of the authors of a set of studies titled Generative Artificial Intelligence and Elections; Lindsay Gorman, Managing Director and Senior Fellow of the Technology Program at the German Marshall Fund of the United States and an author of a report and online resource titled Spitting Images: Tracking Deepfakes and Generative AI in Elections; and Scott Babwah Brennen, Director of the NYU Center on Technology Policy and one of the authors of a deep dive into the literature on the effectiveness of AI content labels and another on the efficacy of recent US state legislation requiring labels on political ads that use generative AI.
Mariana Olaizola Rosenblat and Inga K. Trauthig, along with Sam Woolley, are the authors of a new report from the NYU Stern Center for Business and Human Rights and the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin titled "Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation." Justin Hendrix caught up with them to learn more about how political propagandists are exploiting the features of encrypted messaging platforms to manipulate voters, and what can be done about it without breaking the promise of encryption for all users.
A lot of folks frustrated with major social media platforms are migrating to alternatives like Mastodon and Bluesky, which operate on decentralized protocols. This summer, Erin Kissane and Darius Kazemi released a report on the governance of fediverse microblogging servers and the moderation practices of the people who run them. Justin Hendrix caught up with Erin Kissane about their findings, including the emerging forms of diplomacy between different server operators, the types of political and policy decisions moderators must make, and the need for more resources and tooling to enable better governance across the fediverse.
The results in this year's installment of the Freedom House Freedom on the Net report generally follow the same distressing trajectory as prior reports, marking a 14th consecutive year of decline in internet freedom around the world. But in this year of elections, the Freedom House analysts also identified a set of concerning phenomena related to this most fundamental act of democracy and how governments are asserting themselves, for better or worse. Justin Hendrix spoke to report authors Allie Funk and Kian Vesteinsson about their findings.
Barry Lynn is the executive director of the Open Markets Institute in Washington DC and the author of this month's cover essay in Harper's titled "The Antitrust Revolution: Liberal democracy's last stand against Big Tech." Justin Hendrix spoke to him about his essay, about the remedy framework proposed by the US Department of Justice following the ruling in the Google search antitrust trial, and about what to anticipate for the antitrust movement following the 2024 US presidential election.
Last week, Wall Street Journal technology reporter Jeff Horwitz first reported on details of an unredacted version of a complaint against Snap brought by New Mexico Attorney General Raúl Torrez. Tech Policy Press editor Justin Hendrix spoke to Horwitz about its details, and questions it leaves unanswered.
Arvind Narayanan and Sayash Kapoor are the authors of AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference, published September 24 by Princeton University Press. In this conversation, Justin Hendrix focuses in particular on the book's Chapter 6, "Why Can't AI Fix Social Media?"
The Institute for Strategic Dialogue (ISD) recently assessed social media platforms' policies, public commitments, and product interventions related to election integrity across six major issue areas: platform integrity, violent extremism and hate speech, internal and external resourcing, transparency, political advertising, and state-affiliated media. Justin Hendrix spoke to two of the report's authors: ISD's Director of Technology & Society, Isabelle Frances-Wright, and its Senior US Digital Policy Manager, Ellen Jacobs. ISD's assessment included Snap, Facebook, Instagram, TikTok, YouTube, and X.
Marietje Schaake is the author of The Tech Coup: How to Save Democracy from Silicon Valley. Dr. Alondra Nelson, a Professor at the Institute for Advanced Study, who served as deputy assistant to President Joe Biden and Acting Director of the White House Office of Science and Technology Policy (OSTP), calls Schaake “a twenty-first century Tocqueville” who “looks at Silicon Valley and its impact on democratic society with an outsider's gimlet eye.” Nobel prize winner Maria Ressa says Schaake's new book “exposes the unchecked, corrosive power that is undermining democracy, human rights, and our global order.” And author and activist Cory Doctorow says the book offers “A thorough and necessary explanation of the parade of policy failures that enshittified the internet—and a sound prescription for its disenshittification.” Justin Hendrix spoke to Schaake just before the book's publication on September 24, 2024.
In 2019, Thierry Breton, a French business executive who served as France's Minister of Finance from 2005 to 2007, was nominated by President Emmanuel Macron to become a member of the European Commission for the Internal Market. In that role, his name and face were closely associated with Europe's push to regulate digital markets and the passage of legislation such as the Digital Services Act and the EU's AI Act. On Monday, September 16, in a letter that called into question EU Commission President Ursula von der Leyen's governance, Breton resigned his post. While certain tech executives may be happy to see him go (Elon Musk posted “bon voyage” in response to the news), his departure spells change for Europe's approach to tech going forward. To learn more, Justin Hendrix reached out to a European journalist who is covering these matters closely, and who has been kind enough to share his reporting on the EU AI Act with Tech Policy Press in the past: MLex Senior AI Correspondent Luca Bertuzzi.
Paris Marx, a Canadian tech critic, recently authored a post under the headline "Pavel Durov and Elon Musk are not free speech champions: The actions against Telegram and Twitter/X are about sovereignty, not speech." Justin Hendrix spoke to Paris about his assessment of these matters, and why those making claims in defense of free speech in the wake of Brazil's ban on X and Telegram founder and CEO Pavel Durov's arrest in France may in fact be undermining free expression and internet freedoms in the long run.
Today is Monday, September 9th. Today Judge Leonie Brinkema of the US District Court for the Eastern District of Virginia is presiding over the start of a trial in which the United States Department of Justice accuses Google of violating antitrust law, abusing its power in the market for online advertising. Google contests the allegations against it. To get a bit more detail on what to expect, Justin Hendrix spoke to two individuals covering the case closely who take a critical view of Google, the government's allegations about its power in the online advertising market, and the company's effect on journalism and the overall media and information ecosystem: Sarah Kay Wiley, director of policy at Check My Ads, which is running a comprehensive tracker on the case; and Karina Montoya, a senior reporter and policy analyst at the Center for Journalism and Liberty, a program of the Open Markets Institute, who has covered the case extensively for Tech Policy Press.
On August 26th, Justin Hendrix moderated a panel convened by the Social Science Research Council at its offices in Brooklyn, New York. The panel was titled “Platforms and Elections: The Global State of Play,” and it featured: Dr. Shannon McGregor, associate professor at the UNC Hussman School of Journalism and Media and a principal investigator with the Center for Information Technology in Public Life (CITAP); Dr. Jonathan Corpus Ong, professor of global digital media at the University of Massachusetts at Amherst and inaugural director of the Global Technology for Social Justice Lab; and Dr. Chris Tenove, research associate and instructor at the School of Public Policy and Global Affairs and Assistant Director of the Center for the Study of Democratic Institutions at the University of British Columbia. This episode features a lightly edited recording of the conversation, which touches on topics ranging from the role of civil society and independent researchers in efforts to protect the integrity of elections and mitigate the spread of misinformation to current questions about how generative AI may impact politics.
Thirty tech bills went through the lawmaking sausage grinder in California this past session, and now Governor Gavin Newsom must decide the fate of the 19 that passed the state legislature. The Governor has until the end of September to sign or veto the bills, or to permit them to become law without his signature. To learn a little more about some of the key pieces of legislation and the overall atmosphere around tech regulation in California, Justin Hendrix spoke to two journalists who live and work in the state and cover these issues regularly: Jesús Alvarado, a reporting fellow at Tech Policy Press and author of a recent post on SB 1047, a key piece of the California legislation; and Khari Johnson, a technology reporter at CalMatters, a fellow in the Digital Technology for Democracy Lab at the Karsh Institute for Democracy at the University of Virginia, and the author of a recent article on the California legislation.
Renée DiResta, who serves on the board of Tech Policy Press and has been an occasional contributor, is the author of Invisible Rulers: The People Who Turn Lies Into Reality, published by Hachette Book Group in June. Justin Hendrix had a chance to catch up with DiResta last week to discuss some of the key ideas in the book, and how she sees them playing out in the current moment heading into the 2024 US election.
The billionaire owner of the social media platform X, Elon Musk, has been in a prolonged dispute with a Supreme Court Judge in Brazil regarding X's content moderation practices. Earlier this year, Judge Alexandre de Moraes launched an investigation into X after Musk defied a court order to block accounts that supported former right-wing president Jair Bolsonaro and were accused of spreading misinformation and hate speech. On Friday afternoon, August 30, following a standoff over an order requiring X to appoint a new legal representative in Brazil, the Judge issued an order to suspend X in the country. Justin Hendrix spoke to three people following the situation closely in Brazil: Laís Martins, a journalist at The Intercept in Brazil; Sérgio Spagnuolo, executive director and founder of the data-driven tech news organization Nucleo Journalism; and Dr. Ivar Alberto Hartmann, an associate professor at the Insper Institute of Education and Research in Brazil.
Justin Hendrix speaks with Mark Surman, President of Mozilla, about Mozilla's work promoting open source AI, the importance of competition in the tech sector, and the regulatory challenges facing the industry. Surman discusses Mozilla's initiatives in AI investment and development, and reflects on what the recent ruling in the Google search case might mean for the future of Mozilla and the tech economy. And Surman shares his hopes for the future: that we can arrive at a tech economy that is not purely extractive, but rather one that respects people's values and dignity.
On Friday, August 16, the United States Ninth Circuit Court of Appeals issued a ruling in NetChoice v. Bonta, partially upholding and partially vacating a preliminary injunction against California's Age-Appropriate Design Code Act. The court affirmed that certain provisions of the law are likely to violate the First Amendment by compelling online businesses to assess and mitigate potential harms to children, but it vacated the broader injunction, remanding the case to the district court for further consideration of other parts of the statute, including restrictions on the collection and use of children's data. In this episode, Justin Hendrix recounts the basics of the Ninth Circuit ruling. And in a second segment that was recorded just days before Friday's ruling, Tech Policy Press fellow Dean Jackson is joined by Tech Justice Law Project executive director Meetali Jain and USC Marshall School Neely Center managing director Ravi Iyer for a discussion on key questions that were before the Ninth Circuit and their implications for future efforts at tech regulation.