This weekend, the Americans with Disabilities Act (ADA) turns 35. Signed into law on July 26, 1990, the law provides broad anti-discrimination protections for people with disabilities in the US, and has shaped how people with disabilities interact with various technologies. To discuss how the law has aged and what the fight for equity and inclusion looks like going forward, Tech Policy Press fellow Ariana Aboulafia spoke with three leaders working at the intersection of disability and technology:
Maitreya Shah, tech policy director at the American Association of People with Disabilities;
Blake Reid, professor at the University of Colorado; and
Cynthia Bennett, senior research scientist at Google.
Yesterday, United States President Donald Trump took to the stage at the "Winning the AI Race Summit" to promote the administration's AI Action Plan. Shortly after it was published, Tech Policy Press editor Justin Hendrix sat down with Sarah Myers West, the co-director of the AI Now Institute; Maia Woluchem, the program director of the Trustworthy Infrastructures team at Data and Society; and Ryan Gerety, the director of the Athena Coalition, to discuss the plan and what it portends for the future.
Tech Policy Press fellow Anika Collier Navaroli is the host of Through to Thriving, a special podcast series where she talks with technology policy practitioners to explore futures beyond our current moment. For this episode, Anika spoke with two experts on Trust & Safety about balance and resilience in a notoriously difficult field. Alice Hunsberger is the head of Trust & Safety at Musubi, a firm that sells AI content moderation solutions. Jerrel Peterson is the director of content policy at Spotify. Hunsberger and Peterson discussed how they broke into the field, their observations about the current state of the industry, how to better the working relationship between civil society and industry, and their advice for the next generation of practitioners.
In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking. To take stock of where things stand on state AI policymaking, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two experts:
Scott Babwah Brennen, director of NYU's Center on Technology Policy; and
Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation (EFF).
Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said. At the same time, both European and American tech companies have raised concerns about the AI Act's implementation timeline, with some calling to “stop the clock” on the AI Act's rollout. To learn more, Tech Policy Press associate editor Ramsha Jahangir spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.
If you've been reading Tech Policy Press closely over the last three weeks, you may have come across one or more posts from a collaboration with Data & Society called “Ideologies of Control: A Series on Tech Power and Democratic Crisis.” The articles in the series examine how powerful tech billionaires and authoritarian leaders and thinkers are leveraging AI and digital infrastructure to advance anti-democratic agendas, consolidate control, and reshape society in ways that threaten privacy, labor rights, environmental sustainability, and democratic governance. For this episode, Justin Hendrix spoke to four of the authors who contributed to the series:
Jacob Metcalf, program director of the AI On the Ground Initiative at Data & Society;
Tamara Kneese, program director of the Climate, Technology and Justice program at Data & Society;
Reem Suleiman, outgoing US advocacy lead at the Mozilla Foundation and member of the city of Oakland's Privacy Advisory Commission; and
Kevin De Liban, founder of TechTonic Justice.
For a special series of episodes dubbed Through to Thriving that will air throughout the year, Tech Policy Press fellow Anika Collier Navaroli is hosting discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. The third episode in the series features her conversation with Dr. Timnit Gebru, the founder and executive director of the Distributed Artificial Intelligence Research Institute. Last year, Dr. Gebru wrote a New York Times opinion essay that asked, “Who Is Tech Really For?” In the piece, she also asked, “what would an internet that served my elders look like?” This year, DAIR has continued to ask these questions by hosting an event and a blog called Possible Futures that imagines “what the world can look like when we design and deploy technology that centers the needs of our communities.” In one of these pieces, Dr. Gebru and her colleagues Asmelash Teka Hadgu and Dr. Alex Hanna describe “An Internet for Our Elders.”
In Europe, the digital regulatory landscape is in flux. Over the past few years, the EU has positioned itself as a global leader in tech regulation, rolling out landmark laws like the AI Act. But now, as the much-anticipated AI Act approaches implementation, the path forward is looking anything but smooth. Reports suggest the European Commission is considering a delay to the AI Act's rollout due to mounting pressure from industry, difficulties in finalizing technical standards, and geopolitical tensions—including pushback from the US government. At the same time, a broader movement for Europe to reduce its dependence on American tech is gaining momentum. What does this push for digital sovereignty actually mean? To help us unpack all of this, Tech Policy Press associate editor Ramsha Jahangir spoke to Kai Zenner, Head of Office and Digital Policy Advisor to German MEP Axel Voss, and one of the more influential voices shaping the future of EU digital policy.
Lawfare Contributing Editor Renée DiResta sits down with Daphne Keller, Director of the Program on Platform Regulation at Stanford University's Cyber Policy Center; Dean Jackson, Contributing Editor at Tech Policy Press and fellow at American University's Center for Security, Innovation, and New Technology; and Joan Barata, Senior Legal Fellow at The Future of Free Speech Project at Vanderbilt University and fellow at Stanford's Program on Platform Regulation, to make European tech regulation interesting. They discuss the European Union's Disinformation Code of Practice and its transition, on July 1, from a voluntary framework co-authored by Big Tech to a legally binding obligation under the Digital Services Act (DSA). This sounds like a niche bureaucratic change—but it's provided a news hook for the Trump Administration and its allies in far-right parties across Europe to allege once again that they are being suppressed by Big Tech, and that this transition portends the end of free speech on the internet. Does it? No. But what do the Code and the DSA actually do? It's worth understanding the nuances of these regulations and how they may impact transparency, accountability, and free expression. The group discusses topics including Secretary of State Marco Rubio's recent visa ban policy aimed at “foreign censors,” Romania's annulled election, and whether European regulation risks overreach or fails to go far enough.
For more on this topic:
Hate Speech: Comparing the US and EU Approaches
The European Commission's Approach to DSA Systemic Risk is Concerning for Freedom of Expression
The Far Right's War on Content Moderation Comes to Europe
Regulation or Repression? How the Right Hijacked the DSA Debate
Lawful but Awful? Control over Legal Speech by Platforms, Governments, and Internet Users
The Rise of the Compliant Speech Platform
Three Questions Prompted by Rubio's Threatened Visa Restrictions on ‘Foreign Nationals Who Censor Americans'
Will the DSA Save Democracy? The Test of the Recent Presidential Election in Romania
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
For a special series of episodes dubbed Through to Thriving that will air throughout the year, Tech Policy Press fellow Anika Collier Navaroli is hosting discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. The second episode in the series features her conversation with Dr. Desmond Upton Patton, who has long studied the intersection of technology and social issues and advised companies developing technologies and policies for social media and AI. Dr. Patton is the Brian and Randi Schwartz University Professor and Penn Integrates Knowledge University Professor at the University of Pennsylvania, and he serves on the board of Tech Policy Press. Recently, Dr. Patton has been teaching a class within Annenberg and the School of Social Policy & Practice called "Journey to Joy: Designing a Happier Life." In this episode, he discusses his personal and intellectual journey, and what the concept of joy has to do with technology and how we imagine the future.
Canadian political leaders are in a precarious moment. Fresh off the resignation of former Prime Minister Justin Trudeau and the ascendancy of his successor, new Prime Minister and Liberal Party leader Mark Carney, the nation faces a brewing trade war with the United States and a deteriorating relationship with its president, Donald Trump. In addition to managing those global tensions, Canadian leaders have a long to-do list on tech policy, including figuring out the nation's approach to artificial intelligence and online harms. How will the new Carney-led government in Canada navigate those issues? Tech Policy Press associate editor Cristiano Lima-Strong spoke to three experts to get a sense:
Renee Black, founder of goodbot, where she works on preventing harmful disinformation and bias and establishing frameworks that protect digital rights;
Maroussia Lévesque, a doctoral candidate and lecturer at Harvard Law School, an affiliate at the Berkman Klein Center, and a senior fellow at the Center for International Governance Innovation; and
Vass Bednar, a public policy entrepreneur working at the intersection of technology and public policy.
Earlier this year, an entity called the Observatory on Information and Democracy released a major report called INFORMATION ECOSYSTEMS AND TROUBLED DEMOCRACY: A Global Synthesis of the State of Knowledge on News Media, AI and Data Governance. The report is the product of three research assessment panels comprising over 60 volunteer researchers, coordinated by six rapporteurs and led by a scientific director, that together considered over 1,600 sources on topics at the intersection of technology, media, and democracy, ranging from trust in news to how mis- and disinformation are linked to societal and political polarization. Justin Hendrix spoke to that scientific director, Robin Mansell, and one of the other individuals involved in the project as chair of its steering committee, Courtney Radsch, who is also on the board of Tech Policy Press.
Emily M. Bender and Alex Hanna are the authors of a new book that The Guardian calls “refreshingly sarcastic” and Business Insider calls a “funny and irreverent deconstruction of AI.” They are also occasional contributors to Tech Policy Press. Justin Hendrix spoke to them about their new book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, just out from HarperCollins.
On Thursday, May 22, the United States House of Representatives narrowly advanced a budget bill that included the "Artificial Intelligence and Information Technology Modernization Initiative," which includes a 10-year moratorium on the enforcement of state AI laws. Tech Policy Press editor Justin Hendrix and associate editor Cristiano Lima-Strong discussed the moratorium, the contours of the debate around it, and its prospects in the Senate.
Last year, a United States federal judge ruled that Google is a monopolist in the market for online search. For the past three weeks, the company and the Justice Department have been in court to hash out what remedies might look like. Tech Policy Press associate editor Cristiano Lima-Strong spoke to two experts who are following the case closely: Karina Montoya, a senior reporter and analyst for the Center for Journalism and Liberty at the Open Markets Institute, and Joseph Coniglio, the director of antitrust and innovation at the Information Technology and Innovation Foundation (ITIF).
For a special series of episodes that will air throughout the year, Tech Policy Press fellow Anika Collier Navaroli is hosting a series of discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. Dubbed Through to Thriving, the first episode in the series features a discussion on how to build community and solidarity with Ellen Pao, currently the co-founder of a nonprofit called Project Include, which focuses on advancing diversity and inclusion in the tech sector. Previously, Pao was the interim CEO of Reddit and a venture capitalist.
Cristiano Lima-Strong, associate editor at Tech Policy Press, offers analysis of the Federal Trade Commission's antitrust case against Meta, in which the agency will argue that the social media giant maintained a monopoly after it bought Instagram and WhatsApp.
The Federal Trade Commission will argue that the social media giant Meta, formerly Facebook, maintained a monopoly after it bought Instagram and WhatsApp. On Today's Show: Cristiano Lima-Strong, associate editor at Tech Policy Press, offers analysis of the FTC's antitrust case.
In this week's roundup of the latest news in online speech, content moderation, and internet regulation, Ben is joined by guest host Prateek Waghre, former executive director at the Internet Freedom Foundation and currently a fellow at Tech Policy Press. Together, they cover:
BookMyShow Restores Kunal Kamra's Profile After Controversy (Medianama)
Kunal Kamra show audience members served notices (The Times of India)
Cops force banker to cut short vacation to join Kamra probe (The Times of India)
Unblock Vikatan Website – Madras High Court Orders Central Government (Vikatan)
A Lack of Sense, and Censor-ability in India (Tech Policy Press)
India befriends Big Tech as Trump tariffs knock on door, aided by a string of biz-friendly moves (Livemint)
Meta can be sued in Kenya over posts related to Ethiopia violence, court rules (Reuters)
US to screen social media of immigrants, rights advocates raise concerns (Reuters)
In Karnataka HC, Centre defends use of IT Act for takedown notices (Hindustan Times)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund. Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
On April 4, The New York Times reported that the European Commission is considering fining X, formerly Twitter, as part of its ongoing DSA investigation, which began in 2023. Tech Policy Press has discussed at length the extent and quality of transparency from platforms under the DSA, but there is limited insight into how the Commission is conducting its investigations into large online platforms and search engines. In most cases, the publicly available documents on cases are just press releases, while enforcement strategies and methods are not spelled out. To delve into the challenges this lack of transparency presents and how it impacts the public's understanding of the DSA, Tech Policy Press Associate Editor Ramsha Jahangir spoke to two researchers:
Jacob van de Kerkhof, a PhD researcher at Utrecht University, whose research focuses on the DSA and freedom of expression; and
Matteo Fabbri, a PhD candidate at the IMT School for Advanced Studies in Lucca, Italy, and a visiting scholar at the Institute for Information Law at the University of Amsterdam. He recently published a research article titled "The Role of Requests for Information in Governing Digital Platforms Under the Digital Services Act: The Case of X."
Across the United States and in some cities abroad yesterday, protestors took to the streets to resist the policies of US President Donald Trump. Dubbed the "Hands Off" protests, over 1,400 events took place, including in New York City, where protestors called for billionaire Elon Musk to be ousted from his role in government and for an end to the Department of Government Efficiency (DOGE), which has gutted government agencies and programs and sought to install artificial intelligence systems to purportedly identify wasteful spending and reduce the federal workforce. In this conversation, Justin Hendrix is joined by four individuals who are following DOGE closely. The conversation touches on the broader context and history of attempts to use technology to streamline and improve government services, the apparent ideology behind DOGE and its conception of AI, and what the future may look like after DOGE. Guests include:
Eryk Salvaggio, a visiting professor at the Rochester Institute of Technology and a fellow at Tech Policy Press;
Rebecca Williams, a senior strategist in the Privacy and Data Governance Unit at the ACLU;
Emily Tavoulareas, who teaches and conducts research at Georgetown's McCourt School for Public Policy and is leading a project to document the founding of the US Digital Service; and
Matthew Kirschenbaum, Distinguished University Professor in the Department of English at the University of Maryland.
I'm Charley Johnson, and this is Untangled, a newsletter and podcast about our sociotechnical world, and how to change it. Today, I'm bringing you the audio version of my latest essay, “There's no such thing as ‘fully autonomous agents.'” Before getting into it, two quick things:
1. I have a two-part essay out in Tech Policy Press with Michelle Shevin that offers a roadmap for how philanthropy can use the current “AI Moment” to build more just futures.
2. There is still room available in my upcoming course. In it, I weave together frameworks — from science and technology studies, complex adaptive systems, futures thinking, etc. — to offer you strategies and practical approaches to address the twin questions confronting all mission-driven leaders, strategists, and change-makers right now: what is your 'AI strategy' and how will you change the system you're in?
Now, on to the show! This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com/subscribe
On Tuesday, March 25th, Tech Policy Press hosted a webinar discussion to talk shop with others on the tech and democracy beat. We gathered seven colleagues from around the world to explore how tech journalists are grappling with the current political moment in the United States and beyond. In this episode, you'll hear the first session of the day, which features Tech Policy Press Associate Editor Ramsha Jahangir in discussion with Rina Chandran, Rest of World; Natalia Antelava, Coda Story; Anupriya Datta, Euractiv; and Anisha Dutta, an award-winning investigative reporter. This discussion delved into the global implications of these developments and key lessons from reporting in various political contexts. Questions included:
What key narratives are emerging globally from recent shifts in US policy?
How is the rise of a tech oligarchy shaping technology coverage outside the US?
What practical lessons can journalists learn from reporting on technology and politics in non-Western contexts?
On Tuesday, March 25th, Tech Policy Press hosted a webinar discussion to talk shop with others on the tech and democracy beat. We gathered seven colleagues from around the world to explore how tech journalists are grappling with the current political moment in the United States and beyond. In this episode, you'll hear the first session of the day, which features a discussion with Michael Masnick from Techdirt, Vittoria Elliot from Wired, and Emmanuel Maiberg from 404 Media.This session explored the intersection of technology and the current political situation in the US. Key questions included: How are tech journalists addressing the current situation, and why is their perspective so crucial? What critical questions are journalists covering the intersection of tech and democracy currently asking? How does the field approach reporting on anti-democratic phenomena and the challenges journalists face in this work?
2025 will be a pivotal year for technology regulation in the United States and around the world. The European Union has begun regulating social media platforms with its Digital Services Act. In the United States, regulatory proposals at the federal level will likely include renewed efforts to repeal or reform Section 230 of the Communications Decency Act. Meanwhile, states such as Florida and Texas have tried to restrict content moderation by major platforms, but have been met with challenges to the laws' constitutionality. On March 19, NYU Law hosted a Forum on whether it is lawful, feasible, and desirable for government actors to regulate social media platforms to reduce harmful effects on US democracy and society with expert guests Daphne Keller, Director of the Program on Platform Regulation at Stanford Law School's Cyber Policy Center, and Michael Posner, Director of the Center for Business and Human Rights at NYU Stern School of Business. Tess Bridgeman and Ryan Goodman, co-editors-in-chief of Just Security, moderated the event, which was co-hosted by Just Security, the NYU Stern Center for Business and Human Rights, and Tech Policy Press.
Show Notes:
Tess Bridgeman
Ryan Goodman
Daphne Keller
Michael Posner
Just Security's coverage on Social Media Platforms
Just Security's coverage on Section 230
Music: “Broken” by David Bullard from Uppbeat: https://uppbeat.io/t/david-bullard/broken (License code: OSC7K3LCPSGXISVI)
A year ago, Europe's Digital Markets Act—the DMA—went into effect. The European Commission says the purpose of the regulation is to make “digital markets in the EU more contestable and fairer.” In particular, the DMA regulates gatekeepers, the large digital platforms whose position gives them greater leverage over the digital economy. One year in, how has the DMA performed? Do Europeans enjoy more choice and competition? And what are the new politics of the DMA as European regulations are contested by the Trump administration and its supporters in US industry? To answer these questions and more, Tech Policy Press contributing editor Dean Jackson spoke to a set of experts following a conference hosted by the Knight-Georgetown Institute titled “DMA and Beyond.” His guests include:
Alissa Cooper, Executive Director of the Knight-Georgetown Institute (KGI);
Anu Bradford, Henry L. Moses Professor of Law and International Organization at Columbia Law School;
Haeyoon Kim, a Non-Resident Fellow at the Korea Economic Institute (KEI); and
Gunn Jiravuttipong, a JSD Candidate and Miller Fellow at Berkeley Law School.
The goal of achieving "artificial general intelligence," or AGI, is shared by many in the AI field. OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work,” and last summer, the company announced its plan to achieve AGI within five years. While other experts at companies like Meta and Anthropic quibble with the term, many AI researchers recognize AGI as either an explicit or implicit goal. Google DeepMind went so far as to set out "Levels of AGI,” identifying key principles and definitions of the term. Today's guests are among the authors of a new paper that argues the field should stop treating AGI as the north-star goal of AI research. They include:
Eryk Salvaggio, a visiting professor in the Humanities Computing and Design department at the Rochester Institute of Technology and a Tech Policy Press fellow;
Borhane Blili-Hamelin, an independent AI researcher and currently a data scientist at the Canadian bank TD; and
Margaret Mitchell, chief ethics scientist at Hugging Face.
Last week, Tech Policy Press joined the Latin American Center for Investigative Journalism (EL CLIP) in publishing a report and series of articles documenting how adult users exploit public Facebook groups to identify and target accounts that appear to belong to children for sexual exploitation. The “Innocence at Risk (Inocencia en Juego)” project, coordinated by EL CLIP with participation from Chequeado, includes a report from Lara Putnam, a professor of Latin American history and Director of the Civic Resilience Initiative of the Institute for Cyber Law, Policy, and Security at the University of Pittsburgh, and independent reports from journalists across Latin America investigating a pattern of behavior on the platform's public groups in Colombia, Venezuela, and Argentina. They published their reports in EL CLIP, Chequeado, Crónica Uno, El Espectador, and Factchequeado. This episode features a discussion with Lara Putnam and Pablo Medina Uribe, who led the project at EL CLIP.
From February 15, 2023: The Jan. 6 committee's final report on the insurrection is over 800 pages, including the footnotes. But there's still new information coming out about the committee's findings and its work. Last week, we brought you an interview with Dean Jackson, one of the staffers who worked on the Jan. 6 committee's investigation into the role of social media in the insurrection. Today, we're featuring a conversation with Jacob Glick, who served as investigative counsel on the committee and is currently a policy counsel at Georgetown's Institute for Constitutional Advocacy and Protection. His work in the Jan. 6 investigation focused on social media and far-right extremism. Lawfare senior editor Quinta Jurecic spoke with Jacob about what the investigation showed him about the forces that led to Jan. 6, how he understands the threat still posed by extremism, and what it was like interviewing Twitter whistleblowers and members of far-right groups who stormed the Capitol. You can read Jacob's Lawfare article here, his essay with Mary McCord on countering extremism here in Just Security, and an interview with him and his Jan. 6 committee colleagues here at Tech Policy Press.
Silicon Valley's biggest power players traded in their hoodies for suits and ties this week as they sat front and center to watch Donald Trump take the oath of office again. Seated in front of the incoming cabinet were Meta's Mark Zuckerberg, Google's Sundar Pichai, Amazon's Jeff Bezos, and Trump confidant and leader of the so-called Department of Government Efficiency, Elon Musk. Apple CEO Tim Cook, Sam Altman from OpenAI, and TikTok CEO Shou Zi Chew also looked on. For an industry once skeptical of Trump, this dramatic transformation in political allegiance portends changes for the country — and the world. From the relaxing of hate speech rules on Meta platforms to the mere hourslong ban of TikTok to the billions of government dollars being pledged to build data centers to power AI, it is still only the beginning of this realignment. On this week's episode of The Intercept Briefing, Justin Hendrix, the CEO and editor of Tech Policy Press, and Intercept political reporter Jessica Washington dissect this shift. “Three of the individuals seated in front of the Cabinet are estimated by Oxfam, in its latest report on wealth inequality, to be on track to potentially become trillionaires in just the next handful of years: Mark Zuckerberg, Jeff Bezos, Elon Musk,” says Hendrix. “Musk is estimated to be the first trillionaire on the planet, possibly as early as 2027.” Washington says there's more at stake than just personal wealth. “These are people who view themselves as world-shapers, as people who create reality in a lot of ways. Aligning themselves with Trump and with power in this way is not just about their financial interests, it's about pushing their vision of the world.” To hear more of this conversation, check out this week's episode of The Intercept Briefing.
On Tuesday, November 5th, the final ballots will be cast in the 2024 US presidential election. But the process is far from over. How prepared are social media platforms for the post-election period? What should we make of characters like Elon Musk, who is actively advancing conspiracy theories and false claims about the integrity of the election? And what can we do going forward to support election workers and administrators on the frontlines facing threats and disinformation? To help answer these questions, Justin Hendrix spoke with three experts:
Katie Harbath, CEO of Anchor Change and chief global affairs officer at Duco Experts;
Nicole Schneidman, technology policy strategist at Protect Democracy; and
Dean Jackson, principal of Public Circle LLC and a reporting fellow at Tech Policy Press.
Last week, Wall Street Journal technology reporter Jeff Horwitz first reported on details of an unredacted version of a complaint against Snap brought by New Mexico Attorney General Raúl Torrez. Tech Policy Press editor Justin Hendrix spoke to Horwitz about its details, and questions it leaves unanswered.
In 2019, Thierry Breton, a French business executive who served as France's Minister of Finance from 2005 to 2007, was nominated by President Emmanuel Macron to become the European Commissioner for the Internal Market. In that role, his name and face were closely associated with Europe's push to regulate digital markets and the passage of legislation such as the Digital Services Act and the EU's AI Act. On Monday, September 16, in a letter that called into question EU Commission President Ursula von der Leyen's governance, Breton resigned his post. While certain tech executives may be happy to see him go (Elon Musk posted “bon voyage” at the news), his departure spells change for Europe's approach to tech going forward. To learn more, Justin Hendrix reached out to a European journalist who is covering these matters closely, and who has been kind enough to share his reporting on the EU AI Act with Tech Policy Press in the past: MLex Senior AI Correspondent Luca Bertuzzi.
At Tech Policy Press, we're closely following the implementation of the Digital Services Act, the European Union law designed to regulate online platforms and services. One of the DSA's key objectives is to identify and mitigate systemic risks. But how do we gauge what rises to the level of a systemic risk? How do we get the sort of information we need from platforms to identify and mitigate systemic risk, and how do we create the kinds of collaborations between regulators and the research community that are necessary to answer complex questions? Ramsha Jahangir, a reporting fellow at Tech Policy Press, recently discussed these questions with Dr. Oliver Marsh, who is head of tech research at Algorithm Watch, an NGO with offices in Berlin and Zurich that works on issues at the intersection of technology and society. Dr. Marsh has been leading research on systemic risks and the DSA's approach, and just put out a detailed summary of his work.
Today is Monday, September 9th. Today Judge Leonie Brinkema of the US District Court for the Eastern District of Virginia is presiding over the start of a trial in which the United States Department of Justice accuses Google of violating antitrust law, abusing its power in the market for online advertising. Google contests the allegations against it. To get a bit more detail on what to expect, Justin Hendrix spoke to two individuals covering the case closely who take a critical view of Google, the government's allegations about its power in the online advertising market, and the company's effect on journalism and the overall media and information ecosystem: Sarah Kay Wiley, director of policy at Check My Ads, which is running a comprehensive tracker on the case; and Karina Montoya, a senior reporter and policy analyst at the Center for Journalism and Liberty, a program of the Open Markets Institute, who has covered the case extensively for Tech Policy Press.
Thirty tech bills went through the lawmaking sausage grinder in California this past session, and now Governor Gavin Newsom is about to decide the fate of 19 that passed the state legislature. The Governor now has until the end of September to sign or veto the bills, or to permit them to become law without his signature. To learn a little more about some of the key pieces of legislation and the overall atmosphere around tech regulation in California, Justin Hendrix spoke to two journalists who live and work in the state and cover these issues regularly: Jesús Alvarado, a reporting fellow at Tech Policy Press and author of a recent post on SB 1047, a key piece of the California legislation; and Khari Johnson, a technology reporter at CalMatters, a fellow in the Digital Technology for Democracy Lab at the Karsh Institute for Democracy at the University of Virginia, and the author of a recent article on the California legislation.
Renée DiResta, who serves on the board of Tech Policy Press and has been an occasional contributor, is the author of Invisible Rulers: The People Who Turn Lies Into Reality, published by Hachette Book Group in June. Justin Hendrix had a chance to catch up with DiResta last week to discuss some of the key ideas in the book, and how she sees them playing out in the current moment heading into the 2024 US election.
Internet of Humans, with Jillian York & Konstantinos Komaitis
For this episode, we sit down with Justin Hendrix, CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. With years of experience in covering tech policy issues, we chat about a bunch of different issues, including privacy, misinformation and disinformation, elections and the future of tech.
On Friday, August 16, the United States Ninth Circuit Court of Appeals issued a ruling in NetChoice v. Bonta, partially upholding and partially vacating a preliminary injunction against California's Age-Appropriate Design Code Act. The court affirmed that certain provisions of the law are likely to violate the First Amendment by compelling online businesses to assess and mitigate potential harms to children, but it vacated the broader injunction, remanding the case to the district court for further consideration of other parts of the statute, including restrictions on the collection and use of children's data. In this episode, Justin Hendrix recounts the basics of the Ninth Circuit ruling. And in a second segment that was recorded just days before Friday's ruling, Tech Policy Press fellow Dean Jackson is joined by Tech Justice Law Project executive director Meetali Jain and USC Marshall School Neely Center managing director Ravi Iyer for a discussion on key questions that were before the Ninth Circuit and their implications for future efforts at tech regulation.
What are the risks to democracy as AI is incorporated more and more into the systems and platforms we use to find and share information and engage in communication? In this episode, Justin Hendrix speaks with Elise Silva, a postdoctoral associate at the University of Pittsburgh Cyber Institute for Law, Policy, and Security, and John Wihbey, an associate professor at Northeastern University in the College of Arts, Media, and Design. Silva is the author of a recent piece in Tech Policy Press titled "AI-Powered Search and the Rise of Google's 'Concierge Wikipedia.'" Wihbey is the author of a paper published last month titled "AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?"
What role did technology play in India's elections, and what impact will the outcome have on tech policy in the country? Joining Justin Hendrix are three experts: Amber Sinha and Vandinika Shukla, both fellows at Tech Policy Press, and Prateek Waghre, the executive director at the Internet Freedom Foundation. Plus, Tech Policy Press program manager Prithvi Iyer sums up the election result.
The guests in this episode are authors of a new study titled Political Machines: Understanding the Role of AI in the US 2024 Elections and Beyond. The study is based on interviews with a variety of individuals who are currently grappling with how generative AI tools and systems will change the way they work. In a series of field interviews, the authors spoke with three vendors of political generative AI tools, a political candidate, a legal expert, a technology expert, an extremism expert, a digital organizer, a trust and safety industry professional, four Republican campaign consultants, and eight Democratic campaign consultants. Joining Justin Hendrix to discuss the results are: Dean Jackson, the principal at Public Circle LLC and a reporting fellow with Tech Policy Press; Zelly Martin, a PhD candidate at the University of Texas at Austin and a senior research fellow at the Propaganda Research Lab at the Center for Media Engagement; and Inga Trauthig, head of research at the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.
As we documented in Tech Policy Press, when the US Senate AI working group released its roadmap on policy on May 17th, many outside organizations were underwhelmed at best, and some were fiercely critical of the closed door process that produced it. In the days after the report was announced, a group of nonprofit and academic organizations put out what they call a "shadow report" to the US Senate AI policy roadmap. The shadow report is intended as a complement or counterpoint to the Senate working group's product. It collects a bibliography of research and proposals from civil society and academia and addresses several issues the Senators largely passed over. To learn more, Justin Hendrix spoke to some of the report's authors, including: Sarah West, co-executive director of the AI Now Institute; Nasser Eledroos, policy lead on technology at Color of Change; Paramita Shah, executive director of Just Futures Law; and Cynthia Conti-Cook, director of research and policy at the Surveillance Resistance Lab.
Last October, Dr. Jasmine McNealy, an associate professor at the University of Florida, a Senior Fellow in Tech Policy with the Mozilla Foundation, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, wrote in Tech Policy Press about the need for a policy agenda for "Rural AI." “Rural communities matter,” she wrote. “And that means they should matter when it comes to the development of policies on artificial intelligence.” The piece was a preview of sorts to a two-day workshop Dr. McNealy organized at the University of Florida in Gainesville that touched on topics ranging from connectivity to bias and discrimination in algorithmic systems to the connection between AI and natural resources. Justin Hendrix attended the workshop, and recently he checked in with Dr. McNealy and three of the other attendees he met there: Michaela Henley, program director and curriculum writer at Black Tech Futures and a senior research fellow representing Black Tech Futures at the Siegel Family Endowment; Dr. Dominique Harrison, founding principal of Equity Innovation Ventures; and Dr. Theodora Dryer, who is director of the Water Justice and Technology Studio, founder of the Critical Carbon Computing Collective, and teaches on technology and environmental justice at New York University.
On Monday, March 18, the US Supreme Court heard oral argument in Murthy v Missouri. In this episode, Tech Policy Press reporting fellow Dean Jackson is joined by two experts - St. John's University School of Law associate professor Kate Klonick and UNC Center on Technology Policy director Matt Perault - to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.
On March 18, the US Supreme Court will hear oral argument in Murthy v Missouri, a case that asks the justices to consider whether the government coerced or “significantly encouraged” social media executives to remove disfavored speech in violation of the First Amendment during the COVID-19 pandemic. Tech Policy Press reporting fellow Dean Jackson speaks to experts including the Knight First Amendment Institute at Columbia University's Mayze Teitler and Jennifer Jones, and the Tech Justice Law Project's Meetali Jain.
On Monday, Feb. 26, 2024, the US Supreme Court heard oral arguments for Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton. The cases are on similar but distinct state laws in Florida and Texas that would restrict social media companies' ability to moderate content on their platforms. Justin Hendrix speaks with Tech Policy Press staff writer Gabby Miller and contributing editor Ben Lennett about key highlights from the discussion.
2024 will be the biggest election year in world history. Forty countries will hold national elections, with over two billion voters heading to the polls. In this episode of Your Undivided Attention, two experts give us a situation report on how AI will increase the risks to our elections and our democracies. Correction: Tristan says two billion people from 70 countries will be undergoing democratic elections in 2024. The number expands to 70 when non-national elections are factored in. Recommended media includes Renée DiResta's piece in Tech Policy Press on content integrity in President Biden's AI executive order, and her book Invisible Rulers: The People Who Turn Lies into Reality, landing on shelves June 11, 2024. Your Undivided Attention is produced by the Center for Humane Technology.
Robert sits down with Cooper Quinton and Caroline Sinders from the Tech Policy Press to discuss encrypted messaging apps, and which of them are really secure.
The Jan. 6 committee's final report on the insurrection is over 800 pages, including the footnotes. But there's still new information coming out about the committee's findings and its work. Last week, we brought you an interview with Dean Jackson, one of the staffers who worked on the Jan. 6 committee's investigation into the role of social media in the insurrection. Today, we're featuring a conversation with Jacob Glick, who served as investigative counsel on the committee and is currently a policy counsel at Georgetown's Institute for Constitutional Advocacy and Protection. His work in the Jan. 6 investigation focused on social media and far-right extremism. Lawfare senior editor Quinta Jurecic spoke with Jacob about what the investigation showed him about the forces that led to Jan. 6, how he understands the threat still posed by extremism, and what it was like interviewing Twitter whistleblowers and members of far-right groups who stormed the Capitol. You can read Jacob's essay with Mary McCord on countering extremism in Just Security and listen to an interview with Jacob and his Jan. 6 committee colleagues at Tech Policy Press.