Podcast appearances and mentions of Matt Perault

  • 20 podcasts
  • 57 episodes
  • 37m average duration
  • 1 episode every other week
  • Latest episode: Feb 21, 2025

POPULARITY (2017–2024)


Latest podcast episodes about Matt Perault

Two Think Minimum
Little Tech, Big Challenges: Competing in the AI Era with Matt Perault

Feb 21, 2025 · 41:06


Little Tech, Big Challenges: Competing in the AI Era with Matt Perault by Technology Policy Institute

The Lawfare Podcast
Lawfare Daily: Matt Perault on the Little Tech Agenda

Feb 18, 2025 · 40:10


Matt Perault, Head of AI Policy at Andreessen Horowitz, joins Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to define the Little Tech Agenda and explore how adoption of the Agenda may shape AI development across the country. The duo also discuss the current AI policy landscape.We value your feedback! Help us improve by sharing your thoughts at lawfaremedia.org/survey. Your input ensures that we deliver what matters most to you. Thank you for your support—and, as always, for listening!To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Two Think Minimum
AI Policy and the Future of Startups with Matt Perault

Feb 13, 2025 · 41:53


AI Policy and the Future of Startups with Matt Perault by Technology Policy Institute

Marketplace Tech
The case for a comprehensive federal law to oversee AI

Feb 6, 2025 · 10:22


Congress considered 158 bills that mention artificial intelligence over the past two years, according to a count by the Brennan Center for Justice. But zero comprehensive AI laws have been passed. There has been movement by states, however. In Tennessee, for example, the ELVIS Act, which protects voices and likenesses from unauthorized use by AI, became law in March. In Colorado, a law that takes effect in 2026 requires developers of high-risk AI systems to protect consumers from algorithm-based discrimination. But some who fund AI technology say a federal law is needed. That includes Matt Perault, head of AI policy at the venture capital firm Andreessen Horowitz.

Marketplace All-in-One
The case for a comprehensive federal law to oversee AI

Feb 6, 2025 · 10:22


Congress considered 158 bills that mention artificial intelligence over the past two years, according to a count by the Brennan Center for Justice. But zero comprehensive AI laws have been passed. There has been movement by states, however. In Tennessee, for example, the ELVIS Act, which protects voices and likenesses from unauthorized use by AI, became law in March. In Colorado, a law that takes effect in 2026 requires developers of high-risk AI systems to protect consumers from algorithm-based discrimination. But some who fund AI technology say a federal law is needed. That includes Matt Perault, head of AI policy at the venture capital firm Andreessen Horowitz.

POLITICO Dispatch
How should states regulate AI? Andreessen Horowitz weighs in.

Feb 4, 2025 · 16:41


DeepSeek has stoked fresh concerns in Washington about America's standing in the AI race. But the real policy action is happening in the states, where legislatures are moving faster than Congress to regulate. Andreessen Horowitz, the Silicon Valley investment firm with ties to the Trump administration, is now warning against a medley of AI laws — and wants the federal government to preempt states when it comes to regulating the technology's development. On POLITICO Tech, the firm's head of AI policy, Matt Perault, joins host Steven Overly to explain. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Lawfare Podcast
Lawfare Archive: A TikTok Ban and the First Amendment

Dec 22, 2024 · 47:13


From April 14, 2023: Over the past few years, TikTok has become a uniquely polarizing social media platform. On the one hand, millions of users, especially those in their teens and twenties, love the app. On the other hand, the government is concerned that TikTok's vulnerability to pressure from the Chinese Communist Party makes it a serious national security threat. There's even talk of banning the app altogether. But would that be legal? In particular, does the First Amendment allow the government to ban an application that's used by millions to communicate every day? On this episode of Arbiters of Truth, our series on the information ecosystem, Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and Alan Z. Rozenshtein, Lawfare Senior Editor and Associate Professor of Law at the University of Minnesota, spoke with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law, to think through the legal and policy implications of a TikTok ban. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
Lawfare Daily: Catching Up on the State of Platform Governance: Zuckerberg, Durov, and Musk

Sep 6, 2024 · 49:20


It's been a busy week in the world of social media and technology platforms. Meta CEO Mark Zuckerberg sent an odd letter to the House Judiciary Committee apparently disclaiming some of his company's past content moderation efforts. Telegram founder Pavel Durov was arrested in France on a wide range of charges involving an investigation into the misuse of his platform. And Elon Musk is engaged in an ongoing battle with Brazilian courts, which have banned access to Twitter (now X) in the country after Musk refused to abide by court orders. These three news stories speak to a common theme: the difficult and uncertain relationship between tech platforms and the governments that regulate them. To make sense of it all, Quinta Jurecic, a Senior Editor at Lawfare, spoke with Matt Perault—the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill—and Renée DiResta, author of the new book, “Invisible Rulers: The People Who Turn Lies Into Reality,” and the former technical research manager at the Stanford Internet Observatory. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
Lawfare Daily: AI Policy Under Technological Uncertainty, with Alex “amac” Macgillivray

Jul 23, 2024 · 39:35


Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare, and Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, sat down with Alexander Macgillivray, known to all as "amac," who served as Principal Deputy Chief Technology Officer of the United States in the Biden Administration and General Counsel at Twitter. amac recently wrote a piece for Lawfare about making AI policy in a world of technological uncertainty, and Matt and Alan talked to him about how to do just that. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
Lawfare Daily: The Supreme Court Rules in Murthy v. Missouri

Jun 28, 2024 · 42:03


On June 26, the Supreme Court handed down its decision in Murthy v. Missouri—the “jawboning” case, concerning a First Amendment challenge to the government practice of pressuring social media companies to moderate content on their platforms. But instead of providing a clear answer one way or the other, the Court tossed out the case on standing. What now? Lawfare Editor-in-Chief Benjamin Wittes discussed the case with Kate Klonick of St. John's University School of Law and Matt Perault, Director of the Center on Technology Policy at the University of North Carolina. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Marketplace Tech
Government pressures tech behind the scenes, says former Facebook employee. It’s called jawboning.

Mar 28, 2024 · 11:26


It’s something government officials on both sides of the aisle are known to do: pressuring tech platforms to bend to their will, aka jawboning. But the line between persuasion and coercion, or even censorship, can get murky. Last week, the Supreme Court heard arguments from two states alleging that the Joe Biden administration illegally coerced social media companies into blocking conservative content. Matt Perault, now with the University of North Carolina at Chapel Hill’s Center on Technology Policy, says that in his former job working in policy at Facebook, jawboning happened all the time.

Marketplace All-in-One
Government pressures tech behind the scenes, says former Facebook employee. It’s called jawboning.

Mar 28, 2024 · 11:26


It’s something government officials on both sides of the aisle are known to do: pressuring tech platforms to bend to their will, aka jawboning. But the line between persuasion and coercion, or even censorship, can get murky. Last week, the Supreme Court heard arguments from two states alleging that the Joe Biden administration illegally coerced social media companies into blocking conservative content. Matt Perault, now with the University of North Carolina at Chapel Hill’s Center on Technology Policy, says that in his former job working in policy at Facebook, jawboning happened all the time.

The Sunday Show
Unpacking the Oral Argument in Murthy v Missouri

Mar 24, 2024 · 51:42


On Monday, March 18, the US Supreme Court heard oral argument in Murthy v Missouri. In this episode, Tech Policy Press reporting fellow Dean Jackson is joined by two experts- St. John's University School of Law associate professor Kate Klonick and UNC Center on Technology Policy director Matt Perault- to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.

The Lawfare Podcast
Matt Perault, Ramya Krishnan, and Alan Rozenshtein Talk About the TikTok Divestment and Ban Bill

Mar 22, 2024 · 50:32


Today, we're bringing you an episode of Arbiters of Truth, our series on the information ecosystem.Last week the House of Representatives overwhelmingly passed a bill that would require ByteDance, the Chinese company that owns the popular social media app TikTok, to divest its ownership in the platform or face TikTok being banned in the United States. Although prospects for the bill in the Senate remain uncertain, President Biden has said he will sign the bill if it comes to his desk, and this is the most serious attempt yet to ban the controversial social media app.Today's podcast is the latest in a series of conversations we've had about TikTok. Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, led a conversation with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, and Ramya Krishnan, a Senior Staff Attorney at the Knight First Amendment Institute at Columbia University. They talked about the First Amendment implications of a TikTok ban, whether it's a good idea as a policy matter, and how we should think about foreign ownership of platforms more generally.Disclaimer: Matt's center receives funding from foundations and tech companies, including funding from TikTok.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
Matt Perault, Ramya Krishnan, and Alan Rozenshtein Talk About the TikTok Divestment and Ban Bill

Mar 22, 2024 · 50:32


Last week the House of Representatives overwhelmingly passed a bill that would require ByteDance, the Chinese company that owns the popular social media app TikTok, to divest its ownership in the platform or face TikTok being banned in the United States. Although prospects for the bill in the Senate remain uncertain, President Biden has said he will sign the bill if it comes to his desk, and this is the most serious attempt yet to ban the controversial social media app.Today's podcast is the latest in a series of conversations we've had about TikTok. Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, led a conversation with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, and Ramya Krishnan, a Senior Staff Attorney at the Knight First Amendment Institute at Columbia University. They talked about the First Amendment implications of a TikTok ban, whether it's a good idea as a policy matter, and how we should think about foreign ownership of platforms more generally.Disclaimer: Matt's center receives funding from foundations and tech companies, including funding from TikTok. Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
Jawboning at the Supreme Court

Mar 21, 2024 · 51:38


Today, we're bringing you an episode of Arbiters of Truth, our series on the information ecosystem.On March 18, the Supreme Court heard oral arguments in Murthy v. Missouri, concerning the potential First Amendment implications of government outreach to social media platforms—what's sometimes known as jawboning. The case arrived at the Supreme Court with a somewhat shaky evidentiary record, but the legal questions raised by government requests or demands to remove online content are real. To make sense of it all, Lawfare Senior Editor Quinta Jurecic and Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill, called up Alex Abdo, the Litigation Director of the Knight First Amendment Institute at Columbia University. While the law is unsettled, the Supreme Court seemed skeptical of the plaintiffs' claims of government censorship. But what is the best way to determine what contacts and government requests are and aren't permissible?If you're interested in more, you can read the Knight Institute's amicus brief in Murthy here and Knight's series on jawboning—including Perault's reflections—here.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
Jawboning at the Supreme Court

Mar 21, 2024 · 51:38


Today, we're bringing you an episode of Arbiters of Truth, our series on the information ecosystem.On March 18, the Supreme Court heard oral arguments in Murthy v. Missouri, concerning the potential First Amendment implications of government outreach to social media platforms—what's sometimes known as jawboning. The case arrived at the Supreme Court with a somewhat shaky evidentiary record, but the legal questions raised by government requests or demands to remove online content are real. To make sense of it all, Lawfare Senior Editor Quinta Jurecic and Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill, called up Alex Abdo, the Litigation Director of the Knight First Amendment Institute at Columbia University. While the law is unsettled, the Supreme Court seemed skeptical of the plaintiffs' claims of government censorship. But what is the best way to determine what contacts and government requests are and aren't permissible?If you're interested in more, you can read the Knight Institute's amicus brief in Murthy here and Knight's series on jawboning—including Perault's reflections—here. Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
How Are the TikTok Bans Holding Up in Court?

Jan 3, 2024 · 49:27


In May 2023, Montana passed a new law that would ban the use of TikTok within the state starting on January 1, 2024. But as of today, TikTok is still legal in the state of Montana—thanks to a preliminary injunction issued by a federal district judge, who found that the Montana law likely violated the First Amendment. In Texas, meanwhile, another federal judge recently upheld a more limited ban against the use of TikTok on state-owned devices. What should we make of these rulings, and how should we understand the legal status of efforts to ban TikTok?We've discussed the question of TikTok bans and the First Amendment before on the Lawfare Podcast, when Lawfare Senior Editor Alan Rozenshtein and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, sat down with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law. In light of the Montana and Texas rulings, Matt and Lawfare Senior Editor Quinta Jurecic decided to bring the gang back together and talk about where the TikTok bans stand with Ramya and Mary-Rose, on this episode of Arbiters of Truth, our series on the information ecosystem.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
How Are the TikTok Bans Holding Up in Court?

Jan 3, 2024 · 49:27


In May 2023, Montana passed a new law that would ban the use of TikTok within the state starting on January 1, 2024. But as of today, TikTok is still legal in the state of Montana—thanks to a preliminary injunction issued by a federal district judge, who found that the Montana law likely violated the First Amendment. In Texas, meanwhile, another federal judge recently upheld a more limited ban against the use of TikTok on state-owned devices. What should we make of these rulings, and how should we understand the legal status of efforts to ban TikTok?We've discussed the question of TikTok bans and the First Amendment before on the Lawfare Podcast, when Lawfare Senior Editor Alan Rozenshtein and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, sat down with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law. In light of the Montana and Texas rulings, Matt and Lawfare Senior Editor Quinta Jurecic decided to bring the gang back together and talk about where the TikTok bans stand with Ramya and Mary-Rose, on this episode of Arbiters of Truth, our series on the information ecosystem. Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
Jeff Horwitz on Broken Code and Reporting on Facebook

Dec 20, 2023 · 53:58


In 2021, the Wall Street Journal published a monster scoop: a series of articles about Facebook's inner workings, which showed that employees within the famously secretive company had raised alarms about potential harms caused by Facebook's products. Now, Jeff Horwitz, the reporter behind that scoop, has a new book out, titled “Broken Code”—which dives even deeper into the documents he uncovered from within the company. He's one of the most rigorous reporters covering Facebook, now known as Meta.On this episode of Arbiters of Truth, our series on the information ecosystem Lawfare Senior Editor Quinta Jurecic sat down with Jeff along with Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill—and also someone with close knowledge of Meta from his own time working at the company. They discussed Jeff's reporting and debated what his findings tell us about how Meta functions as a company and how best to understand its responsibilities for harms traced back to its products.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
Jeff Horwitz on Broken Code and Reporting on Facebook

Dec 20, 2023 · 53:58


In 2021, the Wall Street Journal published a monster scoop: a series of articles about Facebook's inner workings, which showed that employees within the famously secretive company had raised alarms about potential harms caused by Facebook's products. Now, Jeff Horwitz, the reporter behind that scoop, has a new book out, titled “Broken Code”—which dives even deeper into the documents he uncovered from within the company. He's one of the most rigorous reporters covering Facebook, now known as Meta.On this episode of Arbiters of Truth, our series on the information ecosystem Lawfare Senior Editor Quinta Jurecic sat down with Jeff along with Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill—and also someone with close knowledge of Meta from his own time working at the company. They discussed Jeff's reporting and debated what his findings tell us about how Meta functions as a company and how best to understand its responsibilities for harms traced back to its products. Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
Will Generative AI Reshape Elections?

Nov 29, 2023 · 49:03


Unless you've been living under a rock, you've probably heard a great deal over the last year about generative AI and how it's going to reshape various aspects of our society. That includes elections. With one year until the 2024 U.S. presidential election, we thought it would be a good time to step back and take a look at how generative AI might and might not make a difference when it comes to the political landscape. Luckily, Matt Perault and Scott Babwah Brennen of the UNC Center on Technology Policy have a new report out on just that subject, examining generative AI and political ads.On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Lawfare's Fellow in Technology Policy and Law Eugenia Lostri sat down with Matt and Scott to talk through the potential risks and benefits of generative AI when it comes to political advertising. Which concerns are overstated, and which are worth closer attention as we move toward 2024? How should policymakers respond to new uses of this technology in the context of elections?Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
Will Generative AI Reshape Elections?

Nov 29, 2023 · 49:03


Unless you've been living under a rock, you've probably heard a great deal over the last year about generative AI and how it's going to reshape various aspects of our society. That includes elections. With one year until the 2024 U.S. presidential election, we thought it would be a good time to step back and take a look at how generative AI might and might not make a difference when it comes to the political landscape. Luckily, Matt Perault and Scott Babwah Brennen of the UNC Center on Technology Policy have a new report out on just that subject, examining generative AI and political ads.On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Lawfare's Fellow in Technology Policy and Law Eugenia Lostri sat down with Matt and Scott to talk through the potential risks and benefits of generative AI when it comes to political advertising. Which concerns are overstated, and which are worth closer attention as we move toward 2024? How should policymakers respond to new uses of this technology in the context of elections? Hosted on Acast. See acast.com/privacy for more information.

Explain to Shane
Candidates, Bots, and Ballots: How AI is Rewriting Political Advertising (with Scott Brennen and Matt Perault)

Nov 20, 2023 · 33:47


Generative AI poses new challenges for political campaigning and our democracy as we head towards the 2024 presidential election. While this technology could streamline political messaging, there is greater fear that it could enable widespread manipulation and distortion of the democratic process. Heading into a contentious election, how can we assess and mitigate harms from AI-generated disinformation? How will the use of generative AI be different than prior “cheap fake” attempts? How should policymakers prepare for and respond to the use of AI in political advertising?On this episode, Shane is joined by Scott Brennen and Matt Perault, co-authors of “The new political ad machine: Policy frameworks for political ads in an age of AI.” They discuss how generative AI is transforming campaigning and address constituents' pressing concerns around the technology including AI-manipulation risks, synthetic media transparency, and evolving regulations for political advertising.

Impossible Tradeoffs with Katie Harbath
From the inside: When platforms engage with government

Oct 26, 2023 · 64:01


I have to apologize to you all. I forgot to put a poll in last week's podcast notes about our fun tradeoff, which was how you would want to unwind. I won't make that same mistake this week! This week, we are joined by a bunch of exciting guests. First up is Diane Chang. Diane is a journalist-turned-product manager and was most recently Meta's election integrity product manager. She is an Entrepreneur-in-Residence at the Brown Institute for Media Innovation at Columbia Journalism School. For our first conversation, I wanted to talk to Diane about my current obsession with the role of news and politics on online platforms, given her experience as a journalist and a product manager. Then, I welcome three of my favorite people who are part of the OG Facebook DC crew - Adam Conner, Brooke Oberwetter, and Matt Perault. Our conversation centered on jawboning: “informal government efforts to persuade, cajole, or strong-arm private platforms to change their content-moderation practices.” We all worked on Facebook's public policy team, regularly engaging with governments worldwide. Two things spurred this conversation. The first was Adam asking which of them I would have on as a guest first, and then a recent piece Matt and I wrote for the Knight First Amendment Institute on our experiences. I didn't dare pick between them, so I asked them all to come on together after Brooke responded to our piece on X/Twitter about how we should talk about Congressional letters as a form of jawboning. I thought that was a great point and invited them all to talk about their experiences. Enjoy! Please support the curation and analysis I'm doing with this podcast. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics. Get full access to Anchor Change with Katie Harbath at anchorchange.substack.com/subscribe

The Lawfare Podcast
Talking AI with Data and Society's Janet Haven

Oct 5, 2023 · 46:22


Today, we're bringing you an episode of Arbiters of Truth, our series on the information ecosystem. And we're discussing the hot topic of the moment: artificial intelligence. There are a lot of less-than-informed takes out there about AI and whether it's going to kill us all—so we're glad to be able to share an interview that hopefully cuts through some of that noise.Janet Haven is the Executive Director of the nonprofit Data and Society and a member of the National Artificial Intelligence Advisory Committee, which provides guidance to the White House on AI issues. Lawfare Senior Editor Quinta Jurecic sat down alongside Matt Perault, Director of the Center on Technology and Policy at UNC-Chapel Hill, to talk through their questions about AI governance with Janet. They discussed how she evaluates the dangers and promises of artificial intelligence, how to weigh the different concerns posed by possible future existential risk to society posed by AI versus the immediate potential downsides of AI in our everyday lives, and what kind of regulation she'd like to see in this space. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
Talking AI with Data and Society's Janet Haven

Oct 5, 2023 · 46:22


Today, we're bringing you an episode of Arbiters of Truth, our series on the information ecosystem. And we're discussing the hot topic of the moment: artificial intelligence. There are a lot of less-than-informed takes out there about AI and whether it's going to kill us all—so we're glad to be able to share an interview that hopefully cuts through some of that noise.Janet Haven is the Executive Director of the nonprofit Data and Society and a member of the National Artificial Intelligence Advisory Committee, which provides guidance to the White House on AI issues. Lawfare Senior Editor Quinta Jurecic sat down alongside Matt Perault, Director of the Center on Technology and Policy at UNC-Chapel Hill, to talk through their questions about AI governance with Janet. They discussed how she evaluates the dangers and promises of artificial intelligence, how to weigh the different concerns posed by possible future existential risk to society posed by AI versus the immediate potential downsides of AI in our everyday lives, and what kind of regulation she'd like to see in this space. If you're interested in reading further, Janet mentions this paper from Data and Society on “Democratizing AI” in the course of the conversation. Hosted on Acast. See acast.com/privacy for more information.

TechTank
Ways to Protect Children Online

Jul 17, 2023 · 32:59


In this episode of the TechTank podcast, co-host Darrell M. West explores potential paths forward in the safety of kids online with Matt Perault, a professor at UNC's School of Information and Library Science, and Scott Brennen, the head of online expression policy at UNC's Center on Technology Policy. As co-authors of the report "Keeping Kids Safe Online: How Should Policymakers Approach Age Verification," both scholars will discuss current methods for age verification, consider the key trade-offs for each, and share informed recommendations for policymakers. Hosted on Acast. See acast.com/privacy for more information.

Marketplace Tech
Identifying the trade-offs in online age verification

Jun 26, 2023 · 11:35


Concern about the harm social media can do to young people is growing. But to protect kids, platforms have to know who is underage. That's why user age verification has become a focus for policymakers. Several states have passed laws that require it. But these policies require a range of trade-offs, according to a new analysis from Utah State University's Center for Growth and Opportunity. Matt Perault and Scott Brennen of the University of North Carolina at Chapel Hill's Center on Technology Policy co-wrote that research. Marketplace’s Meghan McCarty Carino discussed the costs and benefits involved in various age verification methods with the pair.

Marketplace All-in-One
Identifying the trade-offs in online age verification

Jun 26, 2023 · 11:35


Concern about the harm social media can do to young people is growing. But to protect kids, platforms have to know who is underage. That's why user age verification has become a focus for policymakers. Several states have passed laws that require it. But these policies require a range of trade-offs, according to a new analysis from Utah State University's Center for Growth and Opportunity. Matt Perault and Scott Brennen of the University of North Carolina at Chapel Hill's Center on Technology Policy co-wrote that research. Marketplace’s Meghan McCarty Carino discussed the costs and benefits involved in various age verification methods with the pair.

Wonks and War Rooms
Antitrust and Big Tech with Matt Perault

May 3, 2023 · 32:13 · Transcription available


Matt Perault is the director of the Center on Technology Policy at UNC Chapel Hill, and previously worked at Facebook as the head of the global policy development team. This episode he and Elizabeth get into the weeds on antitrust competition laws, monopolies and big tech. They talk about how monopolies can be both good and bad, the goals of antitrust laws, and the complication of these laws applying across different regions of the world. They also discuss some of the antitrust lawsuits in big tech right now, and the various stakeholders involved.

Additional resources: Off the top, Elizabeth mentions this crash course video about monopolies and anti-competitive markets.

Matt uses a few terms related to the antitrust context:
• interoperability - the ability for users to use different devices or systems interchangeably, for example using the same cable to charge different types of cell phones
• nondiscrimination principle - an antitrust principle that obliges companies to be neutral vis-à-vis other competitors and service providers in their industry
• consumer welfare standard - a guideline that prohibits actions by companies that negatively impact consumers

Matt brings up a number of antitrust cases throughout the episode:
• FTC suit to block a proposed merger between Meta and Within Unlimited
• UK looking into Microsoft bid to buy video game company Activision
• 16 U.S. states and territories filed a suit against Google over ad technology practices
• U.S. Justice Department filed a suit on the same issue at the federal level
• FTC looking at Amazon for monopolistic business practices
• Dispute between Apple and Epic Games
• House Judiciary Committee did a big investigation of big tech companies

There are also a few big pieces of antitrust legislation that come up:
• General Data Protection Regulation (GDPR) — Europe
• Competition Act — Canada

Please visit our website for complete show notes with additional links. Check out www.polcommtech.ca for annotated transcripts of this episode in English and French.

The Lawfare Podcast
Cox and Wyden on Section 230 and Generative AI

May 2, 2023 · 29:52


Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether or not their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content.On this episode of Arbiters of Truth, Lawfare's occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Matt Perault, Director of the Center on Technology and Policy at UNC-Chapel Hill, talked through this question with Senator Ron Wyden and Chris Cox, formerly a U.S. congressman and SEC chairman. Cox and Wyden drafted Section 230 together in 1996—and they're skeptical that its protections apply to generative AI. Disclosure: Matt consults on tech policy issues, including with platforms that work on generative artificial intelligence products and have interests in the issues discussed.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
Cox and Wyden on Section 230 and Generative AI

May 2, 2023 · 29:52


Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether or not their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content.On this episode of Arbiters of Truth, Lawfare's occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Matt Perault, Director of the Center on Technology and Policy at UNC-Chapel Hill, talked through this question with Senator Ron Wyden and Chris Cox, formerly a U.S. congressman and SEC chairman. Cox and Wyden drafted Section 230 together in 1996—and they're skeptical that its protections apply to generative AI. Disclosure: Matt consults on tech policy issues, including with platforms that work on generative artificial intelligence products and have interests in the issues discussed. Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
A TikTok Ban and the First Amendment

Apr 14, 2023 · 46:32


Over the past few years, TikTok has become a uniquely polarizing social media platform. On the one hand, millions of users, especially those in their teens and twenties, love the app. On the other hand, the government is concerned that TikTok's vulnerability to pressure from the Chinese Communist Party makes it a serious national security threat. There's even talk of banning the app altogether. But would that be legal? In particular, does the First Amendment allow the government to ban an application that's used by millions to communicate every day? On this episode of Arbiters of Truth, our series on the information ecosystem, Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and Alan Z. Rozenshtein, Lawfare Senior Editor and Associate Professor of Law at the University of Minnesota, spoke with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law, to think through the legal and policy implications of a TikTok ban. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
A TikTok Ban and the First Amendment

Apr 14, 2023 · 46:32


Over the past few years, TikTok has become a uniquely polarizing social media platform. On the one hand, millions of users, especially those in their teens and twenties, love the app. On the other hand, the government is concerned that TikTok's vulnerability to pressure from the Chinese Communist Party makes it a serious national security threat. There's even talk of banning the app altogether. But would that be legal? In particular, does the First Amendment allow the government to ban an application that's used by millions to communicate every day? On this episode of Arbiters of Truth, our series on the information ecosystem, Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and Alan Z. Rozenshtein, Lawfare Senior Editor and Associate Professor of Law at the University of Minnesota, spoke with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law, to think through the legal and policy implications of a TikTok ban. Hosted on Acast. See acast.com/privacy for more information.

Barron's Live
Tech Trader - The Outlook for Technology Stocks

Apr 4, 2023 · 44:03


Barron's associate editor for technology Eric Savitz speaks with Matt Perault, director of the Center on Technology Policy at UNC-Chapel Hill on the outlook for technology stocks.

Tech Policy Grind
Digital Safety and That Section 230 Thing – Conversations from State of the Net (Part 2) [S4E02]

Mar 23, 2023 · 30:07


We're back with Part 2 of our State of the Net series, and this week we're chatting kids privacy and what's going on with Section 230! In this episode, Reema and Joe talk digital privacy and safety issues for children with Natalie Campbell (Internet Society). Then, Reema chats with Matt Perault (University of North Carolina, Chapel Hill) and Yaël Eisenstat (Anti-Defamation League) on Section 230 and the current landscape of platform liability. In case you missed last week, for the past (nearly) two decades, the State of the Net Conference has served as the convening point for tech policy professionals to chart a course forward for the tech policy decisions of the future. We were there to chat with tech policy experts on their takes on the latest digital issues of the day. Section 230 shields platforms (like Google, Meta, and others) from liability for the unlawful content their users may post. Yaël and Matt dig into the recent oral arguments heard before the Supreme Court and legislative developments on changing 230. To learn more about the Foundry, check out ilpfoundry.us and follow us on social media (LinkedIn and Twitter @ILPFoundry). If you'd like to support the show, donate to the Foundry here or reach out to us at foundrypodcasts@ilpfoundry.us. Disclaimer: Reema, Joe, and the Foundry Fellows engage with the Foundry in their personal capacities. Their views here aren't reflective of those of the organizations and institutions they're affiliated with.

PBS NewsHour - Segments
TikTok says Biden administration pressuring it to sell company as security concerns grow

Mar 16, 2023 · 9:47


The Chinese parent company of TikTok says the Biden administration is pressuring it to sell to an American firm or face a national ban. Nick Schifrin reports on the national security concerns of the hugely popular video app and Amna Nawaz discusses TikTok's future with Matt Perault of the University of North Carolina's Center on Technology Policy. PBS NewsHour is supported by - https://www.pbs.org/newshour/about/funders

The Lawfare Podcast
Does Section 230 Protect ChatGPT?

Mar 9, 2023 · 50:28


During recent oral arguments in Gonzalez v. Google, a Supreme Court case concerning the scope of liability protections for internet platforms, Justice Neil Gorsuch asked a thought-provoking question. Does Section 230, the statute that shields websites from liability for third-party content, apply to a generative AI model like ChatGPT? Luckily, Matt Perault of the Center on Technology Policy at the University of North Carolina at Chapel Hill had already been thinking about this question and published a Lawfare article arguing that 230's protections wouldn't extend to content generated by AI. Lawfare Senior Editors Quinta Jurecic and Alan Rozenshtein sat down with Matt and Jess Miers, legal advocacy counsel at the Chamber of Progress, to debate whether ChatGPT's output constitutes third-party content, whether companies like OpenAI should be immune for the output of their products, and why you might want to sue a chatbot in the first place.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
Does Section 230 Protect ChatGPT?

Mar 9, 2023 · 50:28


During recent oral arguments in Gonzalez v. Google, a Supreme Court case concerning the scope of liability protections for internet platforms, Justice Neil Gorsuch asked a thought-provoking question. Does Section 230, the statute that shields websites from liability for third-party content, apply to a generative AI model like ChatGPT? Luckily, Matt Perault of the Center on Technology Policy at the University of North Carolina at Chapel Hill had already been thinking about this question and published a Lawfare article arguing that 230's protections wouldn't extend to content generated by AI. Lawfare Senior Editors Quinta Jurecic and Alan Rozenshtein sat down with Matt and Jess Miers, legal advocacy counsel at the Chamber of Progress, to debate whether ChatGPT's output constitutes third-party content, whether companies like OpenAI should be immune for the output of their products, and why you might want to sue a chatbot in the first place. Hosted on Acast. See acast.com/privacy for more information.

Marketplace Tech
ChatGPT is a content host and creator. Does that make it liable for what it produces?

Mar 2, 2023 · 9:47


So much of the internet today rests on the bedrock of a federal law that shields tech companies from liability for the content users post online. Everything from the AOL chatrooms of yore to modern social media likely wouldn’t exist without Section 230 of the 1996 Communications Decency Act. The idea is internet platforms aren’t acting like traditional publishers in creating content; they’re merely hosting it. But new generative artificial intelligence tools like DALL-E or ChatGPT that generate images or text are kind of different, says Matt Perault, director of the Center on Technology Policy at UNC Chapel Hill. He spoke with Marketplace’s Meghan McCarty Carino about the implications of these tools falling outside Section 230 protection.

Marketplace All-in-One
ChatGPT is a content host and creator. Does that make it liable for what it produces?

Mar 2, 2023 · 9:47


So much of the internet today rests on the bedrock of a federal law that shields tech companies from liability for the content users post online. Everything from the AOL chatrooms of yore to modern social media likely wouldn’t exist without Section 230 of the 1996 Communications Decency Act. The idea is internet platforms aren’t acting like traditional publishers in creating content; they’re merely hosting it. But new generative artificial intelligence tools like DALL-E or ChatGPT that generate images or text are kind of different, says Matt Perault, director of the Center on Technology Policy at UNC Chapel Hill. He spoke with Marketplace’s Meghan McCarty Carino about the implications of these tools falling outside Section 230 protection.

The Lawfare Podcast
The CLOUD Act Five Years Later

Feb 3, 2023 · 38:18


Next month will mark the five-year anniversary of the CLOUD Act, a foundational piece of legislation on cross-border data transfers and criminal investigations. Before he was a University of Minnesota law professor and senior editor at Lawfare, Alan Rozenshtein worked in the Department of Justice where he was a member of the team that developed the CLOUD Act. In that capacity, he interacted with representatives from the large tech companies that would be most directly affected by the law. One of these people was Matt Perault, then the head of Global Policy Development at Facebook, and now the director of the Center on Technology Policy at the University of North Carolina at Chapel Hill. Matt joined Alan to discuss the CLOUD Act with two more people who were present at its creation: Greg Nojeim, senior counsel and director of the Security and Surveillance Project at the Center for Democracy and Technology, and Aaron Cooper, a partner at the law firm of Jenner & Block, who was at the time a colleague of Alan's at the Department of Justice. They talked about the reasons for the CLOUD Act's development, whether it has succeeded in its goals, and what we should expect to see in the next five years.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Moderated Content
MC Weekly Update 1/30: No One Expects the Copyright Order

Jan 31, 2023 · 34:50


Stanford's Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

India Update
• At least some of the YouTube, Meta, and Internet Archive takedowns of clips from a BBC documentary that examines Prime Minister Narendra Modi's political rise were due to copyright claims made by BBC, rather than requests made by the Indian government. Maybe they could have mentioned that a bit earlier? - Rishi Iyengar/ Foreign Policy, Russell Brandom/ Rest of World, Internet Archive
• Luckily, Twitter owner Elon Musk chimed in with a tweet reply that he hadn't heard of the issue, adding “It is not possible for me to fix every aspect of Twitter worldwide overnight, while still running Tesla and SpaceX, among other things.” - @elonmusk
• Twitter reinstated Indian Hindu nationalist accounts previously suspended for hate speech against Muslims. - Newley Purnell/ The Wall Street Journal

Twitter Corner
• A new Twitter Files thread on the German Marshall Fund's Hamilton 68 project, which tracked Russian influence operations on Twitter, illustrates the dashboard's flawed methodology. That doesn't change the fact that there was Russian interference during the 2020 U.S. presidential election. - @mtaibbi
• Musk made the rounds on Capitol Hill, meeting with House leadership to ensure that Twitter will be “fair to both parties.” We are sure there will be tons of transparency. - Tony Romm, Faiz Siddiqui, Cat Zakrzewski, Adela Suliman/ The Washington Post
• Twitter will allow anyone to appeal an account suspension, starting this Wednesday, February 1. - @TwitterSafety
• And Twitter is re-suspending some of those accounts. White supremacist and Holocaust denier Nick Fuentes was suspended less than 24 hours after his account was reinstated. - Julia Shapero/ The Hill
• In completely unrelated news, Twitter is being sued in Germany over failing to remove antisemitic hate speech. - Molly Killeen/ Euractiv, Aggi Cantrill, Karin Matussek/ Bloomberg News

TikTok Offensive
• TikTok is going on the offensive with public engagements explaining its private negotiations with the U.S. government. Executives are briefing members of Congress, academics, and think tank researchers about Project Texas, the company's plan to audit content recommendation systems and securely store and process U.S. user data in partnership with Oracle. - Cecilia Kang, Sapna Maheshwari, David McCabe/ The New York Times
• Researchers briefed on TikTok's proposal to continue operating in the U.S. said that a new subsidiary, TikTok U.S. Data Security Inc. (USDS), will house all of its U.S. content moderation under the governance of an independent board that will report to the U.S. government (CFIUS) — not to ByteDance. Plans also call for TikTok's source code and content recommendation systems to be audited by Oracle and a third-party inspector. - David Ingram/ NBC News, Matt Perault, Samm Sacks/ Lawfare (commentary)

Other stories
• The messy business of operating in China caught up with Apple again as the company's Safari web browser seems to have quietly adopted a Chinese government website block list. - Sam Biddle/ The Intercept
• Google plans to sunset a pilot program that stopped political campaign emails from winding up in the spam folder as it seeks to dismiss a lawsuit from the Republican National Committee claiming that Gmail filters have political bias. - Isaac Stanley-Becker/ The Washington Post, Ashley Gold/ Axios
• The Financial Times had a miserable experience attempting to run its own Mastodon instance, facing “compliance, security and reputational risks” in addition to cloud hosting costs and creepy factor issues, such as seeing direct messages by default. - Bryce Elder/ Financial Times

Sports Corner
• Did Alex receive a call from the San Francisco 49ers football team during their NFL playoff game this weekend? No, not for that cyber issue last year. Things get “Purdy” desperate when a team's first four quarterbacks are injured. - Nick Wagoner/ ESPN

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos. Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance. Like what you heard? Don't forget to subscribe and share the podcast with friends!

The Lawfare Podcast
When States Make Tech Policy

Jan 23, 2023 · 45:03


Tech policy reform occupies a strange place in Washington, D.C. Everyone seems to agree that the government should change how it regulates the technology industry, on issues from content moderation to privacy—and yet, reform never actually seems to happen. But while the federal government continues to stall, state governments are taking action. More and more, state-level officials are proposing and implementing changes in technology policy. Most prominently, Texas and Florida recently passed laws restricting how platforms can moderate content, which will likely be considered by the Supreme Court later this year.On this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare senior editor Quinta Jurecic spoke with J. Scott Babwah Brennen and Matt Perault of the Center on Technology Policy at UNC-Chapel Hill. In recent months, they've put together two reports on state-level tech regulation. They talked about what's driving this trend, why and how state-level policymaking differs—and doesn't—from policymaking at the federal level, and what opportunities and complications this could create.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
When States Make Tech Policy

Jan 23, 2023 · 45:03


Tech policy reform occupies a strange place in Washington, D.C. Everyone seems to agree that the government should change how it regulates the technology industry, on issues from content moderation to privacy—and yet, reform never actually seems to happen. But while the federal government continues to stall, state governments are taking action. More and more, state-level officials are proposing and implementing changes in technology policy. Most prominently, Texas and Florida recently passed laws restricting how platforms can moderate content, which will likely be considered by the Supreme Court later this year.On this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare senior editor Quinta Jurecic spoke with J. Scott Babwah Brennen and Matt Perault of the Center on Technology Policy at UNC-Chapel Hill. In recent months, they've put together two reports on state-level tech regulation. They talked about what's driving this trend, why and how state-level policymaking differs—and doesn't—from policymaking at the federal level, and what opportunities and complications this could create. Hosted on Acast. See acast.com/privacy for more information.

The Sunday Show
Examining Programmatic Political Advertising in the United States

Nov 1, 2022 · 36:38


As the U.S. midterm elections approach next week, there is a renewed focus on understanding the spending on and claims made in political advertising in digital channels, particularly on social media. But what is going on across the web, beyond the social media platforms? A recent report (https://techpolicy.unc.edu/wp-content/uploads/2022/09/UNC_CTP_Programmed-Political-Speech_final_corrected.pdf) from the University of North Carolina at Chapel Hill Center on Technology Policy found that as a result of restrictions on political ads instituted by major platforms ahead of the 2020 elections, political advertisers are increasingly turning to political advertising on other platforms. Programmatic advertising accounts for a substantial and increasing share of political advertising, they say, and more attention needs to be paid to this complex and confusing ecosystem of companies, large and small, that serve up ads on websites, apps, streaming services, and other digitally connected devices. This episode features a discussion with the report's authors, J. Scott Babwah Brennen & Matt Perault.

Marketplace Tech
After Roe, what happens when the rules for online speech are different in each state?

Marketplace Tech

Play Episode Listen Later Sep 20, 2022 7:14


Since the Supreme Court overturned Roe v. Wade back in June, many states have been working on new laws related to digital privacy and access — or restricting what kind of information can be shared online. This trend highlights the growing disparity between states in what is legal online today and what might be in the future. Marketplace's Kimberly Adams speaks with Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill and a consultant on technology policy issues. He wrote an essay for Wired about what might happen when the rules for what you can say and do online differ from state to state. Perault says this kind of digital fragmentation is a relatively new concept in the U.S., but some people already know what it's like.

Marketplace All-in-One
After Roe, what happens when the rules for online speech are different in each state?

Marketplace All-in-One

Play Episode Listen Later Sep 20, 2022 7:14


Since the Supreme Court overturned Roe v. Wade back in June, many states have been working on new laws related to digital privacy and access — or restricting what kind of information can be shared online. This trend highlights the growing disparity between states in what is legal online today and what might be in the future. Marketplace's Kimberly Adams speaks with Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill and a consultant on technology policy issues. He wrote an essay for Wired about what might happen when the rules for what you can say and do online differ from state to state. Perault says this kind of digital fragmentation is a relatively new concept in the U.S., but some people already know what it's like.

Tech Refactored
S2E33 - Leadership, Innovation, and Management with Matt Perault

Tech Refactored

Play Episode Listen Later Mar 23, 2022 47:51


On this episode we explore what leadership and innovation look like in industry and in academia, with a focus on the tech sector. Matt Perault is the director of the Center on Technology Policy at the University of North Carolina's School of Information & Library Science. He previously led the Center on Science & Technology Policy at Duke University. Before returning to academia, Matt was a director on the public policy team and the head of the global policy development team at Facebook, where he covered issues ranging from antitrust to law enforcement to human rights and oversaw the company's policy work on emerging technologies like artificial intelligence and virtual reality.

Explain to Shane
A potential breaking point for tech antitrust (with Matt Perault and Blair Levin)

Explain to Shane

Play Episode Listen Later Jan 18, 2022 35:42


Several bills seeking to restructure American antitrust law are moving forward in both chambers of Congress. The bills specifically target Big Tech firms — namely Amazon, Apple, Facebook, and Google — though they rely on arbitrary metrics for "bigness" and don't address lawmakers' main concerns with tech platforms. Meanwhile, the Department of Justice and Federal Trade Commission are also stepping up enforcement against Big Tech. With a Senate markup approaching for one of the bills, we must ask: Is Congress really prepared to pass major antitrust legislation? On this episode, Shane (https://www.aei.org/profile/shane-tews/) is joined by Matt Perault (https://sils.unc.edu/people/faculty/profiles/Matt-Perault), a former Facebook public policy director and professor at the University of North Carolina School of Information & Library Science, along with Blair Levin (https://www.newstreetresearch.com/team-members/blair-levin/), nonresident senior fellow at the Brookings Institution and policy advisor to New Street Research. The three of them discuss what consumers stand to lose if these bills become law and the political forces at play in tech antitrust.

BettingPros NFL Podcast
Early Look at Week 8 NFL Lines (Ep. 120)

BettingPros NFL Podcast

Play Episode Listen Later Oct 25, 2021 38:40


Matt Perault joins Dan Harris to take an early look at the Week 8 NFL spreads and totals, in addition to a spirited Tua Tagovailoa debate.

NSI Live
Tech 2020/21: Allies, Enemies, and the Homefront Part 3: The National Security Implications of Antitrust

NSI Live

Play Episode Listen Later Oct 20, 2021 59:04


On Tuesday, October 19, 2021, as part of NSI's Tech Innovation and American National Security project, NSI hosted the third panel of a four-part series examining the national security implications of antitrust challenges at home and abroad. This third event looked at how U.S. adversaries are addressing antitrust questions related to the tech industry, as well as the implications of such efforts for our national security. The panel featured Maureen Ohlhausen, Matt Perault, and Alex Petros, and was moderated by NSI Founder and Executive Director Jamil N. Jaffer. Adversarial nation-state governments, such as China's, are known to bolster their own economies through government financing of certain private-sector industries, including companies in the tech industry, in order to advance national goals, including national security goals. In particular, while China has encouraged rapid growth in its domestic tech sector in a bid to challenge its biggest economic competitor—the United States—it has, at times, placed a heavy regulatory hand on both foreign and domestic tech companies, including by using the levers of antitrust policy at home. China's antitrust challenges are seen by many to have little to do with protecting competition; rather, Beijing's antitrust and other policies appear to punish companies and executives that don't adhere to the party line. The panel examined the ways in which China and other adversarial nations both encourage and discourage foreign and domestic tech competition, and how such nations choose their antitrust targets. See acast.com/privacy for privacy and opt-out information.

PR Talk
Tech Policy and Our Society with Matt Perault

PR Talk

Play Episode Listen Later Oct 20, 2021 43:12


This week's episode of the PR Talk podcast features Amy's interview with Matt Perault during a recent Rotary Club of Portland meeting. The Oregon Ethics in Business committee hosted this fascinating conversation about the impact of technology policy on today's society during the October 12th luncheon. Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, drew on his varied experience at Facebook, the Congressional Oversight Panel, Duke University, and UNC to discuss the societal impact of issues-based decision making in tech policy, touching on human rights, antitrust, surveillance, and the Snowden disclosures during his time at Facebook. See the write-up at https://www.veracityagency.com/podcast/tech-policy-and-our-society/

Pushing the Odds
6/8 Pushing the Odds Hour 1

Pushing the Odds

Play Episode Listen Later Jun 8, 2021 46:48


Matt Perault discusses NBA and NHL Playoff action, as well as some MLB betting, DFS, and the latest on Aaron Rodgers.

Pushing the Odds
6/8 Hour 2 Pushing The Odds

Pushing the Odds

Play Episode Listen Later Jun 8, 2021 44:44


Matt Perault discusses DFS, baseball betting, UFC, the NBA and NHL playoffs, and much more.