In episode 146 of the c't data protection podcast, Holger and Joerg take on a fundamental question: where does data protection stand between necessary limitation of power and disruptive obstacle to innovation? Law professor Hannah Ruschemeier works on exactly these questions. She researches AI regulation, platform law, and the challenges of the digital transformation at the University of Osnabrück. At the DatenTag of the Stiftung Datenschutz, she recently summarized her theses on "data power" in a keynote (see show notes). Ruschemeier vehemently disputes that data protection is obsolete or even dead. She does observe a certain resignation in society, but for precisely that reason sees the need for more public education. Many people do not understand what happens with their data and what power large tech corporations exercise with it. She considers this "informational power asymmetry" between data corporations and consumers a central problem. The professor is particularly critical of the business model of many tech giants, which create facts on the ground according to the "move fast and break things" principle and only later concern themselves with legal compliance. While Meta or Google can pay billion-euro fines virtually out of petty cash, small and medium-sized enterprises struggle with high compliance costs. This asymmetry also shows up in deficient enforcement: large players are insufficiently held to account, while smaller businesses suffer under the bureaucratic burden. From practical experience, Joerg reports that the bureaucratic hurdles for start-ups and small companies can be enormous. Ruschemeier, however, is skeptical of his thesis that data protection genuinely slows down innovation. She calls for a more differentiated view: the term "innovation" must not become a knockout argument against any regulation. Rather, one must ask who benefits from an innovation. She advocates definitions and developments oriented more strongly toward the common good. Regulation protects rather than hampers, and Europe should be proud of its strong protection of fundamental rights. Ruschemeier argues for a comprehensive reform of the European General Data Protection Regulation (GDPR). In the podcast she advocates a risk-based approach: companies with particularly data-intensive business models should be regulated more strictly, while small businesses could be relieved. At the same time, she warns against lowering the level of protection across the board. Instead, better coordination is needed among the various digital laws such as the GDPR, the AI Act, and the Digital Services Act. The expert does not see Europe in a race with the US and China for the next big platform, particularly in the field of AI. Instead, the continent should concentrate on its strengths: strong protection of fundamental rights, legal certainty, and industrial applications. These could indeed be locational advantages if used and communicated correctly.
On Radio Wnet's morning program (Poranek Radia Wnet), Prof. Katarzyna Chałubińska-Jentkiewicz spoke about the Digital Services Act, the regulation on a single market for digital services, which is rebuilding the rules of accountability online. She recalled that the DSA introduces new information obligations and procedures for service providers, from hosting providers to "very large platforms" and search engines.

"This regulation fundamentally changes how we think about regulating the internet environment. One could say the era of internet freedom is ending. We will now be accountable for the actions and content that we, as users, share on platforms," she noted.

The key change concerns the burden of responsibility, especially on the side of intermediaries. It is the owners of the infrastructure who have the real tools for moderation: removing content or blocking accounts. At the same time, the definition of "illegal content" remains a matter of national law.

"The EU legislator quite rightly decided that illegal content will be defined at the level of the member states. Each state knows best what is contrary to its legal order and public morality," she said.

In Poland, work is under way on an amendment to the act on providing services by electronic means, which is to implement the DSA more fully and build coordination mechanisms. The draft designates the president of UKE as the national digital market coordinator.

"The proposal is for the president of the Office of Electronic Communications (UKE) to be the coordinator for DSA matters. This is about a network of procedural links between EU states for cross-border services, including with respect to large platforms from outside the Union," she added.

Concerns have long been raised about the risk of "soft censorship." The professor stresses that preventive (ex ante) censorship is not permitted by law, but platforms are obliged to moderate their environment and to prevent the reappearance of content once deemed illegal (for example, in the areas of computer crime or the protection of minors).

"There is no risk of preventive censorship here. Providers will be obliged to moderate and to ensure that content deemed illegal does not appear again. This also follows from the provisions on audiovisual services and the broadcasting act," she noted.

The catalogue of illegal content in Poland is to be based primarily on the Criminal Code and special provisions, but, as the professor points out, that list is still too narrow. At the same time, the implementing draft does not define disinformation as illegal content.

"In the draft we have indications based on offences from the Criminal Code. That is too short. Disinformation cannot simply be defined as crime; it is a different phenomenon."

Finally, the eternal dilemma between freedom and security. According to the professor, the DSA is meant to increase the latter without invalidating the former.

"We should be glad that we will be safer online. But it is always a dilemma: more freedom or more security. I am on the side of freedom, and at the same time I consider security very important."

The professor announced that a public hearing on the Polish implementing solutions will take place on November 4; important decisions may be made there about the design of the system and the role of the national coordinator.
//The Wire//
//2300Z October 23, 2025//
//ROUTINE//
//BLUF: UNREST CONTINUES IN IRELAND AS CENSORSHIP CRACKDOWN INTENSIFIES. ILLEGAL MIGRANT FROM INDIA CAUSES FATAL CRASH IN CALIFORNIA. MIGRANT WOES CONTINUE THROUGHOUT EUROPE.//

-----BEGIN TEARLINE-----

-International Events-

Middle East: The war continues as before. Following Israeli strikes in Lebanon a few days ago, more significant airstrikes were reported in southern Lebanon overnight. Yesterday evening the Israeli Knesset voted to annex the West Bank, a move which has drawn slight criticism from the United States due to the war continuing to widen after the ceasefire was broken last week. SECSTATE Marco Rubio condemned the vote, and VP Vance stated that Israel will not be allowed to annex the West Bank.

Italy: A man was arrested on Tuesday after attempting to kidnap a child from a stroller near the train station in Bologna. Local authorities state that the man, an illegal immigrant from Gambia, approached a family on the sidewalk and attempted to grab their newborn child from their stroller. After local citizens fought back against him, he fled the scene before being arrested later in an adjacent park. The suspect has not been identified due to allegedly being a juvenile, but locals did state that he was an asylum seeker living at the local migrant housing complex near the attack site.

-HomeFront-

California: The driver of a semi truck who caused a fatal pileup on I-10 yesterday has been arrested. Jashanpreet Singh, an illegal immigrant from India, was arrested on multiple counts after the crash, including DUI. 3x people were killed during the crash, and 4x others wounded.

Analyst Comment: Singh was illegally in the United States after walking over the southern border in 2022. After being initially detained, he was released into society by Customs and Border Protection due to the catch-and-release open-border policy at the time.

-----END TEARLINE-----

Analyst Comments: In Ireland, the situation remains as tense as expected. One of the major issues with discerning the truth regarding events on the ground is the now-infamous Online Safety Act, which has been adopted by the European Union as the Digital Services Act (DSA). This is more or less the EU copying Starmer's homework and bringing the Online Safety Act to most of Europe. The DSA and the local regulations put into effect by the Irish government (most notably by the chairman of the media commission, Jeremy Godfrey) have resulted in much of the footage of protest activity being blocked from view for Irish IP addresses. Users on X, for example, have noted that their content is banned from view in Ireland, even content that is benign in nature but related to protest activity. This means that the Irish government and/or the European Commission itself has ordered X to remove certain content by flagging it as "adult content" and thus subject to the Digital Services Act.

In other words, exactly what everyone thought would happen has happened. And it has not only happened once; it has been industrialized. Content which makes government agencies look bad, or otherwise highlights the crimes of violent migrants, gets flagged as "adult content" and is therefore subject to censorship via the Online Safety Act, requiring a photo ID (or a digital ID) to see. Right now, there are probably a lot of people in Ireland who have no understanding of the scale of protests and demonstrations (or even what the issue is all about) due to this censorship.
This has been made worse by other, larger issues that have actually made it into the mainstream press. Yesterday, the family of Jozef Puska was sentenced for their various roles in the murder of Ashling Murphy. These sentences have inflamed tensions once more due to the nature of the crime itself and the systemic coverup of the incident. While walking on a hiking path back in 20
The European Commission has accused Meta and TikTok of violating the Digital Services Act, Intel's revenue rose 3% year-over-year to $13.7 billion, a federal court has ordered Meta to release internal documents related to teen harm, and Anthropic and Google have formed a cloud partnership valued at tens of billions of dollars.
Tara dives deep into the mounting threats to Americans both domestically and internationally. From Portland's “ICE wars” with lasers, armed confrontations, and rogue courts, to Europe's escalating censorship regimes targeting 4chan, LinkedIn, and even basic scientific truths, the episode reveals a world where free speech is under siege. Add in rising political violence in North Carolina and global geopolitical maneuvering with Russia, and listeners get an unfiltered look at how law, media, and politics collide to create danger at home and abroad.
New research shows that children who go onto the social media app TikTok and search for mental health content will quickly find depressive material, and that within a few hours they are bound to see content from users expressing the desire to kill themselves. The research, from Amnesty Tech, is entitled "Dragged Into the Rabbit Hole". The organisation says the findings highlight TikTok's ongoing failure to address the systemic design risks affecting children and young people, and also illustrate the failings of the European Union's Digital Services Act, which since 2023 has required platforms to identify and mitigate systemic risks to children's rights. In Perspective, we spoke to Lauren Armistead, deputy director at Amnesty Tech.
Europe created regulatory frameworks, including the Digital Markets Act and the Digital Services Act, in an attempt to reassert control, but these laws have recently faced backlash from the Trump administration. In this episode of the TechTank podcast, Courtney Radsch speaks with Marietje Schaake about these challenges and the digital sovereignty debate.
In this week's roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Thomas Hughes, CEO of Appeals Centre Europe and former Director at the Oversight Board. Together they discuss:

• Appeals Centre Europe Transparency Report (ACE)
• Most people want platforms (not governments) to be responsible for moderating content (Reuters Institute)
• Happy Birthday, Digital Services Act! – Time for a Reality Check (Algorithm Watch)
• Proof-of-age ID leaked in Discord data breach (The Guardian)
• Update on a Security Incident Involving Third-Party Customer Service (Discord)
• Another Day, Another Age Verification Data Breach: Discord's Third-Party Partner Leaked Government IDs (Techdirt)
• Exclusive: Apple Quietly Made ICE Agents a Protected Class (Migrant Insider)
• My Email to Tim Cook (Wiley Hodges — Substack)

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
It's been three years since Europe's Digital Services Act (DSA) came into effect, a sweeping set of rules meant to hold online platforms accountable for how they moderate content and protect users. One component of the law allows users to challenge platforms' content moderation decisions through independent, certified bodies rather than judicial proceedings. Under Article 21 of the DSA, these "Out-of-Court Dispute Settlement" bodies are intended to play a crucial role in resolving disputes over moderation decisions, whether about content takedowns, demonetization, account suspensions, or even decisions to leave flagged content online.

One such out-of-court dispute settlement body is Appeals Centre Europe. It was established last year as an independent entity with a grant from the Oversight Board Trust, which administers the Oversight Board, the content moderation 'supreme court' created and funded by Meta. Appeals Centre Europe has released a new transparency report, and the numbers are striking: of the 1,500 disputes the Centre has ruled on, over three-quarters of the platforms' original decisions were overturned, either because they were incorrect or because the platform didn't provide the content for review at all.

Tech Policy Press associate editor Ramsha Jahangir spoke to two experts to unpack what the early wave of disputes tells us about how the system is working, and how platforms are applying their own rules:

Thomas Hughes is the CEO of Appeals Centre Europe.
Paddy Leerssen is a postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory, which monitors the implementation of the DSA.
Thomas Hughes, CEO of Appeals Centre Europe, explains out-of-court dispute settlement bodies (ODS bodies) and the role they play in the Digital Services Act. We discuss Appeals Centre Europe's first Transparency Report*, which provides an initial glimpse into how Facebook, Instagram, TikTok, and YouTube are handling requests from ODS bodies. We cover some top line descriptives from the Centre's first year: how many disputes were submitted, from which platform, and what types of content they were about. We also talk about 'signposting' and how the platforms are communicating their digital rights to users.*Just a friendly heads up that the report is 25MB, so you may want to download it on WiFi!
This episode unpacks the media circus around Jimmy Kimmel and the supposed “First Amendment crisis,” revealing the real forces shaping speech in America. From ABC's temporary pull of Kimmel's show to Hillary Clinton traveling to Europe to push the EU's Digital Services Act, we explore how censorship is being globalized and weaponized against conservative voices. We break down the difference between theatrical outrage over the FCC and actual government-backed speech monitoring, including the FBI's actions on Twitter and John Brennan's international influence. Hear how Americans' perception of free speech is shifting, with 79% now feeling it's under threat, and why the real authoritarian moves are coming from entrenched elites rather than elected Republicans. A deep dive into the clash between political theater, media narratives, and the global fight over what can—and cannot—be said online.
The European Union’s Digital Services Act applies to digital platforms and service providers offering services to users in the EU, regardless of where the company is based—including U.S. companies.

EU officials contend the Digital Services Act is needed to protect democracy from misinformation, disinformation, and hate speech online. Regulators in Brussels promise it will create a safer digital space by holding platforms such as Google, Amazon, Meta, and X accountable for policing these categories. Service providers that fail to comply risk fines of up to 6% of global annual revenue, restricted access to the EU market, or suspension of operations.

House Judiciary Republicans recently issued a report warning that European regulators could use the Digital Services Act to chill speech, suppress political dissent, and establish a global censorship regime. By contrast, House Judiciary Democrats argue the Digital Services Act includes procedural safeguards, judicial oversight of content moderation, and democratic accountability within the EU.

Will the Act make Brussels the new “sheriff of the digital public square”? Could it export European hate speech laws—which have at times been used against individuals peacefully expressing their views—beyond Europe? And what steps can governments, companies, and citizens take to safeguard free expression online?

Join the Federalist Society for a discussion with experts on the EU, the Digital Services Act, and freedom of expression as we consider whether the United States should support—or oppose—the Act.

Featuring:
Stéphane Bonichot, Partner, Briard Bonichot & Associés
Dr. Adina Portaru, Senior Counsel, Alliance Defending Freedom International
Dr. John Rosenthal, Independent scholar and journalist
Berin Szóka, President, TechFreedom
Moderator: Prof. Maimon Schwarzschild, Professor of Law, University of San Diego School of Law
In this episode of The Winston Marshall Show, I sit down with Mike Benz, former State Department official and leading critic of the Censorship Industrial Complex, to expose how governments and NGOs are working hand-in-glove with Big Tech to police free speech across the West.

Mike explains how the EU's Digital Services Act and Britain's Online Safety Act are being used to impose AI-driven censorship, threatening not only European citizens but also Americans posting from within the US. We explore how these laws function as a “digital censorship regime,” backed by crippling fines and international coordination.

From USAID funding NGOs to shut down cross-border political movements, to the Hate Lab in Cardiff feeding real-time speech data to UK police, Benz reveals how censorship has become industrialised — and how Elon Musk's takeover of Twitter briefly broke the system. We also discuss the Biden administration's role, the UK's crackdown on comedians and dissidents, and the disturbing reality of AI censorship ‘Death Stars' designed to silence debate on elections, COVID, climate, and more.

A deep dive into the machinery of modern censorship — and what it means for the future of free speech in the West.

-----
To see more exclusive content and interviews consider subscribing to my substack here: https://www.winstonmarshall.co.uk/
-----

FOLLOW ME ON SOCIAL MEDIA:
Substack: https://www.winstonmarshall.co.uk/
X: https://twitter.com/mrwinmarshall
Insta: https://www.instagram.com/winstonmarshall
Linktree: https://linktr.ee/winstonmarshall

Chapters:
00:00 Introduction
01:32 What Is the Censorship Industrial Complex?
06:10 How the Government Outsources Censorship to NGOs
11:45 The EU's Role in Exporting Censorship to the US
17:30 AI-Powered Censorship & The Death of Free Speech
22:05 Twitter Files & the Exposure of Government Pressure
28:40 How “Trust & Safety” Became a Weapon
35:15 The Role of the National Security State
41:10 Elon Musk vs The Censorship Machine
47:35 How the Online Safety Bill Extends UK Censorship
53:20 The Global Coordination of Digital Repression
1:00:45 How NGOs Control the Narrative Across Borders
1:07:12 Mike Benz's Plan to Dismantle the Censorship Network
1:12:40 Final Thoughts & A Warning for the West
Episode 1367: Adam Mosseri, the head of Instagram, took to video on September 24, 2025 to unveil an interface available to all users. To mark the milestone, the platform is introducing a new feature called Your Algorithm, a tool that could well redefine how content is recommended on the network.

A button to take back control of the algorithm.
The goal: allow everyone to personalize their recommendation preferences. The principle: display the list of topics the algorithm has identified as your interests, and let you edit them manually. Add topics. Remove others. Correct what the algorithm has misread.

A strategic launch on Reels.
The feature is rolling out first on Instagram Reels, before being extended to Explore and then to the main feed. That choice is no accident: Reels is the format where the algorithm plays the most central role in distribution.
The interface is simple. Two lists:
• What you like
• What you no longer want to see
Watched a cat video by mistake? Your feed filled up with felines? You can now remove that category in one click.

Instagram aligns with user demand.
Users have long been asking for more control over their feed. Many want to see more content from the accounts they actually follow. Instagram is responding here to a recurring frustration.

Three strategic pillars for Instagram: DMs, Reels, recommendations.
Mosseri said it plainly: Instagram's growth is driven by three things: private messages, Reels, and algorithmic recommendations. The app is therefore being organized entirely around these uses. A concrete example: the upload button replaced by a shortcut to DMs.

A new deal for creators.
This change upends the strategy of content creators. No more relying solely on the algorithm. With users filtering for themselves what they want to see, creators will have to offer content that is more engaging, more targeted, more authentic.

More transparency, more power to users.
Meta is accelerating on algorithmic transparency, a necessity imposed by the European Digital Services Act. Since 2023, the company has multiplied initiatives to make its algorithms legible. The "Your Algorithm" feature joins the recommendation-reset option available since late 2024.

This user control raises several questions:
• Will manual preferences really take precedence over the algorithm's other signals?
. . .

Le Super Daily is the daily podcast about social media. It is made with a shower of love by the teams at Supernatifs. We are a social media agency based in Lyon: https://supernatifs.com. Together, we help companies create lasting and profitable relationships with their audiences. Together, we invent, produce and distribute content that engages your employees, your prospects and your consumers.
Hello, and welcome to episode 188 of the Financial Crime Weekly Podcast, I am Chris Kirkbride. This episode begins with sanctions, where the UK government has published a comprehensive starter guide for businesses and imposed asset freezes on two entities, Embers of an Empire and Rampage Productions, for alleged links to terrorism through music. The US has also taken action, sanctioning Indian nationals and an online pharmacy for trafficking counterfeit fentanyl pills. In money laundering news, a joint effort by Eurojust and Europol dismantled a sophisticated network laundering at least €30 million through gold bar transactions across France and Italy, seizing nearly 100 kilos of gold and luxury assets. We cover significant fraud operations, including INTERPOL's Operation HAECHI VI, which recovered $439 million globally from various cyber-enabled financial crimes, and the situation surrounding global insurer Allianz, which is grappling with a surge in fraud detection in the UK while simultaneously facing a $16.8 million fine in Australia for misleading claims. Rounding out the episode, we look at the EU demanding that major tech platforms like Google and Apple disclose their anti-scam measures under the Digital Services Act, and discuss the recent UK arrest connected to the cyber-attack on Collins Aerospace that disrupted European flights.

A transcript of this podcast, with links to the stories, will be available at www.crimes.financial.
After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen to discuss the ever-changing landscape of the digital world, the fast emergence of AI and the implications for our kids, consumer safety and our democracy.

Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and author of several books.

Taylor also joined me for this discussion more than 5 years ago now. And a lot has happened in that time.

Upcoming episodes will include guests Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers.

We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite.

As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
Public rules for big tech platforms with Taylor Owen — Uncommons Podcast
“How the Next Government can Protect Canada's Information Ecosystem.” Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
Machines Like Us Podcast
Bill C-63

Transcript:

Nate Erskine-Smith (00:00-00:43): Welcome to Uncommons, I'm Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.

Taylor Owen (00:43-00:44): It's a different world.

Taylor (00:45-00:45): In some ways.

Nate Erskine-Smith (00:45-01:14): Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way because there's been lots of consultation. There have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's talk about set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?

Taylor Owen (01:14-03:06): I mean, this is part of the technology moves fast.
But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. We're worried about sort of some of the more acute harms of like real risks to safety, right? Like children taking their own lives and violence erupting, political violence emerging. Like these things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way five years ago that we shared, consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI and particularly chatbots. And I think a big question we face in this conversation is, like, how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI, or are some of the things we worked on for so many years to get right still the right tools for this new set of technologies, with chatbots and various consumer-facing AI interfaces?

Nate Erskine-Smith (03:07-03:55): My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online and our laws need to reflect that reality. All of the challenges you've articulated to varying degrees exist in offline spaces, but can be incredibly hard. The rules we have can be incredibly hard to enforce at a minimum in the online space. And then some rules are not entirely fit for purpose and they need to be updated in the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out. So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.

Taylor Owen (03:55-04:31): Yeah, exactly. Like the commission, at the request of all political parties in parliament, at the urging of the opposition party, spent a year looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. Like that is a remarkable statement, and it kind of came and went. And I don't know why we moved off from that so fast.

Nate Erskine-Smith (04:31-05:17): Well, and there's a lot to pull apart there because you've got purposeful, intentional bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook, Instagram through Meta block news in Canada. And your research, this was the stat that stood out.
Don't want to put you in and say like, what do we do? Okay. So there's, you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.

Taylor Owen (05:17-05:17): A day. Yeah.

Nate Erskine-Smith (05:18-05:18): So right.

Taylor Owen (05:18-05:27): 11 million views a day. And we should sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.

Taylor (05:27-05:29): So 11 million times a Canadian.

Taylor Owen (05:29-05:45): And what that means is 11 million times a Canadian would open one of their news feeds and see Canadian journalism is taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. So that journalism was replaced by something else.

Taylor (05:45-05:45): Okay.

Taylor Owen (05:45-05:46): So that's just it.

Nate Erskine-Smith (05:46-06:04): So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, Canadians, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.

Taylor Owen (06:04-06:04): They say they get.

Nate Erskine-Smith (06:04-06:05): It doesn't make any sense.

Taylor Owen (06:06-06:23): It doesn't and it does. It's terrible. They ask Canadians, like, where do you get people who use social media to get their news? Where do they get their news? And they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations— Traditional journalism. There is—

Taylor (06:23-06:23): There is—

Taylor Owen (06:23-06:47): Well, this is what I was going to get at, right? Like, there is—one, I think, conclusion is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Like, journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like this would be labeled probably news in people's.

Nate Erskine-Smith (06:47-06:48): Can't trust the thing we say.

Taylor Owen (06:48-07:05): Right. And like, and neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, like that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.

Nate Erskine-Smith (07:05-07:06): It's public affairs content.

Taylor Owen (07:06-07:39): Exactly. So that's one thing that's happening. The other is that there's a generation of creators that are stepping into this ecosystem to both fill that void and that can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you're more pessimistic though. I shouldn't have led with the question. 20%.

Taylor (07:39-07:39): Okay.

Taylor Owen (07:39-07:56): So all of journalism combined in the entire country, 20 percent of engagement, influencers, 50 percent in the last election.
So like we've shifted, at least on social, the actors and people and institutions that are fostering our public.

Nate Erskine-Smith (07:56-08:09): Is there a middle ground here where you take some people that play an influencer type role but also would consider themselves citizen journalists in a way? How do you – It's a super interesting question, right?

Taylor Owen (08:09-08:31): Like who – when are these people doing journalism? When are they doing acts of journalism? Like someone can be – do journalism and 90% of the time do something else, right? And then like maybe they reveal something or they tell an interesting story that resonates with people or they interview somebody and it's revelatory and it's a journalistic act, right?

Taylor (08:31-08:34): Like this is kind of a journalistic act we're playing here.

Taylor Owen (08:35-08:49): So I don't think – I think these lines are gray. But I mean there's some other underlying things here, which like it matters, I think, if journalistic institutions go away entirely, right? Like that's probably not a good thing. Yeah, I mean that's why

Nate Erskine-Smith (08:49-09:30): I say it's terrifying. There's a lot of good in the digital space, there's creative destruction, there's a lot of work to provide people a direct sense of news that isn't that filter that people may mistrust in traditional media. Having said that, so many resources and there's so much history to these institutions and there's a real ethics to journalism and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that is devastating for democracy. I think so.

Taylor Owen (09:30-09:49): And I think the bigger frame of that for me is a democracy needs signals of – we need – as citizens in a democracy, we need signals of reliability. Like we need to know broadly, and we're not always going to agree on it, but like what kind of information we can trust and how we evaluate whether we trust it.

Nate Erskine-Smith (09:49-10:13): And that's what – that is really going away. Pause for a sec. So you could imagine signals of reliability is a good phrase. What does it mean for a legislator when it comes to putting a rule in place? Because you could imagine, you could have a Blade Runner kind of rule that says you've got to distinguish between something that is human generated

Taylor (10:13-10:14): and something that is machine generated.

Nate Erskine-Smith (10:15-10:26): That seems straightforward enough. It's a lot harder if you're trying to distinguish between Taylor, what you're saying is credible, and Nate, what you're saying is not credible,

Taylor (10:27-10:27): which is probably true.

Nate Erskine-Smith (10:28-10:33): But how do you have a signal of reliability in a different kind of content?

Taylor Owen (10:34-13:12): I mean, we're getting into journalism policy here to a certain degree, right? And it's a wicked problem because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do and how they can and can't behave touches on some real like third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said or what is and isn't journalism. The real problem is the distribution mechanism and the incentives within it. So a great example and a horrible example happened last week, right? So Charlie Kirk gets assassinated.
I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place because what you saw in that feed was the clearest demonstration I've ever seen in a decade of looking at this of how those algorithmic feeds have become radicalized. Like all you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. Like people defending the murder and people calling for the murder of leftists and like on both sides. Right. People blaming Israel, people, whatever. Right. And that isn't a function of like- Aaron Charlie Kirk to Jesus. Sure. Like- It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of like there was journalism being produced about that. Like New York Times, citizens were doing good content about what was happening. It was like a moment of uncertainty and journalism was doing or playing a role, but it wasn't. And so I think with all of these questions, including the online harms ones, and I think how we step into an AI governance conversation, the focus always has to be on those systems. I'm like, who and what and what are the incentives and the technical decisions being made that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right? Like all of that stuff we've been talking about, the role reliability of information plays. Like these algorithms could be tweaked for reliable versus unreliable content, right? Over time.

Taylor (13:12-13:15): That's not a – instead of reactionary –

Taylor Owen (13:15-13:42): Or like what's most – it gets most engagement or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.

Nate Erskine-Smith (13:43-14:12): And your focus on systems and assessing risks with systems, I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy and very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.

Taylor Owen (14:12-14:23): I mean – Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the – Well, they're twofold, right?

Nate Erskine-Smith (14:23-14:33): There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well.
Exactly.

Taylor (14:33-14:35): So the sort of supply and demand side thing, right?

Nate Erskine-Smith (14:35-14:38): There's the digital services tax, which is no longer a thing.

Taylor Owen (14:40-14:52): Although it still is a piece of passed legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.

Nate Erskine-Smith (14:52-14:55): I don't take full responsibility for that one.

Taylor Owen (14:55-14:56): No, you shouldn't.

Nate Erskine-Smith (14:58-16:03): But other countries have seen more success. Yeah. And so you've got the UK, Australia; the EU really has led the way. 2018, the EU passes GDPR, which is a privacy set of rules, which we are still behind seven years later. But then in 2022, 2023, you've got the Digital Services Act that passes. You've got the Digital Markets Act. And as I understand it, and we've both been involved in international work on this, we've heard from folks like Frances Haugen and others about the need for risk-based assessments. And you're well down the rabbit hole on this. But isn't it, at a high level: you deploy a technology, you've got to identify material risks, you then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then ideally, you've got the ability for third parties, either civil society or some public office, to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate.

Taylor Owen (16:04-16:05): That's like how I have it in my head.

Nate Erskine-Smith (16:05-16:06): I mean, that's it.

Taylor Owen (16:08-16:14): Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.

Nate Erskine-Smith (16:14-16:25): Exactly. Which people, I want to get to that because C-63 gets us a large part of the way there. I think so. And yet has been sort of like cast aside.

Taylor Owen (16:25-17:39): Exactly. Let's touch on that. But I do think what you described is the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it's the thing that's infuriating about digital policy, is that you can't do one thing. There's no – the digital economy and our digital lives are so vast, and the incentives and the effect they have on society so broad, that there's no one solution. So anyone who tells you fix privacy policy and you'll fix all the digital problems we just talked about is full of it. Anyone who says competition policy, like break up the companies, will solve all of these problems is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They've begun to build a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So like you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things all over.

Nate Erskine-Smith (17:39-17:41): Especially in minority parliaments, short periods of time, legislatively.

Taylor Owen (17:41-18:20): Different countries have taken different pieces of it.
Now, on the online harms piece, which is what the previous government took really seriously, and I think it's worth putting a point on that, right, that when we talked last was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission, right? Like a lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and trying to learn from what Europe, Australia and the UK had all done. And we kind of took the benefit of being late, right? So they were all ahead of us.

Taylor (18:21-18:25): People you work with on that grant committee. We're all quick and do our own consultations.

Taylor Owen (18:26-19:40): Exactly. And like the model that was developed out of that, I think, was the best model of any of those countries. And it's now seen internationally, interestingly, as the new sort of milestone that everybody else is building on, right? And what it does is it says if you're going to launch a digital product, right, like a consumer-facing product in Canada, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about, or you've decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content, right? Like broad categories that we've said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong, so let's, for example, use a tangible example. Let's say you are a social media platform and you are launching a product that's going to be used by kids and it allows adults to contact kids without parental consent or without kids opting into being a friend. What could go wrong with that?

Nate Erskine-Smith (19:40-19:40): Yeah.

Taylor (19:40-19:43): Like what could go wrong? Yeah, a lot could go wrong.

Taylor Owen (19:43-20:27): And maybe strange men will approach teenage girls. Maybe, right? Like if you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it, right? Like you put a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used so that we can monitor, publics and researchers can monitor, whether that mitigation strategy worked. That's it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was like a widespread problem of teenage girls being harassed by strange older men.

Taylor (20:28-20:29): Incredibly creepy.

Taylor Owen (20:29-20:37): A very easy, but not like a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.

Taylor (20:37-20:41): And this kind of mechanism would have just filtered out.

Taylor Owen (20:41-20:51): Default settings, right? And thinking a bit before you launch a product in a country about what kind of broad risks might emerge when it's launched, and being held accountable for doing that.

Nate Erskine-Smith (20:52-21:05): Yeah, I quite like the – I mean, maybe you've got a better read of this, but in the UK, California has pursued this.
I was looking at recently, Elizabeth Denham is now the Jersey Information Commissioner or something like that.

Taylor Owen (21:05-21:06): I know. It's just, yeah.

Nate Erskine-Smith (21:07-21:57): I don't random. I don't know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the Information Commissioner in the UK. And she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach, in that even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids use it. And just forcing companies to ensure that the default settings are prioritizing child safety, so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto. Of course we've prioritized consumer safety first and foremost. But in the online world, it's like an afterthought.

Taylor Owen (21:58-24:25): Well, when you say consumer safety, it's worth referring back to what we mean. Like a duty of care can seem like an obscure concept. But it's a real thing, right? Like you walk into a store. I walk into your office. I have an expectation that the bookshelves aren't going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? Like that is a duty of care that you have for me when I walk into your public space or private space. Like that's all we're talking about here. And the age-appropriate design code, yes, was sort of developed and implemented by a Canadian in the UK. And it also was embedded in the Online Harms Act, right? If we'd passed that last year, we would be implementing an age-appropriate design code as we speak, right? What that would say is any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? Because we think kids don't have the same rights as adults; we have different duties to protect kids than adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content, right? Like kind of things that – Seem obvious. And if you're now a child in the UK and you go on a digital product, you are safer because you have an age-appropriate design code governing your experience online. Canadian kids don't have that because that bill didn't pass, right? So like there's consequences to this stuff. And I get really frustrated now when I see the conversation sort of pivoting to AI, for example, right? Like all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world, which are probably real, right? Like not discounting its power. And just move on from all of these problems and solutions that have been developed for a set of challenges that still exist on social platforms – like they haven't gone away, people are still using these tools and the harms still exist – and probably are applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done, to the people working in this space and like the wide stakeholders in this country who care about this stuff and working on it – it just, it feels like, you say deja vu at the beginning, and it is deja vu, but it's kind of worse, right?
Cause it's like deja vu and then ignoring the

Taylor (24:25-24:29): five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not even

Taylor Owen (24:29-24:41): Well, yeah. I mean, hopefully – I'm actually optimistic, I would say, that we will, for a few reasons. Like one, citizens want it, right? Like.

Nate Erskine-Smith (24:41-24:57): Yeah, I was surprised on the – so you mentioned there that the rules that we design, the risk assessment framework really applied to social media, could equally be applied to deliver AI safety, and it could be applied to new technology in a useful way.

Taylor Owen (24:58-24:58): Some elements of it. Exactly.

Nate Erskine-Smith (24:58-25:25): I think AI safety is a broad bucket of things. So let's get to that a little bit, because I want to pull the pieces together. So I had a constituent come in the office and he is really like super mad. He's super mad. Why is he mad? Does that happen very often? Do people be mad when they walk into this office? Not as often as you think, to be honest. Not as often as you think. And he's mad because he believes Mark Carney ripped him off.

Taylor Owen (25:25-25:25): Okay.

Nate Erskine-Smith (25:25-26:36): Okay. Yep. He believes Mark Carney ripped him off, not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online: Mark Carney told him to invest money. He invested money and he's out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This was obviously a scam. Like how could you have been deceived? But then I go and I watch the video. And it is – okay, I'm not going to send the 200 bucks, and I've grown up with the internet, but I can see how... Absolutely. In the same way phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI generated. So we are going to see rampant fraud. If we aren't already, we are going to see many challenges with respect to AI safety. Over and above the risk assessment piece, what do we do to address these challenges?

Taylor Owen (26:37-27:04): So that is a huge problem, right? Like AI video fraud is a huge challenge. In the last election, when we were monitoring the last election, by far the biggest problem or vulnerability of the election was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and morph them into conversations about investment strategies.

Taylor (27:05-27:07): And it was driving people to a crypto scam.

Taylor Owen (27:08-27:11): But it was torquing the political discourse.

Taylor (27:11-27:11): That's what it must have been.

Taylor Owen (27:12-27:33): I mean, there's other cases of this, but that's probably – and it was running rampant on particularly Meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election, right? And it's not like the end of the world, right? Like nobody – but it torqued our political debate. It ripped off some people. And these kinds of scams are –

Taylor (27:33-27:38): It's clearly illegal. It's clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?

Taylor Owen (27:38-27:54): So I think there's probably an Elections Canada response to this that's needed. And it's fraud. And it's fraud, absolutely.
So what do you do about that, right? And the head of the Canadian Banking Association said there's like billions of dollars in AI-based fraud in the Canadian economy right now. Right? So it's a big problem.Taylor27:54-27:55Yeah.Taylor Owen27:55-28:46I actually think there's like a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, right? And then you add fraud and AI scams as a category of harm. And all of a sudden, if you're meta and you are operating in Canada during an election, you'd have to do a risk assessment on like AI fraud potential of your product. Responsibility for your platform. And then it starts to circulate. We would see it. They'd be called out on it. They'd have to take it down. And like that's that, right? Like so that we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years, these like only harms risk assessment models and bringing in some of the consumer facing AI, both products and related harms into the framework.Nate Erskine-Smith28:47-30:18To put it a different way, I mean, so this is years ago now that we had this, you know, grand committee in the UK holding Facebook and others accountable. This really was creating the wake of the Cambridge Analytica scandal. And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability and saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument, which is they completely acknowledged the need for them to take action when it came to child pornography. And so they said, yeah, well, you know, no liability for us. But of course, there can be liability on this one specific piece of content and we'll take action on this one specific piece of content. And it always struck me from there on out. I mean, there's no real intellectual consistency here. It's more just what should be in that category of things that they should take responsibility for. And obviously harmful content like that should be – that's an obvious first step but obvious for everyone. But there are other categories. Fraud is another one. When they're making so much money, when they are investing so much money in AI, when they're ignoring privacy protections and everything else throughout the years, I mean, we can't leave it up to them. And setting a clear set of rules to say this is what you're responsible for and expanding that responsibility seems to make a good amount of sense.Taylor Owen30:18-30:28It does, although I think those responsibilities need to be different for different kinds of harms. Because there are different speech implications and apocratic implications of sort of absolute solutions to different kinds of content.Taylor30:28-30:30So like child pornography is a great example.Taylor Owen30:30-31:44In the Online Harms Bill Act, for almost every type of content, it was that risk assessment model. But there was a carve out for child sexual abuse material. So including child pornography. And for intimate images and videos shared without consent. It said the platforms actually have a different obligation, and that's to take it down within 24 hours. And the reason you can do it with those two kinds of content is because if we, one, the AI is actually pretty good at spotting it. It might surprise you, but there's a lot of naked images on the internet that we can train AI with. 
So we're actually pretty good at using AI to pull this stuff down. But the bigger one is that we are, I think, as a society, it's okay to be wrong in the gray area of that speech, right? Like if something is like debatable, whether it's child pornography, I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story, right? Like we do not want to suppress and over index for that gray area on hate speech because that's going to capture a lot of reasonable debate that we probably want.Nate Erskine-Smith31:44-31:55Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category where it's, you know, very obviously illegal.Taylor Owen31:55-32:02And that mechanism is like a takedown mechanism, right? Like if we see fraud, if we know it's fraud, then you take it down, right? Some of these other things we have to go with.Nate Erskine-Smith32:02-32:24I mean, my last question really is you pull the threads together. You've got these different pieces that were introduced in the past. And you've got a government that lots of similar folks around the table, but a new government and a new prime minister certainly with a vision for getting the most out of AI when it comes to our economy.Taylor32:24-32:25Absolutely.Nate Erskine-Smith32:25-33:04You have, for the first time in this country, an AI minister, a junior minister to industry, but still a specific title portfolio and with his own deputy minister and really wants to be seized with this. And in a way, I think that from every conversation I've had with him that wants to maximize productivity in this country using AI, but is also cognizant of the risks and wants to address AI safety. So where from here? You know, you've talked in the past about sort of a grander sort of tech accountability and sovereignty act. Do we do piecemeal, you know, a privacy bill here and an AI safety bill and an online harms bill and we have disparate pieces? What's the answer here?Taylor Owen33:05-34:14I mean, I don't have the exact answer. But I think there's some like, there's some lessons from the past that we can, this government could take. And one is piecemeal bills that aren't centrally coordinated or have no sort of connectivity between them end up with piecemeal solutions that are imperfect and like would benefit from some cohesiveness between them, right? So when the previous government released ADA, the AI Act, it was like really intention in some real ways with the online harms approach. So two different departments issuing two similar bills on two separate technologies, not really talking to each other as far as I can tell from the outside, right? So like we need a coordinating, coordinated, comprehensive effort to digital governance. Like that's point one and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to that he or that office could be that role. Like you could – because AI is – it's cross-cutting, right? Like every department in our federal government touches AI in one way or another. And the governance of AI and the adoption on the other side of AI by society is going to affect every department and every bill we need.Nate Erskine-Smith34:14-34:35So if Evan pulled in the privacy pieces that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C27 will probably come back. 
If he pulls in the online harms pieces that aren't related to the criminal code and drops those provisions, says, you know, Sean Frazier, you can deal with this if you like. But these are the pieces I'm holding on to.Taylor Owen34:35-34:37With a frame of consumer safety, right?Nate Erskine-Smith34:37-34:37Exactly.Taylor Owen34:38-34:39If he wants...Nate Erskine-Smith34:39-34:54Which is connected to privacy as well, right? Like these are all... So then you have thematically a bill that makes sense. And then you can pull in as well the AI safety piece. And then it becomes a consumer protection bill when it comes to living our lives online. Yeah.Taylor Owen34:54-36:06And I think there's an argument whether that should be one bill or whether it's multiple ones. I actually don't think it... I think there's cases for both, right? There's concern about big omnibus bills that do too many things and too many committees reviewing them and whatever. that's sort of a machinery of government question right but but the principle that these should be tied together in a narrative that the government is explicit about making and communicating to publics right that if if you we know that 85 percent of canadians want ai to be regulated what do they mean what they mean is at the same time as they're being told by our government by companies that they should be using and embracing this powerful technology in their lives they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear and might take their... Why should I use this thing? When I'm seeing some harms, I don't see you guys doing anything about these harms. And I'm seeing some potential real downside for me personally and my family. So even in the adoption frame, I think thinking about data privacy, safety, consumer safety, I think to me, that's the real frame here. It's like citizen safety, consumer safety using these products. Yeah, politically, I just, I mean, that is what it is. It makes sense to me.Nate Erskine-Smith36:06-36:25Right, I agree. And really lean into child safety at the same time. Because like I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go, these are the default settings on the parental controls.Taylor36:25-36:28I want to turn to government and go, do your damn job.Taylor Owen36:28-36:48Or just like make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's going to always be a place with sort of content that I would prefer he doesn't see. But I would just like some basic safety standards on that thing. So he's not seeing the worst of the worst.Nate Erskine-Smith36:48-36:58And we should expect that. Certainly at YouTube with its promotion engine, the recommendation function is not actively promoting terrible content to your 12 year old.Taylor Owen36:59-37:31Yeah. That's like de minimis. Can we just torque this a little bit, right? So like maybe he's not seeing content about horrible content about Charlie Kirk when he's a 12 year old on YouTube, right? Like, can we just do something? And I think that's a reasonable expectation as a citizen. But it requires governance. 
That will not – and that's – it's worth putting a real emphasis on that is one thing we've learned in this moment of repeated deja vus going back 20 years really since our experience with social media for sure through to now is that these companies don't self-govern.Taylor37:31-37:31Right.Taylor Owen37:32-37:39Like we just – we know that indisputably. So to think that AI is going to be different is delusional. No, it'll be pseudo-profit, not the public interest.Taylor37:39-37:44Of course. Because that's what we are. These are the largest companies in the world. Yeah, exactly. And AI companies are even bigger than the last generation, right?Taylor Owen37:44-38:00We're creating something new with the scale of these companies. And to think that their commercial incentives and their broader long-term goals of around AI are not going to override these safety concerns is just naive in the nth degree.Nate Erskine-Smith38:00-38:38But I think you make the right point, and it's useful to close on this, that these goals of realizing the productivity possibilities and potentials of AI alongside AI safety, these are not mutually exclusive or oppositional goals. that it's you create a sandbox to play in and companies will be more successful. And if you have certainty in regulations, companies will be more successful. And if people feel safe using these tools and having certainly, you know, if I feel safe with my kids learning these tools growing up in their classrooms and everything else, you're going to adoption rates will soar. Absolutely. And then we'll benefit.Taylor Owen38:38-38:43They work in tandem, right? And I think you can't have one without the other fundamentally.Nate Erskine-Smith38:45-38:49Well, I hope I don't invite you back five years from now when we have the same conversation.Taylor Owen38:49-38:58Well, I hope you invite me back in five years, but I hope it's like thinking back on all the legislative successes of the previous five years. I mean, that'll be the moment.Taylor38:58-38:59Sounds good. Thanks, David. Thanks. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca
A commentary by Norbert Häring. Is Digitalcourage an advocate for civil rights and democracy, or a Trojan horse of the powerful? For a long time I wavered in my judgment. With an appeal to save the EU's Digital Services Act, the organization's leadership shows its true colors. With it, a whole series of co-signatories reveal a questionable understanding of democracy and an at least ambivalent role. In an email newsletter of September 17, Digitalcourage demands: "'Freedom for facts': defend the DSA now. Twitter, Instagram, YouTube and TikTok incite us against one another and profit from it. Posts full of hate and agitation spread faster, and those are exactly what Musk, Zuckerberg and co. promote, while facts sink. The algorithms behind this are unknown, but they distort the public sphere here in Europe too, and in doing so destroy our democratic societies. The European Union's Digital Services Act (DSA) obliges the large platforms to disclose their algorithms. It also gives the Commission the power to act against algorithms that contribute to the spread of false information. An important step toward making our democracy internet-proof. That is precisely why it is a thorn in the side of the US tech-bro companies and President Trump. Since Trump began threatening us with high tariffs, the Commission unfortunately seems ready for anything. Instead of negotiating from strength, we fear that the EU will cave in and agree to sacrifice our protection in order to appease Trump and his tech bros. A strong signal from politics and society is needed now to show the Commission: the DSA protects our democracies! The DSA enables freedom for facts! The DSA is an example of successful EU policy. The Green politician Alexandra Geese has therefore started a petition; Digitalcourage, Markus Beckedahl, Shoshana Zuboff and Wikimedia Deutschland support it as first signatories. Together we demand of the European Commission: enforce the DSA, audit the algorithms, and stop the disinformation! For that we need you. Sign Alexandra Geese's petition and fight with us for real freedom of expression. So that we decide what we see, not Zuckerberg, Musk and Trump. The petition can be found here. And because we consider the issue so important, we have also signed an open letter sent to Ursula von der Leyen by a broad international alliance. Here is the letter (original English version):" As someone who has studied the Digital Services Act intensively, I consider it very unlikely that knowledgeable people like those at Digitalcourage could, out of sheer ignorance, praise and defend this law for suppressing unwelcome opinions and information as a rescue for freedom of expression and a protection of democracy. Not everything that Donald Trump, J.D. Vance and Elon Musk agitate against is good simply for that reason.
Greg Lukianoff, president of the Foundation for Individual Rights and Expression, joins The Winston Marshall Show for a sweeping conversation on free speech, censorship, and the dangers of state overreach. Lukianoff warns that Britain is sliding into authoritarianism, citing the Online Safety Act, the arrest of comedian Graham Linehan for "offensive tweets," and the thousands of citizens detained each year for speech crimes. He explains why Americans should be alarmed, not only because of cultural ties, but because UK and EU laws like the Digital Services Act now risk exporting censorship to the United States. They discuss the chilling effect of "non-crime hate incidents," the hypocrisy of politicians who ignore Islamist extremism while cracking down on online speech, and why Big Tech is tripping over itself to appease Brussels bureaucrats. Lukianoff contrasts this with the Trump administration's free speech battles on university campuses, where anti-Semitism and harassment rationales are being used to justify speech codes. They also dive into the deeper history: Lenin, Marx, communism, and why the West never reckoned with the crimes of socialism, leaving us vulnerable to a new wave of ideological totalitarianism. All this, plus Britain's speech crisis, EU overreach, Trump and the universities, and the global struggle to preserve freedom of expression in the 21st century.
Chapters:
00:00 - Introduction & Guest Welcome
02:49 - Graham Linehan's Arrest and Free Speech in the UK
05:08 - Arrests for Offensive Speech in the UK
07:34 - Non-Crime Hate Incidents and Social Attitudes
10:03 - Malcolm Gladwell, Trans Issues, and Chilling Effects
14:03 - The Online Safety Act and Censorship of Americans
17:04 - Why Americans Care About British Free Speech
19:14 - The EU Digital Services Act and Global Censorship
22:20 - Protecting Children Online: Policy and Parental Controls
28:41 - Free Speech in America: Trump, Wokeism, and Ideology
32:31 - Communism, Fascism, and Totalitarianism: Historical Parallels
36:30 - Reckoning with Socialism and Communism
39:30 - Karl Popper, Tolerance, and Hate Speech Laws
42:30 - Trump Administration, Campus Speech Codes, and Harassment
45:32 - Political Correctness and Speech Codes in Higher Ed
51:02 - Harvard, Civil Rights Act, and Federal Oversight
55:56 - The Right Way to Reform University Funding
58:52 - Free Speech for Pro-Palestinian Students and Deportation
1:10:19 - Why Harvard Ranks Lowest for Free Speech
1:13:21 - FIRE's New Books and Campus Free Speech Rankings
1:13:56 - Closing Remarks
Gold & Silber | A podcast for investors, crisis preppers, and collectors | Kettner-Edelmetalle
Freedom of expression in Germany is under massive pressure. House searches at dawn, thousands of criminal complaints against citizens and journalists, state-funded NGOs acting as an extended arm of politics, and on top of that the Digital Services Act from Brussels, which cements censorship across Europe. Media lawyer Joachim Steinhöfel reports exclusively on 16 cases won against the federal government in 2024, and on why the fight for our fundamental rights is now becoming harder than ever.
After the US State Department called on diplomats to take action against EU digital laws, US sources are now reporting on possible sanctions against EU politicians. The White House is withholding information.
Anna Bicker, heise online editor-in-chief Dr. Volker Zota, and Malte Kirchner discuss, among other things, the following topics in this edition of the #heiseshow: - Klickibunti: How Windows 95 shaped the modern PC. Thirty years ago, Microsoft revolutionized the PC world with Windows 95 and laid the groundwork for modern computing. What made Windows 95 so groundbreaking? Which of its design principles still shape our computers today? And how did the operating system change the way we use computers? - That smells like trouble: The US government wants to sanction EU regulators. The Trump administration is reportedly planning sanctions against EU officials over the Digital Services Act. How seriously should this threat be taken? What effects would a trade dispute between the US and the EU have on tech regulation? And can the EU assert its digital sovereignty against US pressure? - Call chaos: Are automated emergency calls paralyzing dispatch centers? Smart devices and automated systems are increasingly overloading the emergency-call infrastructure with false alarms. How big is the problem of automated emergency calls really? What technical solutions exist to distinguish genuine emergencies from false alarms? And how can dispatch centers cope with the growing number of connected devices? Also on board again: a nerd birthday, the WTF of the week, and tricky quiz questions.
President Donald Trump called out countries trying to regulate U.S. tech companies earlier this week, warning they could face new tariffs. The White House has struck a provisional trade deal with the European Union, but tensions remain over the bloc's sweeping tech laws, like the Digital Services Act, which requires platforms to moderate illegal content and disinformation.
In this urgent update, we break down the political firestorm as JD Vance and Donald Trump confront the UK and EU over digital privacy and censorship. We'll cover the forced withdrawal of the UK's demand for iPhone backdoors, the potential for unprecedented US sanctions and travel bans on foreign officials, and the escalating battle against what they call the "Orwellian" Online Safety Act and Digital Services Act. This episode dives into the real-world implications of these policies, the threat of central bank digital currencies (CBDCs), and the broader fight for free speech against global tech regulation.
Basic freedoms are under threat in the Western world, as the EU's Digital Services Act and the UK's Online Safety Act raise concerns about Orwellian-style censorship: regulations that could extend across the Atlantic, impacting American citizens and businesses alike.
Elon Musk is taking his beef with Sam Altman AND Tim Apple to court. The Trump administration could sanction EU officials over the Digital Services Act. Spotify has added DMs. And Ben Thompson's deep analysis of the whole Intel situation.
Links:
Elon Musk's xAI sues Apple and OpenAI over AI competition, App Store rankings (Reuters)
Exclusive: Trump administration weighs sanctions on officials implementing EU tech law, sources say (Reuters)
Spotify is adding DMs (The Verge)
Attorneys General To AI Chatbot Companies: You Will 'Answer For It' If You Harm Children (404 Media)
Intel says Trump deal has risks for shareholders, international sales (CNBC)
U.S. Intel (Stratechery)
President Trump dismisses Federal Reserve governor Lisa Cook over alleged mortgage loan improprieties, but Cook says Trump has no grounds to remove her. In France, Prime Minister François Bayrou says he will put his budget cuts to a confidence vote in parliament; the three main opposition parties have already vowed to oppose his measures. And President Trump has threatened further tariffs on countries that tax or regulate American tech companies, and his administration is also considering sanctions on EU or member state officials for implementing the Digital Services Act.
As the U.S. and EU celebrate a historic trade agreement, a different restriction comes into focus: the EU's sweeping Digital Services Act. In this encore episode of Morning Wire, Jeremy Tedesco from the Alliance Defending Freedom joins us to explain the unprecedented foreign censorship pressure applied to U.S. tech companies and everyday Americans. Get the facts first on Morning Wire.
This is a free preview of a paid episode. To hear more, visit www.racket.news. The European Commission holds a seminar on implementation of its Digital Services Act, offering a disturbing window into a tightening speech landscape. Narrated by Jared Moore.
Is free speech in global decline? A new survey suggests public support for free expression is dropping worldwide, with citizens in authoritarian countries like Venezuela and Hungary showing stronger commitment to free speech than many living in democracies.From the unfulfilled digital promises of the Arab Spring to Europe's controversial Digital Services Act, the Internet hasn't necessarily delivered the free speech revolution many predicted. Americans under 30 are less committed to free speech principles than previous generations, while both of the U.S.'s major political parties face accusations of using government power to control information.As AI reshapes how we communicate and governments worldwide rethink speech regulations, what does this mean for the future of human expression? Are we witnessing a fundamental shift in how societies value free speech, or simply recycling ancient debates in digital form?Evan is joined by Jacob Mchangama, Executive Director of The Future of Free Speech at Vanderbilt, and author of Free Speech: A History From Socrates to Social Media, and Ashkhen Kazaryan, Senior Legal Fellow at The Future of Free Speech. Previously, she was the lead for North and Latin America on the content regulation team at Meta.
In this special episode of "Ganz Offen gesagt," host Stefan Lassnig takes listeners along on a press trip to the European Parliament in Brussels. Together with his guests Lena Schilling, Helmut Brandstätter, Hannes Heide, and Lukas Mandl, he examines the acute dangers to press freedom in Europe: from the influence of big US tech companies, to the growing propaganda channels of far-right groups, to the loss of independent media as a pillar of democracy.
Selected quotes from this episode:
• Lena Schilling: "But yes, in 2025 we need a European Union that safeguards press freedom in the nation states, because nation-state tendencies, for example in Hungary or Bulgaria, are unfortunately heading in a direction that gives me a serious stomach ache."
• Helmut Brandstätter: "And when Mr. Vance then comes to Europe and says that freedom of expression is restricted, that is plainly and simply false. When the American president no longer admits certain journalists, then freedom of expression is restricted. When he praises media, radio and television stations that clearly carry Russian propaganda, then freedom of expression is restricted."
• Hannes Heide: "And then there is the Digital Services Act, which has set itself the goal of, and I'll really put it this way, ending the Wild West on the internet."
• Lukas Mandl: "Tendencies that call press freedom into question or even want to hold it back, that try to suppress or even abolish free media, are clearly on the rise. You have to see this with a clear eye on the worldwide situation. And in Europe, too, there are tendencies that are cause for concern."
Links for this episode: World Press Freedom Day: EU declaration 2025; EU Media Freedom Act & protection for journalists; Digital Services Act: what has applied since 2024
Podcast recommendation of the week: 2040 - So schön kann die Zukunft sein
Our host, Corbin Barthold, heads over to So to Speak: The Free Speech Podcast. He talks with Nico Perrino and Ari Cohn of FIRE about the death of the AI moratorium, a bogus wrongful death lawsuit against Character AI, the FTC's campaign of censorship against ad agencies, and the absurdities of Europe's Digital Services Act.
Links:
Shownotes, courtesy of So to Speak: The Free Speech Podcast
A lecture by criminal law scholar Johanna Rinceanu. Moderation: Katrin Ohlendorf ********** The net has become the central place for social and political discourse. It is therefore also subject to the structures and dynamics of the internet, above all of the social media platforms. What does that mean for freedom of expression? Johanna Rinceanu is a criminal law scholar and senior researcher at the Max Planck Institute for the Study of Crime, Security and Law. She gave her lecture "Freedom of expression in the digital world?" on May 13, 2025, as part of the series "The Constitution of Freedom: Problems of Democracy in the Present" at the Hamburg Institute for Social Research. ********** In this Hörsaal episode you'll hear: 00:02:19 - start of the lecture ********** Sources from the episode: The Digital Services Act (DSA); Rinceanu, Johanna, & Stephenson, Randall (2022). Eine Diagnose digitaler Krankheiten. MaxPlanckForschung, 2022(3), 14-19. ********** More on the topic at Deutschlandfunk Nova: AI in the democratic constitutional state: the Digital Services Act and robo-judges; TikTok: EU politics still missing from the platform; Hong Kong: China could strip Hong Kong of its autonomy
Luis Paulitsch is a lawyer, contemporary historian, and media ethicist; he works for the Austrian Datum foundation, and he has written a book titled "Alternative Medien: Definition, Geschichte und Bedeutung"* In it: Markus Linden: The rise of the mosaic right, negative publicity and the precarious future of democracy; LTO: Federal Administrative Court (BVerwG) overturns the Compact ban; Digital Services Act; Reuters Digital News Report *Affiliate link: Whoever […]
We're checking in on the latest news in tech and free speech. We cover the state AI regulation moratorium that failed in Congress, the ongoing Character AI lawsuit, the Federal Trade Commission's consent decree with Omnicom and Interpublic Group, the European Union's Digital Services Act, and what comes next after the Supreme Court's Free Speech Coalition v. Paxton decision.
Guests:
Ari Cohn - lead counsel for tech policy, FIRE
Corbin Barthold - internet policy counsel, TechFreedom
Timestamps:
00:00 Intro
02:38 State AI regulation moratorium fails in Congress
20:04 Character AI lawsuit
41:10 FTC, Omnicom x IPG merger, and Media Matters
56:09 Digital Services Act
01:02:43 FSC v. Paxton decision
01:10:49 Outro
Show notes:
"The AI will see you now" Paul Sherman (2025)
Megan Garcia, plaintiff, v. Character Technologies, Inc. et al., defendants, United States District Court (2025)
Proposed amicus brief in support of appeal - Garcia v. Character Technologies, Inc. FIRE (2025)
"Amplification and its discontents: Why regulating the reach of online content is hard" Daphne Keller (2021)
"Omnicom Group/The Interpublic Group of Co." FTC (2025)
As the EU's sweeping Digital Services Act takes effect, U.S. tech companies, and American free speech, face unprecedented foreign censorship pressure. Jeremy Tedesco from the Alliance Defending Freedom joins us to explain. Get the facts first with Morning Wire.
Lectures by business law scholar Henrike Weiden and judge Sebastian Dötterl. Moderation: Katja Weber ********** What can and should the Digital Services Act do? How can it become productive for us? Legal scholar Henrike Weiden lays that out. And: is it conceivable and sensible to work with AI systems in the courtroom? Judge Sebastian Dötterl poses, and answers, that question. ********** Henrike Weiden teaches commercial private law and the law of digitalization at the Munich University of Applied Sciences. She chairs the advisory board of the Coordination Office for Digital Services at the Bundesnetzagentur (Federal Network Agency). She gave her lecture, "Platform economy and power: how effective is the Digital Services Act?", on April 25, 2025, at the International For..Net Symposium "AI in the democratic constitutional state." The conference is organized annually by the TUM Center for Digital Public Services (CDPS) and the Bavarian Research Institute for Digital Transformation (bidt). After her, Sebastian Dötterl spoke; he is a judge at the Munich Higher Regional Court and teaches at Ludwig-Maximilians-Universität München, among other things on artificial intelligence and law. His lecture is titled "Judicial co-creation with AI: better adjudication or a constitutional nightmare?" ********** In this Hörsaal episode you'll hear: 02:19 - start of Henrike Weiden's lecture: the DSA in theory; 12:54 - the DSA in practice; 17:04 - how do we want to interpret the DSA?; 25:35 - start of Sebastian Dötterl's lecture: in the courtroom with judge Regina Richtig; 28:31 - starting point: strengths and weaknesses of humans and AI; 47:18 - how do we get to better law? ********** Sources from the episode: Digital Services Coordinator (Bundesnetzagentur); complaints portal for consumers; how the foundational seminar "Künstliche Intelligenz und Recht" aims to strengthen students' awareness of AI competencies (interview) ********** More on the topic at Deutschlandfunk Nova: Machine love: intimacy in the age of artificial intelligence; Examination law: why the use of AI in exams should be allowed; AI in medicine: human or machine
Jakub Szymik, Founder of CEE Digital Democracy Watch, discusses the EU's ongoing initiatives to fight disinformation, regulate political advertising, and protect election integrity on social media. Jakub shares his insights on the Digital Services Act, the European Democracy Shield, and how these initiatives relate to digital political advertising on social media. We also discuss the latest Polish presidential election in terms of digital ad spending, Gen AI content, and political influencers.
Links:
CEE Digital Democracy Watch has just published a new report outlining how 60 stakeholders from CEE countries perceive the effectiveness of EU tech regulation and what challenges remain moving forward.
Read more about CEE Digital Democracy Watch's proposed policy actions for the European Democracy Shield, including the disinformation monitoring service mentioned in the episode.
In this packed Friday edition of The Daily Herold, Jon Herold tackles the controversy around James Comey's now-deleted "8647" Instagram post, which the Trump administration is investigating as a potential call for assassination. Jon weighs the possible meanings, whether it was an actual coded threat or just another case of elite hubris, and reacts to Trump's own fiery response. Next, Herold dives into breaking developments within the FBI as Kash Patel reveals the agency is abandoning the Hoover Building and decentralizing operations across the U.S., a move Trump hinted at months earlier. He connects the dots between the shake-up and broader efforts to purge corruption, including the shutdown of the FBI's public corruption unit that once helped launch the Arctic Frost probe into Trump. Other highlights include UnitedHealth's dramatic collapse amid federal fraud investigations, Trump's post-Middle East warning to 150 countries about pending tariffs, and the ongoing GOP failure to pass the "Big Beautiful Bill." Jon also covers the EU's Digital Services Act and the Trump administration's pushback against global censorship laws. Blending breaking news, geopolitical analysis, and unfiltered commentary, this episode captures the chaos and clarity of a movement reclaiming power one headline at a time.
So even the people who follow the topic closely are stunned by the digital landscape that engulfs our children, how quickly it evolves, and the potential social cost. Two people in a unique position to explain all this are our guests today, Jeffrey Chester and Kathryn Montgomery, both from the Center for Digital Democracy. Jeff is executive director of the Center, and Kathryn is its research director and senior strategist, as well as professor emerita of communication at American University. Jeff and Kathryn have been pioneers in this work and have been uniquely strong voices for protecting children.
Interview Summary
Let me congratulate the two of you for being way ahead of your time. The two of you, through your research and your advocacy and your organizational work, were onto these things way before most people were. I'm really happy that you're joining us today, and welcome to our podcast. Kathryn, let me begin with you. Why be concerned about this digital landscape?
Kathryn - Well, certainly if we're talking about children and youth, we have to pay attention to the world they live in. And it's a digital world, as I think any parent knows, and everybody knows. In fact, for all of us, we're living in a digital world. Young people are living their lives online. They're using mobile phones and mobile devices all the time. They're doing online video streaming. They carry on their communications with their peers online. Their entire lives are completely integrated into this digital media landscape, and we must understand it. Certainly, the food and beverage industry understands it very well. And it has figured out enormously powerful ways to reach and engage young people through these digital media.
You know, the extent of kids' connection to this is really remarkable. I just finished, a few minutes ago, recording a podcast with two people involved with the Children and Screens organization. Chris Perry, who's the executive director of that organization, and Dmitri Christakis, who was with us as well, were saying that kids sometimes check their digital media 300 times a day. I mean, it's just unbelievable how much of this there is. There are a lot of reasons to be concerned. Let's turn our attention to how bad it is, what companies are doing, and what might be done about it. So, Jeff, tell us, if you would, about the work of the Center for Digital Democracy.
Jeff - Well, for more than a quarter of a century, we have tracked the digital marketplace. As you said at the top, we understood in the early 1990s that the internet, broadband, what's become today's digital environment, was going to be the dominant communications system, and that it required public interest rules and policies and safeguards. So as a result, one of the things our Center does is look at the entire digital landscape as best we can, especially what the ultra-processed food companies are doing, but including Google and Meta and Amazon and GenAI companies. We are tracking what they're doing, how they're creating the advertising, what their data strategies are, what their political activities are in the United States and in many other places in the world. Because the only way we're going to hold them accountable is if we know what they're doing and what they intend to do. And just to quickly follow up, Kelly: the marketers call today's global generation of young people Generation Alpha, meaning that they are the first generation to be born into this complete digital landscape environment that we have created.
And they have developed a host of strategies to target children at the earliest ages to take advantage of the fact that they're growing up digitally.
Boy, pretty amazing - Generation Alpha. Kathryn, I have kind of a niche question I'd like to ask you, because it pertains to my own career as well. You spent many years as an academic studying and writing about these issues, but you were also a strong advocacy voice. How did you go about balancing the research and objectivity of an academic with the advocacy you were doing?
Kathryn - I think it really is rooted in my fundamental set of values about what it means to be an academic. I feel very strongly, and believe very strongly, that all of us have a moral and ethical responsibility to the public. The work we do should really, as I have always told my students, try to make the world a better place. It may seem idealistic, but I think that is our responsibility. And I've certainly been influenced in my own education by public scholars over the years who have played that very, very important role. It couldn't be more important today than it has been over the years. And particularly if you're talking about public health, I don't think you can be neutral. You can have systematic ways of assessing the impact of food marketing, in this case on young people. But I don't think you can be totally objective and neutral about the need to improve the public health of our citizens, and particularly the public health of our young people.
I agree totally with that. Jeff, let's talk about the concept of targeted marketing. We hear that term a lot. In the context of food, people talk about marketing aimed at children as one form of targeting, or toward children of color or people of color in general. But that's, in a way, technological child's play. I understand from you that there's much more precise targeting than a big demographic group like that. Tell us more.
Jeff - Well, certainly the ultra-processed food companies are on the cutting edge of using all the latest tools to target individuals in a highly personalized way. And if I have one message to share with your listeners and viewers, it is that if we don't act soon, we're going to make an already vulnerable group even more exposed to this kind of direct, targeted, and personalized marketing. Because what artificial intelligence allows the food and beverage companies and their advertising agencies and platform partners to do is to really understand who we are, what we do, where we are, how we react, behave, and think, and then target us accordingly, using all those elements in a system that can create this kind of advertising and marketing in minutes, if not eventually milliseconds. So all of marketing, in essence, will be targeted, because they know so much about us. You have an endless chain of relationships between companies like Meta, companies like Kellogg's, the advertising agencies, the data brokers, the marketing clouds, et cetera. Young people especially, and communities of color and other vulnerable groups, have never been more exposed to this kind of invasive, pervasive advertising.
Tell us how targeted it can be. Let's take an 11-year-old girl who lives in Wichita and a 13-year-old boy who lives in Denver. How much do the companies know about those two people as individuals? And how does the marketing get targeted to them, not because they belong to a big demographic group, but because of who they are as individuals?
Jeff - Well, they certainly are identified in various ways. The marketers know that there are young people in the household. They know that there are young people, parts of families, who have various media behaviors: they're watching these kinds of television shows, especially through streaming, or listening to music, or on social media. Those profiles are put together. And even when the companies say they don't exactly know who the child is, or aren't collecting information from someone under 13 because of the privacy law that we helped get enacted, they know where they are and how to reach them. So what you've had is an unlimited amassing of data power developed by the food and beverage companies in the United States over the last 25 years, because very little has been put in their way to stop them from what they do and plan to do.
So presumably you could get some act of Congress put in place to forbid the companies from targeting African American children, or something like that. But it doesn't sound like that would matter, because they're so much more precise in the marketing.
Jeff - Yes. In the first place, you couldn't get Congress to pass that. And this is the other thing to think about when you consider the food and beverage companies deploying generative AI and the latest tools. They've already established vast, what they call insights divisions, market research divisions, to understand our behavior. But now they're able to put all that on fast forward because of data processing, because of data clouds, let's say, provided by Amazon, and other kinds of tools. They're able to really work out how to sell to us individually, what new products will appeal to us individually, and even create the packaging and the promotion to be personalized. So what you're talking about is the need for a whole set of policy safeguards. But I certainly think that people concerned about public health need to think about regulating the role of generative AI, especially when it comes to young people, to ensure that they're not marketed to in the ways the industry in fact is doing and will continue to do.
Kathryn, what about the argument that it's a parent's responsibility to protect their children, and that government doesn't need to be involved in this space?
Kathryn - Well, as a parent, I have to say it is extremely challenging. We all do our best to try to protect our children from unhealthy influences, whether it's food or something that affects their mental health. That's a parent's obligation. That's what a parent spends a lot of time thinking about and trying to do. But this is an environment that is overwhelming. It is intrusive. It reaches into young people's lives in ways that make it virtually impossible for parents to intervene. These are powerful companies, and I'm including the tech companies, the retailers, and the ad agencies, as well as these global food and beverage companies. They're extremely powerful. As Jeff has been saying, they have engaged, and continue to engage, in enormous amounts of technological innovation and research to figure out precisely how to reach and engage our children. And it's too much for parents. I've been saying this for years. I've been telling legislators this. I've been telling the companies this. It's not fair. It's a very unfair situation for parents.
That makes perfect sense. Well, Jeff, your Center produces some very helpful and impressive reports.
And an example of that is work you've done on the vast surveillance of television viewers. Tell us more about that, if you would.
Jeff - Well, you know, you have to keep up with this, Kelly. The advocates in the United States, and the academics, with some exceptions, have largely failed to address the contemporary business practices of the food and beverage companies. What's going on now is not a secret. I mean, the generative AI stuff and the advanced data use is recent, but it is a continuum. And the fact is that we've been one of the few groups following it, because we care about our society, our democracy, our media system, et cetera. But so much more could be done here to track what the companies are doing, to identify the problematic practices, and to think about counter-strategies to try to bring change. So yes, we did this report on video streaming, because that is how television has now changed. Television is now part of the commercial surveillance advertising and marketing complex. Food and beverage companies are using the interactivity and the data collection of streaming television. And we're sounding the alarm, as we've been sounding it now for too long. But hopefully your listeners will start looking more closely at this digital environment, because if we don't intervene in the next few years, it'll be impossible to go back and protect young people.
So, when people watch television, they don't generally realize or appreciate the fact that information is being collected on them.
Jeff - The television watches you now. The streaming companies are watching you now. The device that brings you streaming television is watching you now, collecting all kinds of data. The streaming device can deliver personalized ads to you. They'll soon be selling you products in real time. And they're sharing that data with companies like Meta (Facebook), and with your local retailers like Albertsons, Kroger, et cetera. It's one big, huge digital data marketing machine that has been created. And the industry has been successful in blocking legislation, except for the one law we were able to get through in 1998. And now, under the Trump administration, they have free rein to do whatever they want. It's going to be an uphill battle. But I do think the companies are in a precarious position politically, if we could get more people focused on what they're doing.
Alright, we'll come back to that. My guess is that very few people realize the kind of thing you just talked about: that so much information is being collected on them while they're watching television. The fact that you and your Center are out there making people more aware, I think, is likely to be very helpful.
Jeff - Well, I appreciate that, Kelly, but I have to say, and I don't want to denigrate our work, I just follow the trades. There's so much evidence if you care about the media, if you care about advertising and marketing, or if you care, let's say, about Coca-Cola or Pepsi or Mondelez. Pick one; you can't miss all this stuff. It's all there every day. And the problem is that there has not been the focus. I blame the funders in part. There's not been the focus on this marketplace in its contemporary dimensions.
I'd like to ask you both about the legislative landscape and whether there are laws protecting people, especially children, from this marketing.
And Kathy, both you and Jeff were heavily involved in advocacy for a landmark piece of legislation that Jeff referred to from 1998, the Children's Online Privacy Protection Act. What did this act involve? And now that we're some years in, how has it worked?
Kathryn - Well, I always say I've been studying advertising in digital media since before people even knew there was going to be advertising in digital media. Because we're really talking about the earliest days of the internet, when it was being commercialized. But there was a public perception, promoted by the government, the industry, and a lot of other institutions and individuals, that this was going to be a whole new democratic system of technology. That basically it would solve all of our problems in terms of access to information and in terms of education. It would open up worlds to young people. In many ways it has, but they didn't really talk that much about advertising. Jeff and I, working together at the Center for Media Education, were already tracking what was going on in that marketplace in the mid-1990s, when it was very, very new. At which point children were already a prime target. They were digital kids. They were considered highly lucrative. "Cyber Tots" was one of the terms used by the industry. What we believed was that we needed to get some public debate and some legislation in place, some kinds of rules, to guide the development of this new commercialized media system. And so we launched a campaign that ultimately resulted in the passage of the Children's Online Privacy Protection Act. Now, it only governs commercial online and digital media that target children under the age of 13, which was the most vulnerable demographic group of young people. We believe protections are really very important for teenagers as well; there's a lot of evidence for that now, much more research, actually, showing their vulnerabilities. And it has required companies to take young people into account when developing their operations. It's had an impact internationally in a lot of other countries. It is just the barest minimum of what we need in terms of protections for young people. And we've worked with the Federal Trade Commission over the years to ensure that those rules were updated and strengthened so that they would apply to this evolving digital media system. But now, I believe, what we need is a more global advocacy strategy. And we are already doing that, working with advocates in other countries to develop a strategy to address the practices of this global industry. And there are some areas where we see promising movement. The UK, for example, passed a law that bans online advertising of unhealthy foods. It has not yet taken effect, but now it will, after some delays. And there are other things going on around ultra-processed foods and unhealthy foods and beverages.
So, Kathryn has partly answered this already, Jeff, but let me ask you. That act we've talked about goes back a number of years now. What's being done more recently on the legislative front? And perhaps more important than that, what needs to be done?
Jeff - Well, I have to say, Kelly, that when Joe Biden came in and we had a public interest chair at the Federal Trade Commission, Lina Khan, I urged advocates in the United States who are concerned about unhealthy eating to approach the Federal Trade Commission and begin a campaign to see what we could do. Because this was going to be the most progressive Federal Trade Commission we've had in decades.
And groups failed to do so, for a variety of reasons. So that window, where we might have been able to get the Federal Trade Commission to do something, has closed. There are people in the United States Congress, most notably Ed Markey, who sponsored our children's privacy law 25 years ago, trying to get legislation passed. But I think we have to look outside of the United States, as Kathryn said. Beyond the law in the United Kingdom, in the European Union there are rules governing digital platforms called the Digital Services Act. There are new European Union-wide safeguards on generative AI. Brazil has something similar. There are design codes, like the UK design code for young people. What we need to do is put together a package of strategies at the federal and perhaps even state level. And there's been some activity at the state level. You know, the industry has been opposed to that and has gone to court to fight any rules protecting young people online. But we can create a cutting-edge set of practices that could then be implemented here in the United States as part of a campaign. There are models. And how do the political parties break down on these issues? Kathryn - I was going to say: they break down. Jeff - The industry is still so powerful. You have bipartisan support for regulating social media when it comes to young people, because there have been so many incidences of suicide and stalking and other kinds of emotional and psychological harm to young people. You have a lot of Republicans who have joined with Democrats in Congress wanting to pass legislation. And there's some bipartisan support to expand the privacy rules and even to regulate online advertising for teens in our Congress. But it's been stymied, in part because the industry has such an effective lobbying operation. And I have to say that in the United States, the community of advocates and their supporters who would want to see such legislation is marginalized. They're underfunded. They're not organized. They don't have the research. It's a problem. Now, all these things can be addressed, and we should try to address them. But right now it's unlikely anything will pass in the next few months, certainly. Kathryn - Can I just add something? Because I think what's important now, in this really difficult period, is to begin building a broader set of stakeholders in a coalition. And as I said, I think it does need to be global. But on the research front, there's been a lot of really important research on digital food marketing, on the marketing of unhealthy foods and beverages to young people, in a number of different countries: in the UK, in Australia, and other places around the world. These scholars have been working together, and a lot of them are working with scholars here in the US, where we've seen an increase in that kind of research. And advocates need to work together as well to build a movement. It could be a resurgence that begins outside of our country but comes back in at the appropriate time, when we're able to garner the kind of support from our policymakers that we need to make something happen. That makes good sense, especially a global approach when it's hard to get things done here. Jeff, you alluded to the fact that you've done work specifically on ultra-processed foods. Tell us what you're up to on that front.
Jeff - As part of our industry analysis, we have been tracking what all the leading food and beverage companies are doing in terms of what they would call their digital transformation. I mean, Coca-Cola and Pepsi and Mondelez and Hershey and all the leading transnational processed food companies are now at the end of an intense period of restructuring to take advantage of the capabilities provided by digital data and analytics: further data collection, machine learning, and generative AI. And they are much more powerful, much more effective, much more adept. In addition, the industry structure has changed in the last few years, also because of digital data. New collaborations have been created between the platforms, let's say Facebook and YouTube; the food advertisers; their marketing agencies, which are now also data companies; and, most notably, the retailers, the grocery stores, and the supermarkets. They're all working together to share data and to collaborate on marketing and advertising strategies. So as part of our work we've kept abreast of all these things, and we're tracking them. And now we are sharing them with a group of advocates outside of the United States, supported by Bloomberg Philanthropies, to support their efforts. And they've already made tremendous progress in a lot of areas around healthy eating in countries like Mexico and Argentina and Brazil, et cetera. And I'm assuming all these technological advances and the marketing muscle the companies have are not being used to market broccoli and carrots and Brussels sprouts. Is that right? Jeff - The large companies are aware of changing attitudes and the need for healthy foods. But one quick takeaway I have is this: because the large ultra-processed food companies understand that there are political pressures promoting healthier eating in North America and in Europe, they are focused on expanding their unhealthy food portfolios into new regions, specifically Asia Pacific, Africa, and Latin America. And China is a big market for all this. This is why it has to be a global approach here, Kelly. In the first place, these are transnational corporations. They are creating their marketing strategies at the global level and then transmitting them down to be tailored at the national or regional level. They're coming up with a single set of strategies that will affect every country and every child in those countries. We need to keep track of that and figure out ways to go after it. And there are global tools we might be able to use to try to protect young people. Because if you can protect a young person in China, you might also be able to protect them here in North Carolina. This all sounds potentially pretty scary, but is there reason to be optimistic? Let's see if we can end on a positive note. What do you think? Do you have reason to be optimistic? Kathryn - I've always been an optimist, or at least I've always tried to be. And again, what I would say is: if we look at this globally, and if we identify partners and allies all around the world who are doing good work, and there are many, many of them, and if we work together and continue to develop strategies for holding these powerful industries accountable, I think we will have success. And I think we should also shine a spotlight on areas where important work has already taken place. Where laws have been enacted.
Where companies have been made to change their practices. We should highlight those and build on those successes from around the world. Thanks. Jeff, what about you? Is there reason to be optimistic? Jeff - Well, I don't think we can stop trying, although we're at a particularly difficult moment here in our country and worldwide. Because unless we try to intervene, the largest corporations, which are working and will work closely with our government and other governments, will be able to impact our lives in so many ways through their ability to collect data, and to use that data to target us and to change our behaviors. They can change our health behaviors. They can try to change our political behaviors. What the ultra-processed food companies are now able to do, every company is able to do, and governments are able to do. We have to expose what they're doing, and we have to challenge what they're doing, so we can try to leave our kids a better world. That makes sense. Do you see that the general public is more aware of these issues, and is there reason to be optimistic on that front? That awareness might lead to pressure on politicians to change things? Jeff - You know, under the Biden administration, the Federal Trade Commission identified how digital advertising and marketing works and made that understanding familiar to many, many more people than before. It's called commercial surveillance advertising: the idea that data collected about you is used to advertise and market to you. And today there are thousands of people, and certainly many more advocacy groups, concerned about commercial surveillance advertising than there were prior to 2020. And all over the world, as Kathryn said, in countries like Brazil and South Africa and Mexico, advocates are calling attention to all these techniques and practices. More and more people are becoming aware. And then, you know, we obviously need leaders like you, Kelly, who can reach out to other scholars and get us working together in some kind of larger collaborative to ensure that these techniques and capabilities are exposed to the public and we hold the companies accountable.
Bios
Kathryn Montgomery, PhD, is Research Director and Senior Strategist for the Center for Digital Democracy (CDD). In the early 90s, she and Jeff Chester co-founded the Center for Media Education (CME), where she served as President until 2003, and which was the predecessor organization to CDD. CME spearheaded the national campaign that led to passage of the 1998 Children's Online Privacy Protection Act (COPPA), the first federal legislation to protect children's privacy on the Internet. From 2003 until 2018, Dr. Montgomery was Professor of Communication at American University in Washington, D.C., where she founded and directed the 3-year interdisciplinary PhD program in Communication. She served as a consultant to CDD for a number of years before joining the full-time staff in July 2018. Throughout her career, Dr. Montgomery has written and published extensively about the role of media in society, addressing a variety of topics, including the politics of entertainment television, youth engagement with digital media, and contemporary advertising and marketing practices. Montgomery's research, writing, and testimony have helped frame the national public policy debate on a range of critical media issues.
In addition to numerous journal articles, chapters, and reports, she is the author of two books: Target: Prime Time – Advocacy Groups and the Struggle over Entertainment Television (Oxford University Press, 1989) and Generation Digital: Politics, Commerce, and Childhood in the Age of the Internet (MIT Press, 2007). Montgomery's current research focuses on the major technology, economic, and policy trends shaping the future of digital media in the Big Data era. She earned her doctorate in Film and Television from the University of California, Los Angeles. Jeff Chester is Executive Director of the Center for Digital Democracy (CDD), a Washington, DC non-profit organization. CDD is one of the leading U.S. NGOs advocating for citizens, consumers, and other stakeholders on digital privacy and consumer protections online. Founded in 1991, CDD (then known as the Center for Media Education) led the campaign for the enactment of the Children's Online Privacy Protection Act (COPPA, 1998). During the 1990s, it also played a prominent role in such issues as open access/network neutrality, diversity of media ownership, public interest policies for children and television, as well as the development of the FCC's "E-Rate" funding to ensure that schools and libraries had the resources to offer Internet services. Since 2003, CDD has been spearheading initiatives designed to ensure that digital media in the broadband era fulfill their democratic potential. A former investigative reporter, filmmaker, and Jungian-oriented psychotherapist, Jeff Chester received his M.S.W. in Community Mental Health from U.C. Berkeley. He is the author of Digital Destiny: New Media and the Future of Democracy (The New Press, 2007), as well as articles in both the scholarly and popular press. During the 1980s, Jeff co-directed the campaign that led to the Congressional creation of the Independent Television Service (ITVS) for public TV. He also co-founded the National Campaign for Freedom of Expression, the artist advocacy group that supported federal funding for artists. In 1996, Newsweek magazine named Jeff Chester one of the Internet's fifty most influential people. He was named a Stern Foundation "Public Interest Pioneer" in 2001, and a "Domestic Privacy Champion" by the Electronic Privacy Information Center in 2011. CDD is a member of the Transatlantic Consumer Dialogue (TACD). Until January 2019, Jeff was the U.S. co-chair of TACD's Information Society (Infosoc) group, helping direct the organization's transatlantic work on data protection, privacy, and digital rights.
A chilling look at the growing global machinery of censorship and financial control, this conversation dives into declassified plans from the Biden administration targeting Americans for their political and religious views—flagging symbols like "2A" and using government influence to de-bank and silence dissenters. Meanwhile, figures like Nina Jankowicz and EU institutions leverage laws like the Digital Services Act to suppress opposition speech overseas, with chilling implications for free expression even in the U.S. From Silicon Valley investors turning on Democrats to billionaires under fire, the message is clear: speak out, and risk being erased—digitally and financially.
In this fiery discussion, Tara and Lee unpack Nina Jankowicz's return to the spotlight, her role in European censorship efforts, and how the EU's Digital Services Act is being used to suppress dissenting political speech—even from Americans. They explore the broader implications of global censorship, the potential financial targeting of Elon Musk, and shocking revelations about domestic surveillance and debanking under the Biden administration. A deep dive into what they call a growing, globalized assault on free expression.
This eye-opening segment exposes a classified domestic terrorism plan from the Biden administration, recently declassified by Tulsi Gabbard. The discussion reveals how political and religious views—symbolized by hashtags like “2A”—were allegedly used to target Americans for censorship and debanking. With billionaires like Marc Andreessen sounding the alarm and parallels drawn to Europe's Digital Services Act, the hosts argue this is a creeping Orwellian crackdown on dissent, free speech, and financial independence in America.
ChatGPT Search may soon fall under the EU's Digital Services Act, OpenAI and the Washington Post ink a deal to display Post content in ChatGPT, and Bluesky adopts a centralized verification system.
Free speech is facing an unprecedented crackdown across the UK and Europe. The UK's Online Safety Act and the EU's sweeping Digital Services Act are driving small forums offline, pressuring U.S. platforms to silence users, and even tasking British pubs with policing conversations under new "hate speech" rules. Is this the end of open dialogue in the West?
We travel from America to Europe, Russia, China, and more places to answer the question: Is there a global free speech recession?
Guests:
- Sarah McLaughlin: FIRE senior scholar, global expression
- James Kirchick: FIRE senior fellow
- Jacob Mchangama: FIRE senior fellow
Timestamps:
00:00 Intro
03:52 Free speech global surveys
07:49 Freedom of expression deteriorating
11:43 Misinformation and disinformation
18:05 Russian state-sponsored media
24:55 Europe's Digital Services Act
29:26 Chinese censorship
34:33 Radio Free Europe
54:57 Mohammad cartoons
01:04:14 Outro
Show notes:
- Authoritarians in the academy: How the internationalization of higher education and borderless censorship threaten free speech, Sarah McLaughlin (2025)
- "The First Amendment created gay America," So to Speak (2022)
- "Secret city: The hidden history of gay Washington," James Kirchick (2022)
- "Who in the world supports free speech?," The Future of Free Speech (2025)
- "V-DEM democracy report 2025: 25 years of autocratization — democracy trumped?," V-Dem Institute (2025)
- Global risks report 2024, World Economic Forum (2025)
- "Gay reporter kicked off Kremlin network after protesting anti-gay law," Washington Free Beacon (2013)
- Free speech: A history from Socrates to social media (paperback), Jacob Mchangama (2025)
- Europe's Digital Services Act (DSA) (2022)
- Careless people: A cautionary tale of power, greed, and lost idealism, Sarah Wynn-Williams (2025)
- "The Voice of America falls silent," The New York Times (2025)
- Text of Havel's speech to Congress, The Washington Post (1990)
- "Voice of America wins in court, for now, as judge blocks Trump administration from firing staff," AP News (2025)
2025 will be a pivotal year for technology regulation in the United States and around the world. The European Union has begun regulating social media platforms with its Digital Services Act. In the United States, regulatory proposals at the federal level will likely include renewed efforts to repeal or reform Section 230 of the Communications Decency Act. Meanwhile, states such as Florida and Texas have tried to restrict content moderation by major platforms, but have been met with challenges to the laws' constitutionality. On March 19, NYU Law hosted a forum on whether it is lawful, feasible, and desirable for government actors to regulate social media platforms to reduce harmful effects on U.S. democracy and society, with expert guests Daphne Keller, Director of the Program on Platform Regulation at Stanford Law School's Cyber Policy Center, and Michael Posner, Director of the Center for Business and Human Rights at NYU Stern School of Business. Tess Bridgeman and Ryan Goodman, co-editors-in-chief of Just Security, moderated the event, which was co-hosted by Just Security, the NYU Stern Center for Business and Human Rights, and Tech Policy Press.
Show Notes:
- Tess Bridgeman
- Ryan Goodman
- Daphne Keller
- Michael Posner
- Just Security's coverage on Social Media Platforms
- Just Security's coverage on Section 230
Music: "Broken" by David Bullard from Uppbeat: https://uppbeat.io/t/david-bullard/broken (License code: OSC7K3LCPSGXISVI)