Episode 1394: On November 18, 2025, several MPs from the governing majority tabled a bill to ban access to social media for children under 15. The announcement immediately set off a national debate. Between calls to treat the issue as a "health emergency" and questions about whether the plan is actually feasible, France faces a dilemma: how do you protect the youngest users without excluding them from a social space that has become central to their daily lives? Behind this proposal lie strong political pressure, growing anxiety among parents, a scientific community that has been sounding the alarm for years, and the European backdrop of the Digital Services Act, which opens the door to a new era of regulation. But the question remains open: is a ban the right tool, or the symptom of a chronic lack of regulation over the past ten years? . . . Le Super Daily is the daily podcast about social media. It is made with a shower of love by the Supernatifs teams. We are a social media agency based in Lyon: https://supernatifs.com. Together, we help companies build lasting, profitable relationships with their audiences. Together, we invent, produce and distribute content that engages your employees, your prospects and your consumers. Hosted by Acast. Visit acast.com/privacy for more information.
Global markets are on edge, fears of an AI bubble burst are circling, and investor anxiety is building as the world awaits Nvidia's earnings report. The chip giant, the world's most valuable company, has powered the AI boom and helped drive stock markets to record highs. But what could the results mean for the future of AI? We hear how online retail giant Amazon lost a legal battle before a European court after seeking an exemption from the EU's Digital Services Act. The period of volatility and tension in global trade is far from over, according to the boss of parcel delivery giant DHL Group; we hear from its CEO, Tobias Meyer, whose company is investing €1 billion in India. And why is a court in Paris suspending the sale of the world's 'first calculator', invented in 1642? Presenter: Leanna Byrne. Producer: Niamh McDermott. Editor: David Cann.
The European executive is due to present a digital simplification bill this Wednesday, November 19: a revision of the rules in force that have made Europe the regulatory superpower of online activity. The stated objective: revive European competitiveness and foster the development of artificial intelligence (AI). But specialist associations fear, above all, a dilution of Europeans' privacy protections under pressure from Washington and the tech giants. The General Data Protection Regulation (GDPR) in 2018, the Digital Markets Act and Digital Services Act in 2022, the world's very first regulation of artificial intelligence in 2024: if it has not shone on innovation, for nearly ten years the European Union has presented itself as the digital world's regulatory superpower. Backed by its single market of 500 million consumers, Europe has managed to impose its rules on the big digital platforms, American and Chinese alike. But all of that could change with the so-called "Digital Omnibus" proposal the Commission is due to present this Wednesday, November 19. The bill will still have to pass through the European Parliament and the member states, but a first version leaked a week before its presentation. In it, the European executive defends a series of measures presented as a way to revive European competitiveness, notably in artificial intelligence. To that end, the Commission proposes rewriting all the rules in force to loosen them or leave more room for interpretation. That alarms specialist associations, which see in it a "dilution" of the entire regulatory shield the European Union has built over the past decade.

A slow death for Europe's regulatory umbrella

Today, for example, European law prevents digital players from exploiting a user's personal data to serve them targeted advertising based on their state of health or their political or sexual orientation. These are highly sensitive data, yet easy to obtain or infer with access to someone's browsing history, the list of apps installed on their phone, and the like. Under the new rules proposed by the Commission in the name of competitiveness, as long as the user in question is not directly identified, companies could exploit this personal data: sell it to third parties or use it to train an artificial intelligence. The Austrian NGO Noyb ("None of your business"), the reference on these matters, believes the Commission's plan would slowly kill off the entire regulatory umbrella protecting Europeans' privacy. The Commission's defence is that it does not want to hold back the development of European players. Some companies are already celebrating the loosening of what they consider an excessively heavy administrative straitjacket. But the proposed measures will above all serve the interests of the American giants already in place: Google, Amazon, Meta, OpenAI...

Digital vassalage

So why this sudden European about-face? Perhaps because of a certain Donald Trump, who keeps accusing Europe of protectionism, citing its regulatory framework, and its digital rules in particular.
Already under pressure from tariffs, the EU seems ready to offer Washington reassurances, even at the cost of becoming still more of a digital vassal of the United States.
- Coda Music used the recent backlash against Spotify co-founder Daniel Ek as a way to differentiate itself from the number one streamer, calling out Ek's controversial funding of defense technology firm Helsing earlier in the year. Coda Music also announced a new feature that feels designed to answer another recent Spotify controversy: AI slop music flooding the platform. In response, Coda Music is launching AI identification tools to find and label songs that weren't composed by actual humans.
- WhatsApp is following in the footsteps of its fellow Meta-owned apps, Instagram and Facebook. Bloomberg reported that the European Commission will designate WhatsApp's open channels a Very Large Online Platform (VLOP) under the Digital Services Act.
- Threads is rolling out new features for sharing podcasts and podcast episodes. Creators can add a link to their show on their bio page; after that, their posts with show and episode links will appear with an audio preview.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Deputy premier Simon Harris triggered a wave of criticism when he said that "the number of immigrants in Ireland is too high, and the country needs clear rules and social cohesion." The Left branded these words "hate speech," warning of a return to the rhetoric of division. "Migration is good, but only within rules," premier Michael Martin countered. The programme's hosts note, however, that Ireland increasingly lacks a "strong, responsible and distinctive right." In their view, if frustration fills that gap, the street will do it, and protests can change a society faster than any wave of migration. The next topic is the report of Ireland's Comptroller and Auditor General, which exposed financial chaos in the international protection system. As Polska-IE.com reports, the institution responsible for housing refugees overpaid private companies by €13.8 million, of which €7.4 million came from miscalculated VAT. "Dublin increasingly resembles an accounting office where nobody checks the receipts." The Bad Space system announced by the government is supposed to bring order to the finances and monitor accommodation places, but, as Tomasz Wybranowski wryly observes, it is merely digital make-up covering old sins.

A Polish parish in Dublin, a refuge on the brink: the second part of the programme takes up the dramatic situation of the Polish parish in Dublin. After the death of its parish priest, Fr. Stanisław Hajkowski, the community is struggling with serious financial problems. Insurance costs €12,000 a year; heating, another €10,000. And the parish has collected barely five. Editor Bogdan Feręc adds, reflectively: "Perhaps that is exactly the point: the communion of unsaintly people who, despite their differences, try to save something shared. Even this little church dedicated to St. Audeon."

Censorship online, or "Facebook knows better than the government": in his column "Reach, Irish style, or how Facebook learned to gag the truth," Bogdan Feręc describes how a post quoting an official government statement was blocked. "Freedom of speech on Facebook exists only in the ads. Algorithms decide what you may see and know." The co-host of Studio Dublin points out that the EU's Digital Services Act gives platforms the right to "restrict harmful content," even though "nobody has defined what is genuinely harmful."
Emmanuel Macron is arguing for stricter oversight of the platforms, but how far can we go without tipping into censorship? Excerpt from L'Hebdo of November 1, 2025.

A blunt presidential speech: at the Paris Peace Forum, Emmanuel Macron delivered a fierce attack on social media. He denounced a business model built on the manipulation of attention, accusing the American platforms, and X in particular, of favouring the most extreme content. In his view, this drift directly threatens our democracies, because on these platforms emotion supplants truth and argument.

Toward a new era of regulation? The head of state calls for "taking back control" of our democratic and informational life. Algorithmic transparency, removal of fake accounts, an end to total anonymity: all avenues that fit into a broader European movement begun with the Digital Services Act. But this show of firmness also reveals a paradox: how do you rein in the digital giants without stifling innovation? And above all, how do you regulate companies that are often beyond the reach of European law?

The risk of an authoritarian slide: requiring real identities on social networks raises a fundamental question: should we give up anonymity in exchange for more security? That would be to forget that pseudonymity also protects freedom of expression, notably for whistleblowers and ordinary citizens. Fighting abuses online cannot come at the price of generalized surveillance.

Social media is still a source of information: Emmanuel Macron asserts that "social networks are no longer places where one can get informed." I do not share that view. Between traditional media, alternative media and first-hand accounts, these platforms remain essential spaces of expression. The real issue is education in discernment: learning to recognize disinformation, verify sources, and think against one's own biases. Regulate, yes, but without forgetting to train citizens to exercise critical thinking.

♥️ Support: https://donorbox.org/monde-numerique
In episode 146 of the c't-Datenschutz-Podcast, Holger and Joerg turn to a fundamental question: where does data protection stand between necessary limits on power and an annoying obstacle to innovation? Law professor Hannah Ruschemeier works on exactly these questions. She researches AI regulation, platform law and the challenges of digital transformation at the University of Osnabrück. At the Stiftung Datenschutz's DatenTag she recently summarized her theses on "data power" in a keynote (see show notes). Ruschemeier vehemently disputes that data protection is obsolete or even dead. She does observe a certain resignation in society, but sees precisely in that the need for more public education. Many people do not understand what happens to their data and what power big tech corporations wield with it. She considers this "informational power asymmetry" between data companies and consumers a central problem. The professor is especially critical of the business model of many tech giants, which create facts on the ground on the "move fast and break things" principle and only worry about legal compliance later. While Meta or Google can pay billion-euro fines out of petty cash, small and medium-sized companies struggle with high compliance costs. This asymmetry also shows up in weak enforcement: big players are insufficiently held to account, while smaller businesses suffer under the bureaucratic burden. From his own experience, Joerg reports that the bureaucratic hurdles for start-ups and small companies can be enormous. Ruschemeier is skeptical, however, of his thesis that data protection genuinely slows innovation. She calls for a more differentiated view: the term "innovation" must not become a knockout argument against any regulation. Instead, one must ask who a new development actually benefits. She argues for definitions and development oriented more toward the common good. Regulation protects rather than hampers, and Europe should be proud of its strong protection of fundamental rights. Ruschemeier advocates a comprehensive reform of the EU's General Data Protection Regulation (GDPR). In the podcast she argues for a risk-based approach: companies with especially data-intensive business models should be regulated more strictly, while small businesses could be relieved. At the same time she warns against lowering the level of protection across the board. What is needed instead is better coordination between the various digital laws, such as the GDPR, the AI Act and the Digital Services Act. The expert does not see Europe in a race with the US and China for the next big platform, particularly in AI. Instead, the continent should focus on its strengths: strong fundamental-rights protection, legal certainty and industrial applications. These could well be competitive advantages if used and communicated properly.
On Radio Wnet's morning show, prof. Katarzyna Chałubińska-Jentkiewicz discussed the Digital Services Act, the regulation on a single market for digital services, which is rebuilding the rules of accountability online. She recalled that the DSA introduces new information obligations and procedures for service providers, from hosting providers to "very large platforms" and search engines. "This regulation fundamentally changes how we think about regulating the internet environment. You could say the era of internet freedom is ending. We will now be accountable for the actions and content we share on platforms as users," she noted. The key change concerns the burden of responsibility, especially on intermediaries. It is the owners of the infrastructure who have the real tools for moderation: removing content or blocking accounts. At the same time, the definition of "illegal content" remains a matter of national law. "The EU legislator quite rightly decided that illegal content will be defined at member-state level. Each state knows best what conflicts with its legal order and public morality," she said. In Poland, work is under way on an amendment to the act on providing services by electronic means, which is to implement the DSA more fully and build coordination mechanisms. The draft designates the president of UKE, the Office of Electronic Communications, as the national digital market coordinator. "The proposal is for the president of the Office of Electronic Communications to be the coordinator for DSA matters. It is about a network of procedural links between EU states for cross-border services, including vis-à-vis the large platforms from outside the Union," she added. Concerns about "soft censorship" have persisted for years. The professor stresses that preventive (ex ante) censorship is not permitted by law, but platforms have an obligation to moderate their environment and prevent the reappearance of content already found illegal (for example in the areas of computer crime or the protection of minors). "There is no risk of preventive censorship here. Providers will be obliged to moderate and to ensure that content found illegal does not reappear. This also follows from the rules on audiovisual services and the broadcasting act," she noted. The catalogue of illegal content in Poland is to rest primarily on the Criminal Code and special provisions, but, as the professor points out, that list is still too narrow. At the same time, the implementing draft does not define disinformation as illegal content. "The draft points to offences from the Criminal Code. That is too short a list. Disinformation cannot simply be defined as crime; it is a different phenomenon." Finally, the eternal dilemma between freedom and security: according to the professor, the DSA is meant to increase the latter without nullifying the former. "We should be glad that we will be safer online. But it is always a dilemma: more freedom or more security. I am on the side of freedom, and at the same time I consider security very important." The professor announced that a public hearing on the Polish implementing rules will be held on November 4; important decisions on the design of the system and the role of the national coordinator may be made there.
//The Wire//2300Z October 23, 2025////ROUTINE////BLUF: UNREST CONTINUES IN IRELAND AS CENSORSHIP CRACKDOWN INTENSIFIES. ILLEGAL MIGRANT FROM INDIA CAUSES FATAL CRASH IN CALIFORNIA. MIGRANT WOES CONTINUE THROUGHOUT EUROPE.//

-----BEGIN TEARLINE-----

-International Events-

Middle East: The war continues as before. Following Israeli strikes in Lebanon a few days ago, more significant airstrikes were reported in southern Lebanon overnight. Yesterday evening the Israeli Knesset voted to annex the West Bank, a move which has drawn slight criticism from the United States due to the war continuing to widen after the ceasefire was broken last week. SECSTATE Marco Rubio condemned this vote, and VP Vance stated that Israel will not be allowed to annex the West Bank.

Italy: A man was arrested on Tuesday after attempting to kidnap a child from a stroller near the train station in Bologna. Local authorities state that the man, an illegal immigrant from Gambia, approached a family on the sidewalk and attempted to grab their newborn child from the stroller. After local citizens fought back against him, he fled the scene before being arrested later in an adjacent park. The suspect has not been identified due to allegedly being a juvenile, but locals did state that he was an asylum seeker living at the migrant housing complex near the attack site.

-HomeFront-

California: The driver of a semi truck who caused a fatal pileup on I-10 yesterday has been arrested. Jashanpreet Singh, an illegal immigrant from India, was arrested on multiple counts after the crash, including DUI. 3x people were killed during the crash, and 4x others wounded.

Analyst Comment: Singh was illegally in the United States after walking over the southern border in 2022. After being initially detained, he was released into society by Customs and Border Protection due to the catch-and-release open-border policy at the time.

-----END TEARLINE-----

Analyst Comments: In Ireland, the situation remains as tense as expected. One of the major issues with discerning the truth regarding events on the ground is the now-infamous Online Safety Act, which has been adopted by the European Union as the Digital Services Act (DSA). This is more or less the EU copying Starmer's homework and bringing the Online Safety Act to most of Europe. The DSA and the local regulations put into effect by the Irish government (most notably by the chairman of the media commission, Jeremy Godfrey) have resulted in a lot of the footage of protest activity being blocked from view for Irish IP addresses. Users on X, for example, have noted that their content is banned from view in Ireland, even content that is benign in nature but related to protest activity. This means that either the Irish government or the European Commission itself has ordered X to remove certain content by flagging it as "adult content" and thus subject to the Digital Services Act.

In other words, exactly what everyone thought would happen has happened. And not only happened once, but has been industrialized. Content which makes government agencies look bad, or otherwise highlights the crimes of violent migrants, gets flagged as "adult content" and is therefore subject to censorship via the Online Safety Act, requiring a photo ID (or a digital ID) to view. Right now, there are probably a lot of people in Ireland who have no understanding of the scale of protests and demonstrations (or even what the issue is all about) due to this censorship.
This has been made worse by other larger issues that have actually made it into the mainstream press.Yesterday, the family of Josef Puska was sentenced for their various roles in the murder of Ashling Murphy. These sentences have inflamed tensions once more due to the nature of the crime itself, and the systemic coverup of the incident. While walking on a hiking path back in 20
"Anthropic And Google Have Formed A Cloud Partnership Valued At Tens Of Billions Of Dollars – DTH": The European Commission has accused Meta and TikTok of violating the Digital Services Act, Intel's revenue rose 3% year-over-year to $13.7 billion, and a federal court has ordered Meta to release internal documents related to teen harm.
Tara dives deep into the mounting threats to Americans both domestically and internationally. From Portland's “ICE wars” with lasers, armed confrontations, and rogue courts, to Europe's escalating censorship regimes targeting 4chan, LinkedIn, and even basic scientific truths, the episode reveals a world where free speech is under siege. Add in rising political violence in North Carolina and global geopolitical maneuvering with Russia, and listeners get an unfiltered look at how law, media, and politics collide to create danger at home and abroad.
New research shows that children who go onto the social media app TikTok and ask about mental health quickly find depressive content, and that within a few hours they are all but certain to see posts from users expressing a wish to kill themselves. The research, from Amnesty Tech, is entitled "Dragged Into the Rabbit Hole". The organisation says the findings highlight TikTok's ongoing failure to address the systemic design risks its platform poses to children and young people, and also illustrate the failings of the European Union's Digital Services Act, which since 2023 has required platforms to identify and mitigate systemic risks to children's rights. In Perspective, we spoke to Lauren Armistead, deputy director at Amnesty Tech.
Europe created regulatory frameworks, including the Digital Markets Act and the Digital Services Act, in an attempt to reassert control, but these laws have recently faced backlash from the Trump administration. In this episode of the TechTank podcast, Courtney Radsch speaks with Marietje Schaake about these challenges and the digital sovereignty debate. Hosted on Acast. See acast.com/privacy for more information.
In this week's roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Thomas Hughes, CEO of Appeals Centre Europe and former Director at the Oversight Board. Together they discuss:
- Appeals Centre Europe Transparency Report (ACE)
- Most people want platforms (not governments) to be responsible for moderating content (Reuters Institute)
- Happy Birthday, Digital Services Act! – Time for a Reality Check (Algorithm Watch)
- Proof-of-age ID leaked in Discord data breach (The Guardian)
- Update on a Security Incident Involving Third-Party Customer Service (Discord)
- Another Day, Another Age Verification Data Breach: Discord's Third-Party Partner Leaked Government IDs (Techdirt)
- Exclusive: Apple Quietly Made ICE Agents a Protected Class (Migrant Insider)
- My Email to Tim Cook (Wiley Hodges — Substack)
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
It's been three years since Europe's Digital Services Act (DSA) came into effect, a sweeping set of rules meant to hold online platforms accountable for how they moderate content and protect users. One component of the law allows users to challenge platforms' content moderation decisions through independent, certified bodies rather than judicial proceedings. Under Article 21 of the DSA, these "Out-of-Court Dispute Settlement" bodies are intended to play a crucial role in resolving disputes over moderation decisions, whether about content takedowns, demonetization, account suspensions, or even decisions to leave flagged content online. One such out-of-court dispute settlement body is Appeals Centre Europe. It was established last year as an independent entity with a grant from the Oversight Board Trust, which administers the Oversight Board, the content moderation 'supreme court' created and funded by Meta. Appeals Centre Europe has released a new transparency report, and the numbers are striking: of the 1,500 disputes the Centre has ruled on, over three-quarters of the platforms' original decisions were overturned, either because they were incorrect or because the platform didn't provide the content for review at all. Tech Policy Press associate editor Ramsha Jahangir spoke to two experts to unpack what this early wave of disputes tells us about how the system is working, and how platforms are applying their own rules:
- Thomas Hughes is the CEO of Appeals Centre Europe
- Paddy Leerssen is a postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory, which monitors the implementation of the DSA.
Thomas Hughes, CEO of Appeals Centre Europe, explains out-of-court dispute settlement bodies (ODS bodies) and the role they play in the Digital Services Act. We discuss Appeals Centre Europe's first Transparency Report*, which provides an initial glimpse into how Facebook, Instagram, TikTok, and YouTube are handling requests from ODS bodies. We cover some top line descriptives from the Centre's first year: how many disputes were submitted, from which platform, and what types of content they were about. We also talk about 'signposting' and how the platforms are communicating their digital rights to users.*Just a friendly heads up that the report is 25MB, so you may want to download it on WiFi!
This episode unpacks the media circus around Jimmy Kimmel and the supposed “first amendment crisis,” revealing the real forces shaping speech in America. From ABC's temporary pull of Kimmel's show to Hillary Clinton traveling to Europe to push the EU's Digital Services Act, we explore how censorship is being globalized and weaponized against conservative voices. We break down the difference between theatrical outrage over the FCC and actual government-backed speech monitoring, including the FBI's actions on Twitter and John Brennan's international influence. Hear how Americans' perception of free speech is shifting, with 79% now feeling it's under threat, and why the real authoritarian moves are coming from entrenched elites rather than elected Republicans. A deep dive into the clash between political theater, media narratives, and the global fight over what can—and cannot—be said online.
The European Union's Digital Services Act applies to digital platforms and service providers offering services to users in the EU, regardless of where the company is based—including U.S. companies.
EU officials contend the Digital Services Act is needed to protect democracy from misinformation, disinformation, and hate speech online. Regulators in Brussels promise it will create a safer digital space by holding platforms such as Google, Amazon, Meta, and X accountable for policing these categories. Service providers that fail to comply risk fines of up to 6% of global annual revenue, restricted access to the EU market, or suspension of operations.
House Judiciary Republicans recently issued a report warning that European regulators could use the Digital Services Act to chill speech, suppress political dissent, and establish a global censorship regime. By contrast, House Judiciary Democrats argue the Digital Services Act includes procedural safeguards, judicial oversight of content moderation, and democratic accountability within the EU.
Will the Act make Brussels the new "sheriff of the digital public square"? Could it export European hate speech laws—which have at times been used against individuals peacefully expressing their views—beyond Europe? And what steps can governments, companies, and citizens take to safeguard free expression online? Join the Federalist Society for a discussion with experts on the EU, the Digital Services Act, and freedom of expression as we consider whether the United States should support—or oppose—the Act.
Featuring:
- Stéphane Bonichot, Partner, Briard Bonichot & Associés
- Dr. Adina Portaru, Senior Counsel, Alliance Defending Freedom International
- Dr. John Rosenthal, Independent scholar and journalist
- Berin Szóka, President, TechFreedom
- Moderator: Prof. Maimon Schwarzschild, Professor of Law, University of San Diego School of Law
In this episode of The Winston Marshall Show, I sit down with Mike Benz, former State Department official and leading critic of the Censorship Industrial Complex, to expose how governments and NGOs are working hand-in-glove with Big Tech to police free speech across the West.
Mike explains how the EU's Digital Services Act and Britain's Online Safety Act are being used to impose AI-driven censorship, threatening not only European citizens but also Americans posting from within the US. We explore how these laws function as a "digital censorship regime," backed by crippling fines and international coordination.
From USAID funding NGOs to shut down cross-border political movements, to the Hate Lab in Cardiff feeding real-time speech data to UK police, Benz reveals how censorship has become industrialised, and how Elon Musk's takeover of Twitter briefly broke the system. We also discuss the Biden administration's role, the UK's crackdown on comedians and dissidents, and the disturbing reality of AI censorship 'Death Stars' designed to silence debate on elections, COVID, climate, and more.
A deep dive into the machinery of modern censorship, and what it means for the future of free speech in the West.
To see more exclusive content and interviews consider subscribing to my substack here: https://www.winstonmarshall.co.uk/
FOLLOW ME ON SOCIAL MEDIA:
Substack: https://www.winstonmarshall.co.uk/
X: https://twitter.com/mrwinmarshall
Insta: https://www.instagram.com/winstonmarshall
Linktree: https://linktr.ee/winstonmarshall
Chapters
00:00 Introduction
01:32 What Is the Censorship Industrial Complex?
06:10 How the Government Outsources Censorship to NGOs
11:45 The EU's Role in Exporting Censorship to the US
17:30 AI-Powered Censorship & The Death of Free Speech
22:05 Twitter Files & the Exposure of Government Pressure
28:40 How "Trust & Safety" Became a Weapon
35:15 The Role of the National Security State
41:10 Elon Musk vs The Censorship Machine
47:35 How the Online Safety Bill Extends UK Censorship
53:20 The Global Coordination of Digital Repression
1:00:45 How NGOs Control the Narrative Across Borders
1:07:12 Mike Benz's Plan to Dismantle the Censorship Network
1:12:40 Final Thoughts & A Warning for the West
Hosted on Acast. See acast.com/privacy for more information.
Episode 1367: Adam Mosseri, the head of Instagram, took to video on September 24, 2025 to unveil an interface accessible to all users. To mark the milestone, the platform is introducing a new feature called Your Algorithm, a tool that could well redefine how content is recommended on the network.
A button to take back control of the algorithm. The goal: let every user personalize their recommendation preferences. The principle: show the list of topics the algorithm has identified as your interests, and let you edit them manually. Add topics. Remove others. Correct what the algorithm has misread.
A strategic launch on Reels. The feature is rolling out first on Instagram Reels, before being extended to Explore and then to the main feed. That choice is no accident: Reels is the format where the algorithm plays the central role in distribution. The interface is simple. Two lists:
• What you like
• What you no longer want to see
Watched a cat video by mistake and your feed filled up with felines? You can now remove that category in one click.
Instagram aligns with user demand. Users have long been calling for more control over their feed. Many want to see more content from the accounts they actually follow. Instagram is answering a recurring frustration here.
Three strategic pillars for Instagram: DMs, Reels, recommendations. Mosseri said it plainly: Instagram's growth is driven by three things: private messages, Reels, and algorithmic recommendations. The app is being organized entirely around these uses. A concrete example: the upload button replaced by a shortcut to DMs.
A new deal for creators. This change upends the strategy of content creators. No more relying solely on the algorithm. With users filtering for themselves what they want to see, creators will have to offer content that is more engaging, more targeted, more authentic.
More transparency, more power for users. Meta is accelerating on algorithmic transparency, a necessity imposed by the European Digital Services Act. Since 2023, the company has multiplied initiatives to make its algorithms legible. The "Your Algorithm" feature joins the recommendations reset available since late 2024.
This user control raises several questions:
• Will manual preferences really take precedence over the algorithm's other signals?
. . . Le Super Daily is the daily podcast about social media. It is made with a shower of love by the Supernatifs teams. We are a social media agency based in Lyon: https://supernatifs.com. Together, we help companies build lasting, profitable relationships with their audiences. Together, we invent, produce and distribute content that engages your employees, your prospects and your consumers. Hosted by Acast. Visit acast.com/privacy for more information.
Hello, and welcome to episode 188 of the Financial Crime Weekly Podcast, I am Chris Kirkbride. In this episode we begin with sanctions, where the UK government has published a comprehensive starter guide for businesses and imposed asset freezes on two entities, Embers of an Empire and Rampage Productions, for alleged links to terrorism through music. The US has also taken action, sanctioning Indian nationals and an online pharmacy for trafficking counterfeit fentanyl pills. In money laundering news, a joint effort by Eurojust and Europol dismantled a sophisticated network that laundered at least €30 million through gold bar transactions across France and Italy, seizing nearly 100 kilos of gold and luxury assets. We cover significant fraud operations, including INTERPOL's Operation HAECHI VI, which recovered $439 million globally from various cyber-enabled financial crimes, and the situation at global insurer Allianz, which is grappling with a surge in fraud detection in the UK while facing a $16.8 million fine in Australia for misleading claims. Rounding out the episode, we look at the EU demanding that major tech platforms such as Google and Apple disclose their anti-scam measures under the Digital Services Act, and discuss the recent UK arrest connected to the cyber-attack on Collins Aerospace that disrupted European flights. A transcript of this podcast, with links to the stories, will be available at www.crimes.financial.
After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen to discuss the ever changing landscape of the digital world, the fast emergence of AI, and the implications for our kids, consumer safety and our democracy.
Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and author of several books. Taylor also joined me for this discussion more than 5 years ago now. And a lot has happened in that time.
Upcoming episodes will include guests Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers. We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite. As always, if you have ideas for future guests or topics, email us at info@beynate.ca
Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction
Further Reading and Listening:
- Public rules for big tech platforms with Taylor Owen — Uncommons Podcast
- "How the Next Government can Protect Canada's Information Ecosystem." Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
- Machines Like Us Podcast
- Bill C-63
Transcript:
Nate Erskine-Smith (00:00-00:43): Welcome to Uncommons, I'm Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.
Taylor Owen (00:43-00:44): It's a different world.
Taylor (00:45-00:45): In some ways.
Nate Erskine-Smith (00:45-01:14): Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way because there's been lots of consultation. There have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's talk about set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?
Taylor Owen (01:14-03:06): I mean, this is part of the technology moves fast.
But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. We're worried about sort of some of the more acute harms of like real risks to safety, right? Like children taking their own lives and violence erupting, political violence emerging. Like these things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way five years ago that we shared, consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI and particularly chatbots. And I think a big question we face in this conversation in this, like, how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI or some of the things we worked on for so many years to get right, the still the right tools for this new set of technologies with chatbots and various consumer facing AI interfaces?
Nate Erskine-Smith (03:07-03:55): My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online and our laws need to reflect that reality. All of the challenges you've articulated to varying degrees exist in offline spaces, but can be incredibly hard. The rules we have can be incredibly hard to enforce at a minimum in the online space. And then some rules are not entirely fit for purpose and they need to be updated in the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out. So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.
Taylor Owen (03:55-04:31): Yeah, exactly. Like the commission that spent a year at the request of all political parties in parliament, at the urging of the opposition party, so it spent a year looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. Like that is a remarkable statement and it kind of came and went. And I don't know why we moved off from that so fast.
Nate Erskine-Smith (04:31-05:17): Well, and there's a lot to pull apart there because you've got purposeful, intentional, bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook, Instagram through Meta block news in Canada. And your research, this was the stat that stood out.
Don't want to put you in and say like, what do we do? Okay. So there's, you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.
Taylor Owen (05:17-05:17): A day. Yeah.
Nate Erskine-Smith (05:18-05:18): So right.
Taylor Owen (05:18-05:27): 11 million views a day. And we should sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.
Taylor (05:27-05:29): So 11 million times a Canadian.
Taylor Owen (05:29-05:45): And what that means is 11 million times a Canadian would open one of their news feeds and see Canadian journalism is taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. So that journalism was replaced by something else.
Taylor (05:45-05:45): Okay.
Taylor Owen (05:45-05:46): So that's just it.
Nate Erskine-Smith (05:46-06:04): So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, Canadians, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.
Taylor Owen (06:04-06:04): They say they get.
Nate Erskine-Smith (06:04-06:05): It doesn't make any sense.
Taylor Owen (06:06-06:23): It doesn't and it does. It's terrible. They ask Canadians, like, where do you get people who use social media to get their news? Where do they get their news? and they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations— Traditional journalism. There is—
Taylor (06:23-06:23): There is—
Taylor Owen (06:23-06:47): Well, this is what I was going to get at, right? Like, there is—one, I think, conclusion is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Like, journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like this would be labeled probably news in people's.
Nate Erskine-Smith (06:47-06:48): Can't trust the thing we say.
Taylor Owen (06:48-07:05): Right. And like, and neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, like that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.
Nate Erskine-Smith (07:05-07:06): It's public affairs content.
Taylor Owen (07:06-07:39): Exactly. So that's one thing that's happening. The other is that there's a generation of creators that are stepping into this ecosystem to both fill that void and that can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you're more pessimistic though. I shouldn't have led with the question. 20%.
Taylor (07:39-07:39): Okay.
Taylor Owen (07:39-07:56): So all of journalism combined in the entire country, 20 percent of engagement, influencers, 50 percent in the last election.
So like we've shifted, at least on social, the actors and people and institutions that are fostering our public.
Nate Erskine-Smith (07:56-08:09): Is there a middle ground here where you take some people that play an influencer type role but also would consider themselves citizen journalists in a way? How do you – It's a super interesting question, right?
Taylor Owen (08:09-08:31): Like who – when are these people doing journalism? When are they doing acts of journalism? Like someone can be – do journalism and 90% of the time do something else, right? And then like maybe they reveal something or they tell an interesting story that resonates with people or they interview somebody and it's revelatory and it's a journalistic act, right?
Taylor (08:31-08:34): Like this is kind of a journalistic act we're playing here.
Taylor Owen (08:35-08:49): So I don't think – I think these lines are gray. but I mean there's some other underlying things here which like it matters if I think if journalistic institutions go away entirely right like that's probably not a good thing yeah I mean that's why
Nate Erskine-Smith (08:49-09:30): I say it's terrifying is there's a there's a lot of good in the in the digital space that is trying to be there's creative destruction there's a lot of work to provide people a direct sense of news that isn't that filter that people may mistrust in traditional media. Having said that, so many resources and there's so much history to these institutions and there's a real ethics to journalism and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that is devastating for democracy. I think so.
Taylor Owen (09:30-09:49): And I think the bigger frame of that for me is a democracy needs signals of – we need – as citizens in a democracy, we need signals of reliability. Like we need to know broadly, and we're not always going to agree on it, but like what kind of information we can trust and how we evaluate whether we trust it.
Nate Erskine-Smith (09:49-10:13): And that's what – that is really going away. Pause for a sec. So you could imagine signals of reliability is a good phrase. what does it mean for a legislator when it comes to putting a rule in place? Because you could imagine, you could have a Blade Runner kind of rule that says you've got to distinguish between something that is human generated
Taylor (10:13-10:14): and something that is machine generated.
Nate Erskine-Smith (10:15-10:26): That seems straightforward enough. It's a lot harder if you're trying to distinguish between Taylor, what you're saying is credible, and Nate, what you're saying is not credible,
Taylor (10:27-10:27): which is probably true.
Nate Erskine-Smith (10:28-10:33): But how do you have a signal of reliability in a different kind of content?
Taylor Owen (10:34-13:12): I mean, we're getting into like a journalistic journalism policy here to a certain degree, right? And it's a wicked problem because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do and how they can and can't behave touches on some real like third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said or what is and isn't journalism. The real problem is the distribution mechanism and the incentives within it. So a great example and a horrible example happened last week, right? So Charlie Kirk gets assassinated.
I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place because what you saw in that feed was the clearest demonstration I've ever seen in a decade of looking at this of how those algorithmic feeds have become radicalized. Like all you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. Like people defending the murder and people calling for the murder of leftists and like on both sides. Right. people blaming Israel, people, whatever. Right. And that isn't a function of like- Comparing Charlie Kirk to Jesus. Sure. Like- It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of like there was journalism being produced about that. Like New York Times, citizens were doing good content about what was happening. It was like a moment of uncertainty and journalism was doing or playing a role, but it wasn't. And so I think with all of these questions, including the online harms ones, and I think how we step into an AI governance conversation, the focus always has to be on those systems. I'm like, what is who and what and what are the incentives and the technical decisions being made that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right? Like all of that stuff we've been talking about, the role reliability of information plays, like these algorithms could be tweaked for reliable versus unreliable content, right? Over time.
Taylor (13:12-13:15): That's not a – instead of reactionary –
Taylor Owen (13:15-13:42): Or like what's most – it gets most engagement or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.
Nate Erskine-Smith (13:43-14:12): And your focus on systems and assessing risks with systems. I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy and very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.
Taylor Owen (14:12-14:23): I mean – Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the – Well, they're twofold, right?
Nate Erskine-Smith (14:23-14:33): There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well.
Exactly.
Taylor (14:33-14:35): So the sort of supply and demand side thing, right?
Nate Erskine-Smith (14:35-14:38): There's the digital service tax, which is no longer a thing.
Taylor Owen (14:40-14:52): Although it still is a piece of past legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.
Nate Erskine-Smith (14:52-14:55): I don't take full responsibility for that one.
Taylor Owen (14:55-14:56): No, you shouldn't.
Nate Erskine-Smith (14:58-16:03): But other countries have seen more success. Yeah. And so you've got in the UK, in Australia, the EU really has led the way. 2018, the EU passes GDPR, which is a privacy set of rules, which we are still behind seven years later. But you've got in 2022, 2023, you've got Digital Services Act that passes. You've got Digital Markets Act. And as I understand it, and we've had, you know, we've both been involved in international work on this. And we've heard from folks like Frances Haugen and others about the need for risk-based assessments. And you're well down the rabbit hole on this. But isn't it at a high level? You deploy a technology. You've got to identify material risks. You then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then ideally, you've got the ability for third parties, either civil society or some public office that has the ability to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate.
Taylor Owen (16:04-16:05): That's like how I have it in my head.
Nate Erskine-Smith (16:05-16:06): I mean, that's it.
Taylor Owen (16:08-16:14): Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.
Nate Erskine-Smith (16:14-16:25): Exactly. Which people, I want to get to that because C63 gets us a large part of the way there. I think so. And yet has been sort of like cast aside.
Taylor Owen (16:25-17:39): Exactly. Let's touch on that. But I do think what you described as the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it's the thing that's infuriating about digital policy, is that you can't do one thing. There's no – digital economy and our digital lives are so vast and the incentives and the effect they have on society is so broad that there's no one solution. So anyone who tells you fix privacy policy and you'll fix all the digital problems we just talked about are full of it. Anyone who says competition policy, like break up the companies, will solve all of these problems. is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They've been to build a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So like you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things all over.
Nate Erskine-Smith (17:39-17:41): Especially in minority parliaments, in short periods of time, legislatively.
Taylor Owen (17:41-18:20): Different countries have taken different pieces of it.
Now, on the online harms piece, which is what the previous government took really seriously, and I think it's worth putting a point on that: when we talked last, it was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission. A lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and into learning from what Europe, Australia and the UK had all done. And we kind of took the benefit of being late, right? They were all ahead of us.

Taylor (18:21-18:25): People you worked with on that grand committee. We were all quick to do our own consultations.

Taylor Owen (18:26-19:40): Exactly. And the model that was developed out of that, I think, was the best model of any of those countries. And interestingly, it's now seen internationally as the new sort of milestone that everybody else is building on. What it does is it says: if you're going to launch a digital product, a consumer-facing product, in Canada, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided, as legislators, we care about, or you've decided as legislators you cared about: child safety, child sexual abuse material, fomenting violence and extremist content; broad categories of things we've said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong – let's use a tangible example. Let's say you are a social media platform, and you are launching a product that's going to be used by kids, and it allows adults to contact kids without parental consent and without kids opting in as a friend. What could go wrong with that?

Nate Erskine-Smith (19:40-19:40): Yeah.

Taylor (19:40-19:43): Like, what could go wrong? Yeah, a lot could go wrong.

Taylor Owen (19:43-20:27): Maybe strange men will approach teenage girls. Maybe, right? If you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it; you put a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used, so that publics and researchers can monitor whether that mitigation strategy worked. That's it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was a widespread problem of teenage girls being harassed by strange older men.

Taylor (20:28-20:29): Incredibly creepy.

Taylor Owen (20:29-20:37): A very easy fix, and not a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.

Taylor (20:37-20:41): And this kind of mechanism would have just filtered it out.

Taylor Owen (20:41-20:51): Default settings, right? And thinking a bit, before you launch a product in a country, about what kind of broad risks might emerge when it's launched, and being held accountable for doing that.

Nate Erskine-Smith (20:52-21:05): Yeah, I quite like the – I mean, maybe you've got a better read of this, but the UK, and California, have pursued this.
I was looking recently: Elizabeth Denham is now the Jersey Information Commissioner, or something like that.

Taylor Owen (21:05-21:06): I know. It's just – yeah.

Nate Erskine-Smith (21:07-21:57): Random, I know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the Information Commissioner in the UK, where she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach. Even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids play: you're forcing companies to ensure that the default settings prioritize child safety, so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto: of course we've prioritized consumer safety first and foremost. But in the online world, it's an afterthought.

Taylor Owen (21:58-24:25): Well, when you say consumer safety, it's worth referring back to what we mean, because a duty of care can seem like an obscure concept. But – you're a lawyer – it's a real thing, right? I walk into your office, and I have an expectation that the bookshelves aren't going to fall off the wall and kill me, and you have to bolt them into the wall because of that. That is a duty of care that you have for me when I walk into your public or private space. That's all we're talking about here. And the age-appropriate design code, yes, sort of developed and implemented by a Canadian in the UK, was also embedded in the Online Harms Act. If we'd passed that last year, we would be implementing an age-appropriate design code as we speak. What it says is that any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments. Because kids don't have the same rights as adults; we have different duties to protect kids than adults. So maybe companies should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content – kind of things that, like – seem obvious. And if you're now a child in the UK and you go on a digital product, you are safer, because you have an age-appropriate design code governing your experience online. Canadian kids don't have that, because that bill didn't pass. So there are consequences to this stuff. And I get really frustrated now when I see the conversation pivoting to AI, for example: all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world, which are probably real, I'm not discounting its power, and we're supposed to just move on from all of these problems, and the solutions that were developed, to a set of challenges that still exist on social platforms. They haven't gone away. People are still using these tools, the harms still exist, and they are probably applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done – for the people working in this space, and the wide set of stakeholders in this country who care about this stuff and worked on it – you said deja vu at the beginning, and it is deja vu, but it's kind of worse, right?
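The design code described here is, at bottom, a default-settings regime: if a product is likely to be used by a child, the safest configuration has to be the starting point rather than a buried toggle. A minimal sketch of that one idea, with invented setting names (these are illustrative, not the code's actual requirements):

```python
# Invented setting names; the point is the direction of the defaults.
CHILD_DEFAULTS = {
    "behavioral_targeting": False,    # no behavioral ad targeting
    "advertising": False,             # no ads served to minors
    "data_collection": "minimal",     # only what the service needs to run
    "adult_content": False,           # no sexual/adult content surfaced
    "contact_from_strangers": False,  # adults can't initiate contact
}

ADULT_DEFAULTS = {
    "behavioral_targeting": True,
    "advertising": True,
    "data_collection": "standard",
    "adult_content": True,
    "contact_from_strangers": True,
}

def default_settings(likely_used_by_children):
    # The regime's core move is flipping the burden: parents don't opt
    # kids out of risky features; services must justify opting them in.
    return dict(CHILD_DEFAULTS if likely_used_by_children else ADULT_DEFAULTS)
```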
'Cause it's like deja vu and then ignoring the –

Taylor (24:25-24:29): – five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not even –

Taylor Owen (24:29-24:41): Well, yeah. I mean, I'm actually optimistic, I would say, that we will, for a few reasons. One, citizens want it, right?

Nate Erskine-Smith (24:41-24:57): Yeah. So you mentioned there that the rules we design, the risk assessment framework applied to social media, could equally be applied to deliver AI safety; it could be applied to new technology in a useful way.

Taylor Owen (24:58-24:58): Some elements of it. Exactly.

Nate Erskine-Smith (24:58-25:25): I think AI safety is a broad bucket of things. So let's get to that a little bit, because I want to pull the pieces together. I had a constituent come into the office, and he is really, like, super mad. He's super mad. Why is he mad? Does that happen very often? Do people come in mad to this office? Not as often as you'd think, to be honest. And he's mad because he believes Mark Carney ripped him off.

Taylor Owen (25:25-25:25): Okay.

Nate Erskine-Smith (25:25-26:36): Okay. Yep. He believes Mark Carney ripped him off. Not with a broken promise in politics, not because he said one thing and is delivering something else; nothing to do with politics. He saw a video online: Mark Carney told him to invest money. He invested money, and he's out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This was obviously a scam. How could you have been deceived? But then I go and I watch the video, and – okay, I'm not going to send the 200 bucks, and I've grown up with the internet, but I can see how. Absolutely. In the same way phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI-generated. So we are going to see rampant fraud, if we aren't already. We are going to see many challenges with respect to AI safety. Over and above the risk assessment piece, what do we do to address these challenges?

Taylor Owen (26:37-27:04): So that is a huge problem, right? AI video fraud is a huge challenge. When we were monitoring the last election, by far the biggest problem, or vulnerability, of the election was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and morph them into conversations about investment strategies.

Taylor (27:05-27:07): And it was driving people to a crypto scam.

Taylor Owen (27:08-27:11): But it was torquing the political discourse.

Taylor (27:11-27:11): That's what it must have been.

Taylor Owen (27:12-27:33): I mean, there's other cases of this, but that's probably – and it was running rampant, particularly on Meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election. And it's not like the end of the world, right? But it torqued our political debate. It ripped off some people. And these kinds of scams are –

Taylor (27:33-27:38): It's clearly illegal. It's clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?

Taylor Owen (27:38-27:54): So I think there's probably an Elections Canada response to this that's needed. And it's fraud. And it's fraud, absolutely.
So what do you do about that, right? The head of the Canadian Bankers Association said there's billions of dollars of AI-based fraud in the Canadian economy right now. So it's a big problem.

Taylor (27:54-27:55): Yeah.

Taylor Owen (27:55-28:46): I actually think there's a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, and then you add fraud and AI scams as a category of harm. And all of a sudden, if you're Meta and you are operating in Canada during an election, you'd have to do a risk assessment on the AI fraud potential of your product. Responsibility for your platform. And then when this stuff starts to circulate, we would see it, they'd be called out on it, they'd have to take it down. And that's that, right? We'd have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years, these online harms risk assessment models, and bringing some of the consumer-facing AI, both the products and the related harms, into the framework.

Nate Erskine-Smith (28:47-30:18): To put it a different way: this is years ago now that we had this grand committee in the UK holding Facebook and others accountable. This really was in the wake of the Cambridge Analytica scandal. And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability, saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument, which is that they completely acknowledged the need to take action when it came to child pornography. So they said, yeah, no liability for us, but of course there can be liability on this one specific piece of content, and we'll take action on this one specific piece of content. And it always struck me, from there on out: there's no real intellectual consistency here. It's more just a question of what should be in that category of things they should take responsibility for. Obviously harmful content like that is an obvious first step, obvious for everyone. But there are other categories. Fraud is another one. When they're making so much money, when they are investing so much money in AI, when they've ignored privacy protections and everything else throughout the years, we can't leave it up to them. And setting a clear set of rules to say, this is what you're responsible for, and expanding that responsibility, seems to make a good amount of sense.

Taylor Owen (30:18-30:28): It does, although I think those responsibilities need to be different for different kinds of harms, because there are different speech implications, and democratic implications, of absolute solutions to different kinds of content.

Taylor (30:28-30:30): So child pornography is a great example.

Taylor Owen (30:30-31:44): In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation there, and that's to take it down within 24 hours. And the reason you can do that with those two kinds of content is, one, the AI is actually pretty good at spotting it. It might surprise you, but there are a lot of naked images on the internet that we can train AI with.
So we're actually pretty good at using AI to pull this stuff down. But the bigger reason is that, as a society, I think it's okay to be wrong in the gray area of that speech. If something is debatable, whether it's child pornography, I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story. We do not want to suppress and over-index in that gray area on hate speech, because that's going to capture a lot of reasonable debate that we probably want.

Nate Erskine-Smith (31:44-31:55): Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category, where it's very obviously illegal.

Taylor Owen (31:55-32:02): And that mechanism is a takedown mechanism, right? If we see fraud, if we know it's fraud, then you take it down. Some of these other things we have to go –

Nate Erskine-Smith (32:02-32:24): I mean, my last question, really, is: pull the threads together. You've got these different pieces that were introduced in the past. And you've got a government with lots of similar folks around the table, but a new government, and a new prime minister, certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor (32:24-32:25): Absolutely.

Nate Erskine-Smith (32:25-33:04): You have, for the first time in this country, an AI minister. A junior minister to industry, but still a specific titled portfolio, with his own deputy minister, and he really wants to be seized with this. And from every conversation I've had with him, he wants to maximize productivity in this country using AI, but is also cognizant of the risks and wants to address AI safety. So where from here? You've talked in the past about sort of a grander tech accountability and sovereignty act. Do we go piecemeal: a privacy bill here, an AI safety bill, an online harms bill, disparate pieces? What's the answer here?

Taylor Owen (33:05-34:14): I mean, I don't have the exact answer. But there are some lessons from the past that this government could take. One is that piecemeal bills that aren't centrally coordinated, with no connectivity between them, end up as piecemeal solutions that are imperfect and would benefit from some cohesiveness. When the previous government released AIDA, the AI and Data Act, it was really in tension in some real ways with the online harms approach: two different departments issuing two similar bills on two separate technologies, not really talking to each other, as far as I could tell from the outside. So we need a coordinated, comprehensive approach to digital governance. That's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to the idea that he, or that office, could play that role. Because AI is cross-cutting, right? Every department in our federal government touches AI in one way or another. And the governance of AI, and on the other side the adoption of AI by society, is going to affect every department and every bill we need.

Nate Erskine-Smith (34:14-34:35): So if Evan pulled in the privacy pieces, that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back.
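The two-tier structure Taylor outlines above reduces to a simple dispatch rule: a hard takedown clock for the two categories where over-removal is an acceptable cost, and a systemic duty for everything else. A sketch, again with illustrative names rather than statutory text:

```python
# The two categories the draft bill singled out for a takedown clock,
# per the conversation above; names here are illustrative.
TAKEDOWN_CATEGORIES = {
    "child_sexual_abuse_material",
    "nonconsensual_intimate_images",
}

def obligation(category):
    """Return the duty a platform would owe for a given category of harm."""
    if category in TAKEDOWN_CATEGORIES:
        # Gray-area over-removal is tolerated for these two categories.
        return "take down within 24 hours"
    # For hate speech and the rest, per-post removal would swallow
    # legitimate debate, so the duty stays systemic.
    return "assess risk, mitigate, publish data for auditors"

assert obligation("child_sexual_abuse_material") == "take down within 24 hours"
assert obligation("hate_speech") == "assess risk, mitigate, publish data for auditors"
```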
If he pulls in the online harms pieces that aren't related to the criminal code, and drops those provisions – says, you know, Sean Fraser, you can deal with this if you like, but these are the pieces I'm holding on to.

Taylor Owen (34:35-34:37): With a frame of consumer safety, right?

Nate Erskine-Smith (34:37-34:37): Exactly.

Taylor Owen (34:38-34:39): If he wants...

Nate Erskine-Smith (34:39-34:54): Which is connected to privacy as well, right? These are all – so then you have, thematically, a bill that makes sense. And then you can pull in the AI safety piece as well, and it becomes a consumer protection bill for living our lives online. Yeah.

Taylor Owen (34:54-36:06): And I think there's an argument about whether that should be one bill or multiple ones. I think there are cases for both. There's concern about big omnibus bills that do too many things, and too many committees reviewing them, and whatever; that's sort of a machinery-of-government question. But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to publics. We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is: at the same time as they're being told by our government, and by companies, that they should be using and embracing this powerful technology in their lives, they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear. Why should I use this thing, when I'm seeing some harms, I don't see you guys doing anything about those harms, and I'm seeing some potential real downside for me personally and my family? So even in the adoption frame, I think thinking about data privacy, safety, consumer safety – to me, that's the real frame here. Citizen safety, consumer safety, using these products. Politically, I mean, that is what it is. It makes sense to me.

Nate Erskine-Smith (36:06-36:25): Right, I agree. And really lean into child safety at the same time. Because I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go: these are the default settings on the parental controls.

Taylor (36:25-36:28): I want to turn to government and go, do your damn job.

Taylor Owen (36:28-36:48): Or just make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's always going to be a place with some content that I would prefer he doesn't see. But I would just like some basic safety standards on that thing, so he's not seeing the worst of the worst.

Nate Erskine-Smith (36:48-36:58): And we should expect that. Certainly that YouTube, with its promotion engine, its recommendation function, is not actively promoting terrible content to your 12-year-old.

Taylor Owen (36:59-37:31): Yeah. That's like de minimis. Can we just torque this a little bit? So maybe he's not seeing horrible content about Charlie Kirk when he's a 12-year-old on YouTube. Can we just do something? I think that's a reasonable expectation as a citizen. But it requires governance.
That will not happen on its own. And it's worth putting a real emphasis on this: one thing we've learned in this moment of repeated deja vu, going back 20 years really, from our experience with social media through to now, is that these companies don't self-govern.

Taylor (37:31-37:31): Right.

Taylor Owen (37:32-37:39): We know that indisputably. So to think that AI is going to be different is delusional. They'll pursue profit, not the public interest.

Taylor (37:39-37:44): Of course. Because that's what they are. These are the largest companies in the world. Yeah, exactly. And the AI companies are even bigger than the last generation, right?

Taylor Owen (37:44-38:00): We're creating something new with the scale of these companies. And to think that their commercial incentives, and their broader long-term goals around AI, are not going to override these safety concerns is just naive to the nth degree.

Nate Erskine-Smith (38:00-38:38): But I think you make the right point, and it's useful to close on this: these goals of realizing the productivity possibilities and potential of AI, alongside AI safety, are not mutually exclusive or oppositional. You create a sandbox to play in, and companies will be more successful. If you have certainty in regulations, companies will be more successful. And if people feel safe using these tools – if I feel safe with my kids learning these tools growing up in their classrooms and everything else – adoption rates will soar. Absolutely. And then we'll all benefit.

Taylor Owen (38:38-38:43): They work in tandem, right? And I think you can't have one without the other, fundamentally.

Nate Erskine-Smith (38:45-38:49): Well, I hope I don't invite you back five years from now to have the same conversation.

Taylor Owen (38:49-38:58): Well, I hope you invite me back in five years, but I hope it's to think back on all the legislative successes of the previous five years. I mean, that'll be the moment.

Taylor (38:58-38:59): Sounds good. Thanks, David. Thanks.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca
Greg Lukianoff, president of the Foundation for Individual Rights and Expression, joins The Winston Marshall Show for a sweeping conversation on free speech, censorship, and the dangers of state overreach. Lukianoff warns that Britain is sliding into authoritarianism with the Online Safety Act, the arrest of comedian Graham Linehan for "offensive tweets," and thousands of citizens detained each year for speech crimes. He explains why Americans should be alarmed—not only because of cultural ties, but because UK and EU laws like the Digital Services Act now risk exporting censorship to the United States. They discuss the chilling effect of "non-crime hate incidents," the hypocrisy of politicians who ignore Islamist extremism while cracking down on online speech, and why Big Tech is tripping over itself to appease Brussels bureaucrats. Lukianoff contrasts this with the Trump administration's free speech battles on university campuses, where anti-Semitism and harassment rationales are being used to justify speech codes. They also dive into the deeper history—Lenin, Marx, communism, and why the West never reckoned with the crimes of socialism—leaving us vulnerable to a new wave of ideological totalitarianism. All this—Britain's speech crisis, EU overreach, Trump and the universities, and the global struggle to preserve freedom of expression in the 21st century...
To see more exclusive content and interviews consider subscribing to my substack here: https://www.winstonmarshall.co.uk/
FOLLOW ME ON SOCIAL MEDIA:
Substack: https://www.winstonmarshall.co.uk/
X: https://twitter.com/mrwinmarshall
Insta: https://www.instagram.com/winstonmarshall
Linktree: https://linktr.ee/winstonmarshall
Chapters:
00:00 - Introduction & Guest Welcome
02:49 - Graham Linehan's Arrest and Free Speech in the UK
05:08 - Arrests for Offensive Speech in the UK
07:34 - Non-Crime Hate Incidents and Social Attitudes
10:03 - Malcolm Gladwell, Trans Issues, and Chilling Effects
14:03 - The Online Safety Act and Censorship of Americans
17:04 - Why Americans Care About British Free Speech
19:14 - The EU Digital Services Act and Global Censorship
22:20 - Protecting Children Online: Policy and Parental Controls
28:41 - Free Speech in America: Trump, Wokeism, and Ideology
32:31 - Communism, Fascism, and Totalitarianism: Historical Parallels
36:30 - Reckoning with Socialism and Communism
39:30 - Karl Popper, Tolerance, and Hate Speech Laws
42:30 - Trump Administration, Campus Speech Codes, and Harassment
45:32 - Political Correctness and Speech Codes in Higher Ed
51:02 - Harvard, Civil Rights Act, and Federal Oversight
55:56 - The Right Way to Reform University Funding
58:52 - Free Speech for Pro-Palestinian Students and Deportation
1:10:19 - Why Harvard Ranks Lowest for Free Speech
1:13:21 - FIRE's New Books and Campus Free Speech Rankings
1:13:56 - Closing Remarks
Hosted on Acast. See acast.com/privacy for more information.
After the US State Department called on diplomats to take action against EU digital laws, US sources are now reporting on possible sanctions against EU politicians. The White House is keeping details to itself.
Anna Bicker, heise online editor-in-chief Dr. Volker Zota, and Malte Kirchner discuss, among other things, the following topics in this edition of the #heiseshow: - Klickibunti: How Windows 95 shaped the modern PC – Thirty years ago, Microsoft revolutionized the PC world with Windows 95 and laid the groundwork for modern computing. What made Windows 95 so groundbreaking? Which of its design principles still shape our computers today? And how did the operating system change the way we interact with computers? - That smells like trouble: US government wants to sanction EU regulators – The Trump administration is reportedly planning sanctions against EU officials over the Digital Services Act. How seriously should this threat be taken? What impact would a trade dispute between the US and EU have on tech regulation? And can the EU assert its digital sovereignty against US pressure? - Call chaos: Are automated emergency calls paralyzing dispatch centers? – Smart devices and automated systems are increasingly overloading emergency infrastructure with false alarms. How big is the problem of automated emergency calls really? What technical solutions exist to distinguish real emergencies from false alarms? And how can emergency dispatch centers cope with the growing number of connected devices? Also on board again: a nerd birthday, the WTF of the week, and tricky quiz questions.
President Donald Trump called out countries trying to regulate U.S. tech companies earlier this week, warning they could face new tariffs. The White House has struck a provisional trade deal with the European Union, but tensions remain over the bloc's sweeping tech laws, like the Digital Services Act, which requires platforms to moderate illegal content and disinformation.
In this urgent update, we break down the political firestorm as JD Vance and Donald Trump confront the UK and EU over digital privacy and censorship. We'll cover the forced withdrawal of the UK's demand for iPhone backdoors, the potential for unprecedented US sanctions and travel bans on foreign officials, and the escalating battle against what they call the "Orwellian" Online Safety Act and Digital Services Act. This episode dives into the real-world implications of these policies, the threat of central bank digital currencies (CBDCs), and the broader fight for free speech against global tech regulation. Support the show: Thanks for listening. For more commentary, join my newsletter! Click here to stay in touch. See you next time!
Basic freedoms are under threat in the Western world, as the EU's Digital Services Act and the UK's Online Safety Act raise concerns about Orwellian-style censorship—regulations that could extend across the Atlantic, impacting American citizens and businesses alike.Become a supporter of this podcast: https://www.spreaker.com/podcast/the-highwire-with-del-bigtree--3620606/support.
Elon Musk is taking his beef with Sam Altman AND Tim Apple to court. The Trump administration could sanction EU officials over the Digital Services Act. Spotify has added DMs. And Ben Thompson's deep analysis of the whole Intel situation. Links: Elon Musk's xAI sues Apple and OpenAI over AI competition, App Store rankings (Reuters) Exclusive: Trump administration weighs sanctions on officials implementing EU tech law, sources say (Reuters) Spotify is adding DMs (The Verge) Attorneys General To AI Chatbot Companies: You Will ‘Answer For It' If You Harm Children (404Media) Intel says Trump deal has risks for shareholders, international sales (CNBC) U.S. Intel (Stratechery) Learn more about your ad choices. Visit megaphone.fm/adchoices
President Trump dismisses Federal Reserve governor Lisa Cook over alleged mortgage loan improprieties but Cook says Trump has no grounds to remove her. In France, Prime Minister Francois Bayrou says he will put his budget cuts to a confidence vote in parliament. The three main opposition parties have already vowed to oppose his measures. And President Trump has threatened further tariffs on countries that tax or regulate American tech companies and his administration is also considering sanctions on EU or member state officials for implementing the Digital Services Act. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
As the U.S. and EU celebrate a historic trade agreement, a different restriction comes into focus: the EU's sweeping Digital Services Act. In this encore episode of Morning Wire, Jeremy Tedesco from the Alliance Defending Freedom joins us to explain the unprecedented foreign censorship pressure applied to U.S. tech companies and everyday Americans. Get the facts first on Morning Wire. - - - Wake up with new Morning Wire merch: https://bit.ly/4lIubt3 - - - Today's Sponsors: ZocDoc - Find and instantly book a top-rated doctor today. Visit https://Zocdoc.com/WIRE #sponsored - - - Privacy Policy: https://www.dailywire.com/privacy Learn more about your ad choices. Visit megaphone.fm/adchoices
This is a free preview of a paid episode. To hear more, visit www.racket.newsThe European Commission holds a seminar on implementation of its Digital Services Act, offering a disturbing window into a tightening speech landscapeNarrated by Jared Moore
Is free speech in global decline? A new survey suggests public support for free expression is dropping worldwide, with citizens in authoritarian countries like Venezuela and Hungary showing stronger commitment to free speech than many living in democracies.From the unfulfilled digital promises of the Arab Spring to Europe's controversial Digital Services Act, the Internet hasn't necessarily delivered the free speech revolution many predicted. Americans under 30 are less committed to free speech principles than previous generations, while both of the U.S.'s major political parties face accusations of using government power to control information.As AI reshapes how we communicate and governments worldwide rethink speech regulations, what does this mean for the future of human expression? Are we witnessing a fundamental shift in how societies value free speech, or simply recycling ancient debates in digital form?Evan is joined by Jacob Mchangama, Executive Director of The Future of Free Speech at Vanderbilt, and author of Free Speech: A History From Socrates to Social Media, and Ashkhen Kazaryan, Senior Legal Fellow at The Future of Free Speech. Previously, she was the lead for North and Latin America on the content regulation team at Meta.
In this special episode of "Ganz offen gesagt", host Stefan Lassnig takes listeners along on a press trip to the European Parliament in Brussels. Together with his guests Lena Schilling, Helmut Brandstätter, Hannes Heide and Lukas Mandl, he examines the acute dangers to press freedom in Europe – from the influence of big US tech corporations, to the growing propaganda channels of far-right groups, to the loss of independent media as a pillar of democracy.
Selected quotes from this episode:
• Lena Schilling: "But yes, in 2025 we need a European Union that safeguards press freedom in the nation states, because nationalist tendencies – unfortunately, in Hungary or Bulgaria, for example – are heading in a direction that gives me a serious stomachache."
• Helmut Brandstätter: "And when Mr Vance then comes to Europe and says that freedom of expression is restricted, that is plainly and simply false. When the American president no longer admits certain journalists, then freedom of expression is restricted. When he praises media, radio and television stations that clearly broadcast Russian propaganda, then freedom of expression is restricted."
• Hannes Heide: "And there is the Digital Services Act, which has set itself the goal – and I'll really put it this bluntly – of ending the Wild West on the internet."
• Lukas Mandl: "Tendencies that call press freedom into question or even want to hold it back, that try to suppress or even abolish free media, are clearly on the rise. You have to see this with a clear view of the situation worldwide. And in Europe, too, there are worrying tendencies."
Links for this episode:
World Press Freedom Day: EU declaration 2025
EU Media Freedom Act & protection for journalists
Digital Services Act: what has applied since 2024?
Podcast recommendation of the week: 2040 – So schön kann die Zukunft sein
We would be delighted if you supported "Ganz offen gesagt" in one of the following ways: become a supporter on Steady, buy a premium subscription on Apple, buy items in our fan shop, or advertise on our podcast. Please send feedback to redaktion@ganzoffengesagt.at
Our host, Corbin Barthold, heads over to So to Speak: The Free Speech Podcast. He talks with Nico Perrino and Ari Cohn of FIRE about the death of the AI moratorium, a bogus wrongful death lawsuit against Character.AI, the FTC's campaign of censorship against ad agencies, and the absurdities of Europe's Digital Services Act. Links: Shownotes, courtesy of So to Speak: The Free Speech Podcast
A lecture by criminal law scholar Johanna Rinceanu. Moderation: Katrin Ohlendorf ********** The internet has become the central arena for social and political discourse. That discourse is therefore subject to the structures and dynamics of the internet, above all those of the social media platforms. What does this mean for freedom of expression? Johanna Rinceanu is a criminal law scholar and senior researcher at the Max Planck Institute for the Study of Crime, Security and Law. She delivered her lecture, "Freedom of expression in the digital world?", on 13 May 2025 as part of the series "Die Verfassung der Freiheit – Demokratieprobleme der Gegenwart" at the Hamburg Institute for Social Research. **********In this lecture: 00:02:19 - Start of the lecture**********Sources from this episode: The Digital Services Act (DSA); Rinceanu, Johanna, & Stephenson, Randall (2022). Eine Diagnose digitaler Krankheiten. MaxPlanckForschung, 2022(3), 14-19.**********More on the topic at Deutschlandfunk Nova: AI in the democratic constitutional state: the Digital Services Act and robo-judges; TikTok: EU politics is still missing from the platform; Hong Kong: China could strip Hong Kong of its autonomy**********You can find the article accompanying this piece here.**********You can also follow us on these channels: TikTok and Instagram.
Luis Paulitsch is a lawyer, contemporary historian and media ethicist; he works for the Austrian Datum foundation – and he has written a book titled "Alternative Medien: Definition, Geschichte und Bedeutung"* Also covered: Markus Linden: Der Aufstieg der Mosaik-Rechten – Negative Öffentlichkeit und die prekäre Zukunft der Demokratie; LTO: BVerwG overturns Compact ban; the Digital Services Act; Reuters Digital News Report *Affiliate link: Anyone who […]
We're checking in on the latest news in tech and free speech. We cover the state AI regulation moratorium that failed in Congress, the ongoing Character A.I. lawsuit, the Federal Trade Commission's consent decree with Omnicom and Interpublic Group, the European Union's Digital Services Act, and what comes next after the Supreme Court's Free Speech Coalition v. Paxton decision. Guests: Ari Cohn — lead counsel for tech policy, FIRE Corbin Barthold — internet policy counsel, TechFreedom Timestamps: 00:00 Intro 02:38 State AI regulation moratorium fails in Congress 20:04 Character AI lawsuit 41:10 FTC, Omnicom x IPG merger, and Media Matters 56:09 Digital Services Act 01:02:43 FSC v. Paxton decision 01:10:49 Outro Enjoy listening to the podcast? Donate to FIRE today and get exclusive content like member webinars, special episodes, and more. If you became a FIRE Member through a donation to FIRE at thefire.org and would like access to Substack's paid subscriber podcast feed, please email sotospeak@thefire.org. Show notes: “The AI will see you now” Paul Sherman (2025) Megan Garcia, plaintiff, v. Character Technologies, Inc. et. al., defendants, United States District Court (2025) Proposed amicus brief in support of appeal - Garcia v. Character Technologies, Inc. FIRE (2025) “Amplification and its discontents: Why regulating the reach of online content is hard” Daphne Kelly (2021) “Omnicom Group/The Interpublic Group of Co.” FTC (2025)
As the EU's sweeping Digital Services Act takes effect, U.S. tech companies—and American free speech—face unprecedented foreign censorship pressure. Jeremy Tedesco from the Alliance Defending Freedom joins us to explain. Get the facts first with Morning Wire. - - - Privacy Policy: https://www.dailywire.com/privacy
Lectures by business law scholar Henrike Weiden and judge Sebastian Dötterl. Moderation: Katja Weber**********What can and should the Digital Services Act do? How can it become productive for us? Legal scholar Henrike Weiden explains. And: is it conceivable and sensible to work with AI systems in the courtroom? Judge Sebastian Dötterl poses and answers that question.**********Henrike Weiden teaches commercial private law and the law of digitalization at the Munich University of Applied Sciences. She chairs the advisory board of the Coordination Office for Digital Services at the Bundesnetzagentur. She delivered her lecture, "Platform economy and power: how effective is the Digital Services Act?", on 25 April 2025 at the International For..Net Symposium "KI im demokratischen Rechtsstaat". The conference is hosted annually by the TUM Center for Digital Public Services (CDPS) and the Bavarian Research Institute for Digital Transformation (bidt). After her, Sebastian Dötterl spoke; he is a judge at the Munich Higher Regional Court and teaches at Ludwig-Maximilians-Universität München, among other things on artificial intelligence and law. His lecture is titled "Judicial co-creation with AI: better adjudication or a constitutional nightmare?".**********In this lecture: 02:19 - Start of Henrike Weiden's lecture - The DSA in theory; 12:54 - The DSA in practice; 17:04 - How do we want to interpret the DSA?; 25:35 - Start of Sebastian Dötterl's lecture - In the courtroom with judge Regina Richtig; 28:31 - Starting point: strengths and weaknesses of humans and AI; 47:18 - How do we get to better law?**********Sources from this episode: Digital Services Coordinator (Bundesnetzagentur); complaints portal for consumers; how the foundational seminar "Künstliche Intelligenz und Recht" is meant to strengthen students' AI skills (interview)**********More on the topic at Deutschlandfunk Nova: Machine love: intimacy in the age of artificial intelligence; Examination law: why AI use should be allowed in exams; AI in medicine: human or machine**********You can find the article accompanying this piece here.**********You can also follow us on these channels: TikTok and Instagram.
Jakub Szymik, Founder of CEE Digital Democracy Watch, discusses the EU's ongoing initiatives to fight disinformation, regulate political advertising, and protect election integrity on social media. Jakub shares his insights on the Digital Services Act, the European Democracy Shield, and how these initiatives relate to digital political advertising on social media. We also discuss the latest Polish presidential election in terms of digital ad spending, Gen AI content, and political influencers.Links:CEE Digital Democracy Watch has just published a new report outlining how 60 stakeholders from CEE countries perceive the effectiveness of EU tech regulation and what challenges remain moving forward.Read more about CEE Digital Democracy Watch's proposed policy actions for the European Democracy Shield, including the disinformation monitoring service mentioned in the episode.
In this packed Friday edition of The Daily Herold, Jon Herold tackles the controversy around James Comey's now-deleted "8647" Instagram post, which the Trump administration is investigating as a potential call for assassination. Jon weighs the possible meanings, whether it was an actual coded threat or just another case of elite hubris, and reacts to Trump's own fiery response. Next, Herold dives into breaking developments within the FBI as Kash Patel reveals the agency is abandoning the Hoover Building and decentralizing operations across the U.S., a move Trump hinted at months earlier. He connects the dots between the shake-up and broader efforts to purge corruption, including the shutdown of the FBI's public corruption unit that once helped launch the Arctic Frost probe into Trump. Other highlights include UnitedHealth's dramatic collapse amid federal fraud investigations, Trump's post-Middle East warning to 150 countries about pending tariffs, and the ongoing GOP failure to pass the "Big Beautiful Bill." Jon also covers the EU's Digital Services Act and the Trump administration's pushback against global censorship laws. Blending breaking news, geopolitical analysis, and unfiltered commentary, this episode captures the chaos and clarity of a movement reclaiming power one headline at a time.
This eye-opening segment exposes a classified domestic terrorism plan from the Biden administration, recently declassified by Tulsi Gabbard. The discussion reveals how political and religious views—symbolized by hashtags like “2A”—were allegedly used to target Americans for censorship and debanking. With billionaires like Marc Andreessen sounding the alarm and parallels drawn to Europe's Digital Services Act, the hosts argue this is a creeping Orwellian crackdown on dissent, free speech, and financial independence in America.
A chilling look at the growing global machinery of censorship and financial control, this conversation dives into declassified plans from the Biden administration targeting Americans for their political and religious views—flagging symbols like "2A" and using government influence to de-bank and silence dissenters. Meanwhile, figures like Nina Jankowicz and EU institutions leverage laws like the Digital Services Act to suppress opposition speech overseas, with chilling implications for free expression even in the U.S. From Silicon Valley investors turning on Democrats to billionaires under fire, the message is clear: speak out, and risk being erased—digitally and financially.
In this fiery discussion, Tara and Lee unpack Nina Jankowicz's return to the spotlight, her role in European censorship efforts, and how the EU's Digital Services Act is being used to suppress dissenting political speech—even from Americans. They explore the broader implications of global censorship, the potential financial targeting of Elon Musk, and shocking revelations about domestic surveillance and debanking under the Biden administration. A deep dive into what they call a growing, globalized assault on free expression.
ChatGPT Search may soon fall under the EU's Digital Services Act, OpenAI and the Washington Post ink a deal to display Post content in ChatGPT, and Bluesky adopts a centralized verification system. Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters – without you, none of this would be possible.
Free speech is facing an unprecedented crackdown across the UK and Europe. The UK's Online Safety Act and the EU's sweeping Digital Services Act are driving small forums offline, pressuring U.S. platforms to silence users, and even tasking British pubs with policing conversations under new “hate speech” rules. Is this the end of open dialogue in the West?Become a supporter of this podcast: https://www.spreaker.com/podcast/the-highwire-with-del-bigtree--3620606/support.