Artificial intelligence is changing legal work – but how exactly? In this episode of "Sag doch mal", Dr. Sebastian Lach, partner in Compliance & Investigations at Hogan Lovells, discusses how technology and legal tech are changing the day-to-day work of white-collar criminal lawyers. In conversation with our host Janina, Sebastian explains why he originally had no intention of joining a large law firm, why the human side of compliance cases fascinates him in particular, and how he nevertheless ended up staying at Hogan Lovells for more than 20 years. He also offers insights into typical fraud allegations, new risks from cybercrime, and the growing importance of data analytics in investigations. At the heart of this episode is the question of what role artificial intelligence will play in law going forward. Sebastian explains why AI is especially helpful in analysing large volumes of documents, how a law-firm project grew into the legal tech company ELTEMATE, and why he is convinced that even in the digital age, humans remain indispensable in law.
AI is nothing new but everything changed when ChatGPT landed. Now that agentic AI is moving at speed through every economy, it’s incumbent on boards to ensure their organisations can, if not outpace change, then at least try to keep up with it. In this episode, we trace AI’s trajectory – from leaps in technology to regulatory dilemmas – and we flag up issues and actions for directors. In this episode of our UK Governance & Compliance mini-series, host Will Chalk is joined by two experts on AI governance, Ashurst’s Fiona Ghosh and Matt Worsfold. Drawing on Ashurst’s recent Board Priorities 2026, they peel back the layers of AI hype and market complexity to get to the heart of the matter: what do directors need to know and – most importantly – what do they need to do? In less than 25 minutes, the trio explain the story so far (AI governance in the past three years) and confront today’s great challenge: ensuring frameworks are genuinely embedded and operationalised. They also discuss contrasting regulatory approaches (including mixed fortunes for the EU’s ambitious AI Act and the UK’s more light-touch approach) and the common thread across jurisdictions: uncertainty. Our panel also gets practical – strongly recommending hands-on AI literacy for directors. As Fiona points out, "Until you are actually grappling with the topic yourself, you can't possibly understand some of the issues and some of the risks that are presented.” The discussion also explores the changing remit of non-executive directors and the widening breadth of skills they now require to be effective in their roles. And our experts highlight relevant guidance available for directors who want to get on the front foot. To listen to this and subscribe to future episodes in our governance mini-series, search for “Ashurst Legal Outlook” on Apple Podcasts, Spotify or your favourite podcast player. You can also find out more about the full range of Ashurst podcasts at ashurst.com/podcasts. 
To receive updates and alerts on the issues raised in this podcast mini-series, subscribe to Ashurst's regular Governance and Compliance Updates. The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to. Listeners should take legal advice before applying it to specific issues or transactions. See omnystudio.com/listener for privacy information.
Human Oversight, Transparency & What It Means for Global AI Regulation

In this episode, Todd (COO & CISO) and Nate (Director of Cybersecurity) discuss South Korea's new AI Basic Act and what it signals for AI regulation globally. They outline the law's focus on guardrails such as mandatory disclosure when AI is used, clear labeling of AI-generated content, human oversight in high-impact areas (like healthcare and critical infrastructure), and explainability around how AI-driven decisions are made. The conversation frames South Korea's approach as a middle ground between the EU's stricter governance (EU AI Act) and the US's current emphasis on innovation and some AI deregulation, while noting the US has limited election-related rules around deepfakes. They discuss real-world risks like AI errors in allergen data, misleading or harmful AI content, and business impacts including compliance pressure, the importance of human validation, and maintaining an inventory of where AI is used to address "shadow AI." They also note that South Korea's fines appear relatively small compared to EU-style penalties, and that broader impact will depend on enforcement and on how regulations evolve over the next several years.

00:00 AI Regulation Kickoff
00:26 South Korea AI Act Basics
01:17 Transparency and Labeling Rules
02:22 Elections and Deepfakes
03:21 Real World Risk Examples
04:39 Global Approaches EU US Korea
05:41 How Global Rules Shift
07:20 US Policy Tensions and Lawsuits
09:00 Business Compliance and Fines
10:10 Explainable AI in Banking
11:16 Regulated Marketing Disclosures
13:23 Shadow AI and Inventories
14:33 Explainable AI Expectations
14:46 Disclosure and AI Skepticism
15:50 AI in Customer Delivery
16:29 Code Risk and Accountability
17:18 Hallucinations in Compliance
18:03 Human in the Loop
18:59 Leadership and Validation
19:50 High Stakes Decisions
20:59 Regulation and Enforcement
21:34 Wrap Up and Next Episode
In this episode of EDB 5.0 we talk with Søren Ragaard from Milestone Systems, a global frontrunner in video management software and the responsible handling of video data. With Projekt Hafnia as the focal point, we dive into how AI models are trained on enormous amounts of video data, how anonymisation and privacy-by-design are built in from the start, and why compliance and the AI Act are not a brake on innovation but a prerequisite for scaling it. We also talk about advanced video AI models.

Shownotes:
00.00-12.29: Introduction to Søren, Milestone Systems and Projekt Hafnia
12.29-38.11: Projekt Hafnia, training on video material, anonymisation, the collaboration with NVIDIA, fine-tuning of models, security, compliance & the AI Act, and solutions for Projekt Hafnia
38.11-45.56: Infrastructure, digital sovereignty and the future

Host: Mathias Mengesha Emiliussen
OpenAI rebuked by the Canadian government: the perpetrator of the shooting in early February had prepared her attack with the help of ChatGPT, but the American company said nothing to the police. With Bruno Guglielminetti (Mon Carnet https://moncarnet.com/)

OpenAI facing tragedy: when AI detects but does not alert
After a shooting in British Columbia, a Wall Street Journal article revives an explosive question: what should a platform do when an exchange with an AI hints at violent intent? According to Bruno, conversations between the alleged perpetrator and OpenAI via ChatGPT were reportedly flagged internally but never passed on to the authorities, drawing the ire of Canada's AI minister Evan Solomon. The case also highlights automated monitoring in practice: detection, escalation to safety teams, then human review. And, in the background, the question that keeps coming back: at what threshold must a company contact the police?

Mistral accused of training its model on copyrighted works
In France, it is Mistral AI that finds itself in turmoil after a Mediapart investigation claiming that copyrighted content (books, songs, press articles) was used for training. Jérôme points out that "scraping" is widespread across the industry, but that Europe does not offer the same legal framework as the United States with its notion of "fair use". Underlying it all is a central tension: how can innovation be reconciled with respect for copyright, particularly under the AI Act? And above all, what rules, and what compensation, would allow AI to develop without an "open bar" on cultural content?

Anthropic accuses China's DeepSeek of plundering its LLM
Anthropic accuses the Chinese model DeepSeek of having harvested Claude outputs through mass-created accounts in order to train its own models by "distillation".
A widespread practice, but one that becomes explosive when carried out at scale and without authorisation. An ironic situation, since many players, including Anthropic, stand accused of similar wrongs.

Jean-Baptiste Kempf's (VLC) outburst
Another story making noise: Jean-Baptiste Kempf, co-founder of VLC / VideoLAN, published a long LinkedIn post threatening to leave France after an administrative blockage affecting his wife's application to the ENM entrance exam. The affair turned political when Justice Minister Gérald Darmanin responded, before direct contact appeared to unblock the situation. Worth noting: JB Kempf was recently a guest on the "Innovateurs" series of Monde Numérique, available here: Jean-Baptiste Kempf : de VLC à Kyber, portrait d'un innovateur éthique.

A touchscreen MacBook in 2026? The rumour that won't die
Bloomberg reports a possible touchscreen MacBook for this year. The rumour resurfaces regularly, though, and it is surprising the debate still exists given how common touchscreens are on PCs.

Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
The "Prawo dla Biznesu" podcast is an integral part of the magazine of the same name. The recordings are informational and educational in nature, as well as promotional.

Publisher: Kancelaria Prawna Kantorowski, Głąb i Wspólnicy Sp.j.
Editor-in-chief: attorney-at-law Piotr Kantorowski

The podcast contains: self-promotion by the publisher, advertisements for the products and services of other businesses, and educational and expert content on business law.
_______________________________________________________________________________
Learn about the legal risks of using AI in marketing: from the GDPR, through the AI Act, to copyright law. Practical tips for entrepreneurs.
The automatic podcast version of the #Techy newsletter of 2 March 2026.

AI stops being a "toy": discover how artificial intelligence is becoming critical infrastructure touching operations, CRM and the supply chain, finally leaving the test sandboxes.
The 650-billion war: we analyse Big Tech's massive investment in data centres, chips and energy to dominate the new geography of the cloud.
AI agents in your smartphone: from companies to everyday life, with the upcoming Pixel 10 and Samsung S26 set to integrate true assistants able to act autonomously.
Physical AI and humanoids: robots are no longer the future but the present. From the Tesla Optimus models to urban delivery robots, AI finally has a body.
Sovereignty and compliance: why, for European companies, the real challenge lies in data control and the AI Act, with "sovereign" solutions challenging the US giants.
Efficiency versus gigantism: the revolution of small models that beat the giants while consuming a fraction of the energy.
The Gucci case and AI in luxury: when the use of technology draws criticism and tests the artistic identity of high-fashion brands.
Because… it's episode 0x716!

Shameless plug
31 March to 2 April 2026 - Forum INCYBER - Europe 2026
14 to 17 April 2026 - Botconf 2026
20 to 22 April 2026 - ITSec (15% discount code: Seqcure15)
28 and 29 April 2026 - Cybereco Cyberconférence 2026
9 to 17 May 2026 - NorthSec 2026
3 to 5 June 2026 - SSTIC 2026
19 September 2026 - Bsides Montréal
1 to 3 December 2026 - Forum INCYBER - Canada 2026
24 and 25 February 2027 - SéQCure 2027

Notes

AI: the confrontation between the DoW and Anthropic
Anthropic digs in heels in dispute with Pentagon, source says
Anthropic to Pentagon: Robo-weapons could hurt US troops
Anthropic CEO says it cannot 'accede' to Pentagon's demands for AI use
Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight
Trump admin blacklists Anthropic; AI firm refuses Pentagon demands
Our agreement with the Department of War
Statement on the comments from Secretary of War Pete Hegseth - Anthropic

AI usage madness
Kevin Beaumont: "The incredible thing about thi…" - Cyberplace
Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It.
Kevin Beaumont: "Accenture are firing people wh…" - Cyberplace

The great replacement
IBM Shares Crater 13% After Anthropic Says Claude Code Can Tackle COBOL Modernization
Infosec community panics over Anthropic Claude Code Security
Long Before Tech CEOs Turned To Layoffs To Cover AI Expenses, There Was WorldCom
Microsoft execs worry AI will eat entry level coding jobs
AI gets good at finding bugs, not as good at fixing them
Rapid AI-driven development makes security unattainable
Claude Code Security Shows Promise, Not Perfection

OpenClaw
Google Antigravity falls to Earth under compute burden
Malicious OpenClaw Skills Used to Trick Users into Manual Password Entry for AMOS Infection
A Meta AI security researcher said an OpenClaw agent ran amok on her inbox
The OpenClaw Hype: Analysis of Chatter from Open-Source Deep and Dark Web
Sandboxes Won't Save You From OpenClaw
This AI Agent Is Designed to Not Go Rogue
AWS says 600+ FortiGate firewalls hit in AI-augmented attack
Why the EU's AI Act is about to become every enterprise's biggest compliance challenge
Detecting and preventing distillation attacks - Anthropic
Is AI Good for Democracy?
Identity-First AI Security: Why CISOs Must Add Intent to the Equation
Microsoft adds Copilot data controls to all storage locations
AI models suck slightly less at math than they did last year
Canadian government demands safety changes from OpenAI
WA drivers reeling after passengers caught out by AI-powered safety cameras

Sovereignty, or everything I can do on my own turf
Sovereignty in a System Prompt - POP RDI; RET;
Danish government agency to ditch Microsoft software in push for digital independence
US orders diplomats to fight data sovereignty initiatives

Privacy, or everything that should stay at home
Enough Is Enough
Five security lessons from the FBI's Washington Post raid
Banning children from VPNs and social media will erode adults' privacy
EU lawmakers propose that youth under 16 be barred from social media without parental consent
Instagram to start alerting parents when children search for terms relating to self-harm

Red, or everything that is broken
Ransomware gangs advancing Moscow's geopolitical aims, Romanian cyber chief warns
Android mental health apps with 14.7M installs filled with security flaws
Discord pushes back age verification debut to 2H'26
Ransomware payment rate drops to record low as attacks surge

Blue, or everything that improves our posture
Identity Prioritization isn't a Backlog Problem - It's a Risk Math Problem
Windows 11 KB5077241 update improves BitLocker, adds Sysmon tool
The Case for Why Better Breach Transparency Matters
Some Linux LTS Kernels Will Be Supported Even Longer, Announces Greg Kroah-Hartman

Contributors
Nicolas-Loïc Fortin

Credits
Editing by Intrasecure inc
Real premises by Intrasecure inc
(00:00:00) Apple's pendant and the end of screens
(00:00:13) Introduction to artificial intelligence
(00:04:24) The wait and upcoming events
(00:08:19) Francesco's experiences and devices
(00:12:51) The vision of the future
(00:16:55) Reflections on technology and privacy
(00:20:11) Towards 2030: what to expect
(00:25:42) The question of questions
(00:34:58) Conclusions and future prospects

In 2026 everything is artificial intelligence, but what is it really for? Starting from an answer Francesco gave in the last episode (will we end up using only the Gemini app and ignoring Siri?), I reflect on the future of the Apple ecosystem. The rumours speak of AirPods with micro-cameras, smart glasses and even an intelligent "pendant". And what if the iPhone and the Mac have their days numbered? The AI integration that feels clunky today is just the preparation for a post-screen world.

Visit Digiteee and discover all the technology news
Follow Digiteee on TikTok
Tell me what you think on Twitter, Threads, Telegram, Mastodon, BlueSky or Instagram
Mail jacoporeale@yahoo.it
Find out where to listen to the podcast and leave a review on Apple Podcasts or Spotify
Listen to An iPad guy on YouTube Podcast
Support the podcast
AI can be tricked into saying almost anything. That's what a BBC journalist recently discovered. He found an easy way to make AI say whatever he wanted. Are authorities doing enough to regulate AI? At the EU level, is the AI Act doing its job?

Production: by Europod, in co-production with the Sphera network.
Follow us on: LinkedIn, Instagram

Hosted on Acast. See acast.com/privacy for more information.
Last week, Irish consumers were warned that a range of household devices, including TV "dodgy boxes", could be secretly controlled by cybercriminals. Millions of these devices worldwide — including smart light bulbs, TVs, and other internet-connected gadgets — are susceptible to such attacks. Once inside your home network, attackers can monitor online activity and send fake messages that appear convincingly real. This is just one example of how quickly cyber threats are evolving. Critical infrastructures — such as hospitals, energy grids, and government services — are under constant attack. In response, researchers and policymakers are turning to artificial intelligence (AI) to strengthen digital defences. One of the most ambitious initiatives in this area is the EU-funded SYNAPSE project, a collaboration of 14 partners across eight countries. SYNAPSE aims to deliver an integrated risk and resilience management platform that provides holistic Situational Awareness (SA), cyber-incident response, and training and preparedness capabilities to safeguard critical environments. The platform is designed to detect cyber threats early, predict potential attacks, and guide security teams on how to respond effectively. To achieve this, SYNAPSE uses three powerful AI tools — explained here in simple terms. First, the platform learns what "normal" looks like within an organisation: how users log in, which files they access, and how devices communicate. When something deviates from this baseline, it raises an alert. Second, another component continuously scans global cybersecurity reports, databases, and open sources. It is like having an AI system that reads every cybersecurity article and threat bulletin worldwide and instantly identifies emerging risks relevant to your organisation. Third, the system connects different warning signals to forecast potential attacks before they fully unfold. 
It not only detects threats — it also recommends response strategies, helping security teams react faster and more effectively. These systems are currently being validated in real-world environments, including a hydrogen energy station in Germany, Cyprus's National Healthcare System, and a cyber-insurance company in Greece. But building powerful AI is only half the story. Whenever AI is deployed, strong ethical governance is essential. Eunomia Ltd, an Irish company, acts as the ethics and legal partner, ensuring that regulatory compliance, fundamental rights considerations, and trustworthy AI principles are embedded throughout the project lifecycle. Europe's new EU Artificial Intelligence Act (AI Act), which entered into force last year, classifies AI systems depending on the risk they pose to safety and fundamental rights and tailors the level of intervention to the level of risk. The most heavily regulated systems are high-risk AI systems: those that may significantly affect individuals' safety or fundamental rights. Typical examples include AI used in employment and HR decisions (e.g., CV screening), access to education (e.g., exam scoring), creditworthiness and access to essential services (e.g., loan approvals), migration and border management (e.g., risk profiling), and certain law-enforcement or critical infrastructure uses. This classification triggers strict requirements, including robust risk management systems, human oversight mechanisms, transparency and documentation, technical robustness and accuracy, and continuous monitoring and post-deployment evaluation. In parallel, the General Data Protection Regulation (GDPR) continues to regulate how personal data may be processed, including in the context of automated monitoring. While the AI Act does not apply directly to research-stage systems, responsible projects must anticipate these requirements.
For this reason, SYNAPSE is being evaluated based on the EU's Assessment List for Trustworthy Artificial Intelligence (ALTAI) — a voluntary but influential framework developed by the Euro...
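The first of the three AI tools described above, learning a baseline of "normal" activity and alerting on deviations, can be sketched in a few lines. This is an illustrative toy only: the function names and the z-score threshold are invented for this example, and nothing here reflects SYNAPSE's actual implementation, which the article does not detail.

```python
# Toy sketch of the "learn normal, flag deviations" idea: fit a baseline
# from historical activity, then alert when a new observation is far
# outside it. Names and threshold are hypothetical.
from statistics import mean, stdev

def build_baseline(samples):
    """Learn what "normal" looks like from historical activity counts,
    e.g. one user's daily login counts."""
    return {"mean": mean(samples), "stdev": stdev(samples)}

def is_anomalous(baseline, value, z_threshold=3.0):
    """Alert when a new observation deviates too far from the baseline."""
    if baseline["stdev"] == 0:
        return value != baseline["mean"]
    z = abs(value - baseline["mean"]) / baseline["stdev"]
    return z > z_threshold

# A user who normally logs in about 10 times a day:
baseline = build_baseline([10, 9, 11, 10, 10, 9, 11])
quiet_day = is_anomalous(baseline, 10)   # an ordinary day
burst = is_anomalous(baseline, 120)      # a sudden burst of activity
```

A real platform would correlate many such signals (logins, file access, device traffic) and feed alerts into the threat-intelligence and forecasting components rather than acting on a single metric.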
In this podcast you will hear the conversation between Jeroen Prinse (former CISO at the NCSC, now a strategic adviser) and Rob van der Veer (Chief AI Officer at SIG and AI standards maker at ISO and for the AI Act) during the webinar of 12 February 2026. We would love to hook AI up to everything and let it look at our data, if only we could trust it. Because: where does that data go, and how do we prevent AI from being manipulated? Rob and Jeroen discuss applying AI for security, programming with AI, and securing AI systems, including agentic AI. They draw on a combined 20 years of experience in security plus 34 years in AI. They give a clear overview, practical tips, and pointers to useful resources such as owaspai.org and ncsc.nl/artificial-intelligence.
This is a real estate podcast. But no part of the modern world is insulated from advances in AI, or from the negative consequences of AI. Today we're talking about the EU AI Act and what it can mean for real estate investors who own property in Europe, especially if you use "smart" building tech like security cameras, access control, tenant screening, or building automation. I've long held the opinion that European legislation can often serve as a canary in the coal mine for what might happen elsewhere in the world. So we are looking at Europe's AI Act in that context.

------------

**Real Estate Espresso Podcast:**
Spotify: [The Real Estate Espresso Podcast](https://open.spotify.com/show/3GvtwRmTq4r3es8cbw8jW0?si=c75ea506a6694ef1)
iTunes: [The Real Estate Espresso Podcast](https://podcasts.apple.com/ca/podcast/the-real-estate-espresso-podcast/id1340482613)
Website: [www.victorjm.com](http://www.victorjm.com)
LinkedIn: [Victor Menasce](http://www.linkedin.com/in/vmenasce)
YouTube: [The Real Estate Espresso Podcast](http://www.youtube.com/@victorjmenasce6734)
Facebook: [www.facebook.com/realestateespresso](http://www.facebook.com/realestateespresso)
Email: [podcast@victorjm.com](mailto:podcast@victorjm.com)

**Y Street Capital:**
Website: [www.ystreetcapital.com](http://www.ystreetcapital.com)
Facebook: [www.facebook.com/YStreetCapital](https://www.facebook.com/YStreetCapital)
Instagram: [@ystreetcapital](http://www.instagram.com/ystreetcapital)
The European Union has launched the cPAID project, short for Cloud-based Platform-agnostic Adversarial AI Defence Framework, to address one of today's most urgent digital challenges: securing Artificial Intelligence (AI). AI is now crucial to healthcare, transport, energy, and environmental monitoring, yet it faces new kinds of cyberattacks, such as poisoned training data, deceptive inputs, and model theft: risks that traditional security cannot stop. cPAID, a HORIZON project launched in 2024 with a lifespan of three years, brings together 17 organisations across Europe, including universities, research institutes, technology companies, and a hospital. Its goal is to create a framework that protects AI systems throughout their entire lifecycle, from data collection and training to deployment and real-time operation. The project is developing tools to test AI against simulated cyberattacks, monitor behaviour for abnormalities, and adapt dynamically to emerging risks. It extends established practices in software development by embedding privacy, security, and explainability into every stage of AI. Generative AI will be used to create realistic attack scenarios, strengthening defences before systems are exposed to real-world threats. To ensure its solutions are practical, cPAID will be validated in five pilot projects, each targeting a critical domain. In Energy, worker robots are deployed to monitor EV batteries. In Surveillance, 5G-enabled drones are used to detect wildfires in remote forest areas. In Health, efforts focus on securing remote, AI-assisted medical devices. In Transportation, pilots will test the robustness of object-detection systems for autonomous ships. Finally, in Cybersecurity Awareness, experts are trained to simulate real-world challenges. Each pilot provides a demanding environment to test the framework and demonstrate its value in critical sectors.
By making AI security a built-in feature rather than an afterthought, cPAID will help organisations innovate with confidence while protecting users. More than that, cPAID aspires to support Europe's digital autonomy and to prepare the ground for compliance with regulations such as the AI Act and the cybersecurity directives. For citizens, secure AI means safer services, stronger data protection, and greater trust in the AI systems shaping everyday life. More information is available here: https://cpaid.eu/
Artificial intelligence can be used throughout the entire application process, says business professor Claudia Bünte. In conversation with host Stella-Sophie Wojtczak, she outlines what companies in Germany must consider when using it. The discussion covers the risk of discrimination, legal risks, and the European Union's AI Act. Finally, Bünte names AI tools that can be used in recruiting and gives general tips on selecting and using them. _Note: This podcast is supported by a sponsor. All information about our advertising partners can be found [here](https://linktr.ee/t3npodcast)_.
«Einblick – Der Podcast», the podcast from the health management team at Berlin-Chemie offering a deeper yet concise look at the relevant events of the past week in the healthcare sector. Every Friday at 12 noon.

In this edition:
Coalition dispute over the multi-billion-euro gap in statutory health insurance (GKV): the SPD wants a new levy on rental and capital income
Hospital reform: the CDU/CSU and SPD reach an agreement; passage possible by Easter
Primary care physician system: Nina Warken threatens out-of-pocket payment for specialist visits made without a referral
Dental treatment remains covered by statutory health insurance: the minister firmly rejects proposals to the contrary
In this episode we talk about recruitment under the AI Act, visibility in Gen AI, what is known about ads in ChatGPT, the Super Bowl, and social media addiction.
Digital sovereignty: between power shifts and a reality check

Digital sovereignty was long a political buzzword. Now it is a strategic imperative. In this episode we discuss why the topic is rapidly gaining relevance, and why it is not an IT project but a question of power.

What this episode is about
We break the topic down into three levels:
1. The individual: dependence on platforms, cloud services, and AI tools. Why exiting is easy in theory but extremely hard in practice (network effects).
2. Companies: cloud-first, hyperscalers, AI integration. What happens when infrastructure is no longer neutral? Why digital sovereignty must become part of AI governance.
3. Europe as an economic area: regulation exists (the AI Act etc.), but regulation is no substitute for technological substance. Where do we really stand on cloud, chips, energy, and foundation models?

Chapters
00:00 Introduction to digital sovereignty
02:39 Why the topic is escalating right now
05:50 Regulation vs. bureaucratisation in Europe
08:40 Network effects, monopolisation, and dependency
11:32 European alternatives: wish or realistic option?
14:13 Economic sovereignty as a strategic necessity
17:27 Cloud infrastructure as a geopolitical lever
20:21 Concrete implications for companies
23:09 Future prospects for European AI models

Hosted on Acast. See acast.com/privacy for more information.
In the 351st episode of IMR, Marc talks with Sebastian Mauer, a lawyer specialising in data law at Graf von Westphalen in Frankfurt. Sebastian describes his rather accidental path into law school, his relaxed attitude towards grades and performance pressure, and his experiences doing his legal traineeship during the Corona pandemic. He talks about his first encounters with data protection, his deliberate decision not to pursue a doctorate, and his move from a large law firm to an environment with more predictable working hours. On the substantive side, the conversation covers data law, data protection, IT law, and the practical effects of the GDPR and the AI Act. Sebastian explains when companies fall under the AI Act, what obligations apply even when using external AI systems, and why training and transparency play central roles. He also gives insights into working as an external data protection officer and describes typical conflicts around access requests and strategic litigation. How do you get through law school and the traineeship without stress? What does the AI Act mean for companies in concrete terms? And how practice-oriented is data protection work really? You will find answers to these and many other questions in this episode of IMR. Enjoy!
Dr. Gabriela Zanfir-Fortuna is a globally recognized data protection law expert with 15 years of experience in the field split between Europe and the U.S., spanning academia, public service, consulting, and policy. She is currently Vice President for Global Privacy at the Future of Privacy Forum, a global non-profit headquartered in Washington DC, coordinating FPF's offices and partners in Brussels, Tel Aviv, Singapore, Nairobi, and New Delhi, and leading the work on global privacy and data protection developments related to new technologies, including AI. She is also a founding Advisory Board Member of Women in AI Governance and an affiliated researcher at the LSTS Center of Vrije Universiteit Brussel.

Dr. Zanfir-Fortuna worked for the European Data Protection Supervisor and is a member of the Reference Panel of the Global Privacy Assembly, the international organization uniting data protection authorities around the world, as well as a member of the T20 engagement group of the G20 under Brazil's Presidency in 2024. She was elected to the Executive Committee of ACM's Fairness, Accountability, and Transparency (FAccT) Conference (2021-2022). Her scholarship on the GDPR is referenced by the Court of Justice of the EU, and in 2023 she won the Stefano Rodotà Award of the Council of Europe for the paper "The Thin Red Line: Refocusing Data Protection Law on Automated-Decision-Making", alongside her co-authors. Dr. Zanfir-Fortuna holds a PhD in Law, with a thesis on the rights of the data subject under EU data protection law, and an LLM in Human Rights (University of Craiova).

With our guest, here for a third time, we have gone through the logic of the Digital Omnibus package aiming to reform a cluster of important EU regulations, the "birth defects" of the AI Act, the importance of South Korea in the global data protection panorama, and the potential consequences of the recent CJEU case, Russmedia.

References:
* Gabriela Zanfir-Fortuna at the Future of Privacy Forum
* Gabriela Zanfir-Fortuna on LinkedIn
* Gabriela Zanfir-Fortuna: A world tour of data protection laws (Masters of Privacy, April 2021)
* Data Protection vs. Privacy and Data Privacy: a January 28th conundrum (with Gabriela Zanfir-Fortuna, Masters of Privacy, 2025)
* X v Russmedia Digital SRL (CJEU, December 2, 2025)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe
Artificial intelligence is transforming the way societies interact with information, offering new opportunities for innovation while raising important questions about trust and accountability. In recent years, the EU has taken significant steps to ensure that AI development is human-centric and trustworthy, notably through the AI Act and complementary initiatives to support adoption and compliance.

Building on these foundations, the AI Continent Action Plan and Apply AI Strategy, launched in 2025, aim to make Europe a global leader in AI. These initiatives seek to boost research and industrial capacity, strengthen competitiveness, and ensure that AI technologies uphold fundamental rights and democratic principles. They include measures to support AI adoption across sectors, enhance skills through the AI Skills Academy, and facilitate compliance with the AI Act via dedicated services.

At the same time, large-scale disinformation campaigns remain a major challenge for Europe. The rapid spread of false narratives online threatens media freedom and democratic resilience, requiring timely detection and effective countermeasures. AI-based tools, combined with human expertise, can play a role in monitoring and analysing vast volumes of content across platforms and languages, supporting fact-checkers and media professionals in identifying emerging risks.

Listen to this Euractiv Hybrid Conference, supported by the Horizon Europe project AI4TRUST, to discuss how AI can strengthen Europe's response to disinformation while safeguarding media freedom and trust.
Questions to be addressed include:
- How can AI-based tools complement human fact-checking and improve detection of disinformation across platforms and languages?
- What policy frameworks are needed to ensure transparency, accountability, and ethical use of AI in combating disinformation?
- How can the EU foster collaboration between researchers, media professionals, and policymakers to build a resilient information ecosystem?
- What role should European initiatives such as the AI Act, Democracy Action Plan, and European Media Freedom Act play in supporting these efforts?

This project has received funding from the European Union's Horizon Europe Programme under Grant Agreement no 101070190. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
In episode 122 of EDB 5.0, we are joined by Kristian Storgaard, a seasoned lawyer with more than 20 years of experience as an attorney at Kromann Reumert. Throughout his career, Kristian has specialized in the intersection of technology and law, working among other things on ensuring compliance with intellectual property rights, the GDPR and, most recently, the EU's AI Act. In this episode we dive into the AI Act and look at how companies can navigate the new regulation. Kristian shares his expertise and offers concrete suggestions for ensuring responsible use of AI going forward.

Shownotes:
00.00-10.46: Introduction to Kristian, his time at Kromann Reumert, and a comparison of the GDPR and the AI Act
10.46-49.49: What is the AI Act, and how should you approach it?
49.49-1.01.16: AI's entry into the legal profession

Host: Mathias Mengesha Emiliussen
This episode focuses on a currently much-debated EU initiative, the so-called "digital omnibus". The stated goal of this legislative initiative by the EU Commission is to simplify central provisions of digital law, in particular the GDPR and the AI Act, and to reduce burdens on companies. Put differently: data protection is to be deregulated, that is, watered down at the expense of consumers. What sounds like cutting red tape turns out, on closer inspection, to be a far-reaching intervention with serious consequences for data protection. Together with our guest, attorney Elisabeth Niekrenz, we therefore examine what the digital omnibus actually is and which areas of digital law are to be changed.

Elisabeth Niekrenz (LinkedIn, Bluesky) is an attorney at Spirit Legal in Leipzig, specializing in data protection law. Since 2021 she has, in particular, litigated for damages in cases of data protection violations and campaigned against excessive online tracking and biometric exam proctoring. She advises companies and public bodies on designing processes in compliance with data protection law. She previously served as a policy advisor at Digitale Gesellschaft e. V., of which she remains a member. Elisabeth Niekrenz studied law in Leipzig with interdisciplinary focus areas and is a jury member of the BigBrotherAwards.

Relaxed documentation obligations
We start with the introduction of the new company category "Small & Midcaps" and the question of which companies would no longer be required to maintain records of processing activities, and whether this relief might even turn out to be a disadvantage for smaller companies.

Narrowing the definition of personal data
Another proposal is to classify pseudonymous data more often as anonymous data in the future. This could benefit data-driven business models, but at the same time significantly weaken the protection of data subjects.

AI training with personal data
Also central is the planned relaxation of the rules for training artificial intelligence. In the future, the processing of personal data, including special categories such as health, religious, or sexual data, would be permitted without the consent of the persons concerned, on the basis of legitimate interests.

Cookies, banners, and transparency
We also assess the proposed changes to cookie regulation. The decisive question: will the digital omnibus actually lead to fewer cookie banners, or merely to advantages for the marketing industry?

Conclusion and outlook
The verdict of this episode is clear: the digital omnibus promises simplification but brings considerable legal uncertainty. Instead of clear and consistent rules, it threatens new demarcation questions, shifts at the expense of data protection, and a tangible weakening of proven protection mechanisms in favor of tech corporations.

Timestamps
00:00:00 – Introduction of the topic and welcome to our guest
00:02:30 – What is the digital omnibus, and which areas of digital law are to be changed?
00:08:30 – Who falls under the new company category "Small & Midcaps", and who would be exempt from the obligation to maintain records of processing activities?
00:15:30 – What is a "high risk" within the meaning of the GDPR and the AI Act?
00:27:00 – Transition and assessment of the planned reform approaches
00:28:00 – Changes to the definition of personal data: no more personal reference for pseudonymous data?
00:41:00 – Disadvantages for consumers, advantages for the marketing industry? Effects of narrowing the concept of personal data
00:49:00 – AI training with personal data: processing without consent on the basis of legitimate interests, including special categories
01:01:00 – New rules for setting cookies: will there be fewer cookie banners?
01:07:00 – Conclusion: the digital omnibus as a "traffic accident with a drunk driver"
01:17:00 – Current status of the digital omnibus: when can we expect the changes?

Links and rulings on the topic
Throwing your rights under the Omnibus – How the EU's reform agenda threatens to erase a decade of digital rights – talk by Thomas Lohninger and Ralf Bendrath at 39C3.
CJEU, 04.09.2025, C-413/23 P – on the personal nature of pseudonymous data.
Meta darf Nutzerdaten für das KI-Training verwenden – article by David Wasilewski on OLG Köln (decision of 23.05.2025, case no. 15 UKl. 2/25) at LTO.

The post EU-Pläne: KI statt Datenschutz? – Rechtsbelehrung 144 first appeared on Rechtsbelehrung.
This interview was recorded for the GOTO Book Club.
http://gotopia.tech/bookclub
Check out more here: https://gotopia.tech/episodes/409

Dr. Larysa Visengeriyeva - Author of "The AI Engineer's Guide to Surviving the EU AI Act" & Independent Consultant for EU AI Act Engineering
Barbara Lampl - Behavioral Mathematician at empathic business by Barbara Lampl

RESOURCES
Larysa
https://x.com/visenger
https://bsky.app/profile/visenger.bsky.social
https://github.com/visenger
https://www.linkedin.com/in/larysavisenger
Barbara
https://x.com/BarbaraLampl
https://www.linkedin.com/in/barbaralampl
https://barbara-lampl.tumblr.com
Links
https://ml-ops.org
https://github.com/visenger/awesome-mlops
https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
https://machinelearningcanvas.com
https://louisdorard.gumroad.com/l/mlcanvas
https://ml-ops.org/content/crisp-ml

DESCRIPTION
Barbara Lampl interviews Larysa Visengeriyeva, software engineer and "godmother of MLOps", about her new book on AI engineering and compliance. What starts as a discussion about the EU AI Act quickly reveals a deeper truth: the real challenge isn't regulatory compliance - it's fundamental engineering practices.
Larysa argues that quality AI systems require robust MLOps, comprehensive documentation, and proper data governance, whether regulation mandates it or not. Drawing on frameworks like CRISP-ML and the Machine Learning Canvas, the book provides practical checklists and methodologies for taking AI projects from prototype to production.
Written partially in Ukraine during wartime, this "battle-tested" guide addresses the gap between technical and non-technical stakeholders, offering a common language for building sustainable AI systems.

RECOMMENDED BOOKS
Larysa Visengeriyeva • The AI Engineer's Guide to Surviving the EU AI Act • https://amzn.to/42SKOuU
Lakshmanan, Robinson & Munn • Machine Learning Design Patterns • https://amzn.to/4ox4Eos
Phil Winder • Reinforcement Learning • https://amzn.to/3t1S1VZ
Diana Montalion • Learning Systems Thinking • https://amzn.to/3ZpycdJ
Bernd Rücker • Practical Process Automation • https://amzn.to/3cs3BSH
Lauren Maffeo • Designing Data Governance from the Ground Up • https://amzn.to/3QhIlnV
Katharine Jarmul • Practical Data Privacy • https://amzn.to/46XPrns
Zhamak Dehghani • Data Mesh • https://amzn.to/3tTCwAC
Kate Stanley & Mickael Maison • Kafka Connect • https://amzn.to/40Jq5Jz

Bluesky
Twitter
Instagram
LinkedIn
Facebook

CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks:
https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech

SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
The "Prawo dla Biznesu" podcast is an integral part of the magazine of the same name. The recordings are informational and educational in nature, as well as promotional.
Publisher: Kancelaria Prawna Kantorowski, Głąb i Wspólnicy Sp.j.
Editor-in-chief: attorney-at-law Piotr Kantorowski
The podcast contains:
- self-promotion by the publisher,
- advertisements for products and services of other businesses,
- educational and expert content on business law.
_______________________________________________________________________________
Learn about the most important legal changes for businesses in 2026, from KSeF and e-deliveries to the pay transparency directive, the AI Act, and new rules for platform work.
(00:00:00) 10 Predictions for 2026 (and taxed underwear)
(00:00:29) The start of 2026
(00:00:40) Tech predictions for 2026
(00:02:10) The launch of GTA 6
(00:09:58) The gaming world in 2025
(00:13:42) New technologies from Valve
(00:19:54) Innovations in the smartphone sector
(00:24:57) The future of foldable smartphones
(00:27:51) Satellite connectivity in 2026
(00:30:34) Potential changes in cellular networks
(00:35:17) Digital identity and privacy
(00:41:09) Artificial intelligence in everyday life
(00:48:32) AI regulation in Europe
(01:03:31) Smart glasses of the future
(01:09:28) Conclusion and goodbyes

The last episode of 2025 is an intense face-to-face with Francesco Graziani. Together we comment on the 10 AI-generated tech predictions for 2026: from the cultural dominance of GTA 6 to Valve's new hardware offensive, via the death of "dead zones" thanks to native satellite connectivity. There is no shortage of controversy: the AI Act, European customs duties (explained with a curious example involving used underwear), and the dream of a secure digital identity.

Visit Digiteee and find all the tech news
Follow Digiteee on TikTok
Share your thoughts on Twitter, Threads, Telegram, Mastodon, BlueSky, or Instagram.
Mail: jacoporeale@yahoo.it
Find out where to listen to the podcast and leave a review on Apple Podcasts or Spotify.
Listen to An iPad guy on YouTube Podcasts.
Support the podcast
The European medical device sector is a vital pillar of healthcare innovation, employing over 930,000 people and representing a market of approximately €170 billion. However, since the implementation of the MDR and IVDR, manufacturers, especially SMEs, have faced increasing regulatory complexity, long certification timelines, and reduced market predictability.

In this podcast episode, we explore the 2025 EU proposal designed to address these challenges by simplifying regulatory processes while preserving patient safety.

The discussion covers:
- Key shortcomings of the current MDR/IVDR framework
- The impact on innovation, availability of devices, and SMEs
- The eight reform pillars, including proportionality, digitalisation, international cooperation, and improved coordination with the EMA and Notified Bodies
- How upcoming EU legislation (AI Act, Cybersecurity Act, Biotech Act) will interact with medical device regulations

This episode provides practical insights for manufacturers, regulatory professionals, and policymakers seeking to anticipate regulatory changes and adapt their strategies accordingly.

Who is Monir El Azzouzi?
Monir El Azzouzi is a Medical Device Expert specializing in Quality and Regulatory Affairs. After working for many years with big healthcare companies, particularly Johnson & Johnson, he decided to create EasyMedicalDevice.com to help people better understand medical device regulations worldwide. He has since founded the consulting firm Easy Medical Device GmbH and developed many ways to deliver knowledge through videos, podcasts, online courses, and more. His company also acts as Authorized Representative for the EU, UK, and Switzerland.
Easy Medical Device has become a one-stop shop for medical device manufacturers that need support on Quality and Regulatory Affairs.

Links
Adam on LinkedIn: https://www.linkedin.com/in/adam-isaacs-rae/

Social media to follow Monir El Azzouzi
LinkedIn: https://linkedin.com/in/melazzouzi
Twitter: https://twitter.com/elazzouzim
Pinterest: https://www.pinterest.com/easymedicaldevice
Instagram: https://www.instagram.com/easymedicaldevice
Which parts of the AI Act, NIS2, DORA, and DSA overlap, so you can cover more with less? What basics raise your baseline fast: central logs, backups, risk assessments, and human-in-the-loop governance? Could a simple mailing list make incident comms painless?

We are always happy to answer any questions, hear suggestions for new episodes, or hear from you, our listeners.
DevSecOps Talks podcast LinkedIn page
DevSecOps Talks podcast website
DevSecOps Talks podcast YouTube channel
On this episode of Reaganism, Roger sits down with Chairman John Moolenaar of the Select Committee on the Chinese Communist Party to discuss the strategic competition between the United States and the CCP, focusing on the implications of China's actions on national security and economic interests. Chairman Moolenaar highlights the bipartisan efforts of the Committee to address these challenges, emphasizing the importance of maintaining technological and economic advantages over China. They also explore the GAIN Act, which aims to prioritize American access to advanced AI chips, ensuring that the U.S. remains a leader in innovation while safeguarding national security. The discussion underscores the ideological differences between the U.S. and China, particularly in terms of individual freedoms and government control, and the need for policies that reflect American values.
On November 19, the European Commission unveiled two major omnibus packages as part of its European Data Union Strategy. One package proposes several changes to the EU General Data Protection Regulation, while the other proposes significant changes to the recently minted EU AI Act, including a proposed delay to the regulation of so-called high-risk AI systems. Laura Caroli was a lead negotiator and policy advisor to AI Act co-rapporteur Brando Benifei and was immersed in the high-stakes negotiations leading to the AI regulation. She is also a former senior fellow at the Center for Strategic and International Studies, but recently moved back to Brussels during a time of major complexity in the EU. IAPP Editorial Director Jedidiah Bracy caught up with Caroli to discuss her views on the proposed changes to the AI Act in the omnibus package and how she thinks the negotiations will play out. Here's what she had to say.
In this episode of French Insider, Melissa Hughes, a senior associate in Sheppard Mullin's Labor and Employment Practice Group and member of the French Desk, joins us to explore the use of AI for automated decision-making throughout the employment life cycle, including the associated risks and how they can be mitigated.

What we discussed in this episode:
- How does AI interact with the workplace?
- From an employment perspective, where does AI carry the most risk?
- Why is the use of AI in employment decisions particularly concerning?
- How can employers mitigate the risks associated with AI tools?
- What should employers consider when selecting an AI tool?
- Does the U.S. have any AI regulations comparable to the E.U.'s AI Act?
- What U.S. trends should employers be aware of?
- What advice would you give companies as they roll out AI tools or increase the use of AI to do business?

Disclaimer: This episode was recorded prior to the signing of Executive Order 14365, "Ensuring a National Policy Framework for Artificial Intelligence." As a result, some discussions may not reflect the policies or guidance established by this order.

About Melissa Hughes
As a senior associate in the Labor and Employment Practice Group in Sheppard Mullin's San Francisco office, Melissa Hughes defends and counsels employers in a range of disputes involving harassment, discrimination, retaliation, failure to accommodate, wrongful termination, wage and hour claims, PAGA actions, and class actions. She also has traditional labor law experience, including arbitration, unfair labor practice proceedings, and litigation under the National Labor Relations Act. Melissa represents employers of all sizes in state and federal courts, administrative proceedings, and every phase of litigation, from pre-suit strategy through post-trial motions.
She also serves as a trusted advisor on day-to-day workplace issues, including disability accommodations, leaves of absence, performance management, workplace investigations, and compliance with California's complex wage and hour laws. As a member of Sheppard Mullin's French Desk, Melissa advises French companies and groups operating in or expanding to the U.S. on a full range of employment and personnel matters in both French and English.

About Inès Briand
Inès Briand is an associate in Sheppard Mullin's Corporate Practice Group and French Desk Team in the firm's Brussels office, where her practice primarily focuses on domestic and cross-border mergers and acquisitions (with special emphasis on transactions involving French companies). She also has significant experience in general corporate matters and compliance for foreign companies established in the United States. As a member of the firm's French Desk, Inès has advised companies and private equity funds in both the United States and Europe on mergers and acquisitions, commercial contracts, and general corporate matters, including the expansion of French companies in the United States.

Contact Information
Melissa Hughes
Inès Briand

Thank you for listening! Don't forget to SUBSCRIBE to the show to receive every new episode delivered straight to your podcast player every week. If you enjoyed this episode, please help us get the word out about this podcast. Rate and review this show on Apple Podcasts, Deezer, Amazon Music, or Spotify. It helps other listeners find this show.

This podcast is for informational and educational purposes only. It is not to be construed as legal advice specific to your circumstances. If you need help with any legal matter, be sure to consult with an attorney regarding your specific needs.
Want a quick map of EU compliance for engineers? How do you classify AI by risk and tell users when AI is used? When do you send a 24-hour heads-up and a one-month report after an incident? Does NIS2 make your board liable and your logs mandatory?

We are always happy to answer any questions, hear suggestions for new episodes, or hear from you, our listeners.
DevSecOps Talks podcast LinkedIn page
DevSecOps Talks podcast website
DevSecOps Talks podcast YouTube channel
After years of working with LLMs, agents, and autonomous workflows, the conclusion is strategic: chat is dead as a corporate interface. According to the document's analysis (see pages 2-5):
✔ Conversation is inefficient
✔ Closed n8n agents make decisions without human intervention
✔ The attack surface is expanding and demands compliance with the AI Act
✔ 2026 ushers in the concept of Liquid Operating Systems

Organizations that keep using traditional chatbots will fall outside the new European operating standard. Want to prepare your company for this transition? I am building SRIA solutions based on secure agentic AI.
Our 226th episode with a summary and discussion of last week's big AI news!
Recorded on 11/24/2025
Hosted by Andrey Kurenkov and co-hosted by Michelle Lee
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
- New AI model releases include Google's Gemini 3 Pro, Anthropic's Opus 4.5, and OpenAI's GPT-5.1, each showcasing significant advancements in AI capabilities and applications.
- Robotics innovations feature Sunday Robotics' new robot Memo and a $600M funding round for Physical Intelligence, highlighting growth and investment in the robotics sector.
- AI safety and policy updates include Europe's proposed changes to GDPR and AI Act regulations, and reports of AI-assisted cyber espionage by a Chinese state-sponsored group.
- AI-generated content and legal highlights involve a settlement between Warner Music Group and AI music platform Udio, reflecting evolving dynamics in the field of synthetic media.

Timestamps:
(00:00:10) Intro / Banter
(00:01:32) News Preview
(00:02:10) Response to listener comments

Tools & Apps
(00:02:34) Google launches Gemini 3 with new coding app and record benchmark scores | TechCrunch
(00:05:49) Google launches Nano Banana Pro powered by Gemini 3
(00:10:55) Anthropic releases Opus 4.5 with new Chrome and Excel integrations | TechCrunch
(00:15:34) OpenAI releases GPT-5.1-Codex-Max to handle engineering tasks that span twenty-four hours
(00:18:26) ChatGPT launches group chats globally | TechCrunch
(00:20:33) Grok Claims Elon Musk Is More Athletic Than LeBron James — and the World's Greatest Lover

Applications & Business
(00:24:03) What AI bubble? Nvidia's strong earnings signal there's more room to grow
(00:26:26) Alphabet stock surges on Gemini 3 AI model optimism
(00:28:09) Sunday Robotics emerges from stealth with launch of 'Memo' humanoid house chores robot
(00:32:30) Robotics Startup Physical Intelligence Valued at $5.6 Billion in New Funding - Bloomberg
(00:34:22) Waymo permitted areas expanded by California DMV - CBS Los Angeles - Waymo enters 3 more cities: Minneapolis, New Orleans, and Tampa | TechCrunch

Projects & Open Source
(00:37:00) Meta AI Releases Segment Anything Model 3 (SAM 3) for Promptable Concept Segmentation in Images and Videos - MarkTechPost
(00:40:18) [2511.16624] SAM 3D: 3Dfy Anything in Images
(00:42:51) [2511.13998] LoCoBench-Agent: An Interactive Benchmark for LLM Agents in Long-Context Software Engineering

Research & Advancements
(00:45:10) [2511.08544] LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics
(00:50:08) [2511.13720] Back to Basics: Let Denoising Generative Models Denoise

Policy & Safety
(00:52:08) Europe is scaling back its landmark privacy and AI laws | The Verge
(00:54:13) From shortcuts to sabotage: natural emergent misalignment from reward hacking
(00:58:24) [2511.15304] Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models
(01:01:43) Disrupting the first reported AI-orchestrated cyber espionage campaign
(01:04:36) OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist | WIRED

Synthetic Media & Art
(01:07:02) Warner Music Group Settles AI Lawsuit With Udio

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Finding it difficult to navigate the changing landscape of data protection? In this episode of the DMI podcast, host Will Francis speaks with Steven Roberts, Group Head of Marketing at Griffith College, Chartered Director, certified Data Protection Officer, and long-time marketing leader. Steven demystifies GDPR, AI governance, and the rapidly evolving regulatory environment that marketers must now navigate.

Steven explains how GDPR enforcement has matured, why AI has created a new layer of complexity, and how businesses can balance innovation with compliance. He breaks down the EU AI Act, its risk-based structure, and its implications for organizations inside and outside the EU. Steven also shares practical guidance for building internal AI policies, tackling "shadow AI," reducing data breach risks, and supporting teams with training and clear governance. For an even deeper look into how businesses can ensure data protection compliance, check out Steven's book, Data Protection for Business: Compliance, Governance, Reputation and Trust.

Steven's Top 3 Tips
1. Build data protection into projects from the start, using tools like Data Protection Impact Assessments to uncover risks early.
2. Invest in regular staff training to avoid common mistakes caused by human error.
3. Balance compliance with business performance by setting clear policies, understanding your risk appetite, and iterating your AI governance over time.

The Ahead of the Game podcast is brought to you by the Digital Marketing Institute and is available on YouTube, Apple Podcasts, Spotify, and all other podcast platforms. If you enjoyed this episode, please leave a review so others can find us. If you have other feedback for us or would like to be a guest on the show, email the podcast team!

Timestamps
01:29 – AI's impact on GDPR & the explosion of new global privacy laws
03:26 – Is GDPR the global gold standard?
05:04 – GDPR enforcement today: Who gets fined and why
07:09 – Cultural attitudes toward data: EU vs. US
08:51 – The EU AI Act explained: Risk tiers, guardrails & human oversight
10:48 – What businesses must do: DPIAs, fundamental rights assessments & more
13:38 – Shadow AI, risk appetite & internal governance challenges
17:10 – Should you upload company data to ChatGPT?
20:40 – How the AI Act affects countries outside the EU
24:47 – Will privacy improve over time?
28:45 – What teams can do now: Tools, processes & data audits
33:49 – Data enrichment tools: Targeting vs. legality
36:47 – Will anyone actually check your data practices?
40:06 – Steven's top tips for navigating GDPR & AI
echtgeld.tv - investing, stock markets, retirement planning, stocks, funds, ETFs
Robots, AI, smart food, and demographics: is Germany on the verge of a new economic boom? In this episode, Tobias Kramer talks with futurist Lars Thomsen about six megatrends that will massively transform Germany's economy in the coming years, and the opportunities they open up for companies, investors, and the country as a business location.
What if the rules we write today could make tomorrow's technology more human, safer, and genuinely worth wanting? We sit down with Anna Aseeva, a legal strategist working at the intersection of sustainability, intellectual property, and AI, to map a smarter path for digital innovation that starts with design and ends with systems people trust.We dig into the significant shifts shaping tech governance right now. Anna explains a practical model for aligning IP and sustainability: protect early to nurture fragile ideas through sandboxes and investment, then open up mature solutions with licensing that shares benefits and safeguards intent. This conversation is equally about culture and code. We talk about legal design that reads like plain talk, citizen participation that turns evidence into policy input, and civic apps that could let communities steer platform rules. We cover digital sustainability beyond emissions—lighter websites, greener hosting, and product decisions that fight digital obesity and planned obsolescence. And we don't shy away from the realities of AI: hallucinated footnotes, invented coauthors, and the simple fixes that come from a careful human in the loop.If you're a builder or curious listener who wants technology to serve people and planet, you'll find clear takeaways: design for sustainability from day one, keep humans in charge of final decisions, protect what's fragile, open what's ready, and invite people into the process. Subscribe, share with a friend, and tell us: where should human review be non-negotiable?Send us a textCheck out "Protection for the Inventive Mind" – available now on Amazon in print and Kindle formats. The views and opinions expressed (by the host and guest(s)) in this podcast are strictly their own and do not necessarily reflect the official policy or position of the entities with which they may be affiliated. 
This podcast should in no way be construed as promoting or criticizing any particular government policy, institutional position, private interest or commercial entity. Any content provided is for informational and educational purposes only.
Each week, a cross-perspective look at digital news, between Paris and Montreal. With Bruno Guglielminetti (Mon Carnet).

OVHcloud at the summit
The OVHcloud Summit 2025 took place in Paris at the Maison de la Mutualité. The highlight of the event: the return of founder Octave Klaba, welcomed like a rock star. He is taking back the reins of the company with a vision resolutely focused on artificial intelligence and digital sovereignty. The goal is clear: to position OVH no longer as a French player but as a European cloud champion, going against the grain of the American giants. The enthusiastic reception shows that the message is getting through.

Gemini 3 Pro: Google strikes hard in AI
Quietly launched, Gemini 3 Pro, Google's new model, impresses. We salute its performance, its image generation speed, and its ability to produce code with stunning fluidity. Unlike the heavily orchestrated launch of GPT-5, Google surprised with its understated efficiency. Gemini 3 Pro looks like a serious rival in both consumer and professional artificial intelligence.

Cloudflare shakes the Internet
A Cloudflare outage made nearly 20% of the global web unavailable for several hours. The incident is a reminder of how fragile Internet infrastructure remains, despite its complexity. Yet the overall reaction was surprisingly calm, as if a form of collective resilience had set in toward these now almost routine mishaps.

Europe wants to reform its digital regulation
Bruno and Jérôme also discuss the draft "digital omnibus", a text in preparation in Brussels. The objective: to simplify Europe's regulatory mille-feuille (GDPR, AI Act, ePrivacy, etc.) and ease certain constraints, notably around cookie banners. But the fear of unraveling fundamental protections remains, and suspicions of American lobbying hang over this push for reform.

Windows turns 40
A historical aside: Windows is 40 years old. An occasion for the two commentators to reminisce about the beginnings of the graphical interface on the PC, when you still had to type commands at the command line to launch it. Unapologetic nostalgia.

-----------
♥️ Support: https://mondenumerique.info/don
This week, the European Commission unveiled a sweeping plan to overhaul how the EU enforces its digital and privacy rules as part of a "Digital Omnibus," aiming to ease compliance burdens and speed up implementation of the bloc's landmark laws. Branded as a "simplification" initiative, the omnibus proposal touches core areas of EU tech regulation, notably the AI Act and the General Data Protection Regulation (GDPR). The Commission argues that this update is necessary to ensure practical implementation of the laws, but civil society organizations see the proposed reform as the "biggest rollback of digital fundamental rights in EU history." At the same time, leaders are talking loudly about digital sovereignty, including at last week's summit in Berlin. But with the Omnibus appearing to weaken protections and tilt power toward large tech firms, what kind of sovereignty is actually being built? Tech Policy Press associate editor Ramsha Jahangir spoke to two experts to understand what the EU is trying to achieve: Leevi Saari, EU Policy Fellow at the AI Now Institute, and Julia Smakman, Senior Researcher at the Ada Lovelace Institute.
In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration's draft executive order to preempt state AI laws (07:46) and break down the European Commission's new "digital omnibus" package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic's report on a China-backed "highly sophisticated cyber espionage campaign" using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37).
The European Commission, backed strongly by France and Germany, is preparing to roll out a sweeping "digital simplification" package. This Wednesday the Commission will present a major omnibus plan to simplify digital rules, covering everything from data protection to the freshly minted AI Act. Officials call it a way to ease burdens on European companies. Critics, including MEPs, NGOs, and a good number of lawyers, say it's more like opening Pandora's box. But what does this digital simplification mean? Join us on our journey through the events that shape the European continent and the European Union. Production: Europod, in co-production with Sphera Network. Follow us on: LinkedIn, Instagram. Hosted on Acast. See acast.com/privacy for more information.
Europe didn't just invent pizza and political drama: now it wants to put rules on artificial intelligence. The AI Act is not a distant matter: if you use or sell AI, you may have to comply even if you live thousands of kilometers away. Fines run up to 35 million euros. The lesson? AI isn't just technical... it's also responsibility. #IAAplicada #LuisGyG #InteligenciaArtificial #AIAct #InnovaciónResponsable Become a supporter of this podcast: https://www.spreaker.com/podcast/ia-aplicada-con-luisgyg--909634/support.
Even without new comprehensive privacy laws passed in 2025, regulators have kept busy. California finalized major CCPA updates—introducing risk assessments, cybersecurity audits, and automated decision-making rules—while amendments and new state laws in Maryland, Indiana, Kentucky, and Rhode Island take effect soon. Colorado also extended the deadline for its AI Act. This episode breaks down what's changing, when key obligations begin, and why businesses need to start mapping their compliance timelines now. Hosted by Simone Roach. Based on a blog post by Aaron J. Burstein, Alexander I. Schneider, and Meaghan M. Donahue
Shocking new research reveals how anyone with $750 can intercept unencrypted satellite data, exposing everything from government secrets to in-flight Wi-Fi traffic. Find out why decades-old vulnerabilities are still open and who actually wants it that way.
Stories covered:
- Study: The World's Satellite Data Is Massively Vulnerable To Snooping
- You Only Need $750 of Equipment to Pilfer Data From Satellites, Researchers Say
- Hackers Dox Hundreds of DHS, ICE, FBI, and DOJ Officials
- DHS says Chinese criminal gangs made $1B from US text scams
- cr.yp.to: 2025.10.04: NSA and IETF
- Why Signal's post-quantum makeover is an amazing engineering achievement
- Court reduces damages Meta will get from spyware maker NSO Group but bans it from WhatsApp
- How I Almost Got Hacked By A 'Job Interview'
- New California law requires AI to tell you it's AI
- The European Union issued its first fines under the AI Act, penalizing a French facial recognition startup €12 million for deploying unverified algorithms in public security contracts
- Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors
- Texas hit with a pair of lawsuits for its app store age verification requirements
- Australia shares tips to wean teens off social media ahead of ban. Will it work?
- California enacts age-gate law for app stores
- Meta is asking Facebook users to give its AI access to their entire camera roll
- Meta poached Andrew Tulloch, co-founder of Thinking Machines Lab, with a compensation package rumored to reach $1.5 billion over six years
- Even top generals are looking to AI chatbots for answers
- Roku's AI-upgraded voice assistant can answer questions about what you're watching
- Tesla debuts a steering wheel-less taxi for two
- Waymo and DoorDash Are Teaming Up to Deliver Your Food via Robotaxi
Host: Leo Laporte. Guests: Jacob Ward, Harper Reed, and Abrar Al-Heeti. Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: Melissa.com/twit ZipRecruiter.com/twit deel.com/twit zscaler.com/security zapier.com/twit