European Union regulation on the processing of personal data
Get FantasyLife+ for free ($100 value) by going to https://www.fantasylife.com/comet

Become the best at watching sports with Xfinity! If you call yourself a sports fan, you gotta have Xfinity. All the games, all in one place.

For all the tools and advice you need to win your league, subscribe to FantasyLife+: https://fantasylife.com/pricing Use code “IAN” for 20% off your subscription!

Welcome to Fantasy Life with Ian Hartitz! We're here to give you all of the fantasy football news, advice, and stats you need (with a little bit of fun chaos along the way)! Week 5 is here, and Ian Hartitz is joined by fellow fantasy football experts Dwain McFarland and Matthew Freedman to break down their rankings for Week 5 of the fantasy football season! In today's episode: Will Jaylen Waddle boom now that Tyreek Hill is out for the season? Will Bo Nix struggle against a tough Eagles defense? Will Chris Godwin keep up his elite usage in Week 5? We're discussing all this, plus answering YOUR start/sit questions for Week 5 of fantasy football!

______________________

If you want more of Fantasy Life, check us out at FantasyLife.com, where all our analysis is free, smart, fun, and has won a bunch of awards. We have an awesome free seven-day-a-week fantasy newsletter (which would win awards if they existed, we assure you!): https://www.fantasylife.com/fantasy-n... And if you want to go deeper, check out our suite of also-award-winning premium tools at FantasyLife.com/pricing But really, we hope you're enjoying what you clicked on here and come back for more. We are here to help you win!! Learn more about your ad choices. Visit megaphone.fm/adchoices
In this milestone episode of the Fit4Privacy podcast, host Punit Bhatia is joined by three distinguished privacy experts — Dr. Cari Miller (AI Governance Expert, U.S.), Heidi Waem (Partner, DLA Piper, Brussels), and Dr. Valerie Lyons (COO, BH Consulting; Academic & Author) — to reflect on 7 years of GDPR and explore what lies ahead. Whether you're a privacy professional, business leader, or just curious about how data protection shapes our digital lives, this conversation offers both a critical reflection on GDPR's first seven years and foresight into its future role in AI and trust.

KEY CONVERSATION POINTS
00:03:25 Panelist introductions and initial thoughts on GDPR
00:09:06 Significant challenges that remain after 7 years of GDPR
00:18:10 Has there been a fair amount of reporting on compliance failures over the years?
00:21:11 EU compliance gaps and how companies can avoid them
00:29:56 Has the GDPR been successful in balancing the power equilibrium between organizations and data subjects?
00:35:35 The role of trust after 7 years of GDPR
00:41:39 Beyond GDPR compliance in an AI world, what more can be done?

ABOUT GUESTS
Heidi Waem is the head of the data protection practice at DLA Piper Belgium and specializes in data protection and privacy. She assists clients with all aspects of EU regulatory data protection compliance, including the structuring of data processing and sharing activities to achieve optimal use of data, advising on data transfers, and the processing of personal data by means of new technologies (AI, facial recognition, and more).

Dr. Cari Miller is the Principal and Lead Researcher for the Center for Inclusive Change. She is a subject matter expert in AI risk management and governance practices, an experienced corporate strategist, and a certified change manager. Dr. Miller creates and delivers AI literacy training, AI procurement guidance, AI policy coaching, and AI audit and assessment advisory services.

Dr. Valerie Lyons is a globally recognized authority in privacy, cybersecurity, data protection, and AI governance. Holding a PhD in Information Privacy along with CDPSE, CISSP, and CIPP/E certifications, she serves as a trusted strategic advisor to regulatory bodies and organizations across both public and private sectors. Valerie has played an influential role in shaping EU-wide data protection frameworks and enforcement strategies, and is an active member of the European Data Protection Board's pool of experts, as well as other global cyber and data protection bodies.

ABOUT HOST
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals. Punit is the author of the books “Be Ready for GDPR” (rated as the best GDPR book), “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured among the top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named ‘ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.
RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/heidiwaem/, https://www.linkedin.com/in/cari-miller/, https://www.linkedin.com/in/valerielyons-privsec/
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy
News and Updates:

Pope Leo XIV rejected a proposal to create an AI-powered “virtual pope,” calling the idea of a digital clone horrifying. He warned that deepfakes, automation, and artificial substitutes erode trust, strip dignity from work, and risk turning life into “an empty, cold shell.” His stance echoes concerns as layoffs at Microsoft and Salesforce mount amid AI adoption.

OpenAI released its first major study on ChatGPT usage, showing that over 70% of queries are non-work-related, with people mainly seeking tutoring, how-to guidance, brainstorming, and writing help. Only 4% of consumer queries involve coding, with writing far more dominant. Work-related use centers on information gathering and decision-making. Adoption is now global, especially in low- and middle-income countries, with an estimated 10% of adults worldwide using ChatGPT.

A preliminary deal to keep TikTok in the U.S. has been reached: existing investors and new U.S. backers, including Oracle and Silver Lake, will control about 80%. ByteDance's stake drops below 20% to comply with U.S. law. Oracle will safeguard U.S. user data, while the recommendation algorithm will be licensed, retrained under U.S. oversight, and cut off from Beijing's influence. The U.S. government is also set to receive a multibillion-dollar facilitation fee.

The European Commission is considering scrapping the cookie consent banner requirement, part of the 2009 e-Privacy Directive. Alternatives include setting preferences once at the browser level or exempting “technically necessary” cookies. Any change would fold into the GDPR, but privacy advocates are likely to resist.

Samsung has begun testing ads on its Family Hub smart refrigerators in the U.S. Despite previously denying such plans, a software update now pushes “promotions and curated ads” to fridge screens when idle. Samsung calls it a pilot to “strengthen value,” but users blasted the move as another step in the company's “screens everywhere” strategy.
If you have built your BigLaw career around a thriving regulatory or enforcement practice, you know how difficult it can be for you and your practice when that work suddenly isn't there. One month you are buried in investigations prompted by government inquiries or merger reviews, and the next your phone goes quiet because enforcement priorities shifted, agency budgets got cut, or a new administration has redirected resources. It is unsettling, especially when your brand, reputation, and client base are tied to that flow of work. In this episode, I walk through the reality of what it can feel like, and what to do, when your once-busy enforcement and regulatory practice slows. I share how to distinguish between cyclical downturns and structural changes that reshape a practice like this long term, and offer specific examples across areas such as FCPA, antitrust, and privacy to illustrate how BigLaw attorneys can pivot effectively. I also outline practical steps to stay visible with clients as well as inside your firm, so that even when the billable work is not there, your value and future opportunities are.

At a Glance:
00:00 Introduction: the need to navigate BigLaw downturns in regulatory and enforcement work
01:20 When busy practices suddenly dry up: regulatory shifts and enforcement changes
02:14 How external forces such as politics, budgets, and agency leadership reshape your practice overnight
03:03 Early warning signs that your work is slowing down in these areas
03:37 The emotional impact: anxiety, uncertainty, and fear of career derailment
04:08 Diagnosing cyclical vs. structural downturns with concrete indicators
05:16 Why this distinction matters for your long-term career strategy
05:39 Examples of temporary pivots that kept practices alive (FCPA, antitrust, GDPR, privacy)
07:04 How lawyers can broaden their practices to adapt to structural changes
08:08 The importance of proactive client communication, including with “good news” updates
09:37 What to do when billable hours stall: seeking work across departments and staying visible
10:41 Positioning yourself as a thought leader through articles, CLEs, and conferences
11:29 Documenting outreach, cross-practice contributions, and client loyalty for firm leadership
12:21 Demonstrating cross-practice value: aligning with busier groups inside your firm
13:30 How client loyalty and referrals strengthen your standing even in slow periods
13:58 Reframing your practice to be less narrowly defined by one enforcement area
14:27 How one partner survived cuts by documenting value and broadening expertise
15:16 Long-game mindset: showing your firm that you are indispensable beyond billable hours

Rate, Review, & Follow on Apple Podcasts & Spotify
Do you enjoy listening to Big Law Life? Please consider rating and reviewing the show! This helps support and reach more people like you who want to grow a career in Big Law. For Apple Podcasts, click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let me know what you loved most about the episode! Also, if you haven't done so already, follow the podcast here! For Spotify, tap here on your mobile phone, follow the podcast, listen to the show, then find the rating icon below the description, and tap to rate with five stars. Interested in doing 1-2-1 coaching with Laura Terrell? Or learning more about her work coaching and consulting?
Here are ways to reach out to her: www.lauraterrell.com laura@lauraterrell.com LinkedIn: https://www.linkedin.com/in/lauralterrell/ Instagram: https://www.instagram.com/lauraterrellcoaching/ Show notes: https://www.lauraterrell.com/podcast
“Our approach is simple: remove the PII from the data stream, and you don't have to worry about compliance,” said Bill Placke, President, Americas at SecurePII. At WebexOne in San Diego, Doug Green, Publisher of Technology Reseller News, spoke with Jason Thals, COO of BroadSource, and Placke of SecurePII about their finalist recognition in Cisco's Dynamic Duo competition. The joint solution, built on Cisco Webex Contact Center, is designed to unlock AI's potential by enabling enterprises to leverage large language models without exposing sensitive personal data. SecurePII's flagship product, SecureCall, was purpose-built for Webex (and also available on Genesys) to deliver PCI compliance while removing personally identifiable information from voice interactions. This enables organizations to deploy AI and agentic automation confidently, without the regulatory risk tied to data privacy laws across the U.S., GDPR, and beyond. Thals emphasized BroadSource's role in delivering services that complement CCaaS and UCaaS platforms globally, while Placke framed the opportunity for Cisco partners: “This is a super easy bolt-on, available in the Webex App Hub. Customers can be up and running in 30 minutes and compliant.” The collaboration, already proven with a government-regulated client in Australia, is industry-agnostic and scalable from small deployments to 50,000+ users. For Cisco resellers, it represents a powerful, sticky service that integrates seamlessly into channel models while helping enterprises stay compliant as they modernize customer engagement. Learn more at BroadSource and SecurePII.
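The pattern Placke describes (strip PII out of the stream so that downstream LLMs, analytics, and agent-assist tools never see it) can be illustrated with a toy example. The Python sketch below is purely illustrative and is not SecureCall's actual implementation: the regex patterns, placeholder format, and function names are assumptions, and production PCI/PII detection relies on far more robust techniques (checksum validation, NER models, locale-specific formats, audio-level redaction).

```python
import re

# Illustrative patterns only; real detection is far more sophisticated.
PATTERNS = {
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?1[ -.]?)?\(?\d{3}\)?[ -.]?\d{3}[ -.]?\d{4}\b"),
}

def redact(transcript: str) -> str:
    """Replace detected PII spans with typed placeholders before the text
    ever reaches an LLM, analytics store, or agent-assist tool."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label.upper()} REDACTED]", transcript)
    return transcript

if __name__ == "__main__":
    call = "My card is 4111 1111 1111 1111 and my email is jane@example.com."
    print(redact(call))
    # -> "My card is [CARD REDACTED] and my email is [EMAIL REDACTED]."
```

The design point this illustrates is that redaction happens at the boundary of the data stream, so every system downstream of it is out of scope for the relevant compliance regime by construction.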
In this episode with Stas Levitan, AI Governance Expert & Co-founder @ DeepKeep, we dive deep into the wild west of AI security, shadow AI, and the real risks lurking behind your favourite GenAI tools. Stas shares hard-hitting insights on why most companies are blind to their AI usage, and how governance isn't just about tick-box compliance — it's about survival.

Here's what we covered:
AI Risk Starts Way Before You Deploy It: Most think risk begins at runtime. Nope. It starts the moment you grab that model from a repo — and trust me, most are not as “safe” as they look.
Shadow AI Is Everywhere: Employees are quietly using ChatGPT, Gemini, and open-source models — often with good intentions, but zero oversight. Big risk, bigger blind spot.
Guardrails Aren't Optional Anymore: Enterprise AI needs serious guardrails — not just generic APIs. Think AI-native tools that track, monitor, and enforce behaviour in real time.
LLMs Don't Forget… Ever: Feed your chatbot personal data, and you might just see it pop up later — possibly in someone else's output.
AI Security ≠ Traditional Security: Firewalls won't save you here. This is about controlling model behaviour, not just access and networks. A totally different mindset is needed.
Big AI Providers = Not Enterprise-Ready: The default tools don't cut it. The second you fine-tune a model or use it with your data, you own the risk.
EU AI Act Isn't Just Hype — It's Happening: Risk assessments, monitoring, documentation — this isn't optional for high-risk sectors. And no, you probably aren't ready yet.
Step One: Get Visibility: You can't protect what you can't see. Start by discovering what AI is actually being used in your org — you might be shocked.

It's a frank and eye-opening conversation that every CIO, CISO, and compliance lead should hear. Tune in — and if you're using GenAI without a plan, maybe… stop.

Stas Levitan can be contacted here:
• DeepKeep official website: https://www.deepkeep.ai
• Stas Levitan LinkedIn: https://uk.linkedin.com/in/stas-levitan

The latest in Data Protection and Privacy Podcast by David Clarke
Follow me on Twitter @1davidclarke (98.6k followers)
Join the LinkedIn GDPR Group (30,475 members) for FREE
Co-author of an ICO-certified GDPR scheme
As usual, last night's tech world is completely different from the one we woke up to. After months of discussions and delays, TikTok and the White House have finally found a way for the social network to continue operating in the US, (theoretically) separated from its parent company and Beijing. What was the solution? Some extremely wealthy people who are, of course, “friends” of the Trump administration. In this edition of Upgrade 100 Live, Marian Hurducaș discussed who they are, why they are “the chosen ones,” what an “Americanized” TikTok will look like, and more, with his guests:
- Marian Andrei, journalist and technology expert, best known for hosting the show I LIKE IT. He has extensive experience analyzing trends in the tech industry and always brings an informed, balanced perspective on technology's impact on everyday life.
- Tudor Galoș, a data protection consultant with solid GDPR expertise. He has worked with numerous companies to help them navigate the complexity of data regulations and is known for his ability to explain, in terms everyone can understand, how we can protect our personal information.
Tired of throwing time and money at marketing and getting nowhere? This week, Sam delivers five and a half practical, low-cost marketing strategies for photographers that any UK photographer can implement right now to bring in higher-paying commercial photography clients and grow their business. This episode is packed with useful content, but we've pulled out the three essential takeaways that will most quickly boost your commercial photography business. Stop relying on luck; start implementing a clear plan that converts prospects into paying customers.

Three Things You'll Learn in This Episode:
The Hidden Database on Your Hard Drive: Discover the simple, overlooked source of paying clients you already possess, and learn the UK-specific GDPR rule that allows you to contact them immediately, turning old inquiries into new sales.
How to Break the Ice with Big Decision-Makers: Find out the essential 3-step blueprint for warming up leads on LinkedIn before you send a connection request, ensuring that key marketing staff are ready to see the value in your commercial photography.
The Low-Cost Client Funnel You Need: Master the technique of the “discounted offer” (like a simple headshot) and, more importantly, the crucial plan to successfully move those first-time buyers onto your premium, high-value brand shoots.

Grab a cuppa and listen now—your next big client is waiting.
In this episode of the AdTechGod Pod, host AdTechGod interviews John Piccone, the Regional President of Adform Americas. They discuss John's extensive background in the ad tech industry, the importance of addressing the overlooked 40% of audiences, and how Adform's independence and transparency set it apart in a competitive market. John shares insights on the evolving landscape of digital advertising, the significance of data-driven marketing, and the future trends that excite him as they approach the fourth quarter. Takeaways John Piccone has a rich background in ad tech, having worked with major companies. Adform offers a full tech stack, providing various tools for advertisers. Understanding the 40% of users who are often overlooked is crucial for brands. Transparency in the programmatic marketplace is essential for building trust. Brands can achieve more with less by optimizing their advertising strategies. The fragmentation of channels complicates audience targeting for marketers. Adform's independence allows for a focus on brand needs over inventory sales. GDPR compliance gives Adform an edge in understanding privacy regulations. Brands need to adapt to changing dynamics in the advertising landscape. Incremental reach can be achieved without increasing budget size. Chapters 00:00 Introduction to Adform and John Piccone 02:55 John Piccone's Journey in Ad Tech 05:45 Addressing the Overlooked 40% Audience 08:23 The Role of Independence in Ad Tech 11:25 Looking Ahead: Innovations and Future Trends Learn more about your ad choices. Visit megaphone.fm/adchoices
When cyber attack notification goes wrong, companies face a disaster worse than the original breach. This episode dives deep into the critical mistakes organizations make when communicating about security incidents - and why transparency beats secrecy every time.We examine real-world failures like LastPass and Rackspace, where poor communication strategies amplified the damage from their cyber attacks. From legal requirements in California and GDPR to the new one-hour notification rules in China, we cover what regulations demand and why going beyond compliance makes business sense.Learn how to create effective status pages, manage customer expectations during recovery, and avoid the death-by-a-thousand-cuts approach that destroys trust. We share practical strategies for early and frequent communication that can actually strengthen customer relationships during crisis situations.
Cerebrium is a serverless AI infrastructure platform orchestrating CPU and GPU compute for companies building voice agents, healthcare AI systems, manufacturing defect detection, and LLM hosting. The company operates across global markets handling data residency constraints from GDPR to Saudi Arabia's data sovereignty requirements. In a recent episode of Category Visionaries, I sat down with Michael Louis, Co-Founder & CEO of Cerebrium, to explore how they built a high-performance infrastructure business serving enterprise customers with high five-figure to six-figure ACVs while maintaining 99.9%+ SLA requirements. Topics Discussed: Building AI infrastructure before the GPT moment and strategic patience during the hype cycle Scaling a distributed engineering team between Cape Town and NYC with 95% South African talent Partnership-driven revenue generation producing millions in ARR without traditional sales teams AI-powered market engineering achieving 35% LinkedIn reply rates through competitor analysis Technical differentiation through cold start optimization and network latency improvements Revenue expansion through global deployment and regulatory compliance automation GTM Lessons For B2B Founders: Treat go-to-market as a systems engineering problem: Michael reframed traditional sales challenges through an engineering lens, focusing on constraints, scalability, and data-driven optimization. "I try to reframe my go to market problem as an engineering one and try to pick up, okay, like what are my constraints? Like how can I do this, how can it scale?" This systematic approach led to testing 8-10 different strategies, measuring conversion rates, and building automated pipelines rather than relying on manual processes that don't scale. Structure partnerships for partner success before revenue sharing: Cerebrium generates millions in ARR through partners whose sales teams actively upsell their product. Their approach eliminates typical partnership friction: "We typically approach our partners saying like, look, you keep the money you make, we'll keep the money we make. If it goes well, we can talk about like rev share or some other agreement down the line." This removes commission complexity that kills B2B partnerships and allows partners to focus on customer value rather than internal revenue allocation conflicts. Build AI-powered competitive intelligence for outbound at scale: Cerebrium's 35% LinkedIn reply rate comes from scraping competitor followers and LinkedIn engagement, running prospects through qualification agents that check funding status, ICP fit, and technical roles, then generating personalized outreach referencing specific interactions. "We saw you commented on Michael's post about latency in voice. Like, we think that's interesting. Like, here's a case study we did in the voice space." The system processes thousands of prospects while maintaining personalization depth that manual processes can't match. Position infrastructure as revenue expansion, not cost optimization: While dev tools typically focus on developer productivity gains, Cerebrium frames their value proposition around market expansion and revenue growth. "We allow you to deploy your application in many different markets globally... go to market leaders love us and sales leaders because again we open up more markets for them and more revenue without getting their tech team involved." This messaging resonates with revenue stakeholders and justifies higher spending compared to pure cost-reduction positioning. 
Weaponize regulatory complexity as competitive differentiation: Cerebrium abstracts data sovereignty requirements across multiple jurisdictions - GDPR in Europe, data residency in Saudi Arabia, and other regional compliance frameworks. "As a company to build the infrastructure to have data sovereignty in all these companies and markets, it's a nightmare." By handling this complexity, they create significant switching costs and enable customers to expand internationally without engineering roadmap dependencies, making them essential to sales teams pursuing global accounts. // Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co // Don't Miss: New Podcast Series — How I Hire Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
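The "AI-powered competitive intelligence" lesson above describes a scrape, qualify, personalize pipeline. As a purely illustrative sketch of that pattern (not Cerebrium's actual system; every name, field, and threshold below is a hypothetical stand-in), the qualification-and-personalization step might look like this in Python:

```python
from dataclasses import dataclass

# Hypothetical prospect record; in practice these fields would be filled by
# scraping competitor followers and LinkedIn engagement, then enriched with
# funding data from a provider of your choice.
@dataclass
class Prospect:
    name: str
    title: str
    company: str
    recently_funded: bool
    engaged_post_topic: str | None  # e.g. "latency in voice"

TECHNICAL_TITLES = ("cto", "vp engineering", "head of ai", "ml engineer")

def qualifies(p: Prospect) -> bool:
    """Rough ICP filter: funded company plus a technical decision-maker."""
    return p.recently_funded and any(t in p.title.lower() for t in TECHNICAL_TITLES)

def draft_outreach(p: Prospect) -> str:
    """Personalize around the specific interaction that surfaced the prospect."""
    hook = (f"We saw you commented on a post about {p.engaged_post_topic}."
            if p.engaged_post_topic else "We came across your work.")
    return f"Hi {p.name}, {hook} Here's a case study we did in that space."

prospects = [
    Prospect("Dana", "VP Engineering", "VoiceCo", True, "latency in voice"),
    Prospect("Sam", "Account Exec", "AdShop", False, None),
]
for p in prospects:
    if qualifies(p):
        print(draft_outreach(p))
```

The point of the sketch is the shape of the pipeline, not the specifics: a cheap, automated qualification gate in front of a personalization step is what lets this kind of outbound run over thousands of prospects while still referencing the individual interaction, as the episode describes.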
Are we already living in a post-data privacy world? Breaches are everywhere, data is constantly being leaked, and GDPR fines haven’t stopped surveillance capitalism or shady data brokers. In this episode of the Analyst Chat, Matthias Reinwarth is joined by Mike Small and Jonathan Care to explore whether privacy still has meaning — or if resilience and risk management are the only ways forward. They debate:
✅ Is privacy truly dead, or just evolving?
✅ Why regulations like GDPR often miss the mark ⚖️
✅ How cyber resilience is becoming more critical than “traditional” privacy
✅ The personal, societal, and legal dimensions of privacy
✅ What organizations (and individuals) can still do to protect data
On this episode of Serious Privacy, Paul Breitbarth brings us news from the Global Privacy Assembly held in Korea, and Dr. K Royal has fun with privacy trivia! Ralph O'Brien is out this week. An open offer to all fans: if you answered all the questions correctly, send one of us your address and we will send you a sticker for playing Trivacy! If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and review us! From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.
For episode 606 of the BlockHash Podcast, host Brandon Zemp is joined by Patrick Moynihan, President and Co-founder of Tracer Labs.Tracer Labs is building the future of digital trust. As the parent company of Trust ID and a founding member of DCID, we create self-sovereign identity (SSI) and consent solutions where control follows the user and not the website.Patrick leads a team bringing privacy-first, quantum-resistant identity to Web3, where user consent and data aren't just protected, but unified across platforms. Tracer Labs has replaced invasive device tracking with patent pending tech that gives individuals one login, full control, and real-world rewards—think GDPR and CCPA compliance, higher business conversions, and verified zero-party data. Their aPaaS integrates seamlessly for instant impact, with paid rollouts underway and brand partnerships like Bass Pro Shops and Expedia already in progress. ⏳ Timestamps: (0:00) Introduction(1:17) Who is Patrick Moynihan?(16:16) How can Trust ID be used?(22:00) How are users incentivized to share data?(28:46) Online data protection for kids(33:47) Quantum resistant identity(41:36) Tracer Labs roadmap
After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen to discuss the ever-changing landscape of the digital world, the fast emergence of AI, and the implications for our kids, consumer safety, and our democracy. Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and author of several books. Taylor also joined me for this discussion more than 5 years ago now, and a lot has happened in that time. Upcoming episodes will include guests Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers. We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite. As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
Public rules for big tech platforms with Taylor Owen — Uncommons Podcast
“How the Next Government can Protect Canada's Information Ecosystem.” Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
Machines Like Us Podcast
Bill C-63

Transcript:
Nate Erskine-Smith00:00-00:43Welcome to Uncommons, I'm Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.Taylor Owen00:43-00:44It's a different world.Taylor00:45-00:45In some ways.Nate Erskine-Smith00:45-01:14Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way because there's been lots of consultation. There have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's talk about set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?Taylor Owen01:14-03:06I mean, this is part of it: the technology moves fast.
But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. We're worried about sort of some of the more acute harms of like real risks to safety, right? Like children taking their own lives and violence erupting, political violence emerging. Like these things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way five years ago that we shared, consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI and particularly chatbots. And I think a big question we face in this conversation in this, like, how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI or some of the things we worked on for so many years to get right, the still the right tools for this new set of technologies with chatbots and various consumer facing AI interfaces?Nate Erskine-Smith03:07-03:55My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online and our laws need to reflect that reality. All of the challenges you've articulated to varying degrees exist in offline spaces, but can be incredibly hard. The rules we have can be incredibly hard to enforce at a minimum in the online space. And then some rules are not entirely fit for purpose and they need to be updated in the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out. So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.Taylor Owen03:55-04:31Yeah, exactly. Like the commission that spent a year at the request of all political parties in parliament, at the urging of the opposition party, so it spent a year looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. Like that is a remarkable statement and it kind of came and went. And I don't know why we moved off from that so fast.Nate Erskine-Smith04:31-05:17Well, and there's a lot to pull apart there because you've got purposeful, intentional, bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook, Instagram through Meta block news in Canada. And your research, this was the stat that stood out. 
Don't want to put you in and say like, what do we do? Okay. So there's, you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.Taylor Owen05:17-05:17A day. Yeah.Nate Erskine-Smith05:18-05:18So right.Taylor Owen05:18-05:2711 million views a day. And we should sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.Taylor05:27-05:29So 11 million times a Canadian.Taylor Owen05:29-05:45And what that means is 11 million times a Canadian would open one of their news feeds and see Canadian journalism is taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. So that journalism was replaced by something else.Taylor05:45-05:45Okay.Taylor Owen05:45-05:46So that's just it.Nate Erskine-Smith05:46-06:04So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, Canadians, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.Taylor Owen06:04-06:04They say they get.Nate Erskine-Smith06:04-06:05It doesn't make any sense.Taylor Owen06:06-06:23It doesn't and it does. It's terrible. They ask Canadians, like, where do you get people who use social media to get their news? Where do they get their news? and they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations— Traditional journalism. There is—Taylor06:23-06:23There is—Taylor Owen06:23-06:47Well, this is what I was going to get at, right? Like, there is—one, I think, conclusion is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Like, journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like this would be labeled probably news in people's.Nate Erskine-Smith06:47-06:48Can't trust the thing we say.Taylor Owen06:48-07:05Right. And like, and neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, like that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.Nate Erskine-Smith07:05-07:06It's public affairs content.Taylor Owen07:06-07:39Exactly. So that's one thing that's happening. The other is that there's a generation of creators that are stepping into this ecosystem to both fill that void and that can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you're more pessimistic though. I shouldn't have led with the question. 20%.Taylor07:39-07:39Okay.Taylor Owen07:39-07:56So all of journalism combined in the entire country, 20 percent of engagement, influencers, 50 percent in the last election. 
So like we've shifted, at least on social, the actors and people and institutions that are fostering our public.Nate Erskine-Smith07:56-08:09Is there a middle ground here where you take some people that play an influencer type role but also would consider themselves citizen journalists in a way? How do you – It's a super interesting question, right?Taylor Owen08:09-08:31Like who – when are these people doing journalism? When are they doing acts of journalism? Like someone can be – do journalism and 90% of the time do something else, right? And then like maybe they reveal something or they tell an interesting story that resonates with people or they interview somebody and it's revelatory and it's a journalistic act, right?Taylor08:31-08:34Like this is kind of a journalistic act we're playing here.Taylor Owen08:35-08:49So I don't think – I think these lines are gray. but I mean there's some other underlying things here which like it matters if I think if journalistic institutions go away entirely right like that's probably not a good thing yeah I mean that's whyNate Erskine-Smith08:49-09:30I say it's terrifying is there's a there's a lot of good in the in the digital space that is trying to be there's creative destruction there's a lot of work to provide people a direct sense of news that isn't that filter that people may mistrust in traditional media. Having said that, so many resources and there's so much history to these institutions and there's a real ethics to journalism and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that is devastating for democracy. I think so.Taylor Owen09:30-09:49And I think the bigger frame of that for me is a democracy needs signals of – we need – as citizens in a democracy, we need signals of reliability. Like we need to know broadly, and we're not always going to agree on it, but like what kind of information we can trust and how we evaluate whether we trust it.Nate Erskine-Smith09:49-10:13And that's what – that is really going away. Pause for a sec. So you could imagine signals of reliability is a good phrase. what does it mean for a legislator when it comes to putting a rule in place? Because you could imagine, you could have a Blade Runner kind of rule that says you've got to distinguish between something that is human generatedTaylor10:13-10:14and something that is machine generated.Nate Erskine-Smith10:15-10:26That seems straightforward enough. It's a lot harder if you're trying to distinguish between Taylor, what you're saying is credible, and Nate, what you're saying is not credible,Taylor10:27-10:27which is probably true.Nate Erskine-Smith10:28-10:33But how do you have a signal of reliability in a different kind of content?Taylor Owen10:34-13:12I mean, we're getting into like a journalistic journalism policy here to a certain degree, right? And it's a wicked problem because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do and how they can and can't behave touches on some real like third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said or what is and isn't journalism. The real problem is the distribution mechanism and the incentives within it. So a great example and a horrible example happened last week, right? So Charlie Kirk gets assassinated. 
I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place because what you saw in that feed was the clearest demonstration I've ever seen in a decade of looking at this of how those algorithmic feeds have become radicalized. Like all you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. Like people defending the murder and people calling for the murder of leftists and like on both sides. Right. people blaming Israel, people, whatever. Right. And that isn't a function of like- Aaron Charlie Kirk to Jesus. Sure. Like- It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of like there was journalism being produced about that. Like New York Times, citizens were doing good content about what was happening. It was like a moment of uncertainty and journalism was doing or playing a role, but it wasn't And so I think with all of these questions, including the online harms ones, and I think how we step into an AI governance conversation, the focus always has to be on those systems. I'm like, what is who and what and what are the incentives and the technical decisions being made that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right? Like all of that stuff we've been talking about, the role reliability of information plays, like these algorithms could be tweaked for reliable versus unreliable content, right? Over time.Taylor13:12-13:15That's not a – instead of reactionary –Taylor Owen13:15-13:42Or like what's most – it gets most engagement or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.Nate Erskine-Smith13:43-14:12And your focus on systems and assessing risks with systems. I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy and very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.Taylor Owen14:12-14:23I mean – Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the – Well, they're twofold, right?Nate Erskine-Smith14:23-14:33There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well. 
Exactly.Taylor14:33-14:35So the sort of supply and demand side thing, right?Nate Erskine-Smith14:35-14:38There's the digital service tax, which is no longer a thing.Taylor Owen14:40-14:52Although it still is a piece of past legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.Nate Erskine-Smith14:52-14:55I don't take full responsibility for that one.Taylor Owen14:55-14:56No, you shouldn't.Nate Erskine-Smith14:58-16:03But other countries have seen more success. Yeah. And so you've got in the UK, in Australia, the EU really has led the way. 2018, the EU passes GDPR, which is a privacy set of rules, which we are still behind seven years later. But you've got in 2022, 2023, you've got Digital Services Act that passes. You've got Digital Markets Act. And as I understand it, and we've had, you know, we've both been involved in international work on this. And we've heard from folks like Francis Hogan and others about the need for risk-based assessments. And you're well down the rabbit hole on this. But isn't it at a high level? You deploy a technology. You've got to identify material risks. You then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then ideally, you've got the ability for third parties, either civil society or some public office that has the ability to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate.Taylor Owen16:04-16:05That's like how I have it in my head.Nate Erskine-Smith16:05-16:06I mean, that's it.Taylor Owen16:08-16:14Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.Nate Erskine-Smith16:14-16:25Exactly. Which people, I want to get to that because C63 gets us a large part of the way there. I think so. And yet has been sort of like cast aside.Taylor Owen16:25-17:39Exactly. Let's touch on that. But I do think what you described as the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it's the thing that's infuriating about digital policy, is that you can't do one thing. There's no – digital economy and our digital lives are so vast and the incentives and the effect they have on society is so broad that there's no one solution. So anyone who tells you fix privacy policy and you'll fix all the digital problems we just talked about are full of it. Anyone who says competition policy, like break up the companies, will solve all of these problems. is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They've been to build a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So like you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things all over.Nate Erskine-Smith17:39-17:41Especially minority parlance, short periods of time, legislatively.Taylor Owen17:41-18:20Different countries have taken different pieces of it. 
Now, on the online harms piece, which is what the previous government took really seriously, and I think it's worth putting a point on that, right, that when we talked last was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission, right? Like a lot of work went into looking at what every other country had done because this is a really wicked, difficult problem and trying to learn from what Europe, Australia and the UK had all done. And we kind of taking the benefit of being late, right? So they were all ahead of us.Taylor18:21-18:25People you work with on that grant committee. We're all quick and do our own consultations.Taylor Owen18:26-19:40Exactly. And like the model that was developed out of that, I think, was the best model of any of those countries. And it's now seen as internationally, interestingly, as the new sort of milestone that everybody else is building on, right? And what it does is it says if you're going to launch a digital product, right, like a consumer-facing product in Canada, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about or you've decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content, right? Like things that are like broad categories that we've said are we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong, so let's say, for example, let's use a tangible example. Let's say you are a social media platform and you are launching a product that's going to be used by kids and it allows adults to contact kids without parental consent or without kids opting into being a friend. What could go wrong with that?Nate Erskine-Smith19:40-19:40Yeah.Taylor19:40-19:43Like what could go wrong? Yeah, a lot could go wrong.Taylor Owen19:43-20:27And maybe strange men will approach teenage girls. Maybe, right? Like if you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it, right? Like you put in a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used so that we can monitor, publics and researchers can monitor whether that mitigation strategy worked. That's it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was like a widespread problem of teenage girls being harassed by strange older men.Taylor20:28-20:29Incredibly creepy.Taylor Owen20:29-20:37A very easy, but not like a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.Taylor20:37-20:41And this kind of mechanism would have just filtered out.Taylor Owen20:41-20:51Default settings, right? And doing thinking a bit before you launch a product in a country about what kind of broad risks might emerge when it's launched and being held accountable to do it for doing that.Nate Erskine-Smith20:52-21:05Yeah, I quite like the we I mean, maybe you've got a better read of this, but in the UK, California has pursued this. 
I was looking at recently, Elizabeth Denham is now the Jersey Information Commissioner or something like that.Taylor Owen21:05-21:06I know it's just yeah.Nate Erskine-Smith21:07-21:57I don't random. I don't know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the information commissioner in the UK. And she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach. In that even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids use it. And just forcing companies to ensure that the default settings are prioritizing child safety so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto. Of course we've prioritized consumer safety first and foremost. But in the online world, it's like an afterthought.Taylor Owen21:58-24:25Well, when you say consumer safety, it's worth like referring back to what we mean. Like a duty of care can seem like an obscure concept. But your lawyer is a real thing, right? Like you walk into a store. I walk into your office. I have an expectation that the bookshelves aren't going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? Like that is a duty of care that you have for me when I walk into your public space or private space. Like that's all we're talking about here. And the age-appropriate design code, yes, like sort of developed, implemented by a Canadian in the UK. And what it says, it also was embedded in the Online Harms Act, right? If we'd passed that last year, we would be implementing an age-appropriate design code as we speak, right? What that would say is any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? But we think like kids don't have the same rights as adults. We have different duties to protect kids as adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content, right? Like kind of things that like – Seem obvious. And if you're now a child in the UK and you open – you go on a digital product, you are safer because you have an age-appropriate design code governing your experience online. Canadian kids don't have that because that bill didn't pass, right? So like there's consequences to this stuff. and I get really frustrated now when I see the conversation sort of pivoting to AI for example right like all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world which are probably real right like not discounting its power and just move on from all of these both problems and solutions that have been developed to a set of challenges that both still exist on social platforms like they haven't gone away people are still using these tools and the harms still exist and probably are applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done is just to the people working in this space and like the wide stakeholders in this country who care about this stuff and working on it. It just, it feels like you say deja vu at the beginning and it is deja vu, but it's kind of worse, right? 
Cause it's like deja vu and then ignoring theTaylor24:25-24:29five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not evenTaylor Owen24:29-24:41Well, yeah. I mean, hopefully I actually am not, I'm actually optimistic, I would say that we will, because I actually think of if for a few reasons, like one, citizens want it, right? Like.Nate Erskine-Smith24:41-24:57Yeah, I was surprised on the, so you mentioned there that the rules that we design, the risk assessment framework really applied to social media could equally be applied to deliver AI safety and it could be applied to new technology in a useful way.Taylor Owen24:58-24:58Some elements of it. Exactly.Nate Erskine-Smith24:58-25:25I think AI safety is a broad bucket of things. So let's get to that a little bit because I want to pull the pieces together. So I had a constituent come in the office and he is really like super mad. He's super mad. Why is he mad? Does that happen very often? Do people be mad when they walk into this office? Not as often as you think, to be honest. Not as often as you think. And he's mad because he believes Mark Carney ripped him off.Taylor Owen25:25-25:25Okay.Nate Erskine-Smith25:25-26:36Okay. Yep. He believes Mark Carney ripped him off, not with broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online, Mark Carney told him to invest money. He invested money and he's out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This is like, this was obviously a scam. Like what, how could you have been deceived? But then I go and I watched the video And it is, okay, I'm not gonna send the 200 bucks and I've grown up with the internet, but I can see how- Absolutely. In the same way, phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI generated. So we are going to see rampant fraud. If we aren't already, we are going to see many challenges with respect to AI safety. What over and above the risk assessment piece, what do we do to address these challenges?Taylor Owen26:37-27:04So that is a huge problem, right? Like the AI fraud, AI video fraud is a huge challenge. In the election, when we were monitoring the last election, by far the biggest problem or vulnerability of the election was a AI generated video campaign. that every day would take videos of Polyevs and Carney's speeches from the day before and generate, like morph them into conversations about investment strategies.Taylor27:05-27:07And it was driving people to a crypto scam.Taylor Owen27:08-27:11But it was torquing the political discourse.Taylor27:11-27:11That's what it must have been.Taylor Owen27:12-27:33I mean, there's other cases of this, but that's probably, and it was running rampant on particularly meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election, right? And it's not like the end of the world, right? Like nobody – but it torqued our political debate. It ripped off some people. And these kinds of scams are –Taylor27:33-27:38It's clearly illegal. It's clearly illegal. It probably breaks his election law too, misrepresenting a political figure, right?Taylor Owen27:38-27:54So I think there's probably an Elections Canada response to this that's needed. And it's fraud. And it's fraud, absolutely. 
Taylor Owen: So what do you do about that, right? The head of the Canadian Banking Association said there are billions of dollars of AI-based fraud in the Canadian economy right now. So it's a big problem.

Taylor (27:54-27:55): Yeah.

Taylor Owen (27:55-28:46): I actually think there's a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, right? And then you add fraud and AI scams as a category of harm. And all of a sudden, if you're Meta and you are operating in Canada during an election, you'd have to do a risk assessment on the AI-fraud potential of your product. Responsibility for your platform. And then, when this stuff starts to circulate, we would see it, they'd be called out on it, they'd have to take it down. And that's that, right? Then we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years, these online harms risk assessment models, and bringing some of the consumer-facing AI, both products and related harms, into the framework.

Nate Erskine-Smith (28:47-30:18): To put it a different way: this is years ago now that we had this grand committee in the UK holding Facebook and others accountable. This really was in the wake of the Cambridge Analytica scandal. And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability, saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument, which is that they completely acknowledged the need to take action when it came to child pornography. So they said, yeah, no liability for us, but of course there can be liability on this one specific piece of content, and we'll take action on this one specific piece of content. And it always struck me from there on out: there's no real intellectual consistency here. It's more just a question of what should be in that category of things that they should take responsibility for. Obviously harmful content like that should be; that's an obvious first step, and obvious for everyone. But there are other categories. Fraud is another one. When they're making so much money, when they are investing so much money in AI, when they've been ignoring privacy protections and everything else throughout the years, we can't leave it up to them. And setting a clear set of rules to say this is what you're responsible for, and expanding that responsibility, seems to make a good amount of sense.

Taylor Owen (30:18-30:28): It does, although I think those responsibilities need to be different for different kinds of harms, because there are different speech implications and democratic implications of absolute solutions for different kinds of content.

Taylor (30:28-30:30): So child pornography is a great example.

Taylor Owen (30:30-31:44): In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation there, and that's to take it down within 24 hours. And the reason you can do that with those two kinds of content is because, one, the AI is actually pretty good at spotting it. It might surprise you, but there are a lot of naked images on the internet that we can train AI with.
Taylor Owen (continued): So we're actually pretty good at using AI to pull this stuff down. But the bigger reason is that, as a society, I think it's okay to be wrong in the gray area of that speech. If it's debatable whether something is child pornography, I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story. We do not want to suppress and over-index on that gray area for hate speech, because that's going to capture a lot of reasonable debate that we probably want.

Nate Erskine-Smith (31:44-31:55): Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category, where it's very obviously illegal.

Taylor Owen (31:55-32:02): And that mechanism is a takedown mechanism, right? If we see fraud, if we know it's fraud, then you take it down. Some of these other things we have to handle differently.

Nate Erskine-Smith (32:02-32:24): My last question, really, is: pull the threads together. You've got these different pieces that were introduced in the past. And you've got a government with lots of similar folks around the table, but a new government and a new prime minister, certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor (32:24-32:25): Absolutely.

Nate Erskine-Smith (32:25-33:04): You have, for the first time in this country, an AI minister. A junior minister to industry, but still a specific titled portfolio, with his own deputy minister, and he really wants to be seized with this. And from every conversation I've had with him, he wants to maximize productivity in this country using AI, but he's also cognizant of the risks and wants to address AI safety. So where from here? You've talked in the past about a grander tech accountability and sovereignty act. Do we go piecemeal: a privacy bill here, an AI safety bill and an online harms bill there, disparate pieces? What's the answer here?

Taylor Owen (33:05-34:14): I mean, I don't have the exact answer. But there are some lessons from the past that this government could take. One is that piecemeal bills that aren't centrally coordinated, that have no connectivity between them, end up as piecemeal solutions that are imperfect and would benefit from some cohesiveness between them. So when the previous government released AIDA, the AI act, it was really in tension in some real ways with the online harms approach. Two different departments issuing two similar bills on two separate technologies, not really talking to each other, as far as I could tell from the outside. So we need a coordinated, comprehensive effort on digital governance. That's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to the thought that he, or that office, could play that coordinating role. Because AI is cross-cutting, right? Every department in our federal government touches AI in one way or another. And the governance of AI, and on the other side the adoption of AI by society, is going to affect every department and every bill we need.

Nate Erskine-Smith (34:14-34:35): So if Evan pulled in the privacy pieces, that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back.
Nate Erskine-Smith (continued): If he pulls in the online harms pieces that aren't related to the Criminal Code and drops those provisions, says, you know, Sean Fraser, you can deal with this if you like, but these are the pieces I'm holding on to.

Taylor Owen (34:35-34:37): With a frame of consumer safety, right?

Nate Erskine-Smith (34:37-34:37): Exactly.

Taylor Owen (34:38-34:39): If he wants...

Nate Erskine-Smith (34:39-34:54): Which is connected to privacy as well, right? These are all... So then you have, thematically, a bill that makes sense. And then you can pull in the AI safety piece as well. And then it becomes a consumer protection bill for how we live our lives online. Yeah.

Taylor Owen (34:54-36:06): And I think there's an argument over whether that should be one bill or multiple ones. I actually think there are cases for both, right? There's concern about big omnibus bills that do too many things, and too many committees reviewing them, and whatever. That's sort of a machinery-of-government question. But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to the public. We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is that at the same time as they're being told, by our government and by companies, that they should be using and embracing this powerful technology in their lives, they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear. Why should I use this thing, when I'm seeing some harms, I don't see you guys doing anything about those harms, and I'm seeing some potential real downside for me personally and my family? So even in the adoption frame, thinking about data privacy, safety, consumer safety... to me, that's the real frame here. It's citizen safety, consumer safety, in using these products. And politically, I mean, that is what it is. It makes sense to me.

Nate Erskine-Smith (36:06-36:25): Right, I agree. And really lean into child safety at the same time. Because I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go, these are the default settings on the parental controls.

Taylor (36:25-36:28): I want to turn to government and go, do your damn job.

Taylor Owen (36:28-36:48): Or just make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's always going to be a place with some content I would prefer he doesn't see. But I would just like some basic safety standards on that thing, so he's not seeing the worst of the worst.

Nate Erskine-Smith (36:48-36:58): And we should expect that. Certainly that YouTube, with its promotion engine, its recommendation function, is not actively promoting terrible content to your 12-year-old.

Taylor Owen (36:59-37:31): Yeah. That's like the de minimis ask. Can we just torque this a little bit, right? So maybe he's not seeing horrible content about Charlie Kirk when he's a 12-year-old on YouTube. Can we just do something? And I think that's a reasonable expectation as a citizen. But it requires governance.
Taylor Owen (continued): That will not... and it's worth putting a real emphasis on this: one thing we've learned in this moment of repeated deja vu, going back 20 years really, from our experience with social media through to now, is that these companies don't self-govern.

Taylor (37:31-37:31): Right.

Taylor Owen (37:32-37:39): We know that indisputably. So to think that AI is going to be different is delusional. They'll pursue profit, not the public interest.

Taylor (37:39-37:44): Of course. Because that's what they are. These are the largest companies in the world. Yeah, exactly. And the AI companies are even bigger than the last generation, right?

Taylor Owen (37:44-38:00): We're creating something new with the scale of these companies. And to think that their commercial incentives, and their broader long-term goals around AI, are not going to override these safety concerns is naive in the nth degree.

Nate Erskine-Smith (38:00-38:38): But I think you make the right point, and it's useful to close on this: these goals of realizing the productivity possibilities and potential of AI, alongside AI safety, are not mutually exclusive or oppositional goals. You create a sandbox to play in, and companies will be more successful. If you have certainty in regulations, companies will be more successful. And if people feel safe using these tools... certainly, if I feel safe with my kids learning these tools growing up, in their classrooms and everything else, adoption rates will soar. Absolutely. And then we'll all benefit.

Taylor Owen (38:38-38:43): They work in tandem, right? And I think you can't have one without the other, fundamentally.

Nate Erskine-Smith (38:45-38:49): Well, I hope I don't have to invite you back five years from now to have the same conversation.

Taylor Owen (38:49-38:58): Well, I hope you do invite me back in five years, but I hope it's to look back on all the legislative successes of the previous five years. I mean, that'll be the moment.

Taylor (38:58-38:59): Sounds good. Thanks, David. Thanks.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca
The first installment of Tietosuojapod's long-awaited (at least the hosts have been waiting) communications series is here! Jyri, Hannes, and Pilvi dive into the secrets of data protection communications, guided by Anna Brchisky, CEO of Bamla Agency. Anna first walks the trio through the basic terminology of communications, and before long, comprehensibility watchdog Hannes and communications fans Jyri and Pilvi momentarily forget the presence of expert Anna and each launch into a monologue about the importance of communication in a data protection professional's work, from their own perspective, but hypothetically, of course. Oh right, Anna!

The episode looks from many angles at how communication is an essential part of data protection. It is not only about trying to inform people by writing ethereal privacy-speak into notices about how much we care about your privacy; it is also about balancing overall risk and managing the mud when it hits the fan. Data protection is, after all, a competitive advantage, and GDPR a trump card that polishes the brand of European players worldwide... or is it? Anna explains, from a communications expert's perspective, what, where, and when it pays to invest in communications from a data protection standpoint. We go through the raw ingredients of high-quality data protection communications and conclude that not all the communications goodies fit in one episode, so we will return to the topic!

Take a firm hold of your self-esteem and hit play.

Did you like our episode? Buy us a coffee here, it helps: https://bmc.link/privacypod4u
You can follow TietosuojaPod on Instagram and LinkedIn @privacypod
You can send us feedback via private message on Instagram or LinkedIn, or by email at tietosuojapod@protonmail.com
AI is changing the way we work, live, and build businesses — but it also raises big questions about privacy. As AI tools process more personal and sensitive data, how can companies make sure they follow privacy laws like GDPR? How can privacy be built into AI from the very beginning? And what's the best way to handle data retention so users stay in control? In this episode of the FIT4Privacy Podcast, host Punit Bhatia speaks with Sylvestre Dupont, co-founder of Parser, about how to keep privacy at the heart of AI tools and services. They discuss why privacy matters in AI, how to build privacy by design into AI from the start, and what it takes to make an AI-based SaaS tool GDPR compliant. Sylvestre also shares his approach to data retention — letting users choose how long their data is stored — and why trust is a key advantage for any business handling personal data. If you work with AI, personal data, or GDPR, this episode gives you clear and practical ideas you can use right away.
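A rough sketch of the user-controlled retention idea described above: the record layout, field names, and in-memory storage here are illustrative assumptions for a generic SaaS backend, not Parser's actual implementation.

```python
from datetime import datetime, timedelta, timezone

# Illustrative records; in a real SaaS backend these would live in a database.
documents = [
    {"id": "doc-1", "owner": "alice", "uploaded_at": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"id": "doc-2", "owner": "bob", "uploaded_at": datetime(2025, 9, 1, tzinfo=timezone.utc)},
]

# Each user picks their own retention window (in days), per the episode's approach.
retention_days = {"alice": 30, "bob": 365}

def is_expired(doc, now=None):
    """True once a document outlives its owner's chosen retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = doc["uploaded_at"] + timedelta(days=retention_days[doc["owner"]])
    return now > cutoff

# A scheduled purge job would hard-delete expired records, honoring storage limitation.
expired_ids = [d["id"] for d in documents if is_expired(d)]
print(expired_ids)  # e.g. ['doc-1']
```

The key design choice the episode points at is that retention is a per-user setting enforced by actual deletion, not a buried global default.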
Christine Russo, host and creator of What Just Happened, sits with Ethan Chernofsky of Placer.ai.

Placer.ai was built with privacy at its core. From its 2018 launch, the company avoided collecting personally identifiable information (PII), instead focusing on anonymized, aggregate data. This approach aligned with GDPR and CCPA regulations, allowing Placer to demonstrate that location intelligence can be both privacy-centric and commercially valuable. While this choice meant leaving some revenue opportunities (like hyper-targeted advertising) on the table, it reinforced trust, credibility, and long-term sustainability.

Two major misconceptions surfaced in the discussion:

Data replaces intuition. Many assumed that advanced analytics would replace industry experience and gut instinct. In reality, Placer frames data as an empowerment tool, complementary to human judgment, not a substitute.

Visits equal transactions. A common misunderstanding is that foot traffic should directly correlate to sales. Instead, visits represent multiple forms of value: discovery, intent, pickup, consideration, and brand engagement. This broader view reframes physical stores as multi-purpose platforms for marketing, fulfillment, and consumer connection, not just sales points.

The conversation emphasized how retail decision-making is evolving:

From outdated tools to scalable intelligence. The industry shifted from handheld "clickers" and gut instinct toward data-driven decision frameworks that still honor human experience but make it actionable and scalable.

The pandemic's unexpected boost. Rather than killing physical retail, COVID-19 ultimately strengthened it, highlighting the resilience and adaptability of brick-and-mortar models.

Data as a universal language. Placer's insights became a common currency across verticals (real estate, retail, finance, CPG, and advertising), spurring new ways to measure impact, optimize inventory, and harmonize digital with physical.

The future of insights in the AI era. With AI simplifying access to information, the differentiator won't just be data but the decisions leaders make. Trust, creativity, and the ability to "zag" when others "zig" will define competitive advantage.
In this episode, I sit down with legal experts from Shardul Amarchand Mangaldas & Co, one of India's most respected full-service law firms. My guests are Bhargava K.S., Partner with 17+ years of experience in venture capital, private equity, and fund formation, and Mithila Hari, Principal Associate with deep expertise in VC/PE deals and strategic advisory.

Together, we unpack the legal playbook every founder must know, from choosing the right structure at incorporation to safeguarding IP, raising capital, and preparing for exits.

Here's what we cover:
• Choosing between LLP, Pvt. Ltd, or Non-Profit structures
• Common mistakes founders make when starting up
• Legal must-haves for fundraising and negotiating with investors
• Protecting IP, confidential information, and trade secrets
• Structuring offer letters, NDAs, and stock options for early hires
• Staying compliant with data privacy laws (GDPR, CCPA)
• Preparing for due diligence and smoother acquisitions
• Regulatory considerations for global companies entering India
• Real stories of startups navigating legal challenges

If you're a founder, investor, or anyone navigating the startup ecosystem, this episode is packed with legal strategies to help you scale smarter and avoid costly pitfalls.

Newsletter - https://techthrivenewsletter.beehiiv.com/

Disclaimer: This podcast is for informational purposes only and not legal, financial, or investment advice. Views shared are personal and may not reflect affiliated organizations. Please consult qualified professionals before making business, legal, or financial decisions. No client or advisory relationship is created by listening.

Shardul Amarchand Mangaldas & Co - https://www.amsshardul.com/
Bhargava K.S. - https://www.linkedin.com/in/bhargava-k-s-49698a19/?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app
Mithila Hari - https://www.linkedin.com/in/mithila-hari-698321120/?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app

Support the show

Connect with us
Instagram: https://www.instagram.com/ctrl.alt.thrive.podcast/
Youtube: https://www.youtube.com/@Ctrlaltthrive/videos

Connect with Navneet
Linkedin: https://www.linkedin.com/in/navneet-kaur-80109b227/
Instagram: https://www.instagram.com/nav_neeetkaur/
Expanding a brand on a global scale today no longer depends only on commercial intuition, but on the ability to interpret an enormous volume of data through increasingly sophisticated technologies. Advanced data analytics, artificial intelligence applied to marketing, and creator management platforms have become essential tools for understanding distant markets and communicating effectively with different cultures. But how do these tools work in practice, and what real impact do they have on a company's growth strategies? To analyze the technological side of internationalization and its practical applications, we invited Andrea Torri, Country Manager for Italy at Rocket Digital.

In the news section, we discuss the Italian Data Protection Authority's (Garante della Privacy) green light for the adoption of the CEREBRO platform to fight crime, and finally the innovative drone show in the Vatican.

--Contents--
00:00 - Introduction
01:31 - The Garante della Privacy's green light for CEREBRO (CyberSecurity360.it, Luca Martinelli)
03:16 - The innovative drone show in the Vatican (Wired.it, Matteo Gallo)
04:51 - Rocket Digital: how technology accelerates brand internationalization (Andrea Torri, Davide Fasoli, Matteo Gallo)
29:39 - Conclusion

--Transcript--
Read the transcript: https://www.dentrolatecnologia.it/S7E38#testo

--Contacts--
• www.dentrolatecnologia.it
• Instagram (@dentrolatecnologia)
• Telegram (@dentrolatecnologia)
• YouTube (@dentrolatecnologia)
• redazione@dentrolatecnologia.it

--Sponsor--
• Episode produced in collaboration with Rocket Digital

--Images--
• Cover photo: 8photo on Freepik

--Tracks--
• Ecstasy by Rabbit Theft
• Found You by Time To Talk, Avaya & RYVM
Send us a text

On this week of Serious Privacy, Ralph O'Brien of Reinbo Consulting and Dr. K Royal (Paul Breitbarth is travelling) discuss current events in privacy, data protection, and cyber law. A fascinating episode with all the hot stories, which seem to follow a theme: adequacy and child online safety, plus some enforcements. Coverage includes the European Court's decision in the Latombe suit challenging the adequacy of the EU-US Data Privacy Framework, plus Brazil, Tanzania, Argentina, Australia, China, ChatGPT, and so much more!

If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and Review us!

From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.
It's the most curious paradox of today's digital revolution. While the computers, the internet, smartphones and AI all appear magical, they haven't actually translated into equally magical economic progress. That, at least, is the counter-intuitive argument of the Oxford economist Carl Benedikt Frey, whose new book, How Progress Ends, suggests that the digital revolution isn't resulting in an equivalent revolution of productivity. History is repeating itself in an equally paradoxical way, Frey warns. We may, indeed, be repeating the productivity stagnation of the 1970s, in spite of our technological marvels. Unlike the 19th-century industrial revolution that radically transformed how we work, today's digital tools—however impressive—are primarily automating existing processes rather than creating fundamentally new types of economic activity that drive broad-based growth. And AI, by making existing work easier rather than creating new industries, will only compound this paradox. It might be the fate of not just the United States and Europe, but China as well. That, Frey warns, is how all progress will end.

1. The Productivity Paradox is Real. Despite revolutionary digital technologies, we're not seeing the productivity gains that past technological revolutions delivered. It took a century for steam to show its full economic impact, four decades for electricity—but even accounting for lag time, the computer revolution has underperformed economically compared to its transformative social effects.

2. Automation vs. Innovation: The Critical Distinction. True progress comes from creating entirely new industries and types of work, not just automating existing processes. The mid-20th century boom created the automobile industry and countless supporting sectors. Today's AI primarily makes existing work easier rather than spawning fundamentally new economic activities.

3. Institutional Structure Trumps Technology. The Soviet Union succeeded when scaling existing technology but failed when innovation was needed because it lacked decentralized exploration. Success requires competitive, decentralized systems where different actors can take different bets—like Google finding funding after Bessemer Ventures said no.

4. Europe's Innovation Crisis Has a Clear Diagnosis. Europe lags in digital not due to lack of talent or funding, but because of fragmented markets and regulatory burdens. The EU's internal trade barriers in services amount to a 110% tariff equivalent, while regulations like GDPR primarily benefit large incumbents who can absorb compliance costs.

5. Geography Still Matters in the Digital Age. Silicon Valley's success stemmed from unenforceable non-compete clauses that enabled job-hopping and knowledge transfer, while Detroit's enforcement of non-competes after 1985 contributed to its decline. As AI makes many services tradeable globally, high-cost innovation centers face new competitive pressures from lower-cost locations.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
Have you missed us? So have we! Here are the first news of the fall, and oh boy, what news they are! This time, with Hannes, Jyri, and Pilvi, we take a look at, among other things:

The fresh pseudonymization ruling, i.e. the SRB ruling, which, judging by LinkedIn posts, changes the whole world (or at least the GDPR, the concept of personal data, and the internet). But what happened next? Does anything actually change, or will only the amount of documentation and billable hours increase?

The S-Bank data breach, Episode V: The Data Protection Ombudsman Empire Strikes Back. An epic tapestry delving into legal certainty and the core principles of procedural law.

Chat Control: the latest twists in the ongoing saga. Remember to oppose it if you support freedom and democracy. Remember Nepal, too. And Alan Moore; do you have your Guy Fawkes mask yet?

Google breaks websites' monetization logic in defiance of copyright. Users rejoice, but what does it mean for the content of millions (or billions?) of websites? Will the internet break? Will Google break?

Hit play and prepare to be triggered.

LINKS:
SRB and the end of pseudonymization (press release; the ruling itself is not yet available): https://curia.europa.eu/jcms/upload/docs/application/pdf/2025-09/cp250107en.pdf
S-Bank's mega-fine: https://tietosuoja.fi/-/s-pankille-seuraamusmaksu-s-mobiilin-tietoturvahaavoittuvuudesta
Chat Control: https://www.techradar.com/computing/cyber-security/chat-control-germany-joins-the-opposition-against-mandatory-scanning-of-private-chats-in-the-name-of-encryption?utm_source=chatgpt.com
Penske Media v. Google: https://www.forbes.com/sites/andymeek/2025/09/14/why-rolling-stone-owner-penske-media-just-declared-war-on-google/

Did you like our episode? Buy us a coffee here, it helps: https://bmc.link/privacypod4u
You can follow TietosuojaPod on Instagram and LinkedIn @privacypod
You can send us feedback via private message on Instagram or LinkedIn, or by email at tietosuojapod@protonmail.com
Join our Discord channel here: https://discord.gg/c4ggY9Vx

Table of contents for Herkko:
00:00 Introduction to information security and personal experiences
03:04 EU regulation and personal data
05:59 The significance of personal data in legal contexts
09:05 Discussion on the nature of personal data
12:00 The role of opinions in data protection
15:05 Understanding data transfers and responsibilities
18:06 The GDPR's impact on data processing
21:05 Anonymous data and its implications
24:11 The S-Bank case and its legal consequences
30:12 Reviewing the sanctions
33:59 The roles of the Data Protection Ombudsman (TSV) and the Financial Supervisory Authority (FIVA) in regulation
36:21 Consequences of data breaches
39:30 The impact of tax decisions on economic activity
44:33 Discussion on children's data protection and privacy
47:38 The challenges of Chat Control regulation
55:55 The future of search engines and content indexing
Marketing teams used to have a simple enough job: follow the click, count the conversions, and shift the budget accordingly. But that world is gone. GDPR, iOS restrictions, and browser-level changes have left most attribution models broken or unreliable. So what now?

In this episode, I sat down with Fredrik Skansen, CEO of Funnel, to unpack how marketing intelligence actually works in a world where data is partial, journeys are fragmented, and the old models don't hold. Since founding Funnel in 2014, Fredrik has grown the company into a platform that supports over 2,600 brands and handles reporting on more than 80 billion dollars in annual digital spend. That scale gives him a front-row seat to the questions every CMO and CFO are asking right now.

Fredrik explains why last-click attribution didn't just become inaccurate. It became misleading. With tracking capabilities stripped down and user signals disappearing, the industry has had to move toward modeled attribution and real-time optimisation. That only works if your data is clean, aligned, and ready for analysis. Funnel's platform helps structure campaigns upfront, pull data into a unified model, apply intelligence, push learnings back into the platforms, and produce reporting that makes sense to the wider business. This isn't about dashboards. It's about decisions.

We also talk about budget mix. Performance channels may feel safe, but Fredrik points out they are also getting more expensive. When teams bring brand and mid-funnel activity back into the measurement framework, the picture often changes. He shares how Swedish retailer Gina Tricot grew from 100 million to 300 million dollars in three years, in part by shifting spend to brand and driving demand earlier in the customer journey. That move only felt safe because the data supported it.

AI adds another layer. With tools like Perplexity reshaping search behavior and the web shifting from links to answers, click-throughs are drying up. But it's not the end of visibility. Content still matters. So does structure. The difference is that now your reader might be an AI model, not a human. That requires a rethink in how brands approach discoverability, authority, and engagement.

What makes Funnel interesting is that it doesn't stop at analytics. The platform feeds insight back into action, reducing waste and creating tighter loops between teams. It also works for agencies, which is why groups like Havas use it across 40 offices through a global agreement.

If you're tired of attribution theatre and want to understand what marketing measurement looks like when it's built for reality, this episode gives you a clear, usable view. Listen in, then tell me which decision you're still guessing on. Because marketing can be measured. Just not the way it used to be.

*********

Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months with the Software QA Engineering Bootcamp from Careerist https://crst.co/OGCLA
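To make the attribution contrast concrete, here is a minimal sketch of last-click versus a simple position-based (U-shaped) model. This is a generic textbook illustration, not Funnel's methodology, and the channel names are made up.

```python
def last_click(touchpoints):
    """All conversion credit goes to the final touch; earlier work is invisible."""
    return {touchpoints[-1]: 1.0}

def position_based(touchpoints, first=0.4, last=0.4):
    """U-shaped model: heavy credit to first and last touch, remainder spread
    across the middle. Assumes unique channel labels in journey order."""
    if len(touchpoints) == 1:
        return {touchpoints[0]: 1.0}
    credit = {tp: 0.0 for tp in touchpoints}
    credit[touchpoints[0]] += first
    credit[touchpoints[-1]] += last
    middle = touchpoints[1:-1]
    for tp in middle:
        credit[tp] += (1.0 - first - last) / len(middle)
    return credit

journey = ["paid_search", "newsletter", "retargeting", "direct"]
print(last_click(journey))      # {'direct': 1.0}: brand and mid-funnel earn nothing
print(position_based(journey))  # credit spread across the whole journey
```

Under last-click, the brand and mid-funnel touches that created the demand earn zero credit, which is exactly how over-investment in "safe" performance channels happens.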
Send us a text

Privacy and cybersecurity leader Sonia Siddiqui joins us to explore the collision between emerging technologies and privacy regulations, offering insights on how companies can navigate this complex landscape while building trust.

• Sonia's journey from aspiring architect to privacy expert, motivated by the intersection of civil rights and privacy
• The growing gap between rapid technological innovation and slower-moving regulatory frameworks
• Examining real-world tensions like WorldCoin's iris scanning under GDPR's biometric data provisions
• Why privacy should be a core business enabler rather than just a compliance checkbox
• The importance of implementing privacy by design as a living process that evolves with technology
• Why principles-based regulation allows for better adaptation to new technologies than prescriptive rules
• The inseparable relationship between privacy and security in building customer trust
• How privacy professionals can stay current through professional networks, podcasts, and continuous learning
• Essential privacy resources including "The Unwanted Gaze" and "Determann's Field Guide to Privacy"

Find Sonia and her privacy consulting practice at tamarack.solutions or connect with her at the upcoming AI conference in Boston.

Support the show
How can we apply differential privacy to real-world scenarios? How do you go about algorithmic design? Is there a conflict between data minimization and differential privacy? Can you solve for personal data finding its way into machine learning models? Where can a young professional find resources to dive deeper?

References:
* Daniel Simmons-Marengo on LinkedIn
* OpenDP
* Some takeaways from PEPR'24 (USENIX Conference on Privacy Engineering Practice and Respect 2024)
* Damien Desfontaines: Differential Privacy in Data Clean Rooms (Masters of Privacy, January 2024)
* NIST Guidelines for Evaluating Differential Privacy Guarantees (March 2025)
* Peter Craddock: EDPS v SRB, the relative nature of personal data, processors, transparency, impact on MarTech and AdTech (Masters of Privacy, September 2025)
* Katharine Jarmul: Demystifying Privacy Enhancing Technologies (Masters of Privacy, October 2023)
* Sunny Kang: Machine Learning meets Privacy Enhancing Technologies (Masters of Privacy, February 2023)
* How GDPR changes the rules for research (Gabe Maldoff, IAPP blog, 2016)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe
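For listeners new to the topic, here is a minimal sketch of the building block behind most of the episode's questions: the Laplace mechanism applied to a counting query. It is a textbook illustration, not OpenDP's API.

```python
import numpy as np

def dp_count(records, predicate, epsilon):
    """Epsilon-differentially-private count via the Laplace mechanism.
    A count has sensitivity 1: adding or removing one person's record
    changes the true answer by at most 1, so noise scales as 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 61, 38]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count of people aged 40+
```

Smaller epsilon means more noise and stronger privacy; that trade-off is the knob behind the data-minimization tension the episode raises.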
Alex Kauffmann has resumed his role as principal moderator of 'The Listeners Chair', reclaiming the central chair from which audience questions are drawn. These are then picked apart and reframed within environmental contexts to tease out wider significance.

Lilly, from Summertown, Oxford, England sets the first question: "Should Oxford colleges open up more of their greenspaces to the wider population and tourists, or is it ok to keep them private, and is it only the poor driving the move because they want some of what the rich have? Some people actually attending the college don't even get to see some of the private internal areas. I heard it said opening up threatens students' GDPR protection, and that students don't really want to be disturbed. The Town and Gown rivalry still lives on, and the university often gets bad press. People often forget there are two universities in Oxford."

Stuart, forever Lilly's "expert" after one fateful consultation, swears they're basically besties, especially now she's firing off another sly jab at the critic who dared to be dismissive back then.

He digresses into the "sleeve Olympics," where longer gown fabric apparently equals status. Then, like a city tour guide, he sketches a divide between the postcard-perfect centre and "real Oxford," the suburbs where life actually happens.

Alex, all cynicism, scoffs that locals couldn't care less about polished lawns; they're strictly tourist bait. William, sounding like the tourist board, notes that plenty of colleges open their gates, sometimes free for residents, though all the quads blur together: same stones, same chapels.

Back with Stuart, who moans these patches are so tiny you'd wreck your shoes circling them, and forget walking on grass.

Alex delivers his verdict: if dons don't stroll freely, neither should tourists. William agrees students do deserve their hush-hush study sanctuaries, but insists that visitors tread as reverently as in a cathedral.

Luna, from San José del Cabo, Mexico brings the next question: "I see the biggest threats to humanity outside of the multiple climate related issues as truth distortion, feral social media and runaway AI. What do you think?"

Stuart resets by clarifying that "threats to humanity" means existential doom, not oat-milk prices. He drops the wisdom of belly-button gazing: stare too long and all you get is fluff, not enlightenment. Translation? Stop spiraling; take action, even if it's just colour-coding your apocalypse survival kit alphabetically.

Alex wonders whether "threats" means asteroids or endless propaganda. William connects social media, AI, and collapsing truth like red string on a board, warning not to trust any single source.

Alex, ever the optimist, claims independent news influencers are thriving, which he counts as hopeful. He advises we stop fretting over constant climate doom, since total self-destruction is unlikely. His news tip? "News Daddy," a TikTok oracle free of corporate spin.

William closes with the mic-drop: the gravest threat to humanity is "believing our own bullshit." Hard to argue with that. Now hand me the navel fluff.

What do you make of this discussion? Do you have a question that you'd like us to discuss? Let us know by sending an email to thepeoplescountryside@gmail.com

Sign the Petition - Improve The Oxfordshire Countryside Accessibility For All Disabilities And Abilities: change.org petition

We like to give you an ad free experience.
We also like our audience to be relatively small and engaged; we're not after numbers.

This podcast's overall themes are nature, philosophy, climate, the human condition, sustainability, and social justice. Help us to spread the impact of the podcast by sharing this link with 5 friends: podfollow.com/ThePeoplesCountrysideEnvironmentalDebatePodcast, and support our work through Patreon: patreon.com/thepeoplescountryside. Find out all about the podcast via this one simple link: linktr.ee/thepeoplescountryside
This week's World of DaaS LM Brief looks at GDPR compliance for non-EU and non-UK companies. Firms handling resident data in these regions are required to appoint local representatives who serve as points of contact for regulators and individuals exercising their data rights. This step is critical for ensuring compliance with both GDPR and UK GDPR.Listen to this short podcast summary, powered by NotebookLM.
In this episode, Andreas Munk Holm speaks with Dom Hallas, Executive Director of the UK's Startup Coalition, to explore how the organization is influencing policy at the intersection of startups, venture capital, and government. From immigration reform to capital access and regulatory red tape, Dom brings a candid view on what it takes to create real impact for founders across Europe.

They dive into the power of founder-first advocacy, the evolving lobbying landscape in Europe, and the urgent need for a united tech voice across the continent.

Here's what's covered:
01:10 Why Policy is a Competitive Sport
03:42 GDPR, Brussels & Lessons from Tech Regulation
05:12 What is the Startup Coalition & Who Funds It?
07:13 The Three Buckets: Talent, Capital, Regulation
11:20 Why Founders Need Their Own Voice in Politics
16:31 Making Advocacy Fun, Human & Effective
17:56 What Startups Can Learn from Farmers
21:30 Time Horizons & Playbooks in Policy Work
26:18 How the Coalition Sets its Agenda
31:46 A Crossroads for European Tech
35:46 The Current Policy Agenda: Talent, Finance & Reg
43:27 Funding the Underfunded: Inclusion as Policy
47:01 Regulation That Clears the Way for the Next Thing
Do you want to use AI without losing trust? What frameworks help build trust and manage AI responsibly? Can we really create trust while using AI?

In this episode of the FIT4PRIVACY Podcast, host Punit Bhatia and digital trust expert Mark Thomas explain how to govern and manage AI in ways that build real trust with customers, partners, and society.

This episode breaks down what it means to use AI responsibly and how strong governance can help avoid risks. You'll also learn about key frameworks like ISO 42001, the EU AI Act, and the World Economic Forum's Digital Trust Framework, and how they can guide your AI practices.

Mark and Punit also talk about how organizational culture, company size, and leadership affect how AI is used, and how trust is built (or lost). They discuss real-world tips for making AI part of your existing business systems, and how to make decisions that are fair, explainable, and trustworthy.
Nonprofits lean on outside platforms to save time and stretch budgets, but those relationships can quietly expose sensitive donor, client, and payment data. In this episode, Senior Cybersecurity Advisor Parker Brissette of Richey May explains how to recognize and manage third-party software risk before it becomes tomorrow's headline.

He starts with a simple lens: follow the data. Where is it stored? Who can touch it, directly or indirectly? Many teams only think about contracted vendors, but Parker widens the aperture to "shadow IT" and consumer tools staff use without formal approval. As he puts it, "Third parties is really anybody that can touch the data at any point in your business, whether you have an agreement with them or maybe not."

From privacy regulations (GDPR, CCPA) to sector-specific rules (HIPAA, PCI), nonprofits carry legal and reputational exposure the moment personal information enters their systems. Parker offers practical steps: inventory paid tools via your accounting system; ask, "If this vendor vanished tomorrow, what would break?"; and press vendors for proof, such as SOC 2 reports, ISO 27001, or completed security questionnaires. For organizations without a CIO, he recommends clear contracts and one non-negotiable safeguard: "The biggest thing that I recommend in any third-party engagement is setting an expectation of having cyber insurance, because that's a big protection for you financially."

AI enters the picture with both promise and peril. Consumer AI tools can learn from and retain your uploads, potentially exposing proprietary or personal information. Enterprise agreements (e.g., Microsoft Copilot) can offer stronger data protections, but only if configured and used correctly. Parker's guidance is pragmatic: don't ban AI; set guardrails, choose vetted tools, and train teams.

Finally, he urges preparation and transparency. Incidents can happen, even with good controls. Donors and corporate funders expect frank communication about what protections exist and what happens if data is exposed. Build trust now by documenting safeguards, validating vendors, and rehearsing your response.

You don't have to be a security expert to make smart choices, but you do need a map: know your systems, test your assumptions, ask vendors for evidence, and write risk into your contracts and budgets. That approach turns anxiety into action, and preserves the trust your mission depends on.

Find us Live daily on YouTube!
Find us Live daily on LinkedIn!
Find us Live daily on X: @Nonprofit_Show
Our national co-hosts and amazing guests discuss management, money and missions of nonprofits! 12:30pm ET 11:30am CT 10:30am MT 9:30am PT
Send us your ideas for Show Guests or Topics: HelpDesk@AmericanNonprofitAcademy.com
Visit us on the web: The Nonprofit Show
Welcome back to Meeting of the Minds, a special podcast episode series by EM360Tech, where we talk about the future of tech.

In this Big Data special episode of the Meeting of the Minds, our expert panel – Ravit Jain, podcast host, Christina Stathopoulos of Dare to Data and a data and AI evangelist, Wayne Eckerson, data strategy consultant and president of the Eckerson Group, and Kevin Petrie, VP of Research at BARC – come together again to discuss the key data and AI trends, particularly focusing on data ethics. They discuss ethical issues related to using AI, the need for data governance and guidelines, and the essential role of data quality in AI success. The speakers also look at how organisations can measure the value of AI through different KPIs, stressing the need for a balance between technical achievements and business results.

Our data experts examine the changing role of AI across various sectors, with a focus on success metrics, the effects on productivity and employee stress, changes in education, and the possible positive and negative impacts of AI in everyday life. They highlight the need to balance productivity with quality and consider the ethics of autonomous AI systems.

In the previous episode, new challenges and opportunities in data governance, regulatory frameworks, and the AI workforce were discussed. They looked at the important balance between innovation and ethical responsibility, examining how companies are handling these issues.

Tune in to get new understandings about the future of data and AI and how your enterprise can adapt to the upcoming changes and challenges. Hear how leaders in the field are preparing for a future that is already here.

Also watch: Meeting of the Minds: State Of Cybersecurity in 2025

Takeaways:
• Generative AI is creating a supply shock in cognitive power.
• Companies are eager for data literacy and AI training.
• Data quality remains a critical issue for AI success.
• Regulatory frameworks like GDPR are shaping AI governance.
• The US prioritises innovation, sometimes at the expense of regulation.
• Generative AI introduces new risks that need to be managed.
• Data quality issues are often the root of implementation failures.
• AI's impact on jobs is leading to concerns about workforce automation.
• Organisations must adapt to the probabilistic nature of generative AI.
• The conversation around data quality is ongoing and evolving.
• AI literacy and data literacy are crucial for workforce success.
• Executives are more concerned about retraining than layoffs.
• Younger workers may struggle to evaluate AI-generated answers.
• Incremental changes in productivity are expected with AI.
• Job displacement may not be immediate, but could create future gaps.
• Human empathy and communication skills remain essential in many professions.
• AI will augment, not replace, skilled software developers.
• Global cooperation is needed to navigate...
The Data (Use and Access) Act 2025 (DUAA) has brought the most significant changes to UK data protection since UK GDPR came into force. While it doesn't replace UK GDPR, the DPA 2018, or PECR, the DUAA reshapes how organisations process personal data, handle subject access, manage cookies, and apply legitimate interests.

In this episode, we share highlights from our live webinar, where VinciWorks experts explained how these reforms affect compliance strategies. From broad consent in scientific research and recognised legitimate interests, to expanded cookie exemptions, stricter rules for children's services, and higher PECR fines, the DUAA introduces both opportunities and risks.

Listen in to learn:
• What the DUAA changes — and what stays the same
• Updates to subject access rights and proportionality
• Cookie rules, soft opt-in for charities, and tougher PECR fines
• Automated decision-making and AI compliance under the DUAA
• The new "data protection test" for international transfers
• Practical steps to future-proof your compliance framework

This episode is essential listening for data protection officers, compliance professionals, and legal teams preparing for the future of UK data protection.
Could ongoing trials redefine the management of oligometastatic and advanced prostate cancer? In this installment of BackTable Tumor Board, leading prostate cancer experts Dr. Neeraj Agarwal, a medical oncologist from the University of Utah, and Dr. Tyler Seibert, a radiation oncologist from UC San Diego, join host Dr. Parth Modi to share their insights on the latest clinical trials and persistent challenges in managing prostate cancer.

---
This podcast is supported by Ferring Pharmaceuticals.
---
SYNOPSIS
The multidisciplinary discussion addresses clinical decision-making in active surveillance versus early intervention, the role of PSMA PET imaging in detection and treatment planning, and evolving strategies for metastatic and castration-resistant disease. They also evaluate the therapeutic potential of alpha emitters and radioligand therapies, consider the evidence behind treatment intensification and de-intensification, and explore how these approaches can be individualized to optimize patient outcomes.

---
TIMESTAMPS
0:00 - Introduction
1:48 - Active Surveillance in Low-Risk Prostate Cancer
7:08 - Molecular Testing and Risk Stratification
8:28 - Radiation Therapy Approaches
20:16 - PSA Recurrence and PSMA PET Scans
32:40 - The Role of ADT
37:15 - PSMA PET Scans
40:58 - Genetic Testing in High-Risk and Metastatic Prostate Cancer
46:54 - Treatment Intensification vs. De-Intensification Trials
55:59 - Castration-Resistant Prostate Cancer
Broadcasting from Florence and Los Angeles, I Had One of Those Conversations...

You know the kind—where you start discussing one thing and suddenly realize you're mapping the entire landscape of how different societies approach technology. That's exactly what happened when Rob Black and I connected across the Atlantic for the pilot episode of ITSPmagazine Europe: The Transatlantic Broadcast.

Rob was calling from what he optimistically described as "sunny" West Sussex (complete with biblical downpours and Four Seasons weather in one afternoon), while I enjoyed actual California sunshine. But this geographic distance perfectly captured what we were launching: a genuine exploration of how European perspectives on cybersecurity, technology, and society differ from—and complement—American approaches.

The conversation emerged from something we'd discovered at InfoSecurity Europe earlier this year. After recording several episodes together with Sean Martin, we realized we'd stumbled onto something crucial: most global technology discourse happens through an American lens, even when discussing fundamentally European challenges. Digital sovereignty isn't just a policy buzzword in Brussels—it represents a completely different philosophy about how democratic societies should interact with technology.

Rob Black: Bridging Defense Research and Digital Reality

Rob brings credentials that perfectly embody the European approach to cybersecurity—one that integrates geopolitics, human sciences, and operational reality in ways that purely technical perspectives miss. As UK Cyber Citizen of the Year 2024, he's recognized for contributions that span UK Ministry of Defence research on human elements in cyber operations, international relations theory, and hands-on work with university students developing next-generation cybersecurity leadership skills.

But what struck me during our pilot wasn't his impressive background—it was his ability to connect macro-level geopolitical cyber operations with the daily impossible decisions that Chief Information Security Officers across Europe face. These leaders don't see themselves as combatants in a digital war, but they're absolutely operating on front lines where nation-state actors, criminal enterprises, and hybrid threats converge.

Rob's international relations expertise adds crucial context that American cybersecurity discourse often overlooks. We're witnessing cyber operations as extensions of statecraft—the ongoing conflict in Ukraine demonstrates how narrative battles and digital infrastructure attacks interweave with kinetic warfare. European nations are developing their own approaches to cyber deterrence, often fundamentally different from American strategies.

European Values Embedded in Technology Choices

What emerged from our conversation was something I've observed but rarely heard articulated so clearly: Europe approaches technology governance through distinctly different cultural and philosophical frameworks than America. This isn't just about regulation—though the EU's leadership from GDPR through the AI Act certainly shapes global standards. It's about fundamental values embedded in technological choices.

Rob highlighted algorithmic bias as a perfect example. When AI systems are developed primarily in Silicon Valley, they embed specific cultural assumptions and training data that may not reflect European experiences, values, or diverse linguistic traditions.
The implications cascade across everything from hiring algorithms to content moderation to criminal justice applications.

We discussed how this connects to broader patterns of technological adoption. I'd recently written about how the transistor radio revolution of the 1960s paralleled today's smartphone-driven transformation—both technologies were designed for specific purposes but adopted by users in ways inventors never anticipated. The transistor radio became a tool of cultural rebellion; smartphones became instruments of both connection and surveillance.

But here's what's different now: the stakes are global, the pace is accelerated, and the platforms are controlled by a handful of American and Chinese companies. European voices in these conversations aren't just valuable—they're essential for understanding how different democratic societies can maintain their values while embracing technological transformation.

The Sociological Dimensions Technology Discourse Misses

My background in political science and sociology of communication keeps pulling me toward questions that pure technologists might skip: How do different European cultures interpret privacy rights differently? Why do Nordic countries approach digital government services so differently than Mediterranean nations? What happens when AI training data reflects primarily Anglo-American cultural assumptions but gets deployed across 27 EU member states with distinct languages and traditions?

Rob's perspective adds the geopolitical layer that's often missing from cybersecurity conversations. We're not just discussing technical vulnerabilities—we're examining how different societies organize themselves digitally, how they balance individual privacy against collective security, and how they maintain democratic values while defending against authoritarian digital influence operations.

Perhaps most importantly, we're both convinced that the next generation of European cybersecurity leaders needs fundamentally different skills than previous generations. Technical expertise remains crucial, but they also need to communicate complex risks to non-technical decision-makers, operate comfortably with uncertainty rather than seeking perfect solutions, and understand that cybersecurity decisions are ultimately political decisions about what kind of society we want to maintain.

Why European Perspectives Matter Globally

Europe represents 27 different nations with distinct histories, languages, and approaches to technology governance, yet they're increasingly coordinating digital policies through EU frameworks. This complexity is fascinating, and the implications are global. When Europe implements new AI regulations or data protection standards, Silicon Valley adjusts its practices worldwide.

But European perspectives are too often filtered through American media or reduced to regulatory footnotes in technology publications. We wanted to create space for European voices to explain their approaches in their own terms—not as responses to American innovation, but as distinct philosophical and practical approaches to technology's role in democratic society.

Rob pointed out something crucial during our conversation: we're living through a moment where "every concept that we've thought about in terms of how humans react to each other and how they react to the world around them now needs to be reconsidered in light of how humans react through a computer mediated existence."
This isn't abstract philosophizing—it's the practical challenge facing policymakers, educators, and security professionals across Europe.Building Transatlantic Understanding, Not DivisionThe "Transatlantic Broadcast" name reflects our core mission: connecting perspectives across borders rather than reinforcing them. Technology challenges—from cybersecurity threats to AI governance to digital rights—don't respect national boundaries. Solutions require understanding how different democratic societies approach these challenges while maintaining their distinct values and traditions.Rob and I come from different backgrounds—his focused on defense research and international relations, mine on communication theory and sociological analysis—but we share curiosity about how technology shapes society and how society shapes technology in return. Sean Martin brings the American cybersecurity industry perspective that completes our analytical triangle.Cross-Border Collaboration for European Digital FutureThis pilot episode represents just the beginning of what we hope becomes a sustained conversation. We're planning discussions with European academics developing new frameworks for digital rights, policymakers implementing AI governance across member states, industry leaders building privacy-first alternatives to Silicon Valley platforms, and civil society advocates working to ensure technology serves democratic values.We want to understand how digital transformation looks different across European cultures, how regulatory approaches evolve through multi-stakeholder processes, and how European innovation develops characteristics that reflect distinctly European values and approaches to technological development.The Invitation to Continue This ConversationBroadcasting from our respective sides of the Atlantic, we're extending an invitation to join this ongoing dialogue. Whether you're developing cybersecurity policy in Brussels, building startups in Berlin, teaching digital literacy in Barcelona, or researching AI ethics in Amsterdam, your perspective contributes to understanding how democratic societies can thrive in an increasingly digital world.European voices aren't afterthoughts in global technology discourse—they're fundamental contributors to understanding how diverse democratic societies can maintain their values while embracing technological change. This conversation needs academic researchers, policy practitioners, industry innovators, and engaged citizens from across Europe and beyond.If this resonates with your own observations about technology's role in society, subscribe to follow our journey as we explore these themes with guests from across Europe and the transatlantic technology community.And if you want to dig deeper into these questions or share your own perspective on European approaches to cybersecurity and technology governance, I'd love to continue the conversation directly. Get in touch with us on Linkedin! Marco CiappelliBroadcasting from Los Angeles (USA) & Florence (IT)On Linkedin: https://www.linkedin.com/in/marco-ciappelliRob BlackBroadcasting from London (UK)On Linkedin https://www.linkedin.com/in/rob-black-30440819Sean MartinBroadcasting from New York City (USA)On Linkedin: https://www.linkedin.com/in/imsmartinThe transatlantic conversation about technology, society, and democratic values starts now.
We're joined by two GenAI experts from AWS and DoiT to understand why GenAI POCs fail at production scale, how to evaluate LLMs, and how to approach GenAI production readiness. The discussion covers four GenAI workload migration patterns, AWS Bedrock's systematic migration framework, enterprise compliance challenges including HIPAA and GDPR requirements, and practical approaches to evaluating model performance across speed, cost, and quality.
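As a rough illustration of the speed/cost/quality trade-off the episode discusses, here is a minimal, model-agnostic Python sketch. Everything in it is a placeholder: the `call_model` stub stands in for a real inference call, the token counts are naive, and the per-token prices are invented for the example, not Bedrock's actual API or pricing.

```python
import time

# Placeholder prices (USD per 1K tokens) -- illustrative only, not real pricing.
PRICING = {"model-a": {"in": 0.003, "out": 0.015},
           "model-b": {"in": 0.0008, "out": 0.004}}

def call_model(model_id: str, prompt: str) -> str:
    """Stub standing in for a real inference call made via an SDK."""
    time.sleep(0.05)  # simulate network + inference latency
    return f"[{model_id}] answer to: {prompt}"

def evaluate(model_id: str, prompt: str, expected_keywords: list[str]) -> dict:
    start = time.perf_counter()
    output = call_model(model_id, prompt)
    latency = time.perf_counter() - start
    # Naive whitespace token count; swap in a real tokenizer for serious use.
    tokens_in, tokens_out = len(prompt.split()), len(output.split())
    price = PRICING[model_id]
    cost = tokens_in / 1000 * price["in"] + tokens_out / 1000 * price["out"]
    # Crude quality proxy: fraction of expected keywords present in the output.
    hits = sum(k.lower() in output.lower() for k in expected_keywords)
    quality = hits / max(len(expected_keywords), 1)
    return {"model": model_id, "latency_s": round(latency, 3),
            "cost_usd": round(cost, 6), "quality": quality}

for m in PRICING:
    print(evaluate(m, "Summarize our GDPR data-retention policy", ["GDPR", "retention"]))
```

The point of the sketch is the shape of the harness, not the numbers: running every candidate model through the same prompt set and scoring all three axes side by side is what turns "which LLM should we use?" from a debate into a table.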
In this milestone 100th episode of our podcast, Stuart celebrates seven years of the show by reminiscing about its origins and dipping into the very first episode from September 2018. Then it's time to welcome back friend of the show Vanessa Viger, Chief Marketing Officer at Envision. Vanessa provides an overview of the new Ally Solos smart glasses, discussing their affordability, lighter design, and key features.

The Ally Solos Glasses are designed to look like normal, lightweight everyday frames. They feature a camera on the left arm and battery-powered, removable stems that offer 10 hours of battery life, with swappable arms for all-day use. They will be available in medium and large sizes and three colour options, complete with a folding case. The glasses are offered at an early bird price of £380, normally around £532. This price includes a one-year Ally Pro subscription worth around £200.

They connect via Bluetooth to a phone and can function as a headset for music and calls. They utilize existing Ally profiles/personalities across devices and employ multiple AI models for different tasks, focusing on accuracy over speed for safety. Unlike the hardware-focused Envision glasses, the Ally Solos adopt a software platform approach. Envision provides customer support, while Solos handles fulfilment.

These glasses are built with an accessibility-first approach, distinguishing them from general market products like Meta Ray-Bans. Key scenarios include reading menus, food packaging, and signs; getting real-time descriptions of what's around you; checking your calendar; and even getting the weather forecast. They are also compatible with prescription and tinted lenses and aim to target a broader market, including the elderly, individuals with cognitive needs, and those seeking general convenience. Ally and Ally Solos Glasses users have complete control over their privacy: all Envision products are GDPR compliant, and users are opted out of data sharing by default.
Here are some helpful links. See the entire original post here for the most support.

This video will help most WordPress users get the drift of how to set up the widget to create a sign-up form. Setting up this widget will only take 15 minutes. Congratulate yourself after you do it.

Now do one more thing. Set up an account with a free email service provider like MailChimp. You want to do this so you can nab that necessary bit of custom code you'll need to paste into that WordPress subscribe form widget you just set up. Learn how to wed your MailChimp account to your WordPress widget here. Congratulate yourself even more profusely after this step. Get yourself some roses or something. Because you just did something huge for your art business.

Oh God, Do I Have to Add MailChimp?

Unlike Gmail, MailChimp will gift your readers with an easy way to update their preferences or to unsubscribe altogether. You want to give people that option, right? It's the polite thing to do, and it keeps you safe from GDPR compliance troubles. Plus, once you master MailChimp, you'll swell up with pride.

You could go with other email marketing software providers like Constant Contact or ConvertKit (but they start at around $29 to $49 a month), while MailChimp is free to begin. And as of this writing, MailChimp stays free until you have more than 499 people on your list.

Want extra credit? Put an afternoon's effort into learning how to make the most out of MailChimp, and you'll be giving your art business strong roots to grow from.

2. How To Create Your Weebly Email Sign-Up Form

Weebly's mailing list setup could be the easiest of the bunch. We are talking 5 minutes. (I am a WordPress woman, but the intuitive setup for the Weebly box gave me momentary Weebly-envy.) To learn how to add a MailChimp sign-up form to your Weebly site, go here.

3. How To Pop In Your Wix Subscribe Form

Check out this page and video to learn how to place a "Get Subscribers" form on your Wix site and add a snazzy "pop-up" sign-up form to boot. I bet you can do both in about 30 minutes. If you want to integrate MailChimp into your Wix sign-up setup, check out this video for extra help.

But if you do nothing else here today, put a darn sign-up box on your Facebook page. Oh, and while you're helping yourself attract your ideal audience, discover how to add a gorgeous email signature to the bottom of your email for free in this Charmed Studio post.

Okay, now on to taking the step that could make your mailing list soar where others sink: adding a sign-up box on your social media.

How To Put A Sign-Up Form On Your Twitter or Facebook Page

Facebook helps many artists, but it's important to start a list that's yours for keeps. The same goes for X. You don't need me to tell you that Mark Zuckerberg or Elon Musk may have forgotten to put your well-being at the top of their to-do lists today. The only simple way I found is to hook yourself up via MailChimp. For Facebook, watch this straightforward instructional video here. To share your MailChimp subscribe form on X, read this. If you haven't set up a MailChimp account, go here first for your account set-up tutorial, then head to the Facebook Signup Form instructions. Done. You are now a captain of industry! Get yourself two bunches of tulips, one for your kitchen and one for your desk.

My P.S. (For the Tech-Challenged Among Us)

If you are not interested in or able to install a subscribe form yourself, I still respect you. You can easily and cheaply hire a smart, techy type on sites like Fiverr or Upwork to put one in for you pronto.

Got your subscribe form up, but it's not pulling in many folks? Check out this post on Turn Your Art Website Into an Attraction Magnet (Without Social Media). You'll discover how being yourself and changing the possibly boring wording in your current subscription form can change everything. (Here's my article on 4 great benefits of a small list.)

To be charming and subscribe to the blog and get free access to my writing toolkit for artists, click here. For info on one-on-one writing coaching with Thea, go here.

This blog is produced by The Charmed Studio Blog and Podcast™, LLC. And when you get scared about writing and want to relax, remember what Anne Lamott says: "100 years, all new people." You can do this.

Occasionally my show notes contain Amazon or other affiliate links. This means if you buy books or stuff via my podcast link I may receive a tiny commission and do a happy dance. There is no extra fee for you. I only link to items I personally use and love: products I feel help heart-centered artists and writers. Thank you. :)
Check us out at: https://www.cisspcybertraining.com/
Get access to 360 FREE CISSP Questions: https://www.cisspcybertraining.com/offers/dzHKVcDB/checkout
Get access to my FREE CISSP Self-Study Essentials Videos: https://www.cisspcybertraining.com/offers/KzBKKouv

From insecure code causing breaches to proper data destruction, this episode dives deep into the critical world of data lifecycle management—a cornerstone of the CISSP certification and modern cybersecurity practice.

A shocking 74% of organizations have experienced security incidents stemming from insecure code, highlighting why proper data management matters more than ever. Whether you're preparing for the CISSP exam or strengthening your organization's security posture, understanding who's responsible for what is essential. We break down the sometimes confusing differences between data owners (who bear legal liability), data custodians (who handle day-to-day operations), data controllers (who determine what gets processed and how), and data processors (who handle the actual processing).

The stakes couldn't be higher. With GDPR violations potentially costing organizations up to 4% of global annual revenue, misunderstanding these roles can lead to catastrophic financial consequences. We explore the eight principles driving transborder data flows and why understanding your data's journey matters for compliance and security.

When it comes to data destruction, I share practical wisdom about what really works. While methods like degaussing and various overwriting techniques exist, I explain why physical destruction (the "jaws of death" approach) often makes the most practical and economic sense in today's world of inexpensive storage media.

Throughout the episode, I provide real-world examples from my decades of experience as a CISO and security professional. Whether you're dealing with classified information requiring specialized handling or simply trying to implement sensible data governance in a commercial environment, these principles will help protect your organization's most valuable asset—its information.

Ready to continue your cybersecurity journey? Visit CISSP Cyber Training for free resources, sign up for my email list, or check out my YouTube channel for additional content to help you pass the CISSP exam the first time.

Support the show

Gain exclusive access to 360 FREE CISSP Practice Questions delivered directly to your inbox! Sign up at FreeCISSPQuestions.com and receive 30 expertly crafted practice questions every 15 days for the next 6 months—completely free! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!
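To make that 4% figure concrete, here is a minimal Python sketch of the upper fine bound under GDPR Article 83(5), which caps the most serious infringements at EUR 20 million or 4% of total worldwide annual turnover, whichever is higher. The company turnover used below is hypothetical.

```python
def max_gdpr_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound for the most serious infringements under GDPR Art. 83(5):
    EUR 20M or 4% of worldwide annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Hypothetical company with EUR 2.5B global turnover -> cap of EUR 100,000,000.
print(f"{max_gdpr_fine_eur(2_500_000_000):,.0f}")
# A small firm with EUR 50M turnover still faces the EUR 20M floor, not 4%.
print(f"{max_gdpr_fine_eur(50_000_000):,.0f}")
```

The `max()` is the detail worth remembering for the exam: for small organizations the fixed EUR 20 million figure, not the percentage, is the binding ceiling.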
On this episode of Serious Privacy, Paul Breitbarth is away, so Ralph O'Brien and Dr. K Royal bring you a mish-mash week in privacy. Topics include current news and a little bit about the differences between GDPR compliance and what the US privacy laws require. If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and Review us!

From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.
In this episode of Tech Talks Daily, I speak with Jane Ostler from Kantar, the world's leading marketing data and analytics company, whose clients include Google, Diageo, AB InBev, Unilever, and Kraft Heinz. Jane brings clarity to a debate often clouded by headlines, explaining why AI should be seen as a creative sparring partner, not a rival. She outlines how Kantar is helping brands balance efficiency with inspiration, and why the best marketing in the years ahead will come from humans and machines working together. We explore Kantar's research into how marketers really feel about AI adoption, uncovering why so many projects stall in pilot phase, and what steps can help teams move from experimentation to execution. Jane also discusses the importance of data quality as the foundation of effective AI, drawing comparisons to the early days of GDPR when oversight and governance first became front of mind. From Coca-Cola's AI-assisted Christmas ads to predictive analytics that help brands allocate budgets with greater confidence, Jane shares examples of where AI is already shaping marketing in ways that might surprise you. She also highlights the importance of cultural nuance in AI-driven campaigns across 90-plus markets, and why transparency, explainability, and human oversight are vital for earning consumer trust. Whether you're a CMO weighing AI strategy, a brand manager experimenting with new tools, or someone curious about how the biggest advertisers are reshaping their playbooks, this conversation with Jane Ostler offers both inspiration and practical guidance. It's about rethinking AI not as the end of creativity, but as the beginning of a new partnership between data, machines, and human imagination.
Ever wondered where digital trust fits in your company's strategy? We live in a world that's buzzing with AI, cybersecurity, and digital innovation. Everywhere you look, there's a new app, a smarter tool, or a faster system. But in the middle of all this tech hype, there's one thing we often overlook—trust.

In this insightful conversation, Punit and Bruno discuss the crucial influence of technology, economy, and other external factors on business strategies. They delve into how companies navigate different environments, the role of digital transformation, and the importance of maintaining a balanced ecosystem approach.

If you're a leader, strategist, privacy professional, or tech enthusiast trying to make sense of innovation, trust, and governance in today's world—this conversation is a must-watch.

KEY CONVERSION
00:02:02 What is the concept of digital trust? Wasn't trust enough?
00:04:40 Can we expect digital trust in an emerging world of new technology in 10-20 years?
00:09:15 Is the board convinced about the value of digital trust, or are they still in compliance mode?
00:13:15 How do we sell this concept of digital trust to boards?
00:18:51 Linking the concepts of trust, security, and privacy to the broader agenda
00:25:58 What can you sell them with, and how can they reach out?

ABOUT GUEST
Bruno Horta Soares is a seasoned executive advisor, professor, and keynote speaker with over 20 years of experience in Governance, Digital Transformation, Risk Management, and Information Security. He is the founder of GOVaaS – Governance Advisors as-a-Service and has worked with organizations across Portugal, Angola, Brazil, and Mozambique to align governance and technology for sustainable business value.

Since 2015, Bruno has served as Leading Executive Senior Advisor at IDC Portugal, guiding C-level leaders in digital strategy, transformation, governance, and cybersecurity. He is also a professor at top Portuguese business schools, including NOVA SBE, Católica Lisbon, ISCTE, ISEG, and Porto Business School, teaching in Masters, MBA, and Executive programs on topics such as IT Governance, Cybersecurity, Digital Transformation, and AI for Leadership.

He holds a degree in Management and Computer Science (ISCTE), an executive program in Project Management (ISLA), and numerous professional certifications: PMP®, CISA®, CGEIT®, CRISC™, ITIL®, ISO/IEC 27001 LA, and COBIT® Trainer. As a LEGO® SERIOUS PLAY® Facilitator, he brings creativity into strategy and leadership development.

Bruno received the ISACA John Kuyers Award for Best Speaker in 2019 and is the founder and current President of the ISACA Lisbon Chapter. A frequent international speaker, he shares expertise on governance and digital innovation globally.

ABOUT HOST
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentor and coach professionals.

Punit is the author of the books "Be Ready for GDPR" (rated as the best GDPR book), "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst the top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed the philosophy named 'ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/brunohsoares/
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy
Innovation comes in many areas, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast.

In this episode, Tom Fox interviews Inge Zwick, a senior leader from Emapta Global, a global outsourcing company, who elaborates on his experience working in different international locations, including the Philippines and now Italy. Zwick discusses the complexities and common concerns around outsourcing under GDPR, emphasizing the importance of compliance and data protection, and explains how Emapta supports clients in achieving GDPR compliance while outsourcing, including risk assessments, data flow mapping, and maintaining secure work environments.

The conversation delves into the practical aspects of handling Subject Access Requests (SARs), the integration of compliance into operational workflows, and the importance of ongoing monitoring and updates (a sketch of the SAR deadline clock follows the highlights below). Zwick also touches upon how ESG initiatives and compliance are seamlessly woven into Emapta's operations, providing a sustainable approach to global outsourcing. Lastly, advice is given to business leaders on how to future-proof their outsourcing strategies in light of GDPR, encouraging them not to shy away from global talent opportunities due to compliance fears.

Key highlights:
Company Overview and Global Operations
Outsourcing and GDPR Compliance
Risk Assessment and Data Security
Subject Access Requests (SARs)
Outsourcing Contracts and GDPR Obligations
Integrating Compliance into Operations
Future-Proofing Your Outsourcing Strategy

Resources:
Connect with Inge Zwick: LinkedIn
Connect with Emapta Global: Website | LinkedIn
Tom Fox: Instagram | Facebook | YouTube | Twitter | LinkedIn
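One concrete piece of SAR handling is tracking the statutory clock. Here is a minimal, hypothetical Python sketch (not Emapta's tooling) based on GDPR Article 12(3): a response is due within one month of receipt, extendable by two further months for complex or numerous requests, provided the data subject is told of the extension within the first month.

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def sar_deadlines(received: date) -> dict:
    """GDPR Art. 12(3): respond within one month of receipt; extendable by
    two further months where requests are complex or numerous (the data
    subject must be informed of the extension within the first month)."""
    return {"standard": add_months(received, 1),
            "extended_max": add_months(received, 3)}

# Request received 31 Jan 2025 -> standard 2025-02-28, extended max 2025-04-30.
print(sar_deadlines(date(2025, 1, 31)))
```

The month-end clamping is the kind of edge case (a request received on the 31st) that ad hoc spreadsheet tracking tends to get wrong, which is why the episode's point about integrating SAR workflows into operations, rather than handling them by hand, matters.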
Another call with Peter Wilson discussing corrupt courts, GDPR, NASA lies, Flat Earth, and lots more. #commonlaw #naturallaw #sovereignty

About my Guest:
Ex Royal Navy gunner and armourer, turned professional fighter. He owned and ran his own martial arts gym for about 30 years. Always aware of something not being right in the world, he went deep into it after losing over £1 million of property in one week, including his own home. So he has been up and been down, even living in a car for a while with his wife Janine and 4 dogs.

---

Awakening Podcast Social Media / Coaching
My Other Podcasts: https://roycoughlan.com/

------------------

What we Discussed:
00:40 Updates on Court Cases
02:25 Don't go to the County Court
03:53 CPR Book and Court Rules
08:05 The Trick the Debt Collectors use
12:20 They Write to the Legal Entity
13:50 There are a few ways to defend yourself
15:40 The Bank can not use a PO Box
20:10 To the Agent and the Principal
21:50 Can we take them to Court for the illegal Trickery
23:00 Can AI be done by good People
29:30 AI Covering GDPR in Europe
30:50 Using AI to fight the Corrupt Banks that wiped us both out
32:00 How much work is needed Training the AI
36:08 Can you Scan a letter & know how to respond
37:25 Why are the laws in Latin & French
39:50 Judges Rude in Court
40:50 High Court Case Laws that can Help You
46:15 How are Court Case Fees Calculated?
48:00 If the Debt was sold can the original debtors come after you
50:20 Are the Court Costs 5% similar to Poland
51:05 How Long do the Cases Take
52:15 Preparing the File for Your Court Case
54:15 The Courts are Set up to Intimidate You
57:10 How to Prepare with Breathwork and Meditation in Court
1:01:40 The Stupidity of the Wigs they Wear in Court
1:03:55 The Currents are Regular
1:10:45 Why Does NASA Lie
1:12:00 Satellite Lie
1:14:20 Why are the Flight Patterns showing Flat Earth
1:17:00 Shadows on Clouds from the Plane
1:18:40 What is Happening in Antarctica
1:19:20 The Suez Canal is all the same Level
1:20:25 When Firing from a Navy Boat they did not allow for the Curve of the Earth
1:22:10 When I had Flat Earth Dave on my Podcast
1:24:30 How do so many people look different

How to Contact Peter:
https://www.skool.com/check-mate-the-matrix-2832/about?ref=f30a0a71fea743aa8f9b8fb632d6129c
https://www.claimyourstrawman.com/
https://linktr.ee/PeterWilsonReturnToDemocracy

------------------------------

More about the Awakening Podcast:
All Episodes can be found at www.awakeningpodcast.org
My Facebook Group Mentioned in this Episode: https://www.facebook.com/profile.php?id=61572386459383
Awakening Podcast Social Media / Coaching
My Other Podcasts: https://roycoughlan.com/
Our Facebook Group can be found at https://www.facebook.com/royawakening

#checkmatethematrix #ucc #peterwilson #corruptcourts #flatearth
Professor JRod makes a triumphant return to Technology Tap after a year-long hiatus, bringing listeners up to speed on his personal journey and diving straight into Security+ 701 fundamentals. Having completed his doctorate and subsequently focused on his health—resulting in an impressive 50-pound weight loss—he reconnects with his audience with the same passion and expertise that made his podcast popular.

The heart of this comeback episode centers on essential cybersecurity concepts, beginning with the CIA triad (confidentiality, integrity, availability) that forms the foundation of information security. Professor JRod expertly breaks down complex frameworks including NIST, ISO/IEC standards, and compliance-driven approaches like HIPAA and GDPR, explaining how organizations should select frameworks based on their specific industry requirements.

With his trademark clear explanations, he walks listeners through the process of gap analysis—a methodical approach to identifying differences between current security postures and desired standards. The episode then transitions to a comprehensive overview of access control models, including Discretionary, Mandatory, Role-Based, Attribute-Based, and Rule-Based controls, each illustrated with practical examples that bring abstract concepts to life.

What sets this episode apart is the interactive element, as Professor JRod concludes with practice questions that challenge listeners to apply their newly acquired knowledge. This practical approach bridges the gap between theory and real-world implementation, making complex security concepts accessible to professionals and students alike. Whether you're preparing for certification or simply expanding your cybersecurity knowledge, this return episode delivers valuable insights from an educator who clearly missed sharing his expertise with his audience.

Support the show

If you want to help me with my research, please e-mail me: Professorjrod@gmail.com
If you want to join my question/answer Zoom class, e-mail me at Professorjrod@gmail.com
Art By Sarah/Desmond
Music by Joakim Karud
Little Chacha Productions
Juan Rodriguez can be reached at:
TikTok @ProfessorJrod
ProfessorJRod@gmail.com
@Prof_JRod
Instagram ProfessorJRod
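To make one of those access control models concrete, here is a minimal Python sketch of role-based access control (RBAC). The roles and permissions are hypothetical examples, not taken from the episode.

```python
# Minimal role-based access control (RBAC) sketch; roles and permissions
# below are invented for illustration.
ROLE_PERMISSIONS = {
    "analyst":  {"read_logs"},
    "engineer": {"read_logs", "modify_config"},
    "admin":    {"read_logs", "modify_config", "manage_users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """The decision is driven by the subject's role, not their identity --
    the defining trait of RBAC versus discretionary (DAC) models."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("engineer", "modify_config")
assert not is_allowed("analyst", "manage_users")
```

Swapping the central role table for per-object, owner-controlled permission lists would turn this into a discretionary (DAC) model, while deriving the decision from subject and resource attributes (department, clearance, time of day) would make it attribute-based (ABAC), which is exactly the distinction the exam tests.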
Questions Piotr addresses in this episode:
What is FORMEL SKIN, and how does it solve dermatology's bottleneck in Germany?
How did Piotr's career in analytics develop across multiple verticals?
Why is 'perfect data' a myth in mobile marketing?
How do you responsibly track and aggregate users before registration?
What's the difference between front-end and back-end behavioral data?
How do device/user mismatches and changes create analytics headaches?
What are the new challenges and gray areas in privacy (GDPR, CCPA, device fingerprinting)?
Where does fraud hide in aggregated data, and how do you find it?
Why does fraud persist, and what incentives make it so durable?
How could success in mobile marketing be measured differently to promote collaboration and integrity?

Timestamps:
(0:00) – Introducing FORMEL SKIN, Piotr's role, and Germany's digital dermatology
(1:18) – Marketing analytics in dating, fintech, health
(2:50) – Why 'perfect data' is a myth
(5:00) – Assigning pseudo-user IDs, device-based tracking
(6:00) – Aggregated data, 'chasing ghosts,' and its pitfalls
(8:00) – Combining front-end and back-end data; challenges in stitching
(9:36) – Device vs. user: confusion, mismatches, and noise
(11:13) – Balancing privacy vs. marketing needs; legal and business conflicts
(12:30) – Device fingerprinting: what's legal, what's risky, and why
(14:22) – The end of one-to-one attribution; rise of aggregated, top-level analysis
(16:05) – Marketing fraud: what's changed, sneaky affiliate/network tricks
(19:08) – Incentives, alignment failures, and why fraud persists
(21:40) – Filtering fraud: long onboarding, compliance, and technical vigilance
(23:38) – 'Success' in mobile marketing and why responsibility must be shared
(32:08) – Wrap up

Quotes:
(2:50) "Don't expect perfect data – especially in marketing where different data sources are being combined."
(5:10) "You try to anchor it to the device…within all the data security and the privacy setup and anchor it to this entity and create one entity."
(15:26) "We can use aggregated data for strategic decisions, like how to shift budgets from channel A to B."

Mentioned in This Episode:
Piotr Prędkiewicz's LinkedIn
FORMEL SKIN
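As a rough illustration of the pseudo-user ID approach Piotr describes around (5:00), here is a minimal Python sketch. All names and the in-memory storage are hypothetical: pre-registration events are anchored to a random, device-scoped ID carrying no personal data, then stitched to a salted hash of the account ID once the person registers.

```python
import hashlib
import uuid

def new_device_pseudo_id() -> str:
    """Random, device-scoped identifier generated client-side at first launch."""
    return uuid.uuid4().hex

def hash_user_id(user_id: str, salt: str) -> str:
    """Salted one-way hash so back-end joins never store the raw account ID."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()

class IdentityStitcher:
    """Maps device-scoped pseudo-IDs to hashed account IDs at registration."""

    def __init__(self):
        self.pseudo_to_user: dict[str, str] = {}

    def on_registration(self, pseudo_id: str, user_id: str, salt: str = "demo-salt"):
        # Mapping is created only at signup; earlier events stay pseudonymous.
        self.pseudo_to_user[pseudo_id] = hash_user_id(user_id, salt)

    def resolve(self, pseudo_id: str) -> str:
        # Pre-registration events fall back to the device-scoped entity.
        return self.pseudo_to_user.get(pseudo_id, pseudo_id)

stitcher = IdentityStitcher()
device = new_device_pseudo_id()
stitcher.on_registration(device, "user-123")
print(stitcher.resolve(device))  # hashed account ID, never the raw "user-123"
```

This also shows why the device/user mismatches discussed at (9:36) are hard: one person with two devices yields two pseudo-IDs, and a shared device yields one pseudo-ID for two people, so any stitching scheme is an approximation rather than ground truth.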
Wei Hu is the Senior Vice President, High Availability Technologies, at Oracle. In today's Cloud Wars Live, Hu sits down with Bob Evans for a wide-ranging discussion on Oracle's globally distributed database, AI-native workloads, and how Oracle is helping businesses meet data sovereignty requirements while delivering high performance, elasticity, and always-on availability across regions.

Where AI Meets Your Data

The Big Themes:

Globally Distributed Exadata Database on Exascale: Oracle's Globally Distributed Exadata Database on Exascale Infrastructure delivers something few cloud providers can: high performance, high availability, and full compliance. Built on Oracle's powerful Exadata platform, this architecture removes the traditional need to purchase or manage hardware. Organizations can start small and elastically scale compute and memory across multiple regions.

Agentic AI and Vector Search at Enterprise Scale: Oracle's database innovation is designed for real-world AI demands, especially agentic AI. AI agents need massive compute, consistent availability, and extremely fast access to live business data. Oracle's globally distributed architecture supports in-memory vector indexes for lightning-fast retrieval-augmented generation (RAG), making AI more responsive and effective. Additionally, Oracle keeps AI close to the data, eliminating stale data issues and ensuring compliance.

Built for a Sovereign Cloud World: Data residency and sovereignty are no longer optional; they're legal imperatives. Countries around the world are implementing strict rules on where data must be stored, how it can be accessed, and who can process it. Oracle addresses these challenges with policy-driven data distribution, allowing customers to define how and where data lives. Whether it's for compliance with India's payment data regulations or Europe's GDPR, Oracle enables precise control without requiring app changes or replication of the full stack.

The Big Quote: "The other thing that's interesting about agentic AI is that it's very dynamic. The work comes in, the demands comes in, like a tidal wave. Then it goes away, right, then a little, then when it comes again, there's another tidal wave. So, what you really want to do is have an infrastructure that is elastic, that can scale up and down depending on the demand."

More from Wei Hu and Oracle:
Connect with Wei Hu on LinkedIn and learn more about Globally Distributed Exadata Database on Exascale Infrastructure.

* Sponsored Podcast *

Visit Cloud Wars for more.
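To ground the vector-search theme, here is a generic, vendor-neutral Python sketch of the RAG retrieval step: an in-memory vector index answering nearest-neighbor queries. This is not Oracle's API, and the toy embedding is a deterministic random stand-in for a real embedding model, so it demonstrates only the mechanics of indexing and cosine-similarity lookup, not semantic match quality.

```python
import hashlib
import numpy as np

def toy_embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic random vector standing in for a real embedding model;
    the similarity scores it produces carry no semantic meaning."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)  # unit-normalize so dot product = cosine similarity

docs = [
    "EU customer records must stay in-region under the residency policy",
    "Quarterly revenue summary by region",
    "Payment data handling rules for India",
]
index = np.stack([toy_embed(d) for d in docs])  # one row per document

def top_k(query: str, k: int = 2) -> list[str]:
    scores = index @ toy_embed(query)   # cosine similarity against every row
    best = np.argsort(scores)[::-1][:k] # indices of the k highest scores
    return [docs[i] for i in best]

# An agent would pass the retrieved rows to the LLM as grounding context.
print(top_k("where must European customer data live?"))
```

Keeping this index in memory next to the live operational data, rather than exporting copies to a separate vector store, is the design choice the episode highlights: the retrieval step then never serves stale data and never moves records out of the region where policy says they must live.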