Podcasts about GDPR

European Union regulation on the processing of personal data

  • 4,309 podcasts
  • 13,471 episodes
  • 34m avg duration
  • 1 daily new episode
  • Latest: Oct 12, 2025

POPULARITY

(popularity trend chart, 2017–2024)

Best podcasts about GDPR

Show all podcasts related to GDPR

Latest podcast episodes about GDPR

The Sunday Show
The Open Internet is Dead. What Comes Next?

The Sunday Show

Play Episode Listen Later Oct 12, 2025 49:33


Mallory Knodel,  executive director of the Social Web Foundation and founder of a weekly newsletter called the Internet Exchange, and Burcu Kilic, a senior fellow at Canada's Center for International Governance Innovation, or CIGI, are the authors of a recent post on the Internet Exchange titled “Big Tech Redefined the Open Internet to Serve Its Own Interests,” which explores how the idea of the ‘open internet' has been hollowed out by decades of policy choices and corporate consolidation. Kilic traces the problem back to the 1990s, when the US government adopted a hands-off, industry-led approach to regulating the web, paving the way for surveillance capitalism and the dominance of Big Tech. Knodel explains how large companies have co-opted the language of openness and interoperability to defend monopolistic control. The two argue that trade policy, weak enforcement of regulations like the GDPR, and the rise of AI have deepened global dependencies on a few powerful firms, while the current AI moment risks repeating the same mistakes. They say to push back we must call for coordinated, democratic alternatives: stronger antitrust action, public digital infrastructure, and grassroots efforts to rebuild truly open, interoperable, and civic-minded technology systems.

The Fantasy Life Podcast
Week 6 Fantasy Football Rankings Risers & Fallers + Start/Sit!

The Fantasy Life Podcast

Play Episode Listen Later Oct 10, 2025 96:43


Get FantasyLife+ for free ($100 value) by going to https://www.fantasylife.com/comet

Become the best at watching sports with Xfinity! If you call yourself a sports fan, you gotta have Xfinity. All the games, all in one place.

For all the tools and advice you need to win your league, subscribe to FantasyLife+: https://fantasylife.com/pricing Use code “IAN” for 20% off your subscription!

Welcome to Fantasy Life with Ian Hartitz! We're here to give you all of the fantasy football news, advice and stats you need (with a little bit of fun chaos along the way)! Week 6 is here and fantasy football expert Ian Hartitz is joined by fantasy football expert Dwain McFarland to break down their rankings for Week 6 of the fantasy football season!

In today's episode: Is Rico Dowdle about to go off for a 2nd straight week? We're downgrading Derrick Henry to RB18?! Matthew Stafford in a smash matchup?! We're discussing all this, plus answering YOUR start/sit questions for Week 6 of fantasy football!

Learn more about your ad choices. Visit megaphone.fm/adchoices

The Dead Pixels Society podcast
From Prints to Immersive: How Niche AI Reimagines Volume Photography for Schools, Events, and Theme Parks

The Dead Pixels Society podcast

Play Episode Listen Later Oct 9, 2025 33:07 Transcription Available


Have an idea or tip? Send us a text!

What if the best photo you take is just the start? The Dead Pixels Society sits down with Heath Lassiter, founder of Niche AI, to explore how volume imaging is transforming from prints and proofs into living, brand-ready media: think fusion assets mapped to body parts, refined face replacement, and motion-from-stills that parents and guests can't wait to share. Lassiter's roots at Express Digital and deep work across schools, theme parks, events, and cruises give him a rare, pragmatic lens on what actually scales: fast inference, lower COGS, consistent brand aesthetics, and real privacy compliance.

Lassiter unpacks Niche AI's core offerings (precision cutouts, fusion scenes, and immersive face swaps) built to operate at enterprise speed and quality. The conversation gets tactical on GPU throughput, cost curves, and why off-the-shelf AI often produces generic looks or drifts from target demographics. Heath explains how his team blends proprietary models with in-house design to nail placement, mood, and age-appropriate results, and how that approach translates into higher conversions and repeat purchases across e-commerce.

Schools get a spotlight: extend fall capture slightly and replace the operationally heavy spring shoot with AI-driven products, including seasonal themes, cap-and-gown automation (colors, tassels, even hands and diplomas when needed), and short motion clips for social. We cover data handling, GDPR, API vs. on-prem, and why once-reluctant enterprises are now opening the door to responsible generative and non-generative workflows.

For parks and events, the image becomes both souvenir and marketing engine, turning guest photos into shareable media that drives reach and revenue.

Mediaclip strives to continuously enhance the user experience while dramatically increasing revenue.

Buzzsprout: Let's get your podcast launched! Start for FREE

Visual 1st is the premier global conference focused on the photo and video ecosystem.

The Photo Imaging CONNECT conference, March 1-2, 2026, at the RIO Hotel and Resort in Las Vegas, N

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show

Sign up for the Dead Pixels Society newsletter at http://bit.ly/DeadPixelsSignUp. Contact us at gary@thedeadpixelssociety.com. Visit our LinkedIn group, Photo/Digital Imaging Network, and Facebook group, The Dead Pixels Society. Leave a review on Apple and Podchaser. Are you interested in being a guest? Click here for details.

Hosted and produced by Gary Pageau. Edited by Olivia Pageau. Announcer: Erin Manning.

Faster, Please! — The Podcast

My fellow pro-growth/progress/abundance Up Wingers,

For most of history, stagnation — not growth — was the rule. To explain why prosperity so often stalls, economist Carl Benedikt Frey offers a sweeping tour through a millennium of innovation and upheaval, showing how societies either harness — or are undone by — waves of technological change. His message is sobering: an AI revolution is no guarantee of a new age of progress.

Today on Faster, Please! — The Podcast, I talk with Frey about why societies misjudge their trajectory and what it takes to reignite lasting growth.

Frey is a professor of AI and Work at the Oxford Internet Institute and a fellow of Mansfield College, University of Oxford. He is the director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School. He is the author of several books, including the brand new one, How Progress Ends: Technology, Innovation, and the Fate of Nations.

In This Episode:

  • The end of progress? (1:28)
  • A history of Chinese innovation (8:26)
  • Global competitive intensity (11:41)
  • Competitive problems in the US (15:50)
  • Lagging European progress (22:19)
  • AI & labor (25:46)

Below is a lightly edited transcript of our conversation.

The end of progress? (1:28)

. . . once you exploit a technology, the processes that aid that run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

Pethokoukis: Since 2020, we've seen the emergence of generative AI, mRNA vaccines, reusable rockets that have returned America to space, and we're seeing this ongoing nuclear renaissance including advanced technologies, maybe even fusion, geothermal, the expansion of solar — there seems to be a lot cooking. Is worrying about the end of progress a bit too preemptive?

Frey: Well, in a way, it's always a bit too preemptive to worry about the future: You don't know what's going to come.
But let me put it this way: If you had told me back in 1995 — and if I was a little bit older then — that computers and the internet would lead to a decade-long streak of productivity growth and then peter out, I would probably have thought you nuts, because it's hard to think of anything that is more consequential. Computers have essentially given people the world's store of knowledge in their pockets. The internet has enabled us to connect inventors and scientists around the world. There are few tools that have aided the research process more. There should hardly be any technology that has done more to boost scientific discovery, and yet we don't see it.

We don't see it in the aggregate productivity statistics, so that petered out after a decade. Research productivity is in decline. Measures of breakthrough innovation are in decline. So it's always good to be optimistic, I guess, and I agree with you that, when you say AI and when you read about many of the things that are happening now, it's very, very exciting, but I remain somewhat skeptical that we are actually going to see that leading to a huge revival of economic growth.

I would just be surprised if we don't see any upsurge at all, to be clear, but we do have global productivity stagnation right now. It's not just Europe, it's not just Britain. The US is not doing too well either over the past two decades or so. China's productivity is probably in negative territory, or stagnant by more optimistic measures, and so we're having a growth problem.

If tech progress were inevitable, why have predictions from the '90s, and certainly earlier decades like the '50s and '60s, about transformative breakthroughs and really fast economic growth by now, consistently failed to materialize?
How does your thesis account for why those visions of rapid growth and progress have fallen short?

I'm not sure if my thesis explains why those expectations didn't materialize, but I'm hopeful that I do provide some framework for thinking about why we've often seen historically rapid growth spurts followed by stagnation and even decline. The story I'm telling is not rocket science, exactly. It's basically built on the simple intuition that once you exploit a technology, the processes that aid that run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

So for example, the Soviet Union actually did reasonably well in terms of economic growth. A lot of it, or most of it, was centered on heavy industry, I should say. So people didn't necessarily see the benefits in their pockets, but the economy grew rapidly for about four decades or so, then growth petered out, and eventually it collapsed. So for exploiting mass-production technologies, the Soviet system worked reasonably well. Soviet bureaucrats could hold factory managers accountable by benchmarking performance across factories.

But that became much harder when something new was needed, because when something is new, what's the benchmark? How do you benchmark against that? And more broadly, when something is new, you need to explore, and you often need to explore different technological trajectories. So in the Soviet system, if you were an aircraft engineer and you wanted to develop your prototype, you could go to the Red Army and ask for funding. If they turned you down, you maybe had two or three other options. If they turned you down, your idea would die with you.

Conversely, in the US back in '99, Bessemer Venture declined to invest in Google, which seemed like a bad idea with the benefit of hindsight, but it also illustrates that Google was no safe bet at the time.
Yahoo and Alta Vista were dominating search. You need somebody to invest in order to know if something is going to catch on, and in a more decentralized system, you can have more people taking different bets and you can explore more technological trajectories. That is one of the reasons why the US ended up leading the computer revolution, to which Soviet contributions were basically none.

Going back to your question, why didn't those dreams materialize? I think we've made it harder to explore. Part of the reason is protective regulation. Part of the reason is lobbying by incumbents. Part of the reason is, I think, a revolving door between institutions like the US patent office and incumbents, where we see in the data that examiners tend to grant large firms some patents that are of low quality and then get lucrative jobs at those places. That's creating barriers to entry. That's not good for new startups and inventors entering the marketplace. I think that is one of the reasons that we haven't seen some of those dreams materialize.

A history of Chinese innovation (8:26)

So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided . . .

I wonder, with your analysis of pre-industrial China, if there are any lessons you can draw about modern China as far as the way in which bad governance can undermine innovation and progress?

Pre-industrial China has a long history. China was the technology leader during the Song and Tang dynasties. It had a meritocratic civil service. It was building infrastructure on scales that were unimaginable in Europe at the time, and yet it didn't have an industrial revolution.
So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided, and because there was lots of social status attached to becoming a bureaucrat and passing the civil service examination, if Galileo had been born in China, he would probably have become a bureaucrat rather than a scientist, and I think that's part of the reason too.

But China mostly did well when the state was strong rather than weak. A strong state was underpinned by intensive political competition, and once China had unified and there were fewer peer competitors, you see that the center begins to fade. They struggle to tax local elites in order to keep the peace. People begin to erect monopolies in their local markets and collude with guilds to protect production and their crafts from competition.

So during the Qing dynasty, China begins to decline, whereas we see the opposite happening in Europe. European fragmentation aids exploration and innovation, but it doesn't necessarily aid scaling, and so that is something that Europe needs to come to terms with at a later stage when the industrial revolution starts to take off. And even before that, market integration played an important role in terms of undermining the guilds in Europe, and so part of the reason why the guilds persist longer in China is that the distance is so much longer between cities, and so the guilds are less exposed to competition. In the end, Europe ends up overtaking China, in large part because vested interests are undercut by governments, but also because of investments in things that spur market integration.

Global competitive intensity (11:41)

Back in the 2000s, people predicted that China would become more like the United States; now it looks like the United States is becoming more like China.

This is a great McKinsey kind of way of looking at the world: the notion that what drives innovation is sort of maximum competitive intensity.
You were talking about the competitive intensity in both Europe and in China when it was not so centralized. You were talking about the competitive intensity of a fragmented Europe.

Do you think that the current level of competitive intensity between the United States and China — and I really wish I could add Europe in there. Plenty of white papers, I know, have been written about Europe's competitive state and its innovativeness, and I hope those white papers are helpful and someone reads them, but it seems that the real competition is between the United States and China.

Do you not think that that competitive intensity will sort of keep those countries progressing despite any of the barriers that might pop up and that you've already mentioned a little bit? Isn't that a more powerful tailwind than any of the headwinds that you've mentioned?

It could be, I think, if people learn the right lessons from history; at least that's a key argument of the book. Right now, what I'm seeing is the United States moving more towards protectionism with protective tariffs. Right now, what I see is a move towards, we could even say, crony capitalism, with tariff exemptions that some larger firms that are better connected to the president are able to navigate, but certainly not challengers. You're seeing the United States embracing things like golden shares in Intel, and perhaps even extending that to a range of companies. Back in the 2000s, people predicted that China would become more like the United States; now it looks like the United States is becoming more like China.

And China today is having similar problems, and on, I would argue, an even greater scale. Growth used to be the key objective in China, and so for local governments, provincial governments competing on such targets, it was fairly easy to benchmark and measure and hold provincial governors accountable, and they would be promoted inside the Communist Party based on meeting growth targets.
Now, we have prioritized common prosperity and more national security-oriented concerns.

And so in China, most progress has been driven by private firms and foreign-invested firms. State-owned enterprise has generally been a drag on innovation and productivity. What you're seeing, though, as China is shifting more towards political objectives, is that it's harder to mobilize private enterprise, where the yardsticks are market share and profitability, for political goals. That means that China is increasingly relying more again on state-owned enterprises, which, again, have been a drag on innovation.

So, in principle, I agree with you: historically, you did see Prussian defeat to Napoleon leading to the Stein-Hardenberg Reforms, the abolition of guild restrictions, and a more competitive marketplace for both goods and ideas. You saw that Russian losses in the Crimean War led to the abolition of serfdom, and so there are many times in history where defeat, in particular, led to striking reforms, but right now, the competition itself doesn't seem to lead to the kinds of reforms I would've hoped to see in response.

Competitive problems in the US (15:50)

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.

I certainly wrote enough pieces and talked to enough people over the past decade who were worried about competition in the United States, and the story went something like this: you had these big tech companies — Google, Meta (Facebook), and Microsoft — that were what they would call “forever companies,” with such dominance in their core businesses, and throwing off so much cash, that they were unbeatable, and this was going to be bad for America. People who made that argument just could not imagine how any other companies could threaten their dominance.
And yet, at the time, I pointed out that it seemed to me that these companies were constantly in fear that they were one technological advance from being in trouble. And then, lo and behold, that's exactly what happened. And while in AI, certainly, Google's super important, and Meta (Facebook) is super important, so are OpenAI and Anthropic, and there are other companies.

So the point here, after my little soliloquy, is: can we overstate these problems, at least in the United States, when it seems like it is still possible to create a new technology that breaks the apparent stranglehold of these incumbents? Google search does not look quite as solid a business as it did in 2022.

Can we overstate the competitive problems of the United States, or is what you're saying more forward-looking: that perhaps we overstated the competitive problems in the past, but now, due to these tariffs, and executives having to travel to the White House and give the president gifts, that creates a stage for the kind of competitive problems that we should really worry about?

I'm very happy to support the notion that technological changes can lead to unpredictable outcomes that incumbents may struggle to predict and respond to. Even if they predict it, they struggle to act upon it, because doing so often undermines the existing business model.

So if you take Google, where the transformer was actually conceived, the seven people behind it, I think, have since left the company. One of the reasons they probably didn't launch anything like ChatGPT was the fear of cannibalizing search. So I think the most important mechanisms for dislodging incumbents are dramatic shifts in technology.

None of the legacy media companies ended up leading social media. None of the legacy retailers ended up leading e-commerce. None of the automobile leaders are leading in EVs. None of the bicycle companies, so many of which went into automobiles, ended up leading there.
So there is a pattern there.

At the same time, I think you do have to worry that there are anti-competitive practices going on that make it harder, and that are costly. The revolving door between the USPTO and companies is one example of that. We also have a reasonable amount of evidence on killer acquisitions, whereby firms buy up a competitor just to shut it down. Those things are happening. I think you need to have tools that allow you to combat that, and I think, more broadly, the United States has a long history of fairly vigorous antitrust policy. I think you'd be hard pressed to suggest that that has been a tremendous drag on American business or American dynamism. So if you don't think, for example, that American antitrust policy has contributed to innovation and dynamism, at the very least, you can't really say either that it's been a huge drag on it.

In Japan, for example, in its postwar history, antitrust was extremely lax. In the United States, it was very vigorous, and it was very vigorous throughout the computer revolution as well, which it wasn't at all in Japan. If you take the lawsuit against IBM, for example, you can debate this: To what extent did it force IBM to unbundle hardware and software, and would Microsoft have been the company it is today without that? With AT&T, it's both the breakup and the deregulation as well, but I think by basically all accounts, that was a good idea, particularly at the time when the National Science Foundation released ARPANET into the world.

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.
There's always a risk of antitrust being heavily politicized, and that's always been a bad idea, but at the same time, I think having tools on the books that allow you to check monopolies and steer their investments more towards innovation rather than anti-competitive practices is, broadly speaking, a good thing. In the European Union, you often hear that competition policy is a drag on productivity. I think it's the least of Europe's problems.

Lagging European progress (22:19)

If you take the postwar period, at least, Europe catches up in most key industries, and actually leads in some of them . . . but doesn't do the same in digital. The question in my mind is: Why is that?

Let's talk about Europe as we finish up. We don't have to write How Progress Ends; it seems like progress has ended, so maybe we want to think about how progress restarts. And is the problem in Europe institutions, or is it the revealed preference of Europeans, that they're getting what they want? That they don't value progress and dynamism, that it is a cultural preference that is manifested in institutions? And if that's the case — you can tell me if that's not the case, I kind of feel like it might be the case — how do you restart progress in Europe, since it seems to have already ended?

The most puzzling thing to me is not that Europe is less dynamic than the United States — that's not very puzzling at all — but that it hasn't even managed to catch up in digital. If you take the postwar period, at least, Europe catches up in most key industries, and actually leads in some of them. Take automobiles, electrical machinery, chemicals, pharmaceuticals: nobody would say that Europe is behind in those industries, or at least not for long. Europe had very robust catch-up growth in the post-war period, but doesn't do the same in digital.
The question in my mind is: Why is that?

I think part of the reason is that the returns to innovation, the returns to scaling, in Europe are relatively muted by a fragmented market, in services in particular. The IMF estimates that if you take all trade barriers on services inside the European Union and add them up, it's something like 110 percent tariffs. Trump Liberation Day tariffs, essentially, imposed within the European Union. That means that European firms in digital and in services don't have a harmonized market to scale into, the way the United States and China have. I think that's by far the biggest reason.

On top of that, there are well-intentioned regulations like the GDPR that, by any account, have been a drag on innovation, and have been particularly harmful for startups, whereas larger firms that find it easier to manage compliance costs have essentially managed to offset those costs by capturing a larger share of the market. I think the AI Act is going in the same direction, and so you have more hurdles, you have greater costs of innovating because of those regulatory barriers. And then the return to innovation is capped by having a smaller, fragmented market.

I don't think that culture, or a European lust for leisure rather than work, is the key reason. I think there's some of that, but if you look at the most dynamic places in Europe, it tends to be the Scandinavian countries, and, being from Sweden myself, I can tell you that most people you will encounter there are not workaholics.

AI & labor (25:46)

I think AI at the moment has a real resilience problem.
It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin.

As I finish up, let me ask you: Like a lot of economists who think about technology, you've thought about how AI will affect jobs. Given what we've seen in the past few years, would it be your guess that, if we were to look at the labor force participation rates of the United States and other rich countries 10 years from now, we will look at those employment numbers and think, “Wow, we can really see the impact of AI on those numbers”? Will it be extraordinarily evident, or not so much?

Unless there's very significant progress in AI, I don't think so. I think AI at the moment has a real resilience problem. It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin. So in most activities where the world is changing, and the world is changing every day, you can't really rely on AI to reliably do work for you.

An example of that: most people know of AlphaGo beating the world champion back in 2016. Few people will know that, back in 2023, human amateurs, using standard laptops and exposing the best Go programs to new positions that they would not have encountered in training, actually beat the best Go programs quite easily. So even in a domain where the problem is basically solved, where we already achieved super-human intelligence, you cannot really know how well these tools perform when circumstances change, and I think that's really a problem. So unless we solve that, I don't think it's going to have an impact that will mean that labor force participation is significantly lower 10 years from now.

That said, I do think it's going to have a very significant impact on white-collar work, and on people's income and sense of status. I think of generative AI, in particular, as a tool that reduces barriers to entry in professional services.
I often compare it to what happened with Uber and taxi services. With the arrival of GPS technology, knowing the name of every street in New York City was no longer a particularly valuable skill, and then, with a platform matching supply and demand, anybody with a driver's license could essentially get into their car and top up their income on the side. As a result, incumbent drivers faced more competition, and they took a pay cut of around 10 percent.

Obviously, a key difference with professional services is that they're traded. So I think it's very likely that, as generative AI reduces the productivity differential between people in, let's say, the US and the Philippines in financial modeling, in paralegal work, in accounting, in a host of professional services, more of those activities will shift abroad, and I think many knowledge workers who had envisioned prosperous careers may feel a sense of loss of status and income as a consequence, and I do think that's quite significant.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

BackTable Urology
Ep. 263 Renal Cancer Management: Tumor Board Discussion & Treatments with Dr. Raquib Hannan, Dr. Brandon Manley and Dr. Rana McKay

BackTable Urology

Play Episode Listen Later Oct 7, 2025 68:49


Kidney cancer management is evolving. How are experts adapting? In this installment of BackTable Tumor Board, Dr. Brandon Manley (Urologic Oncology, Moffitt Cancer Center), Dr. Raquibul Hannan (Radiation Oncology, UT Southwestern), and Dr. Rana McKay (Medical Oncology, UC San Diego) join guest host Mark Ball (Urologic Oncology, National Cancer Institute) to share their multidisciplinary perspectives on challenging, real-world kidney cancer cases.

---

This podcast is supported by: Ferring Pharmaceuticals

---

SYNOPSIS

The conversation covers diagnostic dilemmas, navigating the treatment options of surgery, systemic therapy, and radiation, and the importance of a multidisciplinary approach. Through detailed case reviews, the panel highlights practical pearls, emerging clinical trials, and collaborative approaches that exemplify modern kidney cancer care.

---

TIMESTAMPS

0:00 - Introduction
02:20 - Case 1 (Incidental Renal Mass)
16:52 - Case 2 (Bilateral Renal Masses)
37:22 - Case 3 (Locally Advanced Renal Mass)
56:34 - Case 4 (Symptomatic, Metastatic Disease)
01:14:00 - Final Takeaways

The Digital Customer Success Podcast
Voice to Value: I Used ChatGPT Voice to Help Build an Automation | Episode 099

The Digital Customer Success Podcast

Play Episode Listen Later Oct 7, 2025 21:03 Transcription Available


In Episode 99, I do something a little different: I take you behind the scenes as I use ChatGPT in voice mode to design a real automation from start to finish. The goal? Build an internal chatbot for product and engineering that's trained on CX call transcripts stored in Gong, so teams can ask targeted questions (“What's frustrating customers in Module X?”) and get instant, concise answers with deep links back to the exact call moments.

You'll hear how I frame the problem, push the model to avoid hallucinations, and pick a stack that balances speed, privacy, and scale: Gong → Airtable as the searchable store → a Zapier-hosted chatbot for querying. We also cover transcript hygiene (auto-removing small talk and personal details), vendor privacy considerations, and a simple habit hack: having AI remind you later to actually implement the ideas you generated while walking the dog.

I'll link the step-by-step PDF I asked ChatGPT to generate in the show notes so you can follow along and adapt it to your environment.

If this sparks ideas for your own digital CX programs, follow/subscribe and drop a review—it really helps more practitioners find the show.

Support the show

+++++++++++++++++

Like/Subscribe/Review: If you are getting value from the show, please follow/subscribe so that you don't miss an episode, and consider leaving us a review.

Website: For more information about the show or to get in touch, visit DigitalCustomerSuccess.com.

Buy Alex a Cup of Coffee: This show runs exclusively on caffeine - and lots of it. If you like what we're doing, consider supporting our habit by buying us a cup of coffee: https://bmc.link/dcsp

Thank you for all of your support!

The Digital Customer Success Podcast is hosted by Alex Turkovic
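The "transcript hygiene" step mentioned in this episode (auto-removing small talk and personal details before transcripts land in the searchable store) can be sketched in a few lines of Python. This is an illustrative sketch only: the regex patterns, the small-talk phrases, and the `scrub_transcript` helper are hypothetical stand-ins, not part of Gong's or Zapier's actual APIs.

```python
import re

# Hypothetical patterns for obvious PII; a real deployment would expand
# these (names, account numbers) per its privacy review.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

# Illustrative small-talk markers; lines containing them are dropped.
SMALL_TALK = ("how's the weather", "did you have a good weekend")

def scrub_transcript(lines):
    """Return transcript lines with PII masked and small-talk lines dropped."""
    cleaned = []
    for line in lines:
        if any(phrase in line.lower() for phrase in SMALL_TALK):
            continue  # skip small-talk lines entirely
        line = EMAIL.sub("[email]", line)  # mask email addresses
        line = PHONE.sub("[phone]", line)  # mask phone numbers
        cleaned.append(line)
    return cleaned
```

In a pipeline like the one described, a step of this kind would sit between the transcript export and the write into the searchable store, so that the chatbot only ever indexes scrubbed text.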

Awakening
#394 The Weird Connection Between Digital ID's and Total Control

Awakening

Play Episode Listen Later Oct 5, 2025 25:51


This week on our Live Show we discussed Digital ID's

Join my PodFather Podcast Coaching Community https://www.skool.com/podfather/about
Start Your Own SKOOL Community https://www.skool.com/signup?ref=c72a37fe832f49c584d7984db9e54b71
Donations https://www.awakeningpodcast.org/support/

#awakening #brainfitness #digitalid

About my Co-Host: Arnold Beekes
Innovator, certified coach & trainer and generalist. First 20 years in technology and organizational leadership, then 20 years in psychology and personal leadership (all are crucial for innovation).

What we Discussed:
00:00 All the Dots are Connected
01:05 86 Million Bank Accounts Closed
03:10 Where Digital ID's are being Rolled out
03:32 When was Digital ID 1st Mentioned
04:10 What Movies or Series Showed Digital ID's
05:45 Credit Score System in China
06:45 QR Code in China to see your Social Credit Score
08:11 The Bigger Picture with Digital Currencies
11:55 Blockchain Technology is Pushing Digital ID's
12:42 I have been incorporated into Digital ID without Knowing it
13:35 The Irish have been put into Digital ID at a mass scale
14:40 We Must use GDPR if the Countries connect the Digital ID's
16:35 Palantir and CCTV Cameras can get it wrong
19:25 Make People Personally Accountable
20:10 The UK putting people in Jail for Social Media Posts
21:15 Stop thinking someone will save you
22:45 We Must Connect Globally
23:25 The United Nations Agreed to the Pact of the Future for having Digital ID's
25:00 Tony Blair's son doing the contract for Digital ID's

Substack Subscription https://substack.com/@podfatherroy

How to Contact Arnold Beekes:
https://braingym.fitness/
https://www.linkedin.com/in/arnoldbeekes/

Donations
https://www.awakeningpodcast.org/support/
https://www.podpage.com/speaking-podcast/support/

All about Roy / Brain Gym & Virtual Assistants at https://roycoughlan.com/

The Fantasy Life Podcast
Week 5 Fantasy Football Rankings Risers & Fallers + Start/Sit!

The Fantasy Life Podcast

Play Episode Listen Later Oct 3, 2025 98:13


Get FantasyLife+ for free ($100 value) by going to https://www.fantasylife.com/comet

Become the best at watching sports with Xfinity! If you call yourself a sports fan, you gotta have Xfinity. All the games, all in one place.

For all the tools and advice you need to win your league, subscribe to FantasyLife+: https://fantasylife.com/pricing Use code “IAN” for 20% off your subscription!

Welcome to Fantasy Life with Ian Hartitz! We're here to give you all of the fantasy football news, advice and stats you need (with a little bit of fun chaos along the way)! Week 5 is here and fantasy football expert Ian Hartitz is joined by fantasy football experts Dwain McFarland and Matthew Freedman to break down their rankings for Week 5 of the fantasy football season!

In today's episode: Will Jaylen Waddle boom now that Tyreek Hill is out for the season? Will Bo Nix struggle against a tough Eagles defense? Will Chris Godwin keep up his elite usage in Week 5? We're discussing all this, plus answering YOUR start/sit questions for Week 5 of fantasy football!

______________________

If you want more of Fantasy Life, check us out at FantasyLife.com, where all our analysis is free, smart, fun, and has won a bunch of awards. We have an awesome free seven-day-a-week fantasy newsletter (which would win awards if they existed, we assure you!): https://www.fantasylife.com/fantasy-n...

And if you want to go deeper, check out our suite of also-award-winning premium tools at FantasyLife.com/pricing

But really we hope you just are enjoying what you clicked on here, and come back for more. We are here to help you win!!

Learn more about your ad choices. Visit megaphone.fm/adchoices

The FIT4PRIVACY Podcast - For those who care about privacy
Seven Years of GDPR: Balancing Power Between Organizations and Data Subjects Through Trust

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Oct 2, 2025 9:34


In this milestone episode of the Fit4Privacy podcast, host Punit Bhatia is joined by three distinguished privacy experts — Dr. Cari Miller (AI Governance Expert, U.S.), Heidi Waem (Partner, DLA Piper, Brussels), and Dr. Valerie Lyons (COO, BH Consulting; Academic & Author) — to reflect on 7 years of GDPR and explore what lies ahead. Whether you're a privacy professional, business leader, or just curious about how data protection shapes our digital lives, this conversation offers both a critical reflection on GDPR's first seven years and foresight into its future role in AI and trust.

KEY CONVERSATION
00:03:25 Panelist Introductions and Initial Thoughts on GDPR
00:09:06 Significant challenges that remain after 7 years of GDPR
00:18:10 Has there been a fair amount of reporting on compliance failures over the years?
00:21:11 EU Compliance Gaps and How Companies Can Avoid Them
00:29:56 Has the GDPR been successful in balancing the power equilibrium between organizations and data subjects?
00:35:35 Role of trust after 7 years of GDPR
00:41:39 Beyond GDPR compliance in an AI world, what can be done additionally?

ABOUT GUESTS
Heidi Waem is the head of the data protection practice at DLA Piper Belgium and specializes in data protection and privacy. She assists clients with all aspects of EU regulatory data protection compliance, including the ‘structuring' of data processing and sharing activities to achieve an optimal use of data, advising on data transfers and the processing of personal data by means of new technologies (AI, facial recognition, …).

Dr. Cari Miller is the Principal and Lead Researcher for the Center for Inclusive Change. She is a subject matter expert in AI risk management and governance practices, an experienced corporate strategist, and a certified change manager. Dr. Miller creates and delivers AI literacy training, AI procurement guidance, AI policy coaching, and AI audit and assessment advisory services.

Dr. Valerie Lyons is a globally recognized authority in privacy, cybersecurity, data protection, and AI governance. Holding a PhD in Information Privacy along with CDPSE, CISSP, and CIPP/E certifications, she serves as a trusted strategic advisor to regulatory bodies and organizations across both public and private sectors. Valerie has played an influential role in shaping EU-wide data protection frameworks and enforcement strategies, and is an active member of the European Data Protection Board's pool of experts, as well as other global cyber and data protection bodies.

ABOUT HOST
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals. Punit is the author of the books “Be Ready for GDPR” (rated as the best GDPR book), “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named ‘ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/heidiwaem/, https://www.linkedin.com/in/cari-miller/, https://www.linkedin.com/in/valerielyons-privsec/
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy

Tech Gumbo
Pope Rejects Virtual Papacy, ChatGPT's Real Use Revealed, TikTok Deal Takes Shape, EU Eyes Cookie Ban, Samsung Puts Ads on Fridges

Tech Gumbo

Play Episode Listen Later Oct 2, 2025 22:04


News and Updates:

Pope Leo XIV rejected a proposal to create an AI-powered “virtual pope,” calling the idea of a digital clone horrifying. He warned that deepfakes, automation, and artificial substitutes erode trust, strip dignity from work, and risk turning life into “an empty, cold shell.” His stance echoes concerns as layoffs at Microsoft and Salesforce mount amid AI adoption.

OpenAI released its first major study on ChatGPT usage, showing that over 70% of queries are non-work-related, with people mainly seeking tutoring, how-to guidance, brainstorming, and writing help. Only 4% of consumer queries involve coding, with writing far more dominant. Work-related use centers on information gathering and decision-making. Adoption is now global, especially in low- and middle-income countries, with 10% of adults worldwide estimated to use ChatGPT.

A preliminary deal to keep TikTok in the U.S. has been reached: existing investors and new U.S. backers, including Oracle and Silver Lake, will control about 80%. ByteDance's stake drops below 20% to comply with U.S. law. Oracle will safeguard U.S. user data, while the recommendation algorithm will be licensed, retrained under U.S. oversight, and cut off from Beijing's influence. The U.S. government is also set to receive a multibillion-dollar facilitation fee.

The European Commission is considering scrapping the cookie consent banner requirement, part of the 2009 e-Privacy Directive. Alternatives include setting preferences once at the browser level or exempting “technically necessary” cookies. Any change would fold into GDPR, but privacy advocates are likely to resist.

Samsung has begun testing ads on its Family Hub smart refrigerators in the U.S. Despite previously denying plans, a software update now pushes “promotions and curated ads” to fridge screens when idle. Samsung calls it a pilot to “strengthen value,” but users blasted the move as another step in the company's “screens everywhere” strategy.

Big Law Life
#92: What to Do When the Work Dries Up - Responding to Enforcement & Regulatory Downturns in BigLaw

Big Law Life

Play Episode Listen Later Oct 1, 2025 16:31


If you have built your BigLaw career around a thriving regulatory or enforcement practice, you know how difficult it can be for you and your practice when that work suddenly isn't there. One month you are buried in investigations prompted by government inquiries or merger reviews, and the next your phone goes quiet because enforcement priorities shifted, agency budgets got cut, or a new administration has redirected resources. It is unsettling, especially when your brand, reputation, and client base are tied to that flow of work.

In this episode, I walk through what it can feel like, and what to do, when your once-busy enforcement and regulatory practice slows. I share how to distinguish between cyclical downturns and structural changes that reshape a practice like this long term, with specific examples across areas such as FCPA, antitrust, and privacy to illustrate how BigLaw attorneys can pivot effectively. I also outline practical steps to stay visible with clients as well as inside your firm, so that even when the billable work is not there, your value and future opportunities are.

At a Glance:
00:00 Introduction: navigating BigLaw downturns in regulatory and enforcement work
01:20 When busy practices suddenly dry up: regulatory shifts and enforcement changes
02:14 How external forces such as politics, budgets, and agency leadership reshape your practice overnight
03:03 Early warning signs that your work is slowing down in these areas
03:37 The emotional impact: anxiety, uncertainty, and fear of career derailment
04:08 Diagnosing cyclical vs. structural downturns with concrete indicators
05:16 Why this distinction matters for your long-term career strategy
05:39 Examples of temporary pivots that kept practices alive (FCPA, antitrust, GDPR, privacy)
07:04 How lawyers can broaden their practices to adapt to structural changes
08:08 The importance of proactive client communication, including with “good news” updates
09:37 What to do when billable hours stall: seeking work across departments and staying visible
10:41 Positioning yourself as a thought leader through articles, CLEs, and conferences
11:29 Documenting outreach, cross-practice contributions, and client loyalty for firm leadership
12:21 Demonstrating cross-practice value: aligning with busier groups inside your firm
13:30 How client loyalty and referrals strengthen your standing even in slow periods
13:58 Reframing your practice to be less narrowly defined by one enforcement area
14:27 How one partner survived cuts by documenting value and broadening expertise
15:16 Long-game mindset: showing your firm that you are indispensable beyond billable hours

Rate, Review, & Follow on Apple Podcasts & Spotify
Do you enjoy listening to Big Law Life? Please consider rating and reviewing the show! This helps support and reach more people like you who want to grow a career in Big Law. For Apple Podcasts, click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let me know what you loved most about the episode! Also, if you haven't done so already, follow the podcast here! For Spotify, tap here on your mobile phone, follow the podcast, listen to the show, then find the rating icon below the description, and tap to rate with five stars.

Interested in doing 1-2-1 coaching with Laura Terrell? Or learning more about her work coaching and consulting? Here are ways to reach out to her:
www.lauraterrell.com
laura@lauraterrell.com
LinkedIn: https://www.linkedin.com/in/lauralterrell/
Instagram: https://www.instagram.com/lauraterrellcoaching/
Show notes: https://www.lauraterrell.com/podcast

Telecom Reseller
BroadSource and SecurePII Showcase AI-Ready Data Privacy at WebexOne, Podcast

Telecom Reseller

Play Episode Listen Later Oct 1, 2025 9:34


“Our approach is simple: remove the PII from the data stream, and you don't have to worry about compliance,” said Bill Placke, President, Americas at SecurePII.

At WebexOne in San Diego, Doug Green, Publisher of Technology Reseller News, spoke with Jason Thals, COO of BroadSource, and Placke of SecurePII about their finalist recognition in Cisco's Dynamic Duo competition. The joint solution, built on Cisco Webex Contact Center, is designed to unlock AI's potential by enabling enterprises to leverage large language models without exposing sensitive personal data.

SecurePII's flagship product, SecureCall, was purpose-built for Webex (and also available on Genesys) to deliver PCI compliance while removing personally identifiable information from voice interactions. This enables organizations to deploy AI and agentic automation confidently, without the regulatory risk tied to data privacy laws across the U.S., GDPR, and beyond.

Thals emphasized BroadSource's role in delivering services that complement CCaaS and UCaaS platforms globally, while Placke framed the opportunity for Cisco partners: “This is a super easy bolt-on, available in the Webex App Hub. Customers can be up and running in 30 minutes and compliant.” The collaboration, already proven with a government-regulated client in Australia, is industry-agnostic and scalable from small deployments to 50,000+ users. For Cisco resellers, it represents a powerful, sticky service that integrates seamlessly into channel models while helping enterprises stay compliant as they modernize customer engagement.

Learn more at BroadSource and SecurePII.

The Data Protection and Privacy Podcast
What The EXPERTS Do Not TELL Us about Data Protection? Part 27 – Securing AI Before It Secures You – With Stas Levitan from DeepKeep.ai

The Data Protection and Privacy Podcast

Play Episode Listen Later Oct 1, 2025 28:39


In this episode with Stas Levitan, AI Governance Expert & Co-founder @ DeepKeep, we dive deep into the wild west of AI security, shadow AI, and the real risks lurking behind your favourite GenAI tools. Stas shares hard-hitting insights on why most companies are blind to their AI usage, and how governance isn't just about tick-box compliance — it's about survival.

Here's what we covered:

AI Risk Starts Way Before You Deploy It: Most think risk begins at runtime. Nope. It starts the moment you grab that model from a repo — and trust me, most are not as “safe” as they look.

Shadow AI Is Everywhere: Employees are quietly using ChatGPT, Gemini, and open-source models — often with good intentions, but zero oversight. Big risk, bigger blind spot.

Guardrails Aren't Optional Anymore: Enterprise AI needs serious guardrails — not just generic APIs. Think AI-native tools that track, monitor, and enforce behaviour in real time.

LLMs Don't Forget… Ever: Feed your chatbot personal data, and you might just see it pop up later — possibly in someone else's output.

AI Security ≠ Traditional Security: Firewalls won't save you here. This is about controlling model behaviour, not just access and networks. Totally different mindset needed.

Big AI Providers = Not Enterprise-Ready: The default tools don't cut it. The second you fine-tune a model or use it with your data — you own the risk.

EU AI Act Isn't Just Hype — It's Happening: Risk assessments, monitoring, documentation — this isn't optional for high-risk sectors. And no, you probably aren't ready yet.

Step One: Get Visibility: You can't protect what you can't see. Start by discovering what AI is actually being used in your org — you might be shocked.

It's a frank and eye-opening conversation that every CIO, CISO, and compliance lead should hear. Tune in — and if you're using GenAI without a plan, maybe… stop.

Stas Levitan can be contacted here:
• DeepKeep official website: https://www.deepkeep.ai
• Stas Levitan LinkedIn: https://uk.linkedin.com/in/stas-levitan

The latest in Data Protection and Privacy Podcast by David Clarke
Follow me on Twitter @1davidclarke (98.6k Followers)
Join the LinkedIn GDPR Group (30,475 members) for FREE
Co-author of an ICO-certified GDPR scheme

UPGRADE 100 by Dragos Stanca
STATE vs COMPANY | What does the White House's intervention in TikTok's ownership mean?

UPGRADE 100 by Dragos Stanca

Play Episode Listen Later Sep 30, 2025 73:01


As usual, last night's tech world is a completely different one from the one we woke up to. After months of discussions and delays, TikTok and the White House have finally found a way for the social network to continue operating in the US, separated (in theory) from its parent company and Beijing. What was the solution? Some extremely wealthy people who are, of course, "friends" of the Trump administration. Who these people are, why they are the "chosen ones", what an "Americanized" TikTok will look like, and more: these are the topics Marian Hurducaș discussed in this edition of Upgrade 100 Live with his guests:

- Marian Andrei, journalist and technology expert, best known for hosting the show I LIKE IT. He has extensive experience analyzing trends in the tech industry and always brings an informed, balanced perspective on technology's impact on everyday life.

- Tudor Galoș, data protection consultant with solid GDPR expertise. He has worked with numerous companies to help them navigate the complexity of data regulations and is known for his ability to explain, in plain language, how we can protect our personal information.

Shoot to the top
Commercial Photography Marketing: 5 Low Cost Strategies for UK Photographers

Shoot to the top

Play Episode Listen Later Sep 30, 2025 25:25


Tired of throwing time and money at marketing and getting nowhere? This week, Sam delivers five and a half practical, low-cost marketing strategies for photographers that any UK photographer can implement right now to bring in higher-paying commercial photography clients and grow their business.

This episode is packed with useful content, but we've pulled out the three essential takeaways that will most quickly boost your commercial photography business. Stop relying on luck; start implementing a clear plan that converts prospects into paying customers.

Three Things You'll Learn in This Episode:

The Hidden Database on Your Hard Drive: Discover the simple, overlooked source of paying clients you already possess, and learn the UK-specific GDPR rule that allows you to contact them immediately, turning old inquiries into new sales.

How to Break the Ice with Big Decision-Makers: Find out the essential 3-step blueprint for warming up leads on LinkedIn before you send a connection request, ensuring that key marketing staff are ready to see the value in your commercial photography.

The Low-Cost Client Funnel You Need: Master the technique of the "discounted offer" (like a simple headshot) and, more importantly, the crucial plan to successfully move those first-time buyers onto your premium, high-value brand shoots.

Grab a cuppa and listen now—your next big client is waiting.

AdTechGod Pod
Unlocking the Overlooked 40% with John Piccone from Adform

AdTechGod Pod

Play Episode Listen Later Sep 29, 2025 13:08


In this episode of the AdTechGod Pod, host AdTechGod interviews John Piccone, the Regional President of Adform Americas. They discuss John's extensive background in the ad tech industry, the importance of addressing the overlooked 40% of audiences, and how Adform's independence and transparency set it apart in a competitive market. John shares insights on the evolving landscape of digital advertising, the significance of data-driven marketing, and the future trends that excite him as they approach the fourth quarter.

Takeaways
John Piccone has a rich background in ad tech, having worked with major companies.
Adform offers a full tech stack, providing various tools for advertisers.
Understanding the 40% of users who are often overlooked is crucial for brands.
Transparency in the programmatic marketplace is essential for building trust.
Brands can achieve more with less by optimizing their advertising strategies.
The fragmentation of channels complicates audience targeting for marketers.
Adform's independence allows for a focus on brand needs over inventory sales.
GDPR compliance gives Adform an edge in understanding privacy regulations.
Brands need to adapt to changing dynamics in the advertising landscape.
Incremental reach can be achieved without increasing budget size.

Chapters
00:00 Introduction to Adform and John Piccone
02:55 John Piccone's Journey in Ad Tech
05:45 Addressing the Overlooked 40% Audience
08:23 The Role of Independence in Ad Tech
11:25 Looking Ahead: Innovations and Future Trends

Learn more about your ad choices. Visit megaphone.fm/adchoices

Backup Central's Restore it All
Cyber Attack Notification - Final Lessons from Mr. Robot Season One

Backup Central's Restore it All

Play Episode Listen Later Sep 29, 2025 48:04


When cyber attack notification goes wrong, companies face a disaster worse than the original breach. This episode dives deep into the critical mistakes organizations make when communicating about security incidents - and why transparency beats secrecy every time.

We examine real-world failures like LastPass and Rackspace, where poor communication strategies amplified the damage from their cyber attacks. From legal requirements in California and GDPR to the new one-hour notification rules in China, we cover what regulations demand and why going beyond compliance makes business sense.

Learn how to create effective status pages, manage customer expectations during recovery, and avoid the death-by-a-thousand-cuts approach that destroys trust. We share practical strategies for early and frequent communication that can actually strengthen customer relationships during crisis situations.

Category Visionaries
How Cerebrium generated millions in ARR through partnerships without a sales team | Michael Louis

Category Visionaries

Play Episode Listen Later Sep 29, 2025 24:31


Cerebrium is a serverless AI infrastructure platform orchestrating CPU and GPU compute for companies building voice agents, healthcare AI systems, manufacturing defect detection, and LLM hosting. The company operates across global markets handling data residency constraints from GDPR to Saudi Arabia's data sovereignty requirements. In a recent episode of Category Visionaries, I sat down with Michael Louis, Co-Founder & CEO of Cerebrium, to explore how they built a high-performance infrastructure business serving enterprise customers with high five-figure to six-figure ACVs while maintaining 99.9%+ SLA requirements.

Topics Discussed:
Building AI infrastructure before the GPT moment and strategic patience during the hype cycle
Scaling a distributed engineering team between Cape Town and NYC with 95% South African talent
Partnership-driven revenue generation producing millions in ARR without traditional sales teams
AI-powered market engineering achieving 35% LinkedIn reply rates through competitor analysis
Technical differentiation through cold start optimization and network latency improvements
Revenue expansion through global deployment and regulatory compliance automation

GTM Lessons For B2B Founders:

Treat go-to-market as a systems engineering problem: Michael reframed traditional sales challenges through an engineering lens, focusing on constraints, scalability, and data-driven optimization. "I try to reframe my go to market problem as an engineering one and try to pick up, okay, like what are my constraints? Like how can I do this, how can it scale?" This systematic approach led to testing 8-10 different strategies, measuring conversion rates, and building automated pipelines rather than relying on manual processes that don't scale.

Structure partnerships for partner success before revenue sharing: Cerebrium generates millions in ARR through partners whose sales teams actively upsell their product. Their approach eliminates typical partnership friction: "We typically approach our partners saying like, look, you keep the money you make, we'll keep the money we make. If it goes well, we can talk about like rev share or some other agreement down the line." This removes commission complexity that kills B2B partnerships and allows partners to focus on customer value rather than internal revenue allocation conflicts.

Build AI-powered competitive intelligence for outbound at scale: Cerebrium's 35% LinkedIn reply rate comes from scraping competitor followers and LinkedIn engagement, running prospects through qualification agents that check funding status, ICP fit, and technical roles, then generating personalized outreach referencing specific interactions. "We saw you commented on Michael's post about latency in voice. Like, we think that's interesting. Like, here's a case study we did in the voice space." The system processes thousands of prospects while maintaining personalization depth that manual processes can't match.

Position infrastructure as revenue expansion, not cost optimization: While dev tools typically focus on developer productivity gains, Cerebrium frames their value proposition around market expansion and revenue growth. "We allow you to deploy your application in many different markets globally... go to market leaders love us and sales leaders because again we open up more markets for them and more revenue without getting their tech team involved." This messaging resonates with revenue stakeholders and justifies higher spending compared to pure cost-reduction positioning.

Weaponize regulatory complexity as competitive differentiation: Cerebrium abstracts data sovereignty requirements across multiple jurisdictions - GDPR in Europe, data residency in Saudi Arabia, and other regional compliance frameworks. "As a company to build the infrastructure to have data sovereignty in all these companies and markets, it's a nightmare." By handling this complexity, they create significant switching costs and enable customers to expand internationally without engineering roadmap dependencies, making them essential to sales teams pursuing global accounts.

Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

KuppingerCole Analysts
Analyst Chat #271: From Breach Fatigue to Resilient Architecture - Rethinking Data Trust

KuppingerCole Analysts

Play Episode Listen Later Sep 29, 2025 44:08


Are we already living in a post-data privacy world? Breaches are everywhere, data is constantly being leaked, and GDPR fines haven't stopped surveillance capitalism or shady data brokers. In this episode of the Analyst Chat, Matthias Reinwarth is joined by Mike Small and Jonathan Care to explore whether privacy still has meaning — or if resilience and risk management are the only ways forward.

They debate:
✅ Is privacy truly dead, or just evolving?
✅ Why regulations like GDPR often miss the mark ⚖️
✅ How cyber resilience is becoming more critical than “traditional” privacy
✅ The personal, societal, and legal dimensions of privacy
✅ What organizations (and individuals) can still do to protect data

KuppingerCole Analysts Videos
Analyst Chat #271: From Breach Fatigue to Resilient Architecture - Rethinking Data Trust

KuppingerCole Analysts Videos

Play Episode Listen Later Sep 29, 2025 44:08


Are we already living in a post-data privacy world? Breaches are everywhere, data is constantly being leaked, and GDPR fines haven't stopped surveillance capitalism or shady data brokers. In this episode of the Analyst Chat, Matthias Reinwarth is joined by Mike Small and Jonathan Care to explore whether privacy still has meaning — or if resilience and risk management are the only ways forward.

They debate:
✅ Is privacy truly dead, or just evolving?
✅ Why regulations like GDPR often miss the mark ⚖️
✅ How cyber resilience is becoming more critical than “traditional” privacy
✅ The personal, societal, and legal dimensions of privacy
✅ What organizations (and individuals) can still do to protect data

Serious Privacy
Fun with Facts plus... Korea!

Serious Privacy

Play Episode Listen Later Sep 27, 2025 30:42


Send us a text

On this episode of Serious Privacy, Paul Breitbarth brings us news from the Global Privacy Assembly held in Korea, and Dr. K Royal has fun with privacy trivia! Ralph O'Brien is out this week. An open offer to all fans: if you answered all the questions correctly, send one of us your address and we will send you a sticker for playing Trivacy!

If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and Review us!

From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.

BlockHash: Exploring the Blockchain
Ep. 606 Patrick Moynihan | Building Digital Trust with Tracer

BlockHash: Exploring the Blockchain

Play Episode Listen Later Sep 26, 2025 48:17


For episode 606 of the BlockHash Podcast, host Brandon Zemp is joined by Patrick Moynihan, President and Co-founder of Tracer Labs. Tracer Labs is building the future of digital trust. As the parent company of Trust ID and a founding member of DCID, the team creates self-sovereign identity (SSI) and consent solutions where control follows the user and not the website.

Patrick leads a team bringing privacy-first, quantum-resistant identity to Web3, where user consent and data aren't just protected, but unified across platforms. Tracer Labs has replaced invasive device tracking with patent-pending tech that gives individuals one login, full control, and real-world rewards—think GDPR and CCPA compliance, higher business conversions, and verified zero-party data. Their aPaaS integrates seamlessly for instant impact, with paid rollouts underway and brand partnerships like Bass Pro Shops and Expedia already in progress.

⏳ Timestamps:
(0:00) Introduction
(1:17) Who is Patrick Moynihan?
(16:16) How can Trust ID be used?
(22:00) How are users incentivized to share data?
(28:46) Online data protection for kids
(33:47) Quantum-resistant identity
(41:36) Tracer Labs roadmap

Uncommons with Nate Erskine-Smith
The Future of Online Harms and AI Regulation with Taylor Owen


Sep 26, 2025 · 39:00


After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen to discuss the ever-changing landscape of the digital world, the fast emergence of AI, and the implications for our kids, consumer safety and our democracy.

Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and the author of several books. Taylor also joined me for this discussion more than 5 years ago now, and a lot has happened in that time.

Upcoming episodes will include guest Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers. We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite. As always, if you have ideas for future guests or topics, email us at info@beynate.ca.

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
Public rules for big tech platforms with Taylor Owen — Uncommons Podcast
"How the Next Government can Protect Canada's Information Ecosystem." Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
Machines Like Us Podcast
Bill C-63

Transcript:

Nate Erskine-Smith00:00-00:43Welcome to Uncommons, I'm Nate Erskine-Smith.
This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.Taylor Owen00:43-00:44It's a different world.Taylor00:45-00:45In some ways.Nate Erskine-Smith00:45-01:14Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way because there's been lots of consultation. There have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's talk about set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?Taylor Owen01:14-03:06I mean, this is part of the technology moves fast. But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. 
We're worried about sort of some of the more acute harms of like real risks to safety, right? Like children taking their own lives and violence erupting, political violence emerging. Like these things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way five years ago that we shared, consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI and particularly chatbots. And I think a big question we face in this conversation is, like, how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI, or are some of the things we worked on for so many years to get right still the right tools for this new set of technologies with chatbots and various consumer-facing AI interfaces?Nate Erskine-Smith03:07-03:55My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online and our laws need to reflect that reality. All of the challenges you've articulated to varying degrees exist in offline spaces, but the rules we have can be incredibly hard to enforce, at a minimum, in the online space. And then some rules are not entirely fit for purpose and they need to be updated in the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out.
So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.Taylor Owen03:55-04:31Yeah, exactly. Like the commission that spent a year at the request of all political parties in parliament, at the urging of the opposition party, so it spent a year looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. Like that is a remarkable statement and it kind of came and went. And I don't know why we moved off from that so fast.Nate Erskine-Smith04:31-05:17Well, and there's a lot to pull apart there because you've got purposeful, intentional, bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook, Instagram through Meta block news in Canada. And your research, this was the stat that stood out. Don't want to put you in and say like, what do we do? Okay. So there's, you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.Taylor Owen05:17-05:17A day. Yeah.Nate Erskine-Smith05:18-05:18So right.Taylor Owen05:18-05:2711 million views a day. And we should sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.Taylor05:27-05:29So 11 million times a Canadian.Taylor Owen05:29-05:45And what that means is 11 million times a Canadian would open one of their news feeds and see Canadian journalism is taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. 
So that journalism was replaced by something else.Taylor05:45-05:45Okay.Taylor Owen05:45-05:46So that's just it.Nate Erskine-Smith05:46-06:04So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, Canadians, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.Taylor Owen06:04-06:04They say they get.Nate Erskine-Smith06:04-06:05It doesn't make any sense.Taylor Owen06:06-06:23It doesn't and it does. It's terrible. They ask Canadians, like, where do you get people who use social media to get their news? Where do they get their news? and they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations— Traditional journalism. There is—Taylor06:23-06:23There is—Taylor Owen06:23-06:47Well, this is what I was going to get at, right? Like, there is—one, I think, conclusion is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Like, journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like this would be labeled probably news in people's.Nate Erskine-Smith06:47-06:48Can't trust the thing we say.Taylor Owen06:48-07:05Right. And like, and neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, like that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.Nate Erskine-Smith07:05-07:06It's public affairs content.Taylor Owen07:06-07:39Exactly. So that's one thing that's happening. The other is that there's a generation of creators that are stepping into this ecosystem to both fill that void and that can use these tools much more effectively. 
So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you're more pessimistic though. I shouldn't have led with the question. 20%.Taylor07:39-07:39Okay.Taylor Owen07:39-07:56So all of journalism combined in the entire country, 20 percent of engagement, influencers, 50 percent in the last election. So like we've shifted, at least on social, the actors and people and institutions that are fostering our public.Nate Erskine-Smith07:56-08:09Is there a middle ground here where you take some people that play an influencer type role but also would consider themselves citizen journalists in a way? How do you – It's a super interesting question, right?Taylor Owen08:09-08:31Like who – when are these people doing journalism? When are they doing acts of journalism? Like someone can be – do journalism and 90% of the time do something else, right? And then like maybe they reveal something or they tell an interesting story that resonates with people or they interview somebody and it's revelatory and it's a journalistic act, right?Taylor08:31-08:34Like this is kind of a journalistic act we're playing here.Taylor Owen08:35-08:49So I don't think – I think these lines are gray. but I mean there's some other underlying things here which like it matters if I think if journalistic institutions go away entirely right like that's probably not a good thing yeah I mean that's whyNate Erskine-Smith08:49-09:30I say it's terrifying is there's a there's a lot of good in the in the digital space that is trying to be there's creative destruction there's a lot of work to provide people a direct sense of news that isn't that filter that people may mistrust in traditional media. 
Having said that, so many resources and there's so much history to these institutions and there's a real ethics to journalism and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that is devastating for democracy. I think so.Taylor Owen09:30-09:49And I think the bigger frame of that for me is a democracy needs signals of – we need – as citizens in a democracy, we need signals of reliability. Like we need to know broadly, and we're not always going to agree on it, but like what kind of information we can trust and how we evaluate whether we trust it.Nate Erskine-Smith09:49-10:13And that's what – that is really going away. Pause for a sec. So you could imagine signals of reliability is a good phrase. what does it mean for a legislator when it comes to putting a rule in place? Because you could imagine, you could have a Blade Runner kind of rule that says you've got to distinguish between something that is human generatedTaylor10:13-10:14and something that is machine generated.Nate Erskine-Smith10:15-10:26That seems straightforward enough. It's a lot harder if you're trying to distinguish between Taylor, what you're saying is credible, and Nate, what you're saying is not credible,Taylor10:27-10:27which is probably true.Nate Erskine-Smith10:28-10:33But how do you have a signal of reliability in a different kind of content?Taylor Owen10:34-13:12I mean, we're getting into like a journalistic journalism policy here to a certain degree, right? And it's a wicked problem because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do and how they can and can't behave touches on some real like third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said or what is and isn't journalism. 
The real problem is the distribution mechanism and the incentives within it. So a great example and a horrible example happened last week, right? So Charlie Kirk gets assassinated. I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place because what you saw in that feed was the clearest demonstration I've ever seen in a decade of looking at this of how those algorithmic feeds have become radicalized. Like all you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. Like people defending the murder and people calling for the murder of leftists and like on both sides. Right. People blaming Israel, people, whatever. Right. And that isn't a function of like- Comparing Charlie Kirk to Jesus. Sure. Like- It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of like there was journalism being produced about that. Like New York Times, citizens were doing good content about what was happening. It was like a moment of uncertainty and journalism was doing or playing a role, but it wasn't. And so I think with all of these questions, including the online harms ones, and I think how we step into an AI governance conversation, the focus always has to be on those systems. I'm like, what is who and what and what are the incentives and the technical decisions being made that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right?
Like all of that stuff we've been talking about, the role reliability of information plays, like these algorithms could be tweaked for reliable versus unreliable content, right? Over time.Taylor13:12-13:15That's not a – instead of reactionary –Taylor Owen13:15-13:42Or like what's most – it gets most engagement or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.Nate Erskine-Smith13:43-14:12And your focus on systems and assessing risks with systems. I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy and very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.Taylor Owen14:12-14:23I mean – Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the – Well, they're twofold, right?Nate Erskine-Smith14:23-14:33There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well. Exactly.Taylor14:33-14:35So the sort of supply and demand side thing, right?Nate Erskine-Smith14:35-14:38There's the digital service tax, which is no longer a thing.Taylor Owen14:40-14:52Although it still is a piece of passed legislation. Yeah, yeah, yeah.
It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.Nate Erskine-Smith14:52-14:55I don't take full responsibility for that one.Taylor Owen14:55-14:56No, you shouldn't.Nate Erskine-Smith14:58-16:03But other countries have seen more success. Yeah. And so you've got in the UK, in Australia, the EU really has led the way. 2018, the EU passes GDPR, which is a privacy set of rules, which we are still behind seven years later. But you've got in 2022, 2023, you've got Digital Services Act that passes. You've got Digital Markets Act. And as I understand it, and we've had, you know, we've both been involved in international work on this. And we've heard from folks like Frances Haugen and others about the need for risk-based assessments. And you're well down the rabbit hole on this. But isn't it at a high level? You deploy a technology. You've got to identify material risks. You then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then ideally, you've got the ability for third parties, either civil society or some public office that has the ability to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate.Taylor Owen16:04-16:05That's like how I have it in my head.Nate Erskine-Smith16:05-16:06I mean, that's it.Taylor Owen16:08-16:14Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.Nate Erskine-Smith16:14-16:25Exactly. Which people, I want to get to that because C-63 gets us a large part of the way there. I think so. And yet has been sort of like cast aside.
When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it's the thing that's infuriating about digital policy, is that you can't do one thing. There's no – digital economy and our digital lives are so vast and the incentives and the effect they have on society is so broad that there's no one solution. So anyone who tells you fix privacy policy and you'll fix all the digital problems we just talked about are full of it. Anyone who says competition policy, like break up the companies, will solve all of these problems. is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They've been to build a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So like you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things all over.Nate Erskine-Smith17:39-17:41Especially minority parlance, short periods of time, legislatively.Taylor Owen17:41-18:20Different countries have taken different pieces of it. Now, on the online harms piece, which is what the previous government took really seriously, and I think it's worth putting a point on that, right, that when we talked last was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission, right? Like a lot of work went into looking at what every other country had done because this is a really wicked, difficult problem and trying to learn from what Europe, Australia and the UK had all done. And we kind of taking the benefit of being late, right? 
So they were all ahead of us.Taylor18:21-18:25People you work with on that grant committee. We're all quick and do our own consultations.Taylor Owen18:26-19:40Exactly. And like the model that was developed out of that, I think, was the best model of any of those countries. And it's now seen as internationally, interestingly, as the new sort of milestone that everybody else is building on, right? And what it does is it says if you're going to launch a digital product, right, like a consumer-facing product in Canada, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about or you've decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content, right? Like things that are like broad categories that we've said are we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong, so let's say, for example, let's use a tangible example. Let's say you are a social media platform and you are launching a product that's going to be used by kids and it allows adults to contact kids without parental consent or without kids opting into being a friend. What could go wrong with that?Nate Erskine-Smith19:40-19:40Yeah.Taylor19:40-19:43Like what could go wrong? Yeah, a lot could go wrong.Taylor Owen19:43-20:27And maybe strange men will approach teenage girls. Maybe, right? Like if you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it, right? Like you put in a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used so that we can monitor, publics and researchers can monitor whether that mitigation strategy worked. That's it. 
In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was like a widespread problem of teenage girls being harassed by strange older men.Taylor20:28-20:29Incredibly creepy.Taylor Owen20:29-20:37A very easy, but not like a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.Taylor20:37-20:41And this kind of mechanism would have just filtered out.Taylor Owen20:41-20:51Default settings, right? And doing thinking a bit before you launch a product in a country about what kind of broad risks might emerge when it's launched and being held accountable to do it for doing that.Nate Erskine-Smith20:52-21:05Yeah, I quite like the we I mean, maybe you've got a better read of this, but in the UK, California has pursued this. I was looking at recently, Elizabeth Denham is now the Jersey Information Commissioner or something like that.Taylor Owen21:05-21:06I know it's just yeah.Nate Erskine-Smith21:07-21:57I don't random. I don't know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the information commissioner in the UK. And she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach. In that even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids use it. And just forcing companies to ensure that the default settings are prioritizing child safety so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto. Of course we've prioritized consumer safety first and foremost. But in the online world, it's like an afterthought.Taylor Owen21:58-24:25Well, when you say consumer safety, it's worth like referring back to what we mean. 
Like a duty of care can seem like an obscure concept. But your lawyer is a real thing, right? Like you walk into a store. I walk into your office. I have an expectation that the bookshelves aren't going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? Like that is a duty of care that you have for me when I walk into your public space or private space. Like that's all we're talking about here. And the age-appropriate design code, yes, like sort of developed, implemented by a Canadian in the UK. And what it says, it also was embedded in the Online Harms Act, right? If we'd passed that last year, we would be implementing an age-appropriate design code as we speak, right? What that would say is any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? But we think like kids don't have the same rights as adults. We have different duties to protect kids as adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content, right? Like kind of things that like – Seem obvious. And if you're now a child in the UK and you open – you go on a digital product, you are safer because you have an age-appropriate design code governing your experience online. Canadian kids don't have that because that bill didn't pass, right? So like there's consequences to this stuff. 
and I get really frustrated now when I see the conversation sort of pivoting to AI for example right like all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world which are probably real right like not discounting its power and just move on from all of these both problems and solutions that have been developed to a set of challenges that both still exist on social platforms like they haven't gone away people are still using these tools and the harms still exist and probably are applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done is just to the people working in this space and like the wide stakeholders in this country who care about this stuff and working on it. It just, it feels like you say deja vu at the beginning and it is deja vu, but it's kind of worse, right? Cause it's like deja vu and then ignoring theTaylor24:25-24:29five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not evenTaylor Owen24:29-24:41Well, yeah. I mean, hopefully I actually am not, I'm actually optimistic, I would say that we will, because I actually think of if for a few reasons, like one, citizens want it, right? Like.Nate Erskine-Smith24:41-24:57Yeah, I was surprised on the, so you mentioned there that the rules that we design, the risk assessment framework really applied to social media could equally be applied to deliver AI safety and it could be applied to new technology in a useful way.Taylor Owen24:58-24:58Some elements of it. Exactly.Nate Erskine-Smith24:58-25:25I think AI safety is a broad bucket of things. So let's get to that a little bit because I want to pull the pieces together. So I had a constituent come in the office and he is really like super mad. He's super mad. Why is he mad? Does that happen very often? Do people be mad when they walk into this office? Not as often as you think, to be honest. 
Not as often as you think. And he's mad because he believes Mark Carney ripped him off.Taylor Owen25:25-25:25Okay.Nate Erskine-Smith25:25-26:36Okay. Yep. He believes Mark Carney ripped him off, not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online, Mark Carney told him to invest money. He invested money and he's out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This is like, this was obviously a scam. Like what, how could you have been deceived? But then I go and I watched the video. And it is, okay, I'm not gonna send the 200 bucks and I've grown up with the internet, but I can see how- Absolutely. In the same way, phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI generated. So we are going to see rampant fraud. If we aren't already, we are going to see many challenges with respect to AI safety. What over and above the risk assessment piece, what do we do to address these challenges?Taylor Owen26:37-27:04So that is a huge problem, right? Like the AI fraud, AI video fraud is a huge challenge. In the election, when we were monitoring the last election, by far the biggest problem or vulnerability of the election was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and generate, like morph them into conversations about investment strategies.Taylor27:05-27:07And it was driving people to a crypto scam.Taylor Owen27:08-27:11But it was torquing the political discourse.Taylor27:11-27:11That's what it must have been.Taylor Owen27:12-27:33I mean, there's other cases of this, but that's probably, and it was running rampant on particularly meta platforms. They were flagged. They did nothing about it.
There were thousands of these videos circulating throughout the entire election, right? And it's not like the end of the world, right? Like nobody – but it torqued our political debate. It ripped off some people. And these kinds of scams are –Taylor27:33-27:38It's clearly illegal. It's clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?Taylor Owen27:38-27:54So I think there's probably an Elections Canada response to this that's needed. And it's fraud. And it's fraud, absolutely. So what do you do about that, right? And the head of the Canadian Banking Association said there's like billions of dollars in AI-based fraud in the Canadian economy right now. Right? So it's a big problem.Taylor27:54-27:55Yeah.Taylor Owen27:55-28:46I actually think there's like a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, right? And then you add fraud and AI scams as a category of harm. And all of a sudden, if you're Meta and you are operating in Canada during an election, you'd have to do a risk assessment on like AI fraud potential of your product. Responsibility for your platform. And then it starts to circulate. We would see it. They'd be called out on it. They'd have to take it down. And like that's that, right? Like so that we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years, these like online harms risk assessment models, and bringing in some of the consumer-facing AI, both products and related harms, into the framework.Nate Erskine-Smith28:47-30:18To put it a different way, I mean, so this is years ago now that we had this, you know, grand committee in the UK holding Facebook and others accountable. This really was created in the wake of the Cambridge Analytica scandal.
And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability, saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument, which is they completely acknowledged the need for them to take action when it came to child pornography. And so they said, yeah, well, no liability for us, but of course there can be liability on this one specific piece of content, and we'll take action on this one specific piece of content. And it always struck me from there on out that there's no real intellectual consistency here. It's more just what should be in that category of things that they should take responsibility for. Obviously harmful content like that should be, well, that's an obvious first step, obvious for everyone. But there are other categories. Fraud is another one. When they're making so much money, when they are investing so much money in AI, when they're ignoring privacy protections and everything else throughout the years, we can't leave it up to them. And setting a clear set of rules to say this is what you're responsible for, and expanding that responsibility, seems to make a good amount of sense.

Taylor Owen (30:18-30:28): It does, although I think those responsibilities need to be different for different kinds of harms, because there are different speech implications and democratic implications of sort of absolute solutions to different kinds of content.

Taylor (30:28-30:30): So child pornography is a great example.

Taylor Owen (30:30-31:44): In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation, and that's to take it down within 24 hours. And the reason you can do that with those two kinds of content is because, one, the AI is actually pretty good at spotting it. It might surprise you, but there's a lot of naked images on the internet that we can train AI with. So we're actually pretty good at using AI to pull this stuff down. But the bigger one is that, as a society, it's okay to be wrong in the gray area of that speech, right? If something is debatable, whether it's child pornography, I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story. We do not want to suppress and over-index for that gray area on hate speech, because that's going to capture a lot of reasonable debate that we probably want.

Nate Erskine-Smith (31:44-31:55): Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category, where it's very obviously illegal.

Taylor Owen (31:55-32:02): And that mechanism is a takedown mechanism, right? If we see fraud, if we know it's fraud, then you take it down. Some of these other things we have to go with...

Nate Erskine-Smith (32:02-32:24): I mean, my last question really is to pull the threads together. You've got these different pieces that were introduced in the past, and you've got a government with lots of similar folks around the table, but a new government and a new prime minister, certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor (32:24-32:25): Absolutely.

Nate Erskine-Smith (32:25-33:04): You have, for the first time in this country, an AI minister, a junior minister to industry, but still a specific portfolio, with his own deputy minister, who really wants to be seized with this.
And in a way, from every conversation I've had with him, he wants to maximize productivity in this country using AI, but he is also cognizant of the risks and wants to address AI safety. So where from here? You've talked in the past about a grander tech accountability and sovereignty act. Do we do piecemeal, a privacy bill here, an AI safety bill, an online harms bill, and we have disparate pieces? What's the answer here?

Taylor Owen (33:05-34:14): I mean, I don't have the exact answer. But I think there are some lessons from the past that this government could take. And one is that piecemeal bills that aren't centrally coordinated, or have no sort of connectivity between them, end up with piecemeal solutions that are imperfect and would benefit from some cohesiveness between them, right? So when the previous government released AIDA, the AI and Data Act, it was really in tension in some real ways with the online harms approach. Two different departments issuing two similar bills on two separate technologies, not really talking to each other, as far as I can tell from the outside. So we need a coordinated, comprehensive effort on digital governance. That's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to the idea that he, or that office, could be that role. Because AI is cross-cutting, right? Every department in our federal government touches AI in one way or another. And the governance of AI, and the adoption on the other side of AI by society, is going to affect every department and every bill we need.

Nate Erskine-Smith (34:14-34:35): So if Evan pulled in the privacy pieces, that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back. If he pulls in the online harms pieces that aren't related to the Criminal Code and drops those provisions, says, you know, Sean Fraser, you can deal with this if you like, but these are the pieces I'm holding on to.

Taylor Owen (34:35-34:37): With a frame of consumer safety, right?

Nate Erskine-Smith (34:37-34:37): Exactly.

Taylor Owen (34:38-34:39): If he wants...

Nate Erskine-Smith (34:39-34:54): Which is connected to privacy as well, right? These are all... So then you have thematically a bill that makes sense. And then you can pull in as well the AI safety piece. And then it becomes a consumer protection bill when it comes to living our lives online. Yeah.

Taylor Owen (34:54-36:06): And I think there's an argument whether that should be one bill or multiple ones. I actually think there are cases for both, right? There's concern about big omnibus bills that do too many things and too many committees reviewing them and whatever. That's sort of a machinery-of-government question. But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to the public. We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is that at the same time as they're being told by our government and by companies that they should be using and embracing this powerful technology in their lives, they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear. Why should I use this thing when I'm seeing some harms, I don't see you guys doing anything about these harms, and I'm seeing some potential real downside for me personally and my family? So even in the adoption frame, thinking about data privacy, safety, consumer safety, to me that's the real frame here. It's citizen safety, consumer safety, using these products.
Yeah, politically, I mean, that is what it is. It makes sense to me.

Nate Erskine-Smith (36:06-36:25): Right, I agree. And really lean into child safety at the same time. Because I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go, these are the default settings on the parental controls.

Taylor (36:25-36:28): I want to turn to government and go, do your damn job.

Taylor Owen (36:28-36:48): Or just make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's always going to be a place with content that I would prefer he doesn't see. But I would just like some basic safety standards on that thing, so he's not seeing the worst of the worst.

Nate Erskine-Smith (36:48-36:58): And we should expect that. Certainly that YouTube, with its promotion engine, its recommendation function, is not actively promoting terrible content to your 12-year-old.

Taylor Owen (36:59-37:31): Yeah. That's like de minimis. Can we just torque this a little bit, right? So maybe he's not seeing horrible content about Charlie Kirk when he's a 12-year-old on YouTube. Can we just do something? And I think that's a reasonable expectation as a citizen. But it requires governance. And it's worth putting a real emphasis on this: one thing we've learned in this moment of repeated deja vu, going back 20 years really, since our experience with social media, through to now, is that these companies don't self-govern.

Taylor (37:31-37:31): Right.

Taylor Owen (37:32-37:39): We just know that indisputably. So to think that AI is going to be different is delusional. No, they'll pursue profit, not the public interest.

Taylor (37:39-37:44): Of course. Because that's what they are. These are the largest companies in the world. Yeah, exactly. And AI companies are even bigger than the last generation, right?

Taylor Owen (37:44-38:00): We're creating something new with the scale of these companies. And to think that their commercial incentives and their broader long-term goals around AI are not going to override these safety concerns is just naive in the nth degree.

Nate Erskine-Smith (38:00-38:38): But I think you make the right point, and it's useful to close on this: these goals of realizing the productivity possibilities and potential of AI alongside AI safety are not mutually exclusive or oppositional goals. You create a sandbox to play in, and companies will be more successful. If you have certainty in regulations, companies will be more successful. And if people feel safe using these tools, if I feel safe with my kids learning these tools growing up in their classrooms and everything else, adoption rates will soar. Absolutely. And then we'll benefit.

Taylor Owen (38:38-38:43): They work in tandem, right? And I think you can't have one without the other, fundamentally.

Nate Erskine-Smith (38:45-38:49): Well, I hope I don't invite you back five years from now and we have the same conversation.

Taylor Owen (38:49-38:58): Well, I hope you invite me back in five years, but I hope it's to think back on all the legislative successes of the previous five years. I mean, that'll be the moment.

Taylor (38:58-38:59): Sounds good. Thanks, David. Thanks. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca

The FIT4PRIVACY Podcast - For those who care about privacy
Privacy Compliance in AI World with Sylvestre Dupont in the FIT4PRIVACY Podcast E148 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Sep 25, 2025 24:11


AI is changing the way we work, live, and build businesses — but it also raises big questions about privacy. As AI tools process more personal and sensitive data, how can companies make sure they follow privacy laws like GDPR? How can privacy be built into AI from the very beginning? And what's the best way to handle data retention so users stay in control? In this episode of the FIT4Privacy Podcast, host Punit Bhatia speaks with Sylvestre Dupont, co-founder of Parser, about how to keep privacy at the heart of AI tools and services. They discuss why privacy matters in AI, how to build privacy by design into AI from the start, and what it takes to make an AI-based SaaS tool GDPR compliant. Sylvestre also shares his approach to data retention — letting users choose how long their data is stored — and why trust is a key advantage for any business handling personal data. If you work with AI, personal data, or GDPR, this episode gives you clear and practical ideas you can use right away. 

Serious Privacy
A slamming week in Privacy with Ralph and K

Serious Privacy

Play Episode Listen Later Sep 19, 2025 35:11


Send us a text

On this week's episode of Serious Privacy, Ralph O'Brien of Reinbo Consulting and Dr. K Royal (Paul Breitbarth is travelling) discuss current events in privacy, data protection, and cyber law. A fascinating episode with all the hot stories, which seem to follow a theme: adequacy and child online safety, plus some enforcements. Coverage includes the European Court's decision in the Latombe case challenging the adequacy of the EU-US Data Privacy Framework, Brazil, Tanzania, Argentina, Australia, China, ChatGPT, and so much more! If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and review us! From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.

Keen On Democracy
The Innovation Paradox Undermining the Digital Revolution: How Magical Technology Isn't Translating into Miraculous Economic Progress

Keen On Democracy

Play Episode Listen Later Sep 19, 2025 46:07


It's the most curious paradox of today's digital revolution. While computers, the internet, smartphones and AI all appear magical, they haven't actually translated into equally magical economic progress. That, at least, is the counter-intuitive argument of the Oxford economist Carl Benedikt Frey, whose new book, How Progress Ends, suggests that the digital revolution isn't producing an equivalent revolution in productivity. History is repeating itself in an equally paradoxical way, Frey warns. We may, indeed, be repeating the productivity stagnation of the 1970s, in spite of our technological marvels. Unlike the 19th-century industrial revolution that radically transformed how we work, today's digital tools, however impressive, are primarily automating existing processes rather than creating fundamentally new types of economic activity that drive broad-based growth. And AI, by making existing work easier rather than creating new industries, will only compound this paradox. It might be the fate of not just the United States and Europe, but China as well. That, Frey warns, is how all progress will end.

1. The Productivity Paradox is Real: Despite revolutionary digital technologies, we're not seeing the productivity gains that past technological revolutions delivered. It took a century for steam to show its full economic impact, four decades for electricity, but even accounting for lag time, the computer revolution has underperformed economically compared to its transformative social effects.

2. Automation vs. Innovation, the Critical Distinction: True progress comes from creating entirely new industries and types of work, not just automating existing processes. The mid-20th-century boom created the automobile industry and countless supporting sectors. Today's AI primarily makes existing work easier rather than spawning fundamentally new economic activities.

3. Institutional Structure Trumps Technology: The Soviet Union succeeded when scaling existing technology but failed when innovation was needed, because it lacked decentralized exploration. Success requires competitive, decentralized systems where different actors can take different bets, like Google finding funding after Bessemer Venture Partners said no.

4. Europe's Innovation Crisis Has a Clear Diagnosis: Europe lags in digital not due to lack of talent or funding, but because of fragmented markets and regulatory burdens. The EU's internal trade barriers in services amount to a 110% tariff equivalent, while regulations like GDPR primarily benefit large incumbents who can absorb compliance costs.

5. Geography Still Matters in the Digital Age: Silicon Valley's success stemmed from unenforceable non-compete clauses that enabled job-hopping and knowledge transfer, while Detroit's enforcement of non-competes after 1985 contributed to its decline. As AI makes many services tradeable globally, high-cost innovation centers face new competitive pressures from lower-cost locations.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

The FIT4PRIVACY Podcast - For those who care about privacy
Privacy Compliance in AI World with Sylvestre Dupont in the FIT4PRIVACY Podcast E148 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Sep 18, 2025 9:36


AI is changing the way we work, live, and build businesses — but it also raises big questions about privacy. As AI tools process more personal and sensitive data, how can companies make sure they follow privacy laws like GDPR? How can privacy be built into AI from the very beginning? And what's the best way to handle data retention so users stay in control? In this episode of the FIT4Privacy Podcast, host Punit Bhatia speaks with Sylvestre Dupont, co-founder of Parser, about how to keep privacy at the heart of AI tools and services. They discuss why privacy matters in AI, how to build privacy by design into AI from the start, and what it takes to make an AI-based SaaS tool GDPR compliant. Sylvestre also shares his approach to data retention — letting users choose how long their data is stored — and why trust is a key advantage for any business handling personal data. If you work with AI, personal data, or GDPR, this episode gives you clear and practical ideas you can use right away. 

The Tech Blog Writer Podcast
Marketing Intelligence After Cookies: How Funnel Turns Data Into Decisions

The Tech Blog Writer Podcast

Play Episode Listen Later Sep 15, 2025 30:29


Marketing teams used to have a simple enough job: follow the click, count the conversions, and shift the budget accordingly. But that world is gone. GDPR, iOS restrictions, and browser-level changes have left most attribution models broken or unreliable. So what now? In this episode, I sat down with Fredrik Skansen, CEO of Funnel, to unpack how marketing intelligence actually works in a world where data is partial, journeys are fragmented, and the old models don't hold. Since founding Funnel in 2014, Fredrik has grown the company into a platform that supports over 2,600 brands and handles reporting on more than 80 billion dollars in annual digital spend. That scale gives him a front-row seat to the questions every CMO and CFO are asking right now. Fredrik explains why last-click attribution didn't just become inaccurate. It became misleading. With tracking capabilities stripped down and user signals disappearing, the industry has had to move toward modeled attribution and real-time optimisation. That only works if your data is clean, aligned, and ready for analysis. Funnel's platform helps structure campaigns upfront, pull data into a unified model, apply intelligence, push learnings back into the platforms, and produce reporting that makes sense to the wider business. This isn't about dashboards. It's about decisions. We also talk about budget mix. Performance channels may feel safe, but Fredrik points out they are also getting more expensive. When teams bring brand and mid-funnel activity back into the measurement framework, the picture often changes. He shares how Swedish retailer Gina Tricot grew from 100 million to 300 million dollars in three years, in part by shifting spend to brand and driving demand earlier in the customer journey. That move only felt safe because the data supported it. AI adds another layer. With tools like Perplexity reshaping search behavior and the web shifting from links to answers, click-throughs are drying up. 
But it's not the end of visibility. Content still matters. So does structure. The difference is that now your reader might be an AI model, not a human. That requires a rethink in how brands approach discoverability, authority, and engagement. What makes Funnel interesting is that it doesn't stop at analytics. The platform feeds insight back into action, reducing waste and creating tighter loops between teams. It also works for agencies, which is why groups like Havas use it across 40 offices through a global agreement. If you're tired of attribution theatre and want to understand what marketing measurement looks like when it's built for reality, this episode gives you a clear, usable view. Listen in, then tell me which decision you're still guessing on. Because marketing can be measured. Just not the way it used to be. ********* Visit the sponsor of Tech Talks Network: land your first job in tech in 6 months with the Software QA Engineering Bootcamp from Careerist https://crst.co/OGCLA

Privacy Please
S6, E255 - Privacy, Law, and Emerging Tech: A Conversation with Sonia Siddiqui

Privacy Please

Play Episode Listen Later Sep 15, 2025 49:42 Transcription Available


Send us a text

Privacy and cybersecurity leader Sonia Siddiqui joins us to explore the collision between emerging technologies and privacy regulations, offering insights on how companies can navigate this complex landscape while building trust.

• Sonia's journey from aspiring architect to privacy expert, motivated by the intersection of civil rights and privacy
• The growing gap between rapid technological innovation and slower-moving regulatory frameworks
• Examining real-world tensions like WorldCoin's iris scanning under GDPR's biometric data provisions
• Why privacy should be a core business enabler rather than just a compliance checkbox
• The importance of implementing privacy by design as a living process that evolves with technology
• Why principles-based regulation allows for better adaptation to new technologies than prescriptive rules
• The inseparable relationship between privacy and security in building customer trust
• How privacy professionals can stay current through professional networks, podcasts, and continuous learning
• Essential privacy resources, including "The Unwanted Gaze" and "Determann's Field Guide to Data Privacy Law"

Find Sonia and her privacy consulting practice at tamarack.solutions or connect with her at the upcoming AI conference in Boston. Support the show

Masters of Privacy
Daniel Simmons-Marengo: Differential Privacy in practice

Masters of Privacy

Play Episode Listen Later Sep 14, 2025 33:52


How can we apply differential privacy to real-world scenarios? How do you go about algorithmic design? Is there a conflict between data minimization and differential privacy? Can you solve for personal data finding its way into machine learning models? Where can a young professional find resources to dive deeper?

References:
* Daniel Simmons-Marengo on LinkedIn
* OpenDP
* Some takeaways from PEPR'24 (USENIX Conference on Privacy Engineering Practice and Respect 2024)
* Damien Desfontaines: Differential Privacy in Data Clean Rooms (Masters of Privacy, January 2024)
* NIST Guidelines for Evaluating Differential Privacy Guarantees (March 2025)
* Peter Craddock: EDPS v SRB, the relative nature of personal data, processors, transparency, impact on MarTech and AdTech (Masters of Privacy, September 2025)
* Katharine Jarmul: Demystifying Privacy Enhancing Technologies (Masters of Privacy, October 2023)
* Sunny Kang: Machine Learning meets Privacy Enhancing Technologies (Masters of Privacy, February 2023)
* How GDPR changes the rules for research (Gabe Maldoff, IAPP blog, 2016)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe

The People's Countryside Environmental Debate Podcast
The Gown, The Town, and the TikTok Apocalypse

The People's Countryside Environmental Debate Podcast

Play Episode Listen Later Sep 14, 2025 32:26


Alex Kauffmann has resumed his role as principal moderator of 'The Listeners Chair', reclaiming the central chair from which audience questions are drawn. These are then picked apart and reframed within environmental contexts to tease out wider significance.

Lilly, from Summertown, Oxford, England sets the first question: "Should Oxford colleges open up more of their greenspaces to the wider population and tourists, or is it ok to keep them private, and is it only the poor driving the move as they want some of what the rich have? Some people actually attending the college don't even get to see some of the private internal areas. I heard it said opening up threatens students' GDPR protection, and that students don't really want to be disturbed. The Town and Gown rivalry still lives on, and the university often gets bad press. People often forget there are two universities in Oxford."

Stuart, forever Lilly's "expert" after one fateful consultation, swears they're basically besties, especially now she's firing off another sly jab at the critic who dared to be dismissive back then. He digresses into the "sleeve Olympics," where longer gown fabric apparently equals status. Then, like a city tour guide, he sketches a divide between the postcard-perfect centre and "real Oxford," the suburbs where life actually happens. Alex, all cynicism, scoffs that locals couldn't care less about polished lawns; they're strictly tourist bait. William, sounding like the tourist board, notes that plenty of colleges open their gates, sometimes free for residents, though all the quads blur together: same stones, same chapels. Back with Stuart, who moans these patches are so tiny you'd wreck your shoes circling them, and forget walking on grass. Alex delivers his verdict: if dons don't stroll freely, neither should tourists. William agrees students do deserve their hush-hush study sanctuaries, but insists that visitors tread as reverently as in a cathedral.

Luna, from San José del Cabo, Mexico, brings the next question: "I see the biggest threats to humanity outside of the multiple climate related issues as truth distortion, feral social media and runaway AI. What do you think?"

Stuart resets by clarifying that "threats to humanity" means existential doom, not oat-milk prices. He drops the wisdom of belly-button gazing: stare too long and all you get is fluff, not enlightenment. Translation? Stop spiraling and take action, even if it's just colour-coding your apocalypse survival kit alphabetically. Alex wonders whether "threats" means asteroids or endless propaganda. William connects social media, AI, and collapsing truth like red string on a board, warning not to trust any single source. Alex, ever the optimist, claims independent news influencers are thriving, which he counts as hopeful. He advises we stop fretting constant climate doom, since total self-destruction is unlikely. His news tip? "News Daddy," a TikTok oracle free of corporate spin. William closes with the mic-drop: the gravest threat to humanity is "believing our own bullshit." Hard to argue with that. Now hand me the navel fluff.

What do you make of this discussion? Do you have a question that you'd like us to discuss? Let us know by sending an email to thepeoplescountryside@gmail.com

Sign the Petition - Improve The Oxfordshire Countryside Accessibility For All Disabilities And Abilities: change.org petition

We like to give you an ad free experience. We also like our audience to be relatively small and engaged, we're not after numbers. This podcast's overall themes are nature, philosophy, climate, the human condition, sustainability, and social justice. Help us to spread the impact of the podcast by sharing this link with 5 friends podfollow.com/ThePeoplesCountrysideEnvironmentalDebatePodcast, support our work through Patreon patreon.com/thepeoplescountryside. Find out all about the podcast via this one simple link: linktr.ee/thepeoplescountryside

World of DaaS
The LM Brief: Navigating GDPR Compliance Outside the EU and UK

World of DaaS

Play Episode Listen Later Sep 12, 2025 16:07


This week's World of DaaS LM Brief looks at GDPR compliance for non-EU and non-UK companies. Firms handling EU or UK resident data are required to appoint local representatives who serve as points of contact for regulators and for individuals exercising their data rights. This step is critical for ensuring compliance with both the GDPR and the UK GDPR.

Listen to this short podcast summary, powered by NotebookLM.

EUVC
E576 | Dom Hallas, Startup Coalition: Founder-Led Policy & The Startup Coalition's Fight for European Tech

EUVC

Play Episode Listen Later Sep 12, 2025 57:05


In this episode, Andreas Munk Holm speaks with Dom Hallas, Executive Director of the UK's Startup Coalition, to explore how the organization is influencing policy at the intersection of startups, venture capital, and government. From immigration reform to capital access and regulatory red tape, Dom brings a candid view on what it takes to create real impact for founders across Europe. They dive into the power of founder-first advocacy, the evolving lobbying landscape in Europe, and the urgent need for a united tech voice across the continent.

Here's what's covered:

01:10 Why Policy is a Competitive Sport
03:42 GDPR, Brussels & Lessons from Tech Regulation
05:12 What is the Startup Coalition & Who Funds It?
07:13 The Three Buckets: Talent, Capital, Regulation
11:20 Why Founders Need Their Own Voice in Politics
16:31 Making Advocacy Fun, Human & Effective
17:56 What Startups Can Learn from Farmers
21:30 Time Horizons & Playbooks in Policy Work
26:18 How the Coalition Sets its Agenda
31:46 A Crossroads for European Tech
35:46 The Current Policy Agenda: Talent, Finance & Reg
43:27 Funding the Underfunded: Inclusion as Policy
47:01 Regulation That Clears the Way for the Next Thing

The FIT4PRIVACY Podcast - For those who care about privacy
Govern and Manage AI to Create Trust with Mark Thomas and Punit Bhatia in the FIT4PRIVACY Podcast E147 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Sep 11, 2025 32:46


Do you want to use AI without losing trust? What frameworks help build trust and manage AI responsibly? Can we really create trust while using AI? In this episode of the FIT4PRIVACY Podcast, host Punit Bhatia and digital trust expert Mark Thomas explain how to govern and manage AI in ways that build real trust with customers, partners, and society. This episode breaks down what it means to use AI responsibly and how strong governance can help avoid risks. You'll also learn about key frameworks, like ISO/IEC 42001, the EU AI Act, and the World Economic Forum's Digital Trust Framework, and how they can guide your AI practices. Mark and Punit also talk about how organizational culture, company size, and leadership affect how AI is used, and how trust is built (or lost). They discuss real-world tips for making AI part of your existing business systems, and how to make decisions that are fair, explainable, and trustworthy.

The Nonprofit Show
Third-Party Software Risks Nonprofits Overlook: Shadow IT, AI, and Donor Data

The Nonprofit Show

Play Episode Listen Later Sep 10, 2025 31:56


Nonprofits lean on outside platforms to save time and stretch budgets—but those relationships can quietly expose sensitive donor, client, and payment data. In this episode, Senior Cybersecurity Advisor Parker Brissette of Richey May explains how to recognize and manage third-party software risk before it becomes tomorrow's headline. He starts with a simple lens: follow the data. Where is it stored? Who can touch it—directly or indirectly? Many teams only think about contracted vendors, but Parker widens the aperture to “shadow IT” and consumer tools staff use without formal approval. As he puts it, “Third parties is really anybody that can touch the data at any point in your business, whether you have an agreement with them or maybe not.”

From privacy regulations (GDPR, CCPA) to sector-specific rules (HIPAA, PCI), nonprofits carry legal and reputational exposure the moment personal information enters their systems. Parker offers practical steps: inventory paid tools via your accounting system; ask, “If this vendor vanished tomorrow, what would break?”; and press vendors for proof—SOC 2 reports, ISO 27001, or completed security questionnaires. For organizations without a CIO, he recommends clear contracts and one non-negotiable safeguard: “The biggest thing that I recommend in any third-party engagement is setting an expectation of having cyber insurance, because that's a big protection for you financially.”

AI enters the picture with both promise and peril. Consumer AI tools can learn from and retain your uploads, potentially exposing proprietary or personal information. Enterprise agreements (e.g., Microsoft Copilot) can offer stronger data protections, but only if configured and used correctly. Parker's guidance is pragmatic: don't ban AI; set guardrails, choose vetted tools, and train teams.

Finally, he urges preparation and transparency. Incidents can happen—even with good controls. Donors and corporate funders expect frank communication about what protections exist and what happens if data is exposed. Build trust now by documenting safeguards, validating vendors, and rehearsing your response. You don't have to be a security expert to make smart choices—but you do need a map: know your systems, test your assumptions, ask vendors for evidence, and write risk into your contracts and budgets. That approach turns anxiety into action—and preserves the trust your mission depends on.

Find us Live daily on YouTube! Find us Live daily on LinkedIn! Find us Live daily on X: @Nonprofit_Show
Our national co-hosts and amazing guests discuss management, money and missions of nonprofits! 12:30pm ET / 11:30am CT / 10:30am MT / 9:30am PT
Send us your ideas for Show Guests or Topics: HelpDesk@AmericanNonprofitAcademy.com
Visit us on the web: The Nonprofit Show

VinciWorks
The Data (Use and Access) Act 2025 - What it means for your organisation

VinciWorks

Play Episode Listen Later Sep 10, 2025 48:50


The Data (Use and Access) Act 2025 (DUAA) has brought the most significant changes to UK data protection since UK GDPR came into force. While it doesn't replace GDPR, the DPA 2018, or PECR, the DUAA reshapes how organisations process personal data, handle subject access, manage cookies, and apply legitimate interests. In this episode, we share highlights from our live webinar, where VinciWorks experts explained how these reforms affect compliance strategies. From broad consent in scientific research and recognised legitimate interests, to expanded cookie exemptions, stricter rules for children's services, and higher PECR fines, the DUAA introduces both opportunities and risks.

Listen in to learn:
What the DUAA changes, and what stays the same
Updates to subject access rights and proportionality
Cookie rules, soft opt-in for charities, and tougher PECR fines
Automated decision-making and AI compliance under the DUAA
The new “data protection test” for international transfers
Practical steps to future-proof your compliance framework

This episode is essential listening for data protection officers, compliance professionals, and legal teams preparing for the future of UK data protection.

Data Transforming Business
Data Experts Question: Is Data Infrastructure Ready for Responsible AI?

Data Transforming Business

Play Episode Listen Later Sep 10, 2025 36:39


Welcome back to Meeting of the Minds, a special podcast episode series by EM360Tech, where we talk about the future of tech.

In this Big Data special episode of the Meeting of the Minds, our expert panel – Ravit Jain, podcast host; Christina Stathopoulos of Dare to Data, a data and AI evangelist; Wayne Eckerson, data strategy consultant and president of the Eckerson Group; and Kevin Petrie, VP of Research at BARC – come together again to discuss the key data and AI trends, particularly focusing on data ethics. They discuss ethical issues related to using AI, the need for data governance and guidelines, and the essential role of data quality in AI success. The speakers also look at how organisations can measure the value of AI through different KPIs, stressing the need for a balance between technical achievements and business results. Our data experts examine the changing role of AI across various sectors, with a focus on success metrics, the effects on productivity and employee stress, changes in education, and the possible positive and negative impacts of AI in everyday life. They highlight the need to balance productivity with quality and consider the ethics of autonomous AI systems.

In the previous episode, the panel discussed new challenges and opportunities in data governance, regulatory frameworks, and the AI workforce, examining the balance between innovation and ethical responsibility and how companies are handling these issues.

Tune in to gain new insights into the future of data and AI and how your enterprise can adapt to the upcoming changes and challenges. Hear how leaders in the field are preparing for a future that is already here.

Also watch: Meeting of the Minds: State Of Cybersecurity in 2025

Takeaways
Generative AI is creating a supply shock in cognitive power.
Companies are eager for data literacy and AI training.
Data quality remains a critical issue for AI success.
Regulatory frameworks like GDPR are shaping AI governance.
The US prioritises innovation, sometimes at the expense of regulation.
Generative AI introduces new risks that need to be managed.
Data quality issues are often the root of implementation failures.
AI's impact on jobs is leading to concerns about workforce automation.
Organisations must adapt to the probabilistic nature of generative AI.
The conversation around data quality is ongoing and evolving.
AI literacy and data literacy are crucial for workforce success.
Executives are more concerned about retraining than layoffs.
Younger workers may struggle to evaluate AI-generated answers.
Incremental changes in productivity are expected with AI.
Job displacement may not be immediate, but could create future gaps.
Human empathy and communication skills remain essential in many professions.
AI will augment, not replace, skilled software developers.
Global cooperation is needed to navigate...

BackTable Urology
Ep. 259 Prostate Tumor Board: Risk Stratification & Treatment Strategies with Dr. Neeraj Agarwal and Dr. Tyler Seibert

BackTable Urology

Play Episode Listen Later Sep 9, 2025 66:15


Could ongoing trials redefine the management of oligometastatic and advanced prostate cancer? In this installment of BackTable Tumor Board, leading prostate cancer experts Dr. Neeraj Agarwal, a medical oncologist from the University of Utah, and Dr. Tyler Seibert, a radiation oncologist from UC San Diego, join host Dr. Parth Modi to share their insights on the latest clinical trials and persistent challenges in managing prostate cancer.

---
This podcast is supported by Ferring Pharmaceuticals.
---
SYNOPSIS
The multidisciplinary discussion addresses clinical decision-making in active surveillance versus early intervention, the role of PSMA PET imaging in detection and treatment planning, and evolving strategies for metastatic and castration-resistant disease. They also evaluate the therapeutic potential of alpha emitters and radioligand therapies, consider the evidence behind treatment intensification and de-intensification, and explore how these approaches can be individualized to optimize patient outcomes.
---
TIMESTAMPS
0:00 - Introduction
1:48 - Active Surveillance in Low-Risk Prostate Cancer
7:08 - Molecular Testing and Risk Stratification
8:28 - Radiation Therapy Approaches
20:16 - PSA Recurrence and PSMA PET Scans
32:40 - The Role of ADT
37:15 - PSMA PET Scans
40:58 - Genetic Testing in High-Risk and Metastatic Prostate Cancer
46:54 - Treatment Intensification vs. De-Intensification Trials
55:59 - Castration-Resistant Prostate Cancer

ITSPmagazine | Technology. Cybersecurity. Society
The Transatlantic Broadcast Pilot Episode 2025: European Tech, Cybersecurity, and Society | ITSPmagazine Europe: The Transatlantic Broadcast Hosted by Marco Ciappelli, Rob Black, and Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Sep 6, 2025 32:12


Broadcasting from Florence and Los Angeles, I Had One of Those Conversations...

You know the kind—where you start discussing one thing and suddenly realize you're mapping the entire landscape of how different societies approach technology. That's exactly what happened when Rob Black and I connected across the Atlantic for the pilot episode of ITSPmagazine Europe: The Transatlantic Broadcast.

Rob was calling from what he optimistically described as "sunny" West Sussex (complete with biblical downpours and Four Seasons weather in one afternoon), while I enjoyed actual California sunshine. But this geographic distance perfectly captured what we were launching: a genuine exploration of how European perspectives on cybersecurity, technology, and society differ from—and complement—American approaches.

The conversation emerged from something we'd discovered at InfoSecurity Europe earlier this year. After recording several episodes together with Sean Martin, we realized we'd stumbled onto something crucial: most global technology discourse happens through an American lens, even when discussing fundamentally European challenges. Digital sovereignty isn't just a policy buzzword in Brussels—it represents a completely different philosophy about how democratic societies should interact with technology.

Rob Black: Bridging Defense Research and Digital Reality

Rob brings credentials that perfectly embody the European approach to cybersecurity—one that integrates geopolitics, human sciences, and operational reality in ways that purely technical perspectives miss.
As UK Cyber Citizen of the Year 2024, he's recognized for contributions that span UK Ministry of Defense research on human elements in cyber operations, international relations theory, and hands-on work with university students developing next-generation cybersecurity leadership skills.

But what struck me during our pilot wasn't his impressive background—it was his ability to connect macro-level geopolitical cyber operations with the daily impossible decisions that Chief Information Security Officers across Europe face. These leaders don't see themselves as combatants in a digital war, but they're absolutely operating on front lines where nation-state actors, criminal enterprises, and hybrid threats converge.

Rob's international relations expertise adds crucial context that American cybersecurity discourse often overlooks. We're witnessing cyber operations as extensions of statecraft—the ongoing conflict in Ukraine demonstrates how narrative battles and digital infrastructure attacks interweave with kinetic warfare. European nations are developing their own approaches to cyber deterrence, often fundamentally different from American strategies.

European Values Embedded in Technology Choices

What emerged from our conversation was something I've observed but rarely heard articulated so clearly: Europe approaches technology governance through distinctly different cultural and philosophical frameworks than America. This isn't just about regulation—though the EU's leadership from GDPR through the AI Act certainly shapes global standards. It's about fundamental values embedded in technological choices.

Rob highlighted algorithmic bias as a perfect example. When AI systems are developed primarily in Silicon Valley, they embed specific cultural assumptions and training data that may not reflect European experiences, values, or diverse linguistic traditions.
The implications cascade across everything from hiring algorithms to content moderation to criminal justice applications.

We discussed how this connects to broader patterns of technological adoption. I'd recently written about how the transistor radio revolution of the 1960s paralleled today's smartphone-driven transformation—both technologies were designed for specific purposes but adopted by users in ways inventors never anticipated. The transistor radio became a tool of cultural rebellion; smartphones became instruments of both connection and surveillance.

But here's what's different now: the stakes are global, the pace is accelerated, and the platforms are controlled by a handful of American and Chinese companies. European voices in these conversations aren't just valuable—they're essential for understanding how different democratic societies can maintain their values while embracing technological transformation.

The Sociological Dimensions Technology Discourse Misses

My background in political science and sociology of communication keeps pulling me toward questions that pure technologists might skip: How do different European cultures interpret privacy rights differently? Why do Nordic countries approach digital government services so differently than Mediterranean nations? What happens when AI training data reflects primarily Anglo-American cultural assumptions but gets deployed across 27 EU member states with distinct languages and traditions?

Rob's perspective adds the geopolitical layer that's often missing from cybersecurity conversations.
We're not just discussing technical vulnerabilities—we're examining how different societies organize themselves digitally, how they balance individual privacy against collective security, and how they maintain democratic values while defending against authoritarian digital influence operations.

Perhaps most importantly, we're both convinced that the next generation of European cybersecurity leaders needs fundamentally different skills than previous generations. Technical expertise remains crucial, but they also need to communicate complex risks to non-technical decision-makers, operate comfortably with uncertainty rather than seeking perfect solutions, and understand that cybersecurity decisions are ultimately political decisions about what kind of society we want to maintain.

Why European Perspectives Matter Globally

Europe represents 27 different nations with distinct histories, languages, and approaches to technology governance, yet they're increasingly coordinating digital policies through EU frameworks. This complexity is fascinating and the implications are global. When Europe implements new AI regulations or data protection standards, Silicon Valley adjusts its practices worldwide.

But European perspectives are too often filtered through American media or reduced to regulatory footnotes in technology publications. We wanted to create space for European voices to explain their approaches in their own terms—not as responses to American innovation, but as distinct philosophical and practical approaches to technology's role in democratic society.

Rob pointed out something crucial during our conversation: we're living through a moment where "every concept that we've thought about in terms of how humans react to each other and how they react to the world around them now needs to be reconsidered in light of how humans react through a computer mediated existence."
This isn't abstract philosophizing—it's the practical challenge facing policymakers, educators, and security professionals across Europe.

Building Transatlantic Understanding, Not Division

The "Transatlantic Broadcast" name reflects our core mission: connecting perspectives across borders rather than reinforcing them. Technology challenges—from cybersecurity threats to AI governance to digital rights—don't respect national boundaries. Solutions require understanding how different democratic societies approach these challenges while maintaining their distinct values and traditions.

Rob and I come from different backgrounds—his focused on defense research and international relations, mine on communication theory and sociological analysis—but we share curiosity about how technology shapes society and how society shapes technology in return. Sean Martin brings the American cybersecurity industry perspective that completes our analytical triangle.

Cross-Border Collaboration for European Digital Future

This pilot episode represents just the beginning of what we hope becomes a sustained conversation. We're planning discussions with European academics developing new frameworks for digital rights, policymakers implementing AI governance across member states, industry leaders building privacy-first alternatives to Silicon Valley platforms, and civil society advocates working to ensure technology serves democratic values.

We want to understand how digital transformation looks different across European cultures, how regulatory approaches evolve through multi-stakeholder processes, and how European innovation develops characteristics that reflect distinctly European values and approaches to technological development.

The Invitation to Continue This Conversation

Broadcasting from our respective sides of the Atlantic, we're extending an invitation to join this ongoing dialogue.
Whether you're developing cybersecurity policy in Brussels, building startups in Berlin, teaching digital literacy in Barcelona, or researching AI ethics in Amsterdam, your perspective contributes to understanding how democratic societies can thrive in an increasingly digital world.

European voices aren't afterthoughts in global technology discourse—they're fundamental contributors to understanding how diverse democratic societies can maintain their values while embracing technological change. This conversation needs academic researchers, policy practitioners, industry innovators, and engaged citizens from across Europe and beyond.

If this resonates with your own observations about technology's role in society, subscribe to follow our journey as we explore these themes with guests from across Europe and the transatlantic technology community. And if you want to dig deeper into these questions or share your own perspective on European approaches to cybersecurity and technology governance, I'd love to continue the conversation directly. Get in touch with us on LinkedIn!

Marco Ciappelli
Broadcasting from Los Angeles (USA) & Florence (IT)
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli

Rob Black
Broadcasting from London (UK)
On LinkedIn: https://www.linkedin.com/in/rob-black-30440819

Sean Martin
Broadcasting from New York City (USA)
On LinkedIn: https://www.linkedin.com/in/imsmartin

The transatlantic conversation about technology, society, and democratic values starts now.

Cloud Masters
GenAI at production scale: Why GenAI POCs fail and how AWS thinks about production readiness

Cloud Masters

Play Episode Listen Later Sep 5, 2025 50:44


We're joined by two GenAI experts from AWS and DoiT to understand why GenAI POCs fail at production scale, how to evaluate LLMs, and how to approach GenAI production readiness. The discussion covers four GenAI workload migration patterns, Amazon Bedrock's systematic migration framework, enterprise compliance challenges including HIPAA and GDPR requirements, and practical approaches to evaluating model performance across speed, cost, and quality.

The Charmed Studio Podcast for Artists
The Number 1 Thing Artists Leave Off Their Websites That Hurts Them The Most

The Charmed Studio Podcast for Artists

Play Episode Listen Later Sep 2, 2025 18:43


Here are some helpful links. See the entire original post here for the most support.

This video will help most WordPress users get the drift of how to set up the widget to create a sign-up form. Setting up this widget will only take 15 minutes. Congratulate yourself after you do it. Now do one more thing: set up an account with a free email service provider like MailChimp. You want to do this so you can nab that necessary bit of custom code you'll need to paste into that WordPress subscribe form widget you just set up. Learn how to wed your MailChimp account to your WordPress widget here. Congratulate yourself even more profusely after this step. Get yourself some roses or something. Because you just did something huge for your art business.

Oh God, Do I Have to Add MailChimp?

Unlike Gmail, MailChimp will gift your readers with an easy way to update their preferences or to unsubscribe altogether. You want to give people that option, right? It's the polite thing to do, and it keeps you safe from GDPR compliance troubles. Plus, once you master MailChimp, you'll swell up with pride. You could go with other email marketing software providers like Constant Contact or ConvertKit (but they start at around $29 to $49 a month), while MailChimp is free to begin. And as of this writing, MailChimp stays free until you have over 499 people on your list. Want extra credit? Put an afternoon's effort into learning how to make the most out of MailChimp, and you'll be giving your art business strong roots to grow from.

2. How To Create Your Weebly Email Sign-Up Form

Weebly's mailing list setup could be the easiest of the bunch. We are talking 5 minutes. (I am a WordPress woman, but the intuitive setup for the Weebly box gave me momentary Weebly-envy.) To learn how to add a MailChimp sign-up form to your Weebly site, go here.

3. How To Pop In Your Wix Subscribe Form

Check out this page and video to learn how to place a "Get Subscribers" form on your Wix site and add a snazzy "pop-up" sign-up form to boot. I bet you can do both in about 30 minutes. If you want to integrate MailChimp into your Wix sign-up setup, check out this video for extra help. But if you do nothing else here today, put a darn sign-up box on your Facebook page. Oh, and while you're helping yourself attract your ideal audience, discover how to add a gorgeous email signature to the bottom of your email for free in this Charmed Studio post. Okay, now on to taking the step that could make your mailing list soar, where others sink: adding a sign-up box on your social media.

How To Put A Sign-Up Form On Your Twitter or Facebook Page

Facebook helps many artists, but it's important to start a list that's yours for keeps. The same goes for X. You don't need me to tell you that Mark Zuckerberg or Elon Musk may have forgotten to put your well-being at the top of their to-do lists today. The only simple way I found is to hook yourself up via MailChimp. For Facebook, watch this straightforward instructional video here. To share your MailChimp subscribe form on X, read this. If you haven't set up a MailChimp account, go here first for your account set-up tutorial, then head to the Facebook Signup Form instructions. Done. You are now a captain of industry! Get yourself two bunches of tulips, one for your kitchen and one for your desk.

My P.S. (For the Tech-Challenged Among Us)

If you are not interested or able to install a subscribe form yourself, I still respect you. You can easily and cheaply hire a smart, techy type on sites like Fiverr or Upwork to put one in for you pronto. Got your subscribe form up, but it's not pulling in many folks? Check out this post on Turn Your Art Website Into an Attraction Magnet (Without Social Media). You'll discover how being yourself and changing the possibly boring wording in your current subscription form can change everything. (Here's my article on 4 great benefits of a small list.)

To be charming and subscribe to the blog and get free access to my writing toolkit for artists, click here. For info on one-on-one writing coaching with Thea, go here. This blog is produced by The Charmed Studio Blog and Podcast™, LLC. And when you get scared about writing and want to relax, remember what Anne Lamott says: "100 years, all new people." You can do this.

Occasionally my show notes contain Amazon or other affiliate links. This means if you buy books or stuff via my podcast link I may receive a tiny commission and do a happy dance. There is no extra fee for you. I only link to items I personally use and love: products I feel help heart-centered artists and writers. Thank you. :)

Serious Privacy
Mish mash a privacy splash

Serious Privacy

Play Episode Listen Later Aug 30, 2025 27:59


On this episode of Serious Privacy, Paul Breitbarth is away, so Ralph O'Brien and Dr. K Royal bring you a mish mash week in privacy. Topics include current news and a brief look at the differences between GDPR compliance and what US privacy laws require. If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and review us! From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.

The Tech Blog Writer Podcast
Fear vs FOMO: Kantar's View on AI Adoption in Marketing

The Tech Blog Writer Podcast

Play Episode Listen Later Aug 28, 2025 27:32


In this episode of Tech Talks Daily, I speak with Jane Ostler from Kantar, the world's leading marketing data and analytics company, whose clients include Google, Diageo, AB InBev, Unilever, and Kraft Heinz. Jane brings clarity to a debate often clouded by headlines, explaining why AI should be seen as a creative sparring partner, not a rival. She outlines how Kantar is helping brands balance efficiency with inspiration, and why the best marketing in the years ahead will come from humans and machines working together. We explore Kantar's research into how marketers really feel about AI adoption, uncovering why so many projects stall in pilot phase, and what steps can help teams move from experimentation to execution. Jane also discusses the importance of data quality as the foundation of effective AI, drawing comparisons to the early days of GDPR when oversight and governance first became front of mind. From Coca-Cola's AI-assisted Christmas ads to predictive analytics that help brands allocate budgets with greater confidence, Jane shares examples of where AI is already shaping marketing in ways that might surprise you. She also highlights the importance of cultural nuance in AI-driven campaigns across 90-plus markets, and why transparency, explainability, and human oversight are vital for earning consumer trust. Whether you're a CMO weighing AI strategy, a brand manager experimenting with new tools, or someone curious about how the biggest advertisers are reshaping their playbooks, this conversation with Jane Ostler offers both inspiration and practical guidance. It's about rethinking AI not as the end of creativity, but as the beginning of a new partnership between data, machines, and human imagination.

The FIT4PRIVACY Podcast - For those who care about privacy
Where Does Digital Trust Fit into Board's Agenda with Bruno Soares and Punit Bhatia in the FIT4PRIVACY Podcast E146 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Aug 28, 2025 28:42


Ever wondered where digital trust fits in your company's strategy? We live in a world that's buzzing with AI, cybersecurity, and digital innovation. Everywhere you look, there's a new app, a smarter tool, or a faster system. But in the middle of all this tech hype, there's one thing we often overlook: trust.

In this insightful conversation, Punit and Bruno discuss the crucial influence of technology, the economy, and other external factors on business strategies. They delve into how companies navigate different environments, the role of digital transformation, and the importance of maintaining a balanced ecosystem approach. If you're a leader, strategist, privacy professional, or tech enthusiast trying to make sense of innovation, trust, and governance in today's world, this conversation is a must-watch.

KEY CONVERSATION
00:02:02 What is the concept of digital trust? Was it trust enough?
00:04:40 Can we expect digital trust in an emerging world of new technology in 10-20 years?
00:09:15 Is the board convinced about the value of digital trust, or are they still in compliance mode?
00:13:15 How do we sell this concept of digital trust to the boards?
00:18:51 Linking the concepts of trust, security, and privacy to the broader agenda
00:25:58 What is it that you can sell them with, and how can they reach out?

ABOUT GUEST
Bruno Horta Soares is a seasoned executive advisor, professor, and keynote speaker with over 20 years of experience in Governance, Digital Transformation, Risk Management, and Information Security. He is the founder of GOVaaS – Governance Advisors as-a-Service and has worked with organizations across Portugal, Angola, Brazil, and Mozambique to align governance and technology for sustainable business value. Since 2015, Bruno has served as Leading Executive Senior Advisor at IDC Portugal, guiding C-level leaders in digital strategy, transformation, governance, and cybersecurity. He is also a professor at top Portuguese business schools, including NOVA SBE, Católica Lisbon, ISCTE, ISEG, and Porto Business School, teaching in Masters, MBA, and Executive programs on topics such as IT Governance, Cybersecurity, Digital Transformation, and AI for Leadership. He holds a degree in Management and Computer Science (ISCTE), an executive program in Project Management (ISLA), and numerous professional certifications: PMP®, CISA®, CGEIT®, CRISC™, ITIL®, ISO/IEC 27001 LA, and COBIT® Trainer. As a LEGO® SERIOUS PLAY® Facilitator, he brings creativity into strategy and leadership development. Bruno received the ISACA John Kuyers Award for Best Speaker in 2019 and is the founder and current President of the ISACA Lisbon Chapter. A frequent international speaker, he shares expertise on governance and digital innovation globally.

ABOUT HOST
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentor and coach professionals. Punit is the author of the books "Be Ready for GDPR" (rated as the best GDPR book), "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured amongst the top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named "ABC for joy of life", which he passionately shares. Punit is based out of Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/brunohsoares/
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy

Innovation in Compliance with Tom Fox
Global Outsourcing and GDPR Compliance - Navigating Challenges and Opportunities with Inge Zwick

Innovation in Compliance with Tom Fox

Play Episode Listen Later Aug 26, 2025 23:12


Innovation comes in many areas, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast.

In this episode, Tom Fox interviews Inge Zwick, a senior leader from Emapta Global, a global outsourcing company, who elaborates on his experience working in different international locations, including the Philippines and now Italy. Zwick discusses the complexities and common concerns around outsourcing under GDPR, emphasizing the importance of compliance and data protection, and explains how Emapta supports clients in achieving GDPR compliance while outsourcing, including risk assessments, data flow mapping, and maintaining secure work environments. The conversation delves into the practical aspects of handling Subject Access Requests (SARs), the integration of compliance into operational workflows, and the importance of ongoing monitoring and updates. Zwick also touches upon how ESG initiatives and compliance are seamlessly woven into Emapta's operations, providing a sustainable approach to global outsourcing. Lastly, Zwick advises business leaders on how to future-proof their outsourcing strategies in light of GDPR, encouraging them not to shy away from global talent opportunities due to compliance fears.

Key highlights:
Company Overview and Global Operations
Outsourcing and GDPR Compliance
Risk Assessment and Data Security
Subject Access Requests (SARs)
Outsourcing Contracts and GDPR Obligations
Integrating Compliance into Operations
Future-Proofing Your Outsourcing Strategy

Resources:
Connect with Inge Zwick: LinkedIn
Connect with Emapta Global: Website | LinkedIn
Tom Fox: Instagram | Facebook | YouTube | Twitter | LinkedIn

Awakening
Court Corruption and Is the Earth Flat with Peter Wilson

Awakening

Play Episode Listen Later Aug 20, 2025 89:08


Another call with Peter Wilson discussing corrupt courts, GDPR, NASA lies, Flat Earth, and lots more. #commonlaw #naturallaw #sovereignty

About my Guest:
Ex Royal Navy gunner and armourer, turned professional fighter. Owned and ran his own martial arts gym for about 30 years. Always been aware of something not being right in the world; went deep into it after losing over £1 million of property in one week, including his own home. So he has been up and been down, even living in a car for a while with his wife Janine and four dogs.

---
Awakening Podcast Social Media / Coaching and My Other Podcasts: https://roycoughlan.com/

What we Discussed:
00:40 Updates on Court Cases
02:25 Don't go to the County Court
03:53 CPR Book and Court Rules
08:05 The Trick the Debt Collectors use
12:20 They Write to the Legal Entity
13:50 There are a few ways to defend yourself
15:40 The Bank cannot use a PO Box
20:10 To the Agent and the Principal
21:50 Can we take them to Court for the illegal Trickery
23:00 Can AI be done by good People
29:30 AI Covering GDPR in Europe
30:50 Using AI to fight the Corrupt Banks that wiped us both out
32:00 How much work is needed Training the AI
36:08 Can you Scan a letter and know how to respond
37:25 Why are the laws in Latin and French
39:50 Judges Rude in Court
40:50 High Court Case Laws that can Help You
46:15 How are Court Case Fees Calculated?
48:00 If the Debt was sold, can the original debtors come after you
50:20 Are the Court Costs 5%, similar to Poland
51:05 How Long do the Cases Take
52:15 Preparing the File for Your Court Case
54:15 The Courts are Set up to Intimidate You
57:10 How to Prepare with Breathwork and Meditation in Court
1:01:40 The Stupidity of the Wigs they Wear in Court
1:03:55 The Currents are Regular
1:10:45 Why Does NASA Lie
1:12:00 Satellite Lies
1:14:20 Why are the Flight Patterns showing Flat Earth
1:17:00 Shadows on Clouds from the Plane
1:18:40 What is Happening in Antarctica
1:19:20 The Suez Canal is all the same Level
1:20:25 When Firing from a Navy Boat they did not allow for the Curve of the Earth
1:22:10 When I had Flat Earth Dave on my Podcast
1:24:30 How do so many people look different

How to Contact Peter:
https://www.skool.com/check-mate-the-matrix-2832/about?ref=f30a0a71fea743aa8f9b8fb632d6129c
https://www.claimyourstrawman.com/
https://linktr.ee/PeterWilsonReturnToDemocracy

More about the Awakening Podcast:
All episodes can be found at www.awakeningpodcast.org
My Facebook Group Mentioned in this Episode: https://www.facebook.com/profile.php?id=61572386459383
Our Facebook Group can be found at https://www.facebook.com/royawakening
#checkmatethematrix #ucc #peterwilson #corruptcourts #flatearth

Technology Tap
I'm back and Security Plus Chapter 1

Technology Tap

Play Episode Listen Later Aug 20, 2025 25:33 Transcription Available


Professor JRod makes a triumphant return to Technology Tap after a year-long hiatus, bringing listeners up to speed on his personal journey and diving straight into Security Plus 701 fundamentals. Having completed his doctorate and then focused on his health, resulting in an impressive 50-pound weight loss, he reconnects with his audience with the same passion and expertise that made the podcast popular.

The heart of this comeback episode centers on essential cybersecurity concepts, beginning with the CIA triad (confidentiality, integrity, availability) that forms the foundation of information security. Professor JRod expertly breaks down complex frameworks including NIST and ISO/IEC standards as well as compliance-driven approaches like HIPAA and GDPR, explaining how organizations should select frameworks based on their specific industry requirements. With his trademark clear explanations, he walks listeners through gap analysis, a methodical approach to identifying the differences between an organization's current security posture and its desired standard. The episode then transitions to a comprehensive overview of access control models (Discretionary, Mandatory, Role-Based, Attribute-Based, and Rule-Based), each illustrated with practical examples that bring abstract concepts to life.

What sets this episode apart is the interactive element: Professor JRod concludes with practice questions that challenge listeners to apply their newly acquired knowledge. This practical approach bridges the gap between theory and real-world implementation, making complex security concepts accessible to professionals and students alike. Whether you're preparing for certification or simply expanding your cybersecurity knowledge, this return episode delivers valuable insights from an educator who clearly missed sharing his expertise with his audience.

Support the show

If you want to help me with my research, or to join my question/answer Zoom class, e-mail me at ProfessorJRod@gmail.com

Art by Sarah/Desmond
Music by Joakim Karud
Little Chacha Productions

Juan Rodriguez can be reached at:
TikTok @ProfessorJrod
ProfessorJRod@gmail.com
@Prof_JRod
Instagram ProfessorJRod
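To make one of the access control models above concrete, here is a minimal role-based access control (RBAC) sketch in Python. The roles, users, and permissions are illustrative assumptions for study purposes, not material from the episode:

```python
# Minimal role-based access control (RBAC) sketch: users map to roles,
# and roles map to sets of permissions. All names are illustrative.

ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

USER_ROLES = {
    "alice": "admin",
    "bob":   "viewer",
}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if the user's role carries the requested permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("alice", "delete"))  # True: admins may delete
print(is_allowed("bob", "write"))     # False: viewers may only read
```

By contrast, an attribute-based (ABAC) model would evaluate attributes of the user, resource, and environment at request time instead of consulting a fixed role table, and a mandatory (MAC) model would compare security labels set by a central authority.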

Apptivate
Exposing the myth of ‘perfect data' with Piotr Prędkiewicz (FORMEL SKIN)

Apptivate

Play Episode Listen Later Aug 20, 2025 32:50


Questions Piotr addresses in this episode:
What is FORMEL SKIN, and how does it solve dermatology's bottleneck in Germany?
How did Piotr's career in analytics develop across multiple verticals?
Why is ‘perfect data' a myth in mobile marketing?
How do you responsibly track and aggregate users before registration?
What's the difference between front-end and back-end behavioral data?
How do device/user mismatches and changes create analytics headaches?
What are the new challenges and gray areas in privacy (GDPR, CCPA, device fingerprinting)?
Where does fraud hide in aggregated data, and how do you find it?
Why does fraud persist, and what incentives make it so durable?
How could success in mobile marketing be measured differently to promote collaboration and integrity?

Timestamps:
(0:00) – Introducing FORMEL SKIN, Piotr's role, and Germany's digital dermatology
(1:18) – Marketing analytics in dating, fintech, health
(2:50) – Why ‘perfect data' is a myth
(5:00) – Assigning pseudo-user IDs, device-based tracking
(6:00) – Aggregated data, ‘chasing ghosts,' and its pitfalls
(8:00) – Combining front-end and back-end data; challenges in stitching
(9:36) – Device vs. user: confusion, mismatches, and noise
(11:13) – Balancing privacy vs. marketing needs; legal and business conflicts
(12:30) – Device fingerprinting: what's legal, what's risky, and why
(14:22) – The end of one-to-one attribution; rise of aggregated, top-level analysis
(16:05) – Marketing fraud: what's changed, sneaky affiliate/network tricks
(19:08) – Incentives, alignment failures, and why fraud persists
(21:40) – Filtering fraud: long onboarding, compliance, and technical vigilance
(23:38) – ‘Success' in mobile marketing and why responsibility must be shared
(32:08) – Wrap up

Quotes:
(2:50) "Don't expect perfect data – especially in marketing where different data sources are being combined."
(5:10) "You try to anchor it to the device…within all the data security and the privacy setup and anchor it to this entity and create one entity."
(15:26) "We can use aggregated data for strategic decisions, like how to shift budgets from channel A to B."

Mentioned in This Episode:
Piotr Prędkiewicz's LinkedIn
FORMEL SKIN
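The pre-registration tracking Piotr describes, anchoring events to a device-based pseudo-user ID and stitching them to the real user once they sign up, can be sketched as follows. This is a minimal illustration under assumed names, not FORMEL SKIN's actual pipeline:

```python
import uuid

# Device identifier -> stable pseudo-user ID, assigned on first sight.
pseudo_ids: dict[str, str] = {}
# Events keyed by pseudo-user ID until registration links them to a real user.
events: dict[str, list[str]] = {}
# Filled in at registration: pseudo-user ID -> real user ID.
identity_links: dict[str, str] = {}

def track(device_id: str, event: str) -> str:
    """Record an event against the device's pseudo-user ID (creating one if needed)."""
    pid = pseudo_ids.setdefault(device_id, str(uuid.uuid4()))
    events.setdefault(pid, []).append(event)
    return pid

def register(device_id: str, user_id: str) -> None:
    """Stitch the device's pre-registration history to the registered user."""
    pid = pseudo_ids.get(device_id)
    if pid is not None:
        identity_links[pid] = user_id

def history(user_id: str) -> list[str]:
    """All events for a user, including those tracked before registration."""
    return [e for pid, uid in identity_links.items() if uid == user_id
            for e in events.get(pid, [])]

track("device-123", "app_open")
track("device-123", "viewed_pricing")
register("device-123", "user-42")
print(history("user-42"))  # ['app_open', 'viewed_pricing']
```

The episode's "chasing ghosts" problem shows up exactly here: if the device identifier resets or the same person uses two devices, two pseudo-users exist until (and unless) registration stitches them together.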

Cloud Wars Live with Bob Evans
Oracle's Wei Hu Talks Raft Replication, Vector Search, and Global Resilience | Cloud Wars Live

Cloud Wars Live with Bob Evans

Play Episode Listen Later Aug 19, 2025 23:10


Wei Hu is the Senior Vice President, High Availability Technologies, at Oracle. In today's Cloud Wars Live, Hu sits down with Bob Evans for a wide-ranging discussion on Oracle's globally distributed database, AI-native workloads, and how Oracle is helping businesses meet data sovereignty requirements while delivering high performance, elasticity, and always-on availability across regions.

Where AI Meets Your Data

The Big Themes:

Globally Distributed Exadata Database on Exascale: Oracle's Globally Distributed Exadata Database on Exascale Infrastructure delivers something few cloud providers can: high performance, high availability, and full compliance. Built on Oracle's Exadata platform, this architecture removes the traditional need to purchase or manage hardware, and organizations can start small and elastically scale compute and memory across multiple regions.

Agentic AI and Vector Search at Enterprise Scale: Oracle's database innovation is designed for real-world AI demands, especially agentic AI. AI agents need massive compute, consistent availability, and extremely fast access to live business data. Oracle's globally distributed architecture supports in-memory vector indexes for fast retrieval-augmented generation (RAG), making AI more responsive and effective. Additionally, Oracle keeps AI close to the data, eliminating stale-data issues and ensuring compliance.

Built for a Sovereign Cloud World: Data residency and sovereignty are no longer optional; they're legal imperatives. Countries around the world are implementing strict rules on where data must be stored, how it can be accessed, and who can process it. Oracle addresses these challenges with policy-driven data distribution, allowing customers to define how and where data lives. Whether it's compliance with India's payment data regulations or Europe's GDPR, Oracle enables precise control without requiring app changes or replication of the full stack.

The Big Quote: "The other thing that's interesting about agentic AI is that it's very dynamic. The work comes in, the demands come in, like a tidal wave. Then it goes away, right, then a little, then when it comes again, there's another tidal wave. So, what you really want to do is have an infrastructure that is elastic, that can scale up and down depending on the demand."

More from Wei Hu and Oracle: Connect with Wei Hu on LinkedIn and learn more about Globally Distributed Exadata Database on Exascale Infrastructure.

* Sponsored Podcast *

Visit Cloud Wars for more.
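The vector-search-for-RAG idea discussed above reduces to a simple core: documents are stored as embedding vectors, and retrieval returns the entries closest to the query embedding. Below is a toy sketch using cosine similarity over a flat list; the three-dimensional embeddings and corpus are invented for illustration and stand in for a learned embedding model plus an in-memory vector index, not Oracle's AI Vector Search API:

```python
import math

# Toy corpus of (text, embedding) pairs. A real system would embed documents
# with a model and query a vector index rather than scanning a list.
corpus = [
    ("invoice policy",   [0.9, 0.1, 0.0]),
    ("refund rules",     [0.8, 0.3, 0.1]),
    ("holiday schedule", [0.0, 0.2, 0.9]),
]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k corpus texts most similar to the query embedding."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding near the 'billing' region of the toy space:
print(retrieve([1.0, 0.2, 0.0]))  # ['invoice policy', 'refund rules']
```

The retrieved texts would then be passed to the language model as grounding context, which is why keeping the index in memory and close to live data, as the episode emphasizes, matters for both latency and freshness.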