Artificial intelligence-based human image synthesis technique
Professor Sander van der Linden's fascination with misinformation traces back to his own family's scars under Nazism, a regime that perfected propaganda so insidious it seeped invisibly into everyday life. In his award-winning book Foolproof, van der Linden argues that misinformation is a psychological virus, spreading through minds the way a biological virus hijacks cells. From Third Reich leaflets to modern algorithms, the tools have evolved, but the core infection remains the same: exploiting our mental shortcuts and illusions of truth. As the Capitol riots show, unchecked lies can kill. Van der Linden's mission is urgent: to build psychological vaccines that inoculate society against misinformation before deepfakes, AI, and data-hungry platforms unleash the next generation of invisible propaganda and push us past the point of no return.

Guest bio: Professor Sander van der Linden is a Cambridge social psychologist and award-winning author of Foolproof. He pioneered the concept of "psychological vaccines" against misinformation, advises governments and tech companies worldwide, and his research has shaped global strategies to combat fake news, conspiracy theories, and the growing threat of deepfakes. Van der Linden is the Professor of Social Psychology in Society in the Department of Psychology at the University of Cambridge and Director of the Cambridge Social Decision-Making Lab.

SUPPORT THE ZACH SHOW BY SUBSCRIBING TO THE ZACH SHOW 2.0 (BONUS EPISODES & EXCLUSIVE CONTENT): https://auxoro.supercast.com/

DR. SANDER VAN DER LINDEN LINKS:
Bio: https://www.psychol.cam.ac.uk/people/sander-van-der-linden
Foolproof (Book): https://bit.ly/4lAjwjE
Twitter (X): https://x.com/sander_vdlinden
Website: https://www.sandervanderlinden.com/

THE ZACH SHOW LINKS:
Instagram: https://www.instagram.com/auxoro/
YouTube: https://bit.ly/3CLjEqF
Newsletter: https://therealzachwrites.substack.com/
TikTok: https://www.tiktok.com/@auxoropod

To support the show, please leave a review on Spotify and Apple Podcasts. This nudges the algorithm to show The Zach Show to more new listeners and is the best way to help the show grow. Thank you for your support:
Review us on Apple Podcasts: https://bit.ly/458nbha
Review us on Spotify: https://bit.ly/43ZLrAt
Brought to you by TogetherLetters & Edgewise!

In this episode:
Elon Musk's early birthday present to himself—the first Tesla just drove itself from its factory straight to the customer
Denmark to tackle deepfakes by giving people copyright to their own features
Mammals could regenerate damaged tissue by turning on 'genetic switch': Chinese team
Grammarly acquires AI email client Superhuman
Cloudflare introduces blocking of A.I. scrapers by default
ICEBlock climbs to the top of the App Store charts after officials slam it
Bioprinted organs '10–15 years away,' says startup regenerating dog skin

Weird and Wacky:
Someone Built an Ad Blocker for Real Life, and I Can't Wait to Try It
The first autonomous robot football match happened in China and it's like watching drunk toddlers with fewer ethical setbacks

Tech Rec:
Sanjay - NHTSA: Check for Recalls: Vehicle, Car Seat, Tire, Equipment
Adam - Trmnl

Find us here: sanjayparekh.com & adamjwalker.com
Tech Talk Y'all is a proud production of Edgewise.Media.
Artificial intelligence is advancing faster than we can imagine. But while we marvel at its benefits, we are also beginning to see its darkest, most dangerous side. In this episode of Futuro en Construcción, we explore its most worrying aspects. AI is a powerful tool, capable of transforming our personal and working lives, but it can also become a major risk. And the worst thing we can do in the face of that is fail to understand the dangers, or look the other way. This video is not meant to scare you, but to help you think: what real risks exist, how can we mitigate them, and what urgent decisions do we need to take as a society, and as individuals?
A video of Justin... that Justin never made. This was the shocking moment that shook the families on Parental Guidance—a deepfake designed to highlight how real, how convincing, and how dangerous AI can be for our kids. In today's recap, we dive into the confronting realities of deepfakes, online safety, and skyrocketing screen time. From fake videos to 65-hour screen weeks, how do we protect our kids in this hyperconnected world?

KEY POINTS:
Deepfakes are real, fast, and easy. It took producers just 20 minutes to create a convincing fake video of Justin. This raises serious concerns about privacy, consent, and exploitation—especially for teens.
The gendered impact is severe. 98% of deepfake porn targets women and girls, with devastating emotional and psychological consequences.
The 'be careful what you post' advice is outdated. Even innocent images like yearbook photos can be misused. The real focus needs to be on teaching empathy, respect, and legal consequences.
Screen time shock. One family clocked 21 devices for 6 people. A parent averaged 65 hours a week—over 9 hours a day—and her son hit 16 hours in a single day.
The real cost of screen time. Excessive screens impact brain development, sleep, social skills, physical health, and relationships.
Digital connections aren't enough. Kids need real-life, face-to-face interaction for emotional and neurological growth—nine times more brain activation happens in person than on Zoom.

QUOTE OF THE EPISODE:
"When we spend time in front of a screen—especially when it's excessive—we do not grow our brain. In fact, the research suggests there's less grey matter. That is a massive problem." – Dr Justin Coulson

RESOURCES MENTIONED:
The Parenting Revolution by Dr Justin Coulson
TV Show: Parental Guidance Season 3 – available on 9Now
Website: happyfamilies.com.au

ACTION STEPS FOR PARENTS:
Talk about it. Have open, honest conversations with your kids about AI, deepfakes, and consent.
Teach empathy. Reinforce that using someone's image without consent is a violation of their humanity—and often a crime.
Audit your screens. Count your family's devices and track your own usage. Awareness is step one.
Prioritise real-world connection. Schedule offline time daily. Prioritise nature, play, face-to-face conversations, and physical activity.
Set boundaries. Create clear family rules about screen time, nighttime device use, and online safety.
Model the behaviour. Kids notice if we say one thing but do another. Manage your own screen habits.
Stay informed. Keep up with emerging tech risks and solutions. Read, listen, learn, and adjust.

See omnystudio.com/listener for privacy information.
In this episode, Aaron and I explore how AI is reshaping motivational content, from remixed clips of figures like Huberman and Dispenza to imagined mashups of Steve Jobs and Tony Robbins. We dive into the evolution of motivational media, questioning whether AI-enhanced messages empower or dilute the core message. Using stories like the Outdoor Boys' content remixing, we examine what this means for creators and consumers. The takeaway: motivation has gone digital, but action still matters most. Whether crafted by humans or algorithms, the real growth comes when we turn inspiration into effort and move beyond passive consumption into real-world application.--------- EPISODE CHAPTERS ---------(0:00:01) - Lessons and Implementation via AI(0:03:56) - Navigating Information in the Digital Age(0:17:51) - AI Recreates Inspirational Commencement Speech(0:31:47) - Building Habits and Motivation(0:35:35) - Exploring AI and Digital Influence(0:41:01) - Navigating Reality in Digital Era(0:49:14) - Implementing Lessons From Books and AISend us a text
The Danish Government is set to allow citizens to claim property rights over their features and voices in a bid to tackle the impact of AI deepfakes. The proposed legislation would give people the right to ask platforms to take down deepfake content that infringes their copyright. Copyright lawyer Rick Shera is sceptical about the prospect of this new law making a difference. "The real issue for me is that it doesn't really address the underlying problem - which is the harm that's caused by deepfakes of a particularly intimate nature." LISTEN ABOVE
See omnystudio.com/listener for privacy information.
Newest Deepfake Thwart: Copyright Yourself! by Nick Espinosa, Chief Security Fanatic
The latest on efforts to tackle deepfakes generated through artificial intelligence, a surprising statistic about teen workers and family income, and a businesswoman making her mark in auto racing.
Concerns about the rise and rapid development of artificial intelligence often tend to focus on AI's threat to jobs or its potential to influence politics and elections. But what about the very real threat that AI poses to women? In her new book, The New Age of Sexism, feminist writer Laura Bates explores how the ever-evolving world of technology has become a danger to women and how the expanding scope of what's possible online is “reinventing misogyny.” In this episode, Bates talks to Róisín Ingle about the real harm caused by pornographic deepfakes, the alarming rise of AI girlfriends, and her eye-opening visit to a cyber brothel in Berlin.But first Irish Times journalist Niamh Towey is here to talk about some of the biggest stories of the week including the new Women's Aid report showing the rise in domestic violence disclosures, the latest in the Annie McCarrick case and the furore over the Bezos wedding in Venice. Hosted on Acast. See acast.com/privacy for more information.
In honor of National Safety Month, this special compilation episode of AI and the Future of Work brings together powerful conversations with four thought leaders focused on designing AI systems that protect users, prevent harm, and promote trust.

Featuring past guests:
Silvio Savarese (Executive Vice President and Chief Scientist, Salesforce) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/15548310
Navindra Yadav (Co-founder & CEO, Theom) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/12370356
Eric Siegel (CEO, Gooder AI & Author) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14464391
Ben Kus (CTO, Box) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14789034

✅ What You'll Learn:
What it means to design AI with safety, transparency, and human oversight in mind
How leading enterprises approach responsible AI development at scale
Why data privacy and permissions are critical to safe AI deployment
How to detect and mitigate bias in predictive models
Why responsible AI requires balancing speed with long-term impact
How trust, explainability, and compliance shape the future of enterprise AI

Resources:
Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe

Other special compilation episodes:
Ethical AI in Hiring: How to Stay Compliant While Building a Fairer Future of Work (HR Day Special Episode)
Data Privacy Day Special Episode: AI, Deepfakes & The Future of Trust
The Future of AI Ethics Special: Perspectives from Women Leaders in AI on Bias, Accountability & Trust
World Health Day Special: How AI Is Making Healthcare Smarter, Cheaper, and Kinder
In this episode, Graham unravels Operation Endgame - the surprisingly stylish police crackdown that is seizing botnets, mocking malware authors with anime videos, and taunting cybercriminals via Telegram. Meanwhile, Carole exposes the AI-generated remote hiring threat. Could your next coworker be a North Korean hacker with a perfect LinkedIn? And BBC cyber correspondent Joe Tidy joins us to talk about "Ctrl-Alt-Chaos", his new book diving into the murky world of teenage hackers, ransomware gangs, and the strange motivations that lie behind digital mayhem. Plus: competitive pond husbandry, dead slugs, Hitster the board game, and a shoutout to the AI startup that hijacked Graham's SEO. All this and more is discussed in episode 423 of the "Smashing Security" podcast by cybersecurity veterans Graham Cluley and Carole Theriault - it's like a cauldron of life... but for cybersecurity. Warning: This podcast may contain nuts, adult themes, and rude language.

Episode links:
Operation Endgame.
Ctrl+Alt+Chaos.
Lizard Squad Member: Why I Took Down Xbox and PlayStation - YouTube.
Reckoning With the Rise of Deepfakes - The Regulatory Review.
Deepfake interviews: Navigating the growing AI threat in recruitment and organizational security - Fast Company.
Why Your Hiring Process is Now a Cybersecurity Vulnerability - Pindrop.
Best Practices for Defeating Deepfake Candidate Fraud - Dice Hiring.
Phanpy - A minimalistic opinionated Mastodon web client.
How to make a mini pond - Gardener's World.
Hitster board game.
Smashing Security merchandise (t-shirts, mugs, stickers and stuff)

Sponsored by:
Vanta - Expand the scope of your security program with market-leading compliance automation… while saving time and money. Smashing Security listeners get $1000 off!
Flare - Uncover the latest threats across the dark web and Telegram. Start your free trial today.
Trelica by 1Password - Access Governance for every SaaS app. Discover, manage, and optimize access for any of your SaaS apps - whether managed or unmanaged.

SUPPORT THE SHOW:
Tell your friends and colleagues about "Smashing Security", and leave us a review on
You've probably come across content online that seems real but is actually fake. And that experience is becoming more common with the proliferation of AI generated content. Our guest this week points out that the mental gymnastics of this starts to take a toll. Jia Tolentino, a staff writer at The New Yorker, wrote a piece all about this aptly titled, “My Brain Finally Broke.” She joins WITHpod to discuss how AI is changing our perception of the world, how online content can make us more likely to detach from reality and more. Note: this episode was recorded on 6/11/25.
Medios y redes with Mario Campos
Relebogile Mabotja speaks to Johan Steyn, the human-centred AI advocate, about AI deepfake pornography, which involves creating non-consensual, sexually explicit images or videos by superimposing a student's face onto fabricated intimate scenes. It's now emerging as a horrifying form of cyberbullying. 702 Afternoons with Relebogile Mabotja is broadcast live on Johannesburg-based talk radio station 702 every weekday afternoon. Relebogile brings a lighter touch to some of the issues of the day as well as a mix of lifestyle topics and a peek into the worlds of entertainment and leisure. Thank you for listening to a 702 Afternoons with Relebogile Mabotja podcast. Listen live on Primedia+ weekdays from 13:00 to 15:00 (SA Time) to Afternoons with Relebogile Mabotja broadcast on 702 https://buff.ly/gk3y0Kj For more from the show go to https://buff.ly/2qKsEfu or find all the catch-up podcasts here https://buff.ly/DTykncj Subscribe to the 702 Daily and Weekly Newsletters https://buff.ly/v5mfetc Follow us on social media: 702 on Facebook https://www.facebook.com/TalkRadio702 702 on TikTok: https://www.tiktok.com/@talkradio702 702 on Instagram: https://www.instagram.com/talkradio702/ 702 on X: https://x.com/Radio702 702 on YouTube: https://www.youtube.com/@radio702 See omnystudio.com/listener for privacy information.
Can we fix the news? On this minisode, Neil deGrasse Tyson and co-host Gary O'Reilly sit down with Harleen Kaur, former space engineer and founder of Ground News, to explore our current media landscape, navigating bias, and fixing the internet.Go to https://ground.news/startalk to stay fully informed on the latest Space and Science news. Save 40% off through our link for unlimited access to the Vantage plan this month.NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here:https://startalkmedia.com/show/fixing-the-internet-with-harleen-kaur-bonus-minisode/ Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early.Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus.
How is AI actually being used in classrooms today? Are teachers adopting it, or resisting it? And could software eventually replace traditional instruction entirely?In this episode of This Week in Consumer AI, a16z partners Justine Moore, Olivia Moore, and Zach Cohen explore one of the most rapidly evolving — and widely debated — frontiers in consumer technology: education.They unpack how generative AI is already reshaping educational workflows, enabling teachers to scale feedback, personalize curriculum, and reclaim time from administrative tasks. We also examine emerging consumer behavior — from students using AI for homework to parents exploring AI-led learning paths for their children. Resources:Find Olivia on X: https://x.com/omooretweetsFind Justine on X: https://x.com/venturetwinsFind Zach on X: https://x.com/zachcohen25 Stay Updated: Let us know what you think: https://ratethispodcast.com/a16zFind a16z on Twitter: https://twitter.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zSubscribe on your favorite podcast app: https://a16z.simplecast.com/Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
June 20, 2025: Drex dives deep into a chilling case study of AI-powered financial fraud. Learn how cybercriminals used deepfake technology to steal $5 million through a fake CEO video call, and discover why 90% of internet content could be AI-generated by 2026. Learn what to look for and where the threats are evolving in this episode of UnFake.NotebookLM podcast sourced from Eric O'Neill | Speaker, Cybersecurity Expert, Legendary Spy HunterRemember, Stay a Little Paranoid X: This Week Health LinkedIn: This Week Health Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
Link to episode page This week's Cyber Security Headlines – Week in Review is hosted by Rich Stroffolino with guest Howard Holton, COO and industry analyst, GigaOm Thanks to our show sponsor, Adaptive Security As deepfake scams and GenAI phishing evolve, Adaptive equips security teams with AI-powered phishing simulations featuring realistic personalized deepfakes and engaging security awareness training. Their new AI Content Creator turns threat intel and policy updates into interactive, multilingual training — instantly. Trusted by Fortune 500s and backed by Andreessen Horowitz and OpenAI, Adaptive helps you stay ahead of AI-driven threats. Learn more at adaptivesecurity.com. All links and the video of this episode can be found on CISO Series.com
In this episode, Amir sits down with Aaron Painter, CEO of Nametag, to explore how deepfakes and generative AI are reshaping identity security in the workplace. They discuss real-world attacks, such as the MGM breach, and how enterprises are responding with new technologies—from cryptographic identity verification to re-verification protocols. Aaron shares what companies are doing right, where they're vulnerable, and the role of identity in the future of enterprise security.
Drex breaks down three critical cybersecurity developments healthcare organizations need to monitor. First, escalating geopolitical tensions suggest increased cyber attacks from Iran and allied nations targeting U.S. critical infrastructure, including healthcare systems. Second, Vanderbilt University Medical Center reports sophisticated Microsoft Teams impersonation attacks where scammers pose as supervisors to request urgent actions like software installation or payments. Finally, deepfake fraud attempts skyrocketed 1300% in 2024, with new "spoofing as a service" platforms making AI-powered voice and video scams more accessible to cybercriminals. Healthcare IT leaders should prepare defenses against these evolving threats and educate staff on verification protocols for suspicious requests.Remember, Stay a Little Paranoid X: This Week Health LinkedIn: This Week Health Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
News and Updates:
GameStop damages Nintendo Switch 2 consoles
Google settles shareholder lawsuit for $500M: "To Be Less Evil"
"AI" startup Builder.ai revealed to be 700 human employees
England's High Court warns lawyers against citing fake AI-generated cases
Meta platforms showed hundreds of "nudify" deepfake ads
OpenAI hits $10 billion in annual revenue
Business Insider lays off 21% of staff, goes "all-in on AI"
Duolingo CEO faces backlash for "AI-first" plan
Julie Smith, Author and Media Instructor at Webster University, joins Megan Lynch to discuss how Google's new AI video tool could disrupt the film industry, and deepfakes.
Once the stuff of science fiction, deepfake technology has rapidly become one of the most powerful—and consequential—applications of generative AI, blurring the line between reality and illusion and reshaping how we trust what we see and hear online. This month we delve into this phenomenon with Professor Hany Farid, a pioneer in digital forensics, and Professor Siwei Lyu, whose lab develops state-of-the-art deepfake detection methods. Together, they'll walk us through the data journey—from the vast raw data sets that fuel synthetic media to the pixel-level signatures that can unmask it. Whether you're a computer scientist, policymaker, or simply curious about how synthetic content is transforming our information landscape, join us for an in-depth conversation about turning data into both convincing illusions and robust defenses—and learn how we can preserve trust and truth in our rapidly evolving digital world. Our guests: Hany Farid is a professor at the University of California, Berkeley, with a joint appointment in the Department of Electrical Engineering and Computer Sciences and the School of Information. He is also a member of the Berkeley Artificial Intelligence Research Lab, Berkeley Institute for Data Science, Center for Innovation in Vision and Optics, Development Engineering program, Vision Science program, and is a senior faculty advisor for the Center for Long-Term Cybersecurity. Siwei Lyu is a SUNY Distinguished Professor and a SUNY Empire Innovation Professor at the Department of Computer Science and Engineering, the director of the UB Media Forensic Lab, and founding co-director of the Center for Information Integrity at the University at Buffalo, State University of New York.
The ladies discuss Glenn Greenwald's sex tape, Laura McClure's AI deepfakes, Hoe Math's viral confession, and the return of the r-slur.
Welcome back to Forcepoint's To the Point Cybersecurity podcast! In this episode, co-host Jonathan Knepher sits down with Petko Stoyanov—cybersecurity expert and former Forcepoint host—for a thought-provoking discussion about the evolving landscape of AI in cybersecurity. Together, they unpack the shifting trends seen at this year's RSA conference, exploring how artificial intelligence is moving from marketing buzzword to mission-critical security feature. Petko dives deep into the real-world impact of generative AI models, the increasing sophistication of both attackers and defenders, and the pressing need for “security by design” in today's fast-moving digital world. They discuss the new questions CISOs and CIOs should be asking about AI—like where models are hosted, what data they process, and how to manage risks in regulated industries. Petko shares eye-opening anecdotes about the potential for AI to accidentally leak sensitive data, the rise of targeted phishing in new languages powered by generative models, and why the CISO role is broader and more challenging than ever. The conversation also touches on the future of automation, the risk of deepfakes and disinformation, and how organizations can stay resilient in an era where the line between attacker and defender is increasingly blurred. For links and resources discussed in this episode, please visit our show notes at https://www.forcepoint.com/govpodcast/e337
AI-generated fraud is now mainstream—and your team probably can't tell the difference. Ben Colman shares hard-earned insights on fighting deepfakes, building detection tech that actually works, and how to stay ahead in the AI arms race. Ben: www.linkedin.com/in/benpcolman Reality Defender: www.realitydefender.com Jon: www.linkedin.com/in/jon-mclachlan Sasha: www.linkedin.com/in/aliaksandr-sinkevich YSecurity: www.ysecurity.io
Deepfakes of the new Pope Leo XIV are currently flooding the internet. Because the Pope was only recently elected, there is a vacuum of clearly stated positions, which many are now exploiting to spread false sermons and statements via deceptively realistic AI-generated videos. These AI-generated fake videos are now so well made that the difference is often not apparent at first glance. This is becoming a growing problem for the Church, as believers fall for the forgeries. The Church is responding with official corrections and is trying to shrink the vacuum by communicating the Pope's positions early, in order to limit the room for forgeries. Who profits from these deepfakes? That is the question we pursue in this episode of News Plus. Do you have questions or topic suggestions? Email us at newsplus@srf.ch or send us a voice message at 076 320 10 37. ____________________ In this episode: - Peter G. Kirchschläger, professor of ethics at the University of Lucerne ____________________ Links: - https://www.srf.ch/audio ____________________ Team: - Host: Salvador Atasoy - Production: Sandro Della Torre - Contributor: Tim Eggimann ____________________ This is "News Plus": understand the world better in a quarter of an hour – one topic, new perspectives, and answers to your questions. Our correspondents and experts from Switzerland and around the world explain, analyse, and tell us what moves them. "News Plus" from SRF is released Monday to Friday at 4 p.m., just in time for the end of the working day.
In this episode of the Defending The Edge Podcast with DefendEdge, we discuss the recent cyber attacks, from ConnectWise to Google Calendar. In a recent update, CISA has changed its reporting process, and the intelligence community is aiming to create a shared threat-actor glossary. Deepfake technology is becoming increasingly advanced and alarming. Lastly, Anonymous has been in the headlines for various data dumps. Join us as we discuss all of these topics and more.
William Lyne of the UK's National Crime Agency joins us live at Infosecurity Europe to talk ransomware, AI threats, and the future of cybercrime disruption.When the UK's top cyber intelligence strategist sits down with you in London, you listen — and you hit record.At Infosecurity Europe 2025, the ITSPmagazine podcast team — Marco Ciappelli and Sean Martin — sat down with William Lyne, Deputy Director and Head of Cyber Intelligence at the UK's National Crime Agency (NCA). This is the guy who not only leads cyber strategy for the NCA, but has also represented the UK at the FBI in the U.S. and now oversees national-level ransomware disruption efforts. It's not just a conversation — it's a rare front-row seat into how one of the world's most serious crime-fighting agencies is tackling ransomware 3.0.The message? Ransomware isn't just a cyber issue. It's a societal one. And it's evolving faster than we're prepared for — unless we change the game.“It went from niche to national threat fast,” Lyne explains. “The tools were always there. It just took a few threat actors to stitch them together.”From banking malware to fully operational cybercrime-as-a-service ecosystems, Lyne walks us through how the underground economy has industrialized. Ransomware isn't just about tech — it's about access, scale, and business models. And most importantly, it's no longer limited to elite coders or closed-door Russian-speaking forums. The barrier to entry is gone, and the dark web is wide open for business.Sean brings up the obvious: “Why does this still feel like we're always reacting?”Lyne responds: “We've shifted. We're going after the ecosystem — the people, the infrastructure, the business model — not just the payload.” That includes disrupting ransomware-as-a-service, targeting marketplaces, and yes, investing in preemptive intelligence.Marco flips the script by comparing today's cyber landscape to something deeply human. “Extortion is nothing new — we've just digitalized it. 
This is human behavior, scaled by tech.”From there, the conversation takes a future-facing turn. Deepfakes, AI-powered phishing, the commoditization of generative tools — Lyne confirms it's all on their radar. But he's quick to note that cybercriminals aren't bleeding-edge innovators. “They adopt when the ROI is right. But AI-as-a-service? That's coming. And it will reshape how efficient — and damaging — these threats become.”And then the real insight lands:“You can't wait to be a victim to talk to law enforcement. We may already have access to the infrastructure. The earlier we hear from you, the better we can act — and fast.”That kind of operational openness isn't something you heard from law enforcement five years ago. It signals a cultural shift — one where collaboration is not optional, it's essential.William also highlights the NCA's partnerships with private sector firms, academia, and international agencies, including the Kronos operation targeting LockBit infrastructure. These kinds of collaborations prove that when information moves, so does impact.Why does this matter?Because while most cybersecurity media gets stuck in product buzzwords and vendor hype, this is the real stuff — how ransomware groups behave, how law enforcement thinks, and how society can respond. It's not theory. It's strategy, lived on the front lines.
Another season of our podcast has reached its finale – and not just any finale. We have prepared a special closing episode looking back at everything that marked the past year. We filtered it through our personal lens and presented it in our favourite format of tops and flops. What delighted us, what disappointed us, and what left opinions divided?
_______________
0:00 Intro
FLOP
0:27 The Musk and Trump breakup
5:30 Is this a business decline for Musk?
11:50 Prompting for fun pollutes the digital space and the environment
17:00 Millionaires have too much influence on world politics – the tech-bro oligarchy
20:20 Deepfakes as a problem that keeps getting worse, and no one is dealing with it
TOP
23:45 Nikolina met legends of tech journalism
26:00 AI can make our lives better, but it all depends on how we use it
28:00 Undabot is an example of how to talk about closing a company and about failure
35:20 How our podcast broke viewership records
_______________
How do we come to terms with no longer being able to trust even our own eyes and ears? Deepfake technology, which once belonged more to the realm of science fiction, is becoming a reality with tangible consequences. How will this technology affect public opinion, politics, our privacy and, ultimately, our ability to orient ourselves in reality? In this new episode of the SHARE podcast, recorded at a discussion at the Martinus bookstore in Žilina, Živé.sk editor and author of the book Umelá inteligencia: Pripravte sa na budúcnosť Ján Trangel talks with Michal Gregor, an artificial-intelligence researcher at the Kempelen Institute of Intelligent Technologies (KInIT), about how deepfakes work, how they are evolving, their risks, and how to defend against them. The editors of Živé.sk are publishing a book: from this series of interviews you will learn how artificial intelligence will soon fundamentally change the world around us. TIP: https://zive.aktuality.sk/clanok/0RfdZVW/nahliadnite-do-buducnosti-vydavame-knihu-o-umelej-inteligencii/
In the podcast we also discuss:
How deepfake technology evolved from the first experiments to today's AI models.
Where deepfakes are used: from the entertainment industry to disinformation.
Can deepfakes be detected? On detectors, watermarks, and source verification.
Why it is hard to ban AI face generation, and whether uncensored models should exist.
How to defend yourself as an ordinary user, and why critical thinking is key.
The SHARE podcast is produced by the magazine Živé.sk.
In this episode I interview Heidi Olson. Heidi Olson, RN, MSN, CPN, SANE-P is a Certified Pediatric Nurse and Certified Pediatric Sexual Assault Nurse Examiner (SANE). She is also the Founder and President of Paradigm Shift Training and Consulting, which equips healthcare workers with skills-based training to identify human trafficking and exploitation. Heidi has a wide range of experience in pediatric and forensic nursing and has taught as an adjunct instructor for several universities. We delve into the critical issues surrounding human trafficking, sexual assault, and the pervasive influence of pornography on society. Heidi shares her extensive experience working with victims of sexual violence and highlights the misconceptions surrounding these topics. The discussion emphasizes the importance of open conversations, education, and awareness in combating these issues, as well as the need for better laws and regulations to protect children in the digital age. Chapters 00:00 Introduction to the Conversation 04:05 Heidi Olson's Background and Expertise 06:35 Understanding Forensic Nursing 07:53 The Reality of Sexual Assault Injuries 10:41 Mislabeling Victims of Trafficking 12:22 The Complexity of Human Trafficking 14:06 Demographics of Victims 16:01 The Role of Gender in Victimization 18:08 Understanding Sexual Assault vs. 
Trafficking 19:19 The Misconceptions of Trafficking 23:22 The Impact of Pornography on Society 27:04 The Addiction to Pornography 30:39 Child-on-Child Sexual Assault 32:24 Addressing the Porn Industry 34:23 The Impact of Pornography on Relationships 35:17 The Rise of Deepfakes and Their Consequences 35:56 Desensitization and Vulnerability in Youth 37:10 The Glamourization of Pornography in Content Houses 38:28 The Reality Behind OnlyFans Success Stories 40:14 The Misconception of Empowerment in Sex Work 42:26 The Psychological Effects of Pornography on Mental Health 43:42 Trauma Responses and the Cycle of Exploitation 47:57 Protecting Children in the Digital Age 49:44 Legislative Gaps in Protecting Youth 51:28 The Need for Regulation in AI and Pornography 54:12 The Role of Society in Addressing Pornography 57:08 Finding Hope and Taking Action 01:00:59 A Message to Women Everywhere Paradigm Shift Website: https://paradigmshifttc.com/ https://www.thoseothergirls.com/post/porn-s-role-in-trafficking-and-sexual-assault-those-other-girls-episode-329 Follow on Instagram: TOG: @thoseothergirlspodcast Mal: @malyourgalpal This is an Other Girls Media production. You can listen on Apple Podcasts, Spotify, iHeartRadio, Google Podcast, Amazon Music, Pandora PLUS YouTube. Join our Patreon. Want more TOG? Get the app, check out our website, or order merch. Those Other Girls YouTube Channel
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
In this episode of HR Heretics, Nolan Church talks to Danielle Mathis, Vice President, Global Talent & HRBP of Clari, revealing how fraudsters orchestrated an elaborate recruitment scam, deceiving 12 job seekers with fake offers while harvesting their personal data. The shocking conversation exposes the dark reality of deepfake candidates infiltrating hiring processes and why traditional recruitment methods are dangerously obsolete in our AI-manipulated world.*Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.coFor coaching and advising inquire at https://kellidragovich.com/HR Heretics is a podcast from Turpentine.Support HR Heretics Sponsors:
HEADLINES:• The UAE Is About To Get Hotter By Friday• The iPhone 17 Price In The UAE Might Have You Packing For Dubai• UAE Resident Falls Victim To Fraud By A Person Claiming To Be Her Boss• A Fake Video Of Ukrainian President Zelensky Buying A Home In Burj Khalifa Is Doing Rounds• Qudra Launches in Dubai: A Game-Changer For Freelancers And Businesses
Secure 40 euros now as an Android user, quickly and easily with Privacy ReClaim: https://wbs.law/android (advertisement) Is your account on Instagram, PayPal, or elsewhere blocked? Take legal action now: https://wbs.law/account-gesperrt Need advice on fragrance dupes ("Duftzwillinge")? Then contact WBS.LEGAL: https://wbs.law/duftzwillinge MontanaBlack is arguably the face of the streaming community in Germany. He has long been known beyond Twitch and YouTube, and over time Monte, as he is also called, has almost become a brand in his own right. Companies naturally want to capitalize on that brand by having MontanaBlack advertise their products. One advertisement caught our eye, however, that is quite problematic – from a legal point of view. It involves AI and faked advertising clips. Is that legally permitted? Galileo video on deepfakes: https://www.youtube.com/watch?v=0SKgh9j9rqY&t=140s Klengan's video: https://www.youtube.com/watch?v=RuYoL8nphOM&t=319s § 22 KUG: https://www.gesetze-im-internet.de/kunsturhg/__22.html § 23 KUG: https://www.gesetze-im-internet.de/kunsturhg/__23.html § 2 UrhG: https://www.gesetze-im-internet.de/urhg/__2.html § 23 UrhG: https://www.gesetze-im-internet.de/urhg/__23.html § 5 UWG: https://www.gesetze-im-internet.de/uwg_2004/__5.html Cited ruling: https://juris.bundesgerichtshof.de/cgi-bin/rechtsprechung/document.py?Gericht=bgh&Art=en&nr=51584&pos=0&anz=1 ▬▬▬▬▬▬▬▬▬▬▬▬▬ WBS.LEGAL is looking for you! Are you searching for an attractive, exciting, and challenging job? Then apply and join our team. At WBS.LEGAL you work in the heart of the media capital Cologne, always at the pulse of the times – guaranteed. Our current openings: https://www.wbs.legal/karriere/#jobs What awaits you with us? More information here: https://www.wbs.legal/karriere/. ▬▬▬▬▬▬▬▬▬▬▬▬▬ Attorney Prof. Christian Solmecke Prof. 
Christian Solmecke has specialized, as an attorney and partner at the Cologne media law firm WBS.LEGAL, in advising the internet, IT, and media industries. In recent years he has steadily expanded the firm's internet law/e-commerce practice and advises numerous media professionals, Web 2.0 platforms, and app developers. Alongside his work as an attorney, Prof. Christian Solmecke is the author of many books and, as founder of the cloud-based practice management software Legalvisio.de, also a successful legal-tech entrepreneur. ▬▬▬▬▬▬▬▬▬▬▬▬▬ Virtual office tour: https://wbs.law/rundgang Start your 3D, 360° tour of the WBS.LEGAL offices (including the YouTube studio) ▬▬▬▬▬▬▬▬▬▬▬▬▬ WBS.LEGAL's social media channels We'd be delighted if you also visited and followed us on our other social media channels. Each of our channels stands on its own and is guaranteed to add value for you. ▬Instagram and TikTok▬ On our successful Instagram and TikTok channels we debunk legal misconceptions daily and present legal life hacks, so you're always up to date and get your daily dose of everyday law. Short, punchy, and always to the point. Follow us on Instagram and TikTok and you can impress your friends with new knowledge. ➥ Instagram: https://wbs.law/recht2go ➥ TikTok: https://wbs.law/recht2goTikTok ▬Facebook▬ On Facebook we're old hands by now: for years we've kept you informed there daily about current legal news. You're also welcome to send us an inquiry as a private message. Stop by! Here's the link: ➥ https://wbs.law/facebook ▬X / Twitter▬ Be the first to know when there's important legal news. You'll get punchy statements on current topics on our X account (formerly Twitter)! Here's the link: ➥ https://wbs.law/twitter ▬Podcasts▬ Are you on the go, in the shower, or do you simply enjoy listening to podcasts? Then we have
After discovering deepfake pornography featuring her image online, Breeze Liu faced a harrowing journey that exposed the deep failures of our legal and tech systems. But instead of staying silent, she took action. In this powerful episode, Breeze joins Vital Voices CEO Alyse Nelson to share how she turned trauma into purpose—founding Alecto AI to help others detect and remove non-consensual content.Now a Vital Voices Global Fellow and a driving force behind the bipartisan Take It Down Act, Breeze is reshaping the digital landscape to protect survivors. This conversation is a call to action for tech accountability—and a testament to the power of survivor-led change.
Text FOODS to 64000 to get twenty percent off all IQBAR products, plus FREE shipping. Message and data rates may apply. Follow Tim on IG: @timchantarangsu Follow David on IG: @davidsocomedy Follow Robyn on IG: @robynlynncouch Check out Goodie Brand at https://www.GoodieBrand.com Check out Tim's Patreon for exclusive content at https://www.patreon.com/timchantarangsu If you want to support the show, and get all the episodes ad-free go to: https://dudesbehindthefoods.supercast.com/ To watch the Dudes Behind the Foods podcast on YouTube go to: www.youtube.com/timothy Don't forget to subscribe to the podcast for free wherever you're listening or by using this link: https://bit.ly/DudesBehindtheFoodsPodcast Learn more about your ad choices. Visit podcastchoices.com/adchoices
The last time Philipp Pointner joined me on the podcast, AI was just a whisper on the horizon. We discussed the foundational role of digital identity in establishing trust between individuals and platforms. But fast forward to 2025, and the conversation has shifted dramatically. We're no longer just discussing verification for compliance. We're reckoning with synthetic identities, deepfake armies, and the growing push for reusable digital credentials. In today's episode, Philipp returns to unpack how much has changed since 2022. As Chief of Digital Identity at Jumio, he's right at the intersection of where AI meets trust. Fraud has evolved from lone hackers into coordinated operations that run on an industrial scale. ID fraud is no longer a matter of bad actors copying documents; instead, entire networks generate tens of thousands of AI-manipulated identities with the click of a button. So, how are businesses adapting? Philipp breaks down the layered approach required for modern identity verification, touching on risk signals across various channels, including documents, devices, faces, emails, and more. He also points to an encouraging trend: cross-industry collaboration. Companies, regulators, and tech vendors are no longer working in silos. They are coordinating efforts to counter this new breed of threats. But this isn't just about fraud. We explore how identity solutions are now a key part of improving onboarding, boosting customer satisfaction, and even reducing operational costs. With significant regulatory shifts underway in Europe, the UK, and the US, from mobile driver's licenses to national digital identity schemes, businesses can't afford to fall behind. Philipp also tackles the AI paradox. The same technology that enables deepfakes is now critical for combating them. Jumio is using AI to detect fakes, verify liveness, and stay one step ahead of fraudsters. 
Biometrics, meanwhile, are becoming mainstream, not just in airports but as part of everyday onboarding and authentication. If you're thinking about what identity means for your business in 2025 and beyond, this episode offers a practical, timely look at what's here now and what's coming next. Whether you care about compliance, conversion rates, or protecting your users, it all starts with knowing who you're dealing with. And increasingly, that requires more than just asking for a photo ID. What role will your organization play in the future of digital trust?
In this episode of Project Synapse, Jim and Marcel navigate a rapidly evolving AI landscape while John appears intermittently through AI-generated clips, with his permission. They delve into intriguing developments at Google Labs, including a mysterious AI model that briefly appeared and vanished. The discussion highlights the differing ambitions of Google and OpenAI, examining Google's focus on creating a comprehensive information hub versus OpenAI's broad-reaching aspirations, like the recent OpenAI for Business initiative. A central theme is the persistent rivalry in AI, specifically between Gemini and ChatGPT, while also touching on niche players like Anthropic's Claude and Perplexity. The conversation takes a deep dive into the complexities of integrating AI into daily life, the potential benefits, and the significant risks, including issues surrounding privacy and identity. Counterpoint to their technical musings is a look at the societal impacts of AI, including job displacement and the need for universal basic income. Finally, the hosts ponder the unsettling yet fascinating future where personal identity might be verified through biometric scanning, as proposed by World ID. 00:00 Introduction and Setting the Scene 00:40 Unexpected Developments in AI 01:53 Google vs. OpenAI: The Rivalry 03:55 AI Integration in Everyday Life 04:36 The Rise of Niche AI Players 05:42 Personal Experiences with AI Tools 12:10 The Future of AI and Privacy Concerns 17:20 The Evolution of AI and Robotics 26:53 Smart Home Integration and Standards 34:21 The Illusion of Choice in Technology 36:13 The Privacy Paradox 36:31 The Integration of AI in Daily Life 38:52 The Rise of Deep Fakes and Identity Theft 41:35 The Future of Personal Data and Security 44:51 The Debate on Universal Identification 46:52 The Acceleration of Technological Change 52:19 The Need for Intelligent Design in AI 53:34 The Role of Governments and Corporations 01:04:11 Concluding Thoughts and Future Discussions
Featured Guests: Andrew Bud, founder and CEO, iProov | Tyler Ganim, investor, Powerhouse CapitalRyan Reynolds's Wrexham, the Welsh football club he bought three years ago for only £2 million, is now eyeing a staggering $475 million valuation. A shadowy cybercrime group called "Grey Nickel" is bypassing banks' identity checks with deepfakes and AI, exposing massive gaps in global digital security. And Anduril just pulled in a $2.5 billion raise, doubling its previous value to now over $30 billion.DOWNLOAD PUBLIC: Public.com/ventureInvest in everything—stocks, options, bonds, crypto. You can even earn some of the highest yields in the industry—like the 7% or higher yield you can lock in with a Bond Account. Public is a FINRA-registered, SIPC-insured platform that takes your investments as seriously as you do. Fund your account in five minutes or less at public.com/venture and get up to $10,000 when you transfer your old portfolio.All investing involves the risk of loss, including loss of principal. Brokerage services for US-listed, registered securities, options and bonds in a self-directed account are offered by Public Investing, Inc., member FINRA & SIPC. Public Investing offers a High-Yield Cash Account where funds from this account are automatically deposited into partner banks where they earn interest and are eligible for FDIC insurance; Public Investing is not a bank. Cryptocurrency trading services are offered by Bakkt Crypto Solutions, LLC (NMLS ID 1890144), which is licensed to engage in virtual currency business activity by the NYSDFS. Cryptocurrency is highly speculative, involves a high degree of risk, and has the potential for loss of the entire amount of an investment. Cryptocurrency holdings are not protected by the FDIC or SIPC.A Bond Account is a self-directed brokerage account with Public Investing, member FINRA/SIPC. Deposits into this account are used to purchase 10 investment-grade and high-yield bonds. 
The 7%+ yield is the average, annualized yield to worst (YTW) across all ten bonds in the Bond Account, before fees, as of 5/15/2025. A bond's yield is a function of its market price, which can fluctuate; therefore, a bond's YTW is not “locked in” until the bond is purchased, and your yield at time of purchase may be different from the yield shown here. The “locked in” YTW is not guaranteed; you may receive less than the YTW of the bonds in the Bond Account if you sell any of the bonds before maturity or if the issuer defaults on the bond. Public Investing charges a markup on each bond trade. See our Fee Schedule. *Terms and Conditions apply.
Link to episode page This week's Cyber Security Headlines - Week in Review is hosted by Rich Stroffolino with guest Rusty Waldron, chief business security officer, ADP Thanks to our show sponsor, Conveyor Let me guess, another security questionnaire just landed in your inbox. Which means all the follow up tasks you don't have time for are close behind. What are you going to do? Here's a better question: what would Sue do? Sue is Conveyor's new AI Agent for Customer Trust. She handles the entire security review process like answering every customer request from sales, completing every questionnaire or executing every communications and coordination task in-between. No more manual work. Just a quick review when she's done. Ready to let Sue take the reins? Learn more at www.conveyor.com. All links and the video of this episode can be found on CISO Series.com
The Polyamory Plot Twist
Welcome to the premiere of "Don't Let Them," our new monthly series where Robyn McKay and I unpack the manipulation, mind games, and modern-day madness that too often lurk in the spaces meant to empower us. And trust us, we're not short on stories. Born from a shared desire to speak our truth authentically and fueled by you, the listeners loving our dynamic, this is where we throw out the script and get real, off-the-cuff, about literally all the things. In this first episode, we dive headfirst into topics like challenging outdated therapy models (bye-bye, "blank slate"!) and the surprising power of self-disclosure. We unpack the critical need for discernment in the spiritual space, especially as AI deepfakes make it harder than ever to tell what's real online. And yes, we tackle that polyamory controversy (featuring Aubrey Marcus), sharing our take on its perceived manipulative energy and calling out the performative aspect of spirituality we see playing out. Plus, we discuss the tricky "hero's journey trap" of using trauma as public performance. 
Expect honest opinions, a no-nonsense approach, plenty of sarcasm, and maybe even a few laughs about ChatGPT's hilarious podcast title ideas for this new series. TODAY'S HIGHLIGHTS (00:00) Intro (01:22) Don't Let Them is born (06:03) The Metamorphosis... better than ever (09:25) ChatGPT's hilarious struggle for a podcast name (14:02) The importance of authenticity (20:06) Barbie goes to college... and makes stupid decisions (22:44) Blank Slate vs Clean Field - Intuition vs Influence (30:38) Trauma-based mind control (37:01) The Aubrey Marcus controversy (43:44) The dark side of human consciousness and the nuances of online authenticity (46:10) Spiritual manipulation and leadership (01:02:00) AI, Deepfakes and the Algorithm (01:12:15) The addiction pattern in the personal development space (01:20:19) What to expect **WAYS TO ENTER MY WORLD** When you leave a review of the podcast, send us a screenshot and we'll send you a $250 credit you can apply to anything else in my world. Core Wound Solution Private Podcast: Join 1,000+ women discovering the real root of their stuckness and how to shift it for good. Each episode is under 20 minutes. No fluff. Just real transformation. Book a Free Call to chat about how we can start working together. If you've done the mindset work but still feel stuck, I'll help you dissolve what's really in the way. CONTACT ROBYN: drrobynmckay.com | Instagram | Listen to She{ology} CONTACT ALYSE: @alyse_breathes | alysebreathes.com | info@alysebreathes.com
AI-generated deepfakes are everywhere on social media. Now, you can take a test developed by Northwestern University to see how well you spot them. Marketplace's Nova Safo took the test, sifting through a bunch of real and fake images. He got five out of six right, which is the average in a study Northwestern conducted. Lead researcher Matt Groh also helped develop a litmus test, a series of things to look for to spot deepfakes.
In the U.S., the cost of cybercrime in 2025 is estimated to hit $639.2 billion, up from $452.3 billion in 2024, an alarming surge fueled in large part by advancements in AI. Today, Nicole sits down with tech journalist Laurie Segall to uncover one of the darkest AI-driven scams: a disturbing scheme where scammers generate fake nude images to extort their victims. In this conversation, Laurie breaks down the most common deepfake crimes and scams, how to spot them, and how to protect yourself. She also shares the jaw-dropping story of her investigation into a shadowy figure known as “Mr. Deepfakes,” a man she describes as one of the most dangerous people on the internet, and what happened when she tracked him down and confronted him face-to-face. Nicole and Laurie zoom out to ask even bigger questions: What does this mean for women, for democracy, and for the future of AI? Spoiler alert: we are still very much in the Wild West. Follow Laurie's work, and her investigation into Mr. Deepfakes, here.
In this episode of The Ross Simmonds Show, Ross cuts through the overwhelming noise around AI in marketing to reveal what's real, what's useful, and what's pure hype. From AI-fueled content creation and personalization to ads, analytics, and the dangers of AI deepfakes, Ross shares his honest take on how marketers can harness AI effectively—without losing their creativity (or their jobs). Hint: AI isn't here to replace great marketers, but it will expose mediocre ones. Whether you're feeling FOMO about AI tools or tired from constant information overload, this episode offers grounded insights and an actionable roadmap for embracing AI in a sustainable and meaningful way. Key Takeaways and Insights: Understanding the Current AI Hype Cycle AI is at the peak of inflated expectations. Nearly every tool is rushing to add AI capabilities. $10B spent in AI implementation efforts by consulting firms. The opportunity is real—but so is the noise. Content Creation: The Draft, Not the Director Tools like ChatGPT, Jasper, Rytr, SEMrush, and HubSpot offer “okay” drafts. Great marketers use AI for first drafts, then add emotion, creativity, and data. AI helps compress content production time significantly. Key quote: “The AI tool is not your creative director.” Repurposing & Distribution at Scale One of the best use cases for AI in content marketing. Turn one podcast into dozens of social assets, short videos, quotes, etc. Old content can be brought back to life using AI. 24/7 brand promotion equals scale and efficiency. AI for Creating Visuals & Ads Tools now allow marketers to generate images and mock ads from product photos. Emerging use case: image creation with tools like ChatGPT's DALL·E. AI lowers cost and time for creative asset generation. AI-Powered Insights from Data AI can identify behavior trends, segment audiences, and support lead scoring. Automate personalized communications triggered by user behavior. In-depth AI-based customer feedback analysis is now possible. 
Reminder: "Good data = good AI." Personalization: Right Message, Right Time AI enables highly tailored emails, landing pages, and even video messages. Important: Avoid going too far or getting creepy with data usage. Balance personalization with ethics and relevance. The Fluff: SEO Snake Oil & “Turnkey” Strategies Beware of tools promising 10,000 blog posts and instant rankings. "Hallucination" risk with auto-generated content hurts SEO in the long run. One-click marketing strategy generators? Hard pass. Deepfakes & AI Influencers Increasing use of AI-generated personas to push products deceptively. Danger of consumers being misled by realistic but fake endorsements. The Future of Marketing & AI AI will not replace everyone—but will replace some. “It's a force multiplier for the great. A replacement for the mediocre.” Use AI to do more, faster—but don't skip thinking and strategy. To thrive, marketers must master the human elements: storytelling, empathy, strategy, and creativity. Resources & Tools:
The Green Room - https://rumble.com/playlists/aa56qw_g-j0 BUY CAST BREW COFFEE TO FIGHT BACK - https://castbrew.com/ Join The Discord Server - https://timcast.com/join-us/ Hang Out With Tim Pool & Crew LIVE At - http://Youtube.com/TimcastIRL EPSTEIN VIDEO To Be RELEASED By FBI, People Claim Bongino IS LYING, Will Post AI Deepfake
Take It Down Act: Legislation: The act aims to protect individuals, especially women and children, from revenge porn and deepfakes. Bipartisan Support: Passed by both the Senate and the House, and is headed to President Trump's desk for signing. Provisions: Makes posting non-consensual intimate images a federal felony, and mandates that tech platforms remove such content within 48 hours of notification. Personal Story: The act was inspired by a Texas girl who was victimized by deepfakes. Maryland Man Controversy: Democratic Support: Democrats are criticized for supporting an MS-13 gang member involved in domestic violence and human trafficking. Audio Evidence: Testimonies and police recordings reveal the individual's violent behavior and criminal activities. DOGE Revelations: Waste, Fraud, and Abuse: Elon Musk's DOGE team uncovers significant misuse of funds in various government agencies. Institute of Peace: Found to have paid a former Taliban member and deleted accounting records. Department of Education: Misused a $4 billion COVID fund for extravagant expenses like renting Caesars Palace. Please Hit Subscribe to this podcast Right Now. Also Please Subscribe to the 47 Morning Update with Ben Ferguson and the Ben Ferguson Show Podcast Wherever You Get Your Podcasts. Thanks for Listening #seanhannity #hannity #marklevin #levin #charliekirk #megynkelly #tucker #tuckercarlson #glennbeck #benshapiro #shapiro #trump #sexton #bucksexton#rushlimbaugh #limbaugh #whitehouse #senate #congress #thehouse #democrats#republicans #conservative #senator #congressman #congressmen #congresswoman #capitol #president #vicepresident #POTUS #presidentoftheunitedstatesofamerica#SCOTUS #Supremecourt #DonaldTrump #PresidentDonaldTrump #DT #TedCruz #Benferguson #Verdict #justicecorrupted #UnwokeHowtoDefeatCulturalMarxisminAmericaYouTube: https://www.youtube.com/@VerdictwithTedCruzSee omnystudio.com/listener for privacy information.