Podcasts about deepfakes

Artificial intelligence-based human image synthesis technique

  • 4,432 podcasts
  • 7,447 episodes
  • 44m average duration
  • 3 daily new episodes
  • Latest episode: Feb 16, 2026

POPULARITY (2019–2026)


Best podcasts about deepfakes

Show all podcasts related to deepfakes

Latest podcast episodes about deepfakes

Apropos – der tägliche Podcast des Tages-Anzeigers
KI-generierte Nacktbilder: Eine neue Dimension von Deepfakes

Apropos – der tägliche Podcast des Tages-Anzeigers

Feb 16, 2026 · 28:06


Fake video recordings, so-called deepfakes, are no longer a rarity: they show, for example, Donald Trump in papal robes, or Mona Vetsch advertising dubious finance sites. But now they are reaching a new dimension.

Deepfakes look more and more real, and the scammers' schemes grow ever more insidious. So it was for Markus. Shortly after he answered an unknown FaceTime call, he was sent a video. It showed him masturbating. The scammers had manipulated the video with AI so that the scene looked real. Then they threatened to send it around unless he paid. A case also recently came to light at a Swiss school, where upper-level students distributed AI-generated nude images of female classmates via Snapchat.

How do deepfakes work? What do such recordings mean for those affected? And what can the authorities do about them? Oliver Zihlmann, head of the Tamedia investigative desk, explains in a new episode of the daily podcast «Apropos».

Host: Alexandra Aregger
Producer: Valeria Mazzeo

More on deepfakes: Oliver Zihlmann's investigation into Markus's case; the AI nude-image scandal at a Swiss school; the legal situation on deepfakes in Switzerland.

Our Tagi special offer for podcast listeners: tagiabo.ch. Feedback, ideas, or criticism about «Apropos»? Write to us at podcasts@tamedia.ch. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

FEATURING - Der Musikpodcast
#FDMP277: Selbstzweifel & Druck als DJ/Produzent!

FEATURING - Der Musikpodcast

Feb 15, 2026 · 80:41


Today things get musical and exciting again! Episode #FDMP277 is packed with interesting topics from the music scene. We talk once more about the Twitch DJ program and, of course, also take a look at the deepfake controversy surrounding Deadmau5. We also cover Innellea's announced break, and the pressure and self-doubt you can struggle with as a DJ or producer, even when everything seems to be running perfectly from the outside. As always, we have six strong tracks for you. So get comfortable, lean back, and enjoy the new episode of "Featuring".

Our playlist: https://open.spotify.com/playlist/7iSkklpyLKEEz2dd81jLLx?si=943d4f809c264de5
Ralf Brixx: https://www.instagram.com/ralfbrixx/
Heinrich & Heine: https://www.instagram.com/heinrichundheine/
Amber Music Label Group: https://www.instagram.com/ambermusiclabelgroup/
Audio Safari: https://www.instagram.com/audiosafari/

Bhagvat Puran - Marm ni vaat (Gujarati)
Devi Bhagvat Skandh 5 Adhyay 6

Bhagvat Puran - Marm ni vaat (Gujarati)

Feb 15, 2026 · 19:23


When Mahishasura conjured an illusion of tens of millions of buffaloes, even the gods themselves were terrified! Learn how this maya resembles today's deepfakes, and the secret of the fifty-day war with Andhakasura. Subscribe now to break through the webs of illusion with the Sadguru's wisdom.

The Naked Emperor
Introducing Understood: Deepfake Porn Empire

The Naked Emperor

Feb 13, 2026 · 3:13


Non-consensual deepfake porn is becoming increasingly pervasive, and it didn't just come out of nowhere. These deepfakes were created and curated by people, on platforms, inside online subcultures. And they were allowed to spread, while governments dragged their feet, tech companies shrugged, and the targets — almost always women — paid the price.

Tech journalist Sam Cole has been covering deepfake porn since its inception. In this season of Understood, she follows the trail all the way to the source, tracing an investigation across three countries and four newsrooms into the very real person behind the world's largest deepfake porn website: Mr. Deepfakes himself.

Oppdatert
Falsk porno: Hannahs deepfake-mareritt

Oppdatert

Feb 13, 2026 · 21:30


Hannah's holiday photos were manipulated and misused. So she took matters into her own hands. (Photo: illustration image / Ismail Burak Akkan). Listen to all episodes in the NRK Radio app.

DevSecOps Podcast
# 07 - 11 - Temer a IA?

DevSecOps Podcast

Feb 13, 2026 · 55:56


AI is a tool. Powerful. Fast. Scalable. And completely indifferent to right and wrong. In this episode of the DevSecOps Podcast, we dive into the real dangers of artificial intelligence, beyond the hype and beyond irrational fear. We talk about models that learn human biases, disinformation automated at industrial scale, vulnerable code generated with absurd confidence, and the false sense of security when "the AI reviewed it". AI is not ethical. It is not moral. It is not conscious. It is statistics on a GPU.

We also discuss the practical impact on software development and application security: devs using copilots without validating the output, teams trusting generated answers as if they were revealed truth, attacks boosted by generative models, turbocharged social engineering, ever more convincing deepfakes. AI amplifies the best and the worst of us.

In the end, the question is not whether AI is dangerous. Every powerful technology is. The question is: are we using it critically, or with intellectual laziness? Because when the machine errs, it errs at scale. And when a human delegates thinking, he outsources responsibility. And responsibility, my friend, cannot be deployed automatically.

Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support. Supported by: Nova8, Snyk, Conviso, Gold Security, Digitalwolk, and PurpleBird Security.

Lawyerist Podcast
Ethics, Judgment, and Trust in a World of Legal AI, with Damien Riehl

Lawyerist Podcast

Feb 12, 2026 · 41:39


Lawyers have always relied on tools — but AI is different. It doesn't just assist with tasks; it makes decisions, applies judgment, and shapes outcomes. In episode #602 of the Lawyerist Podcast, Stephanie Everett talks with Damien Riehl about what ethical responsibility looks like when AI starts doing legal work on its own. Their conversation examines how AI systems embed values, why verification matters more than transparency, and how lawyers can responsibly use tools they don't fully understand. They also explore what legal expertise looks like in an AI-powered future, and why intuition, trust, and integrity may matter more than ever as machines take over the "widgets" of legal work.

Listen to our other episodes on ethics and responsibility in AI:
EP. 582 Deepfakes, Data, and Duty: Navigating AI Ethics in Law, with Merisa Bowers (Apple | Spotify | LTN)
EP. 543 What Lawyers Need to Know About the Ethics of Using AI, with Hilary Gerzhoy (Apple | Spotify | LTN)

Have thoughts about today's episode? Join the conversation on LinkedIn, Facebook, Instagram, and X! If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you. Access more resources from Lawyerist at lawyerist.com.

Chapters / Timestamps:
00:00 – Introduction
05:55 – Meet Damien Riehl
08:10 – Why AI Is a Different Kind of Legal Tool
11:05 – When AI Starts Doing Legal Work
14:30 – Ethics, Values, and AI Judgment
18:45 – Foundation Models vs. Legal-Specific AI
21:15 – The "Duck Test" and Trusting AI Output
24:45 – Trust but Verify: Reviewing AI Work
28:40 – What Lawyers Are Underestimating About AI
31:10 – What Still Requires Human Judgment
34:30 – Intuition, Trust, and Integrity in Law
37:40 – What This Means for Billing and the Future
40:40 – Closing Thoughts

BarCode
Jim West

BarCode

Feb 12, 2026 · 59:07


The future of cybersecurity is not coming. It is already here. AI is writing code faster than humans. Deepfakes can impersonate your boss. Quantum computers threaten the encryption that protects everything we trust. And most organizations are still playing catch-up.

In this episode of BarCode, Chris sits down with Jim West, a 30-plus-year cybersecurity veteran who has seen every wave of the industry. From building machines in the early days of dial-up to advising on quantum risk and AI-driven defense, Jim breaks down what is hype, what is real, and what is about to change everything. This is not theory. This is what comes next. If you want to understand how to think like an attacker, adapt like a defender, and prepare for a world where machines outpace humans, this conversation is your briefing. Welcome to the future of security.

00:00 Introduction to Jim West and His Expertise
04:59 Jim's Origin Story and Early Career
10:36 The Importance of Certifications in Cybersecurity
17:16 The Rise of Quantum Computing in Cybersecurity
27:05 Preparing for Quantum Day and Its Implications
28:28 Exploring Quantum Computing and Qiskit
28:58 AI's Role in Cybersecurity Threats
30:45 The Evolution of Deepfake Technology
31:45 Quantum Computing as a Service
33:09 The Intersection of AI and Quantum Computing
34:34 Future Scenarios: AI and Quantum in Cyber Warfare
38:39 AI's Impact on Society and Human Interaction
39:24 The Creative Potential of AI
46:41 Balancing AI and Human Interaction
52:46 Unique Bar Experiences and Future Ventures

[Facebook – Jim West Author] – https://www.facebook.com/jimwestauthor – Official author page where Jim West shares updates about his books, cybersecurity insights, speaking engagements, and creative projects.
[LinkedIn – Jim West] – https://www.linkedin.com/in/jimwest1 – Professional networking profile highlighting his cybersecurity leadership, certifications, conference speaking, mentoring, and industry experience.
[Official Author Site – Jim West] – https://jimwestauthor.com/ – Personal website featuring his published works, cybersecurity thought leadership, creative projects, and links to his social platforms.
[BookAuthority – 100 Best Cybersecurity Books of All Time] – https://bookauthority.org – A curated book-recommendation platform that recognized Jim West's work among the "100 Best Cybersecurity Books of All Time," reflecting industry impact and credibility.
[ISACA (Information Systems Audit and Control Association)] – https://www.isaca.org – A global professional association focused on IT governance, risk management, and cybersecurity, where Jim West has spoken at multiple regional and international events.
[GRC (Governance, Risk, and Compliance) Conference – San Diego] – https://www.grcconference.com – A cybersecurity conference centered on governance, risk management, and compliance practices, referenced in relation to industry speaking engagements.
[EC-Council (International Council of E-Commerce Consultants)] – https://www.eccouncil.org – A cybersecurity certification organization known for programs such as CEH (Certified Ethical Hacker) and events like Hacker Halted, where Jim West has participated and spoken.

Boys Club
Ep: 223 - The Messy Olympics, Gigi Claudid our AI agent, Tatum Hunter on Internet Culture, Mashal Waqar and Artem Brazhnikov from Octant on sustainable funding models, Nick Devor of Barrons on Kalshi, Polymarket and the Super Bowl

Boys Club

Feb 12, 2026 · 91:19


00:00 Introduction to Boys Club Live
00:44 The viral Vogue clip
03:46 Market Talk
07:13 Shoutout to Octant
11:29 AI Etiquette and Social Contracts
15:19 Gigi Claudid: Training our AI agent
20:49 Norwegian Athlete's Emotional Confession
23:34 Unpacking Relationship Drama
24:44 Messy Olympics: Scandals in Sports
25:32 Partner Shoutout: Anchorage Digital
27:27 Podcast Recommendation: The Rest is History
29:40 Interview with Tatum Hunter: Internet Culture Insights
30:06 Deepfakes and AI Ethics
38:43 Personal Surveillance and Trust Issues
48:52 TikTok's Mental Health Rabbit Hole
52:16 Shill Minute: Best Cookie in Crown Heights
53:08 Introduction to Octant: Innovating Funding Models
54:52 Funding Ethereum: Grants and Sustainability
56:50 Octant V2: Revolutionizing Community Funding
58:43 Sustainable Growth and the Future of Ethereum
01:05:56 The Intersection of Venture Capital and Sustainable Funding
01:11:25 Guest Nick Devor of Barrons on Prediction Markets
01:12:50 Gambling and Insider Trading in Prediction Markets
01:23:01 CFTC Challenges and the Future of Regulation
01:26:11 Free Groceries: A Marketing Strategy
01:29:50 Conclusion and Final Thoughts

Radio Boston
How tech companies in Mass. are trying to guard against deepfakes

Radio Boston

Feb 12, 2026 · 4:23


Now that artificial intelligence can make very convincing copies of people's voices, technology companies are emerging to help detect AI-created media and fraud.

The Laundry
Re-Spin: Can fincrime professionals fight romance fraud?

The Laundry

Feb 12, 2026 · 38:39


To mark Valentine's Day and the season of romance, we're re-spinning an incredibly important episode. More than £106m was lost to romance fraud in the UK in 2025, a 9% increase on the year before. Deepfake technology is also making these scams far more sophisticated than before.

Our expert host, Marit Rødevand, is joined by Anna Rowe, Co-Founder of LoveSaid, and Simon Miller, Director of Policy and Communications at Stop Scams UK, to ask: can fincrime professionals fight romance fraud? The panel discuss the biggest challenges in tackling romance fraud, the solutions needed from the compliance industry, and the importance of post-fraud support for victims. [Originally broadcast in March 2024]

Producer: Matthew Dunne-Miles
Engineers: Dominic Delargy, Nicholas Thon
____________________________________
The Laundry podcast explores the complex world of financial crime, anti-money laundering (AML), compliance, sanctions, and global financial regulation. Hosted by Marit Rødevand, Fredrik Riiser, and Robin Lycka, each episode features in-depth conversations with leading experts from banking, fintech, regulatory bodies, and investigative journalism. Tune in as we dissect headline news, unpack regulatory trends, and examine the real-world consequences of non-compliance — all through a uniquely compliance-focused lens.

The Laundry is proudly produced by Strise. Get in touch at: laundry@strise.ai. Subscribe to our newsletter, Fresh Laundry, here. Hosted on Acast. See acast.com/privacy for more information.

Dachthekenduett
EU, Energiewende und Epstein-Files: Das passiert gerade wirklich.

Dachthekenduett

Feb 12, 2026 · 92:07


In episode 204 of the Dachthekenduett, Martin Moczarski and Sascha Koll talk about the Epstein files as a means of leverage, the energy-transition U-turn (oil/gas/Schwedt), the AfD/public-broadcaster backfire (Miosga–Chrupalla), the scrapped 2.5-billion data center, the logic behind new levies and taxes, as well as the EU reporting app, age verification, and "gamer" propaganda.

Libertarian regulars' tables: https://die-libertaeren.de/events/kategorie/libertaerer-stammtisch/summary/

Would you like to support our work? Donate tools for the libertarian forge of happiness. PayPal (credit cards accepted) / bank transfer / Bitcoin / Monero:

Financial Sense(R) Newshour
2026: The Year Deepfakes Hacked Our Brains (Preview)

Financial Sense(R) Newshour

Feb 11, 2026 · 1:17


Feb 10, 2026 – This year marks a turning point, as deepfakes reach new heights in realism and influence. FS Insider interviews Dr. Siwei Lyu, director of the Institute for AI and Data Sciences, about the rapid evolution and growing dangers of deepfakes...

Le retour de Mario Dumont
«Deepfakes» chez Costco et Walmart: voici comment déterminer si une vidéo est fausse

Le retour de Mario Dumont

Feb 11, 2026 · 6:43


Fraudsters are using artificial intelligence to create fake videos featuring supposed Costco and Walmart employees in order to trap internet users on social media. Interview with Olivier Blais, co-founder and chief technology officer at Moov AI. You can also watch this discussion on video at https://www.qub.ca/videos, by subscribing to QUB télé: https://www.tvaplus.ca/qub, or on the QUB YouTube channel: https://www.youtube.com/@qub_radio. For information about the use of your personal data: https://omnystudio.com/policies/listener/fr

The Bid Picture - Cybersecurity & Intelligence Analysis
456. The Brief - February 10, 2026

The Bid Picture - Cybersecurity & Intelligence Analysis

Feb 10, 2026 · 18:09


Check out host Bidemi Ologunde's new show: The Work Ethic Podcast, available on Spotify and Apple Podcasts. Email: bidemiologunde@gmail.com

In this episode, host Bidemi Ologunde breaks down the week of Feb 2–8, 2026, when an ancient idea, the Olympic Truce, collided with modern reality: AI-built platforms leaking identities, satellites and cyber defenses becoming battlefield "terrain," sanctions escalating into lawfare, and ceasefire language clashing with ongoing violence. What happens when "trust" becomes the scarcest resource online? Who controls connectivity in war zones: states or private networks? When do sanctions stop being diplomacy and start reshaping international justice? And in an era of drones, deepfakes, and cyberattacks, what does a "truce" even mean?

On the Bid Picture Podcast, I talk about big ideas, and Lembrih is one of them. Born from Ghanaian roots, Lembrih is building an ethical marketplace for Black and African artisans: makers of heritage-rich products often overlooked online. The vision is simple: shop consciously, empower communities, and share the stories behind the craft. Lembrih is live on Kickstarter now, and your pledge helps build the platform. Visit lembrih.com, or search "Lembrih" on Kickstarter.

Support the show

Cloud Security Podcast
How Attackers Bypass AI Guardrails with Natural Language

Cloud Security Podcast

Feb 10, 2026 · 46:36


In the world of Generative AI, natural language has become the new executable. Attackers no longer need complex code to breach your systems; sometimes, asking for a "poem" is enough to steal your passwords.

In this episode, Eduardo Garcia (Global Head of Cloud Security Architecture at Check Point) joins Ashish to explain the paradigm shift in AI security. He shares his experience building AI-powered fraud detection systems and why traditional security controls fail against intent-based attacks like prompt injection and data poisoning. We dive deep into the reality of Shadow AI, where employees unknowingly train public models with sensitive corporate data, and the sophisticated world of deepfakes, where attackers can bypass biometric security using AI-generated images unless you're tracking micro-movements of the eye.

Guest socials: Eduardo's LinkedIn
Podcast Twitter: @CloudSecPod

If you want to watch videos of this live-streamed episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast on YouTube, and the Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast, AI Security Podcast.

(00:00) Introduction
(01:55) Who is Eduardo Garcia? (Check Point)
(03:00) Defining Security for GenAI: The Focus on Prompts
(05:20) Why Natural Language is the New Executable
(08:50) Multilingual Attacks: Bypassing Filters with Mandarin
(12:00) Shift Left vs. Shift Right: The 70/30 Rule for AI Security
(15:30) The "Poem Hack": Stealing Passwords with Creative Prompts
(21:00) Shadow AI: The "HR Spreadsheet" Leak Scenario
(25:40) Security vs. Compliance in a Blurring World
(28:00) The Conflict: "My Budget Doesn't Include Security"
(34:00) The 5 V's of AI Data: Volume, Veracity, Velocity
(40:00) Deepfakes & Biometrics: Detecting Micro-Movements
(43:40) Fun Questions: Soccer, Family, and Honduran Tacos

Gospel Tech
Parenting AI: Deepfakes and S**tortion (Part 2)

Gospel Tech

Feb 10, 2026 · 23:27


CONTENT WARNING: We're diving into the tough but important topic of parenting in an AI-shaped world, and while younger kids probably shouldn't listen in, this could be a great conversation to share with your middle or high schoolers so it feels more like learning together than an interrogation. My hope is that this equips you to parent well as we raise kids in a world shaped by AI. Today we continue our conversation on parenting AI with a look at deepfakes and s**tortion. These are big topics that have an outsized impact on children. We need to know what they are, how they happen, and what to do if our child is targeted. The goal is to be a present and informed parent so that your children have the support they need to grow.

Show Notes: https://bit.ly/4qfOCiG

Moneycontrol Podcast
5031: India's new rules to regulate AI deepfakes; River gears up for $80 million round; Layoffs hit India GCCs: AUMOVIO cuts 1,000

Moneycontrol Podcast

Feb 10, 2026 · 8:17


In today's Tech3 from Moneycontrol, we track a muted start to Fractal Analytics' IPO and what it signals for AI listings in India. We also break down India's new rules to regulate AI deepfakes, River's talks to raise $80 million in fresh funding, job cuts at automotive tech firm AUMOVIO impacting India GCCs, and why college campuses are fast becoming launchpads for India's next generation of esports professionals.

Moneycontrol Podcast
5032: Textiles tariff unease, GCC layoffs bite & vigilance over AI deepfakes

Moneycontrol Podcast

Feb 10, 2026 · 3:59


In this edition of Moneycontrol Editor's Picks, our focus is the India-US trade deal fact sheet and its fine print. We analyse what the lower tariffs on Bangladesh mean for India's textile sector and whether addressing non-tariff barriers could mean compromise on GM imports. Also, layoffs at Germany's Aumovio GCC in India have triggered fears about the GCC space, which has so far shown robust growth. From policy updates to corporate fundraising and gold ETFs, find the latest updates inside.

The BreakPoint Podcast
Deepfakes, Identity, and the Collapse of Reality

The BreakPoint Podcast

Feb 9, 2026 · 4:39


The thinning of the soul needs the robustness of Truth.  __________ For additional resources, or to download and share this commentary, visit breakpoint.org.

Canaltech Podcast
Fraudes nas bets: deepfakes, golpes e os riscos para jogadores no Brasil

Canaltech Podcast

Feb 9, 2026 · 14:15


With the popularization of online betting in Brazil, scams, identity fraud, and the use of deepfakes to deceive bettors have also grown. In today's episode of the Canaltech Podcast, reporter Jaqueline Sousa talks with Krist Galloway, head of iGaming at Sumsub, about the main risks in this market. During the interview, he explains how criminals use technology to create fake apps, misleading ads featuring celebrities, and money-laundering schemes. The executive also details how biometrics, artificial intelligence, and transaction analysis help identify suspicious accounts. The episode further covers the role of regulation, the challenges posed by illegal sites, the fight against gambling addiction, and the impact of technologies such as Pix on this landscape.

Also in this episode: without reaching into your pocket: phones may soon be controlled by voice alone; SpaceX may launch a phone with a direct Starlink connection; and scientists create a chip thinner than a strand of hair.

This podcast was scripted and presented by Fernada Santos, with reporting by Marcelo Fischer, Nathan Vieira, and Raphael Giannotti, under the coordination of Anaísa Catucci. The soundtrack is by Guilherme Zomer, editing by Leandro Gomes, and the cover art is by Erick Teixeira. See omnystudio.com/listener for privacy information.

SWR3 Gag des Tages | SWR3
Polizei in Rheinland-Pfalz hat einen DeepFakeDetector

SWR3 Gag des Tages | SWR3

Feb 9, 2026 · 1:39


Police work now also includes tracking down so-called deepfakes. Seriously? Yes!

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Teaser For "Zero-Trust" Reality: Surviving the Deepfake Apocalypse & Identity Collapse

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Feb 8, 2026 · 2:06


Full Audio at https://podcasts.apple.com/us/podcast/the-zero-trust-reality-surviving-the-deepfake/id1684415169?i=1000748816725

Media Storm
News Watch pt.2: Trump's 'new' world order, and who is behind Grok AI deepfakes?

Media Storm

Feb 6, 2026 · 33:46


Like this episode? Support Media Storm on Patreon!

In January alone, Donald Trump abducted the Venezuelan President, listed himself as President of Venezuela on Wikipedia, almost launched another tariff war after demanding Greenland, directly threatened Colombia, Mexico and Cuba, told Honduran vote counters there'd be "hell to pay" if his favourite candidate didn't win, and dropped bombs on Caribbean boats that killed more than a hundred people. Yet at the World Economic Forum in Davos the same month, he launched his 'Board of Peace'. Make it make sense! But is Trump's new world order really that new? In a postwar world of covert regime change, privatised ownership of natural resources, and sanctions designed to strangle uncooperative economies, was the international rules-based order just a lie all along?

Plus: headlines told us that "non-consensual sexualised deepfakes were created by the AI chatbot Grok" and that "Grok AI made sexualised images of children". But who gave Grok the prompt to do it? Missing from the headlines, as is so often the case with stories about sexual abuse against women and girls, is MEN. We discuss why no one can seem to name the problem, so much so that our government used a SNAKE to represent male violence in a recent advert (end snake violence against women and girls!). And we end with our new segment: Holding Onto Hope.

The episode is hosted and produced by Mathilda Mallinson (@mathildamall) and Helena Wadia (@helenawadia). The music is by @soundofsamfire. Follow us on Instagram, Bluesky, and TikTok.

Learn more about your ad choices. Visit podcastchoices.com/adchoices

SWR3 Topthema
Deepfake Detektor - KI mit KI bekämpfen

SWR3 Topthema

Feb 6, 2026 · 2:39


Can AI really be fought with AI? Experts disagree.

Decoder with Nilay Patel
Reality is losing the deepfake war

Decoder with Nilay Patel

Feb 5, 2026 · 48:55


Today, we're going to talk about reality, and whether we can label photos and videos to protect our shared understanding of the world around us. To do this, I sat down with Verge reporter Jess Weatherbed, who covers creative tools for us — a space that's been totally upended by generative AI. We've been talking for years on The Verge about how the photos and videos taken by our phones are getting more and more processed. Here in 2026, we're in the middle of a full-on reality crisis, as fake and manipulated ultra-believable images and videos flood onto social platforms at scale. So Jess and I discussed the limitations of AI labeling standards like C2PA, and why social media execs like Instagram boss Adam Mosseri are now sounding the alarm.

Links:
This system can sort real pictures from AI fakes — why aren't we using it? | The Verge
You can't trust your eyes to tell you what's real, says Instagram | The Verge
Instagram's boss is missing the point about AI on the platform | The Verge
Sora is showing us how broken deepfake detection is | The Verge
Reality still matters | The Verge
No one's ready for this | The Verge
What is a photo, @WhiteHouse edition | The Verge
Google Gemini is getting better at identifying AI fakes | The Verge
Let's compare Apple, Google & Samsung's definitions of 'photo' | The Verge
The Pixel 8 and the what-is-a-photo apocalypse | The Verge

Subscribe to The Verge to access the ad-free version of Decoder!

Credits: Decoder is a production of The Verge and part of the Vox Media Podcast Network. Decoder is produced by Kate Cox and Nick Statt and edited by Ursa Wright. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder.

Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Right-Hand Roadmap
#58: Stop Wasting Money on AI Tools: Why Operators Need Skepticism in Their Tech Stack

The Right-Hand Roadmap

Feb 5, 2026 · 11:20


The Operator's Guide to AI Tool Selection (Before Your CEO Buys Another One)

Your entrepreneur is excited about another AI tool, but before you add it to your tech stack, you need to know this: MIT research shows that 95% of AI investments have produced zero returns at the company level. The Salesforce disaster is the perfect case study: they laid off 4,000 employees to pivot to AI (after promising it wouldn't impact jobs), then had to pivot back when the large language models proved unreliable and experienced drift. As operators and Seconds-In-Command, you're fielding these AI tool requests constantly, but most SMBs aren't ready for agentic AI or even vibe-coded applications, which pose serious security risks (60% of businesses shut down after a cyber attack). In this episode, host Megan Long covers some basic frameworks and points of skepticism to be aware of before adopting any AI tool, agentic or vibe-coded. Beyond ROI concerns, there are real ethical considerations. Being intentional about AI tool selection isn't just about avoiding wasted budget; it's about building efficiencies responsibly without compromising security or causing harm.

You'll hear all about:
00:29 - Introduction: The plethora of AI tools promising the world and how operators are fielding these from excited CEOs
00:59 - Origin story: Second First Mastermind quarterly cohort meetings and how vendor selection became a hot topic
01:49 - The 6 critical questions to ask before purchasing any software or tool (pull up your notes app!)
02:57 - The overwhelming answer: Yes, we've all wasted significant time and money on failed software purchases
03:14 - The AI reality check: MIT research shows 95% of AI investments have produced zero returns
03:36 - The nuance: Individuals find personal efficiencies, but company-level P&L shows no benefits
03:45 - Surprising finding: Most AI investments go to Sales & Marketing instead of Operations
03:59 - Salesforce case study: Laid off 4,000 employees for AI, then had to pivot back when it failed
04:40 - Vibe coding concerns: Security and compliance risks when beginners code their own apps
05:18 - The scary stat: 60% of businesses shut down following a cyber attack
05:43 - What is agentic AI and why it sounds so promising (systems that act autonomously on your behalf)
06:14 - Why most SMBs aren't ready: Clean your house before inviting the AI guest over
06:52 - Four guidelines for selecting AI tools: Start low-cost, tie to value creation, plan to scale, use KYA framework
08:11 - The Know Your Agent (KYA) framework: Capability, behaviors, decision tracing, abuse prevention, sandboxes, and human overrides
09:15 - Soapbox moment: Using AI ethically and understanding why people are anti-AI
09:50 - The creative industry impact: Animation directors, musicians, and the elimination of royalties
10:27 - Other ethical concerns: Deepfakes, surveillance, misinformation, environmental harm in rural communities

Rate, review & follow on Apple Podcasts: Click Here to Listen! Or watch on YouTube. If you haven't already done so, follow the podcast to make sure you never miss a value-packed episode.

Links mentioned in the episode:
Second First Membership
Second First One-on-One Coaching
Second First on Instagram
Second First on LinkedIn
Megan Long on LinkedIn

The Future of Work With Jacob Morgan
Deepfake Workers, Robo-Bosses, and the Trust Breakdown Inside Modern Companies

The Future of Work With Jacob Morgan

Play Episode Listen Later Feb 4, 2026 27:09


Feb 4, 2026: In this episode of Future-Ready Today, I explore a fundamental shift in the workplace: the transition from a task economy to a trust economy. As artificial intelligence moves from "future tech" to "daily tool," the basic mechanics of how we hire, manage, and let go of people are under intense pressure. We aren't just dealing with new software; we're dealing with a breakdown in identity and accountability. I dive deep into five stories shaping this week's headlines: The Deepfake Candidate: Why identity verification is becoming the most critical new skill in HR. California's Algorithmic Guardrails: The new legislative push to ensure humans—not code—remain responsible for firing decisions. The "Job Apocalypse" Debate: Analyzing Ben Horowitz's take on why new work emerges even as old categories vanish. The $818 Billion Admin Tax: How poorly designed organizations are drowning in emails, and why AI might be the only way out. The AI Layoff Script: Why "technology made us do it" is becoming the new corporate excuse, and how leaders can maintain credibility during transitions. The Bottom Line: The future of work won't be won by the companies with the most AI. It will be won by the companies that use technology to remove "administrative garbage" while doubling down on human accountability.

The Barbell Mamas Podcast | Pregnancy, Postpartum, Pelvic Health
How Social Media Shapes Women's Health Choices

The Barbell Mamas Podcast | Pregnancy, Postpartum, Pelvic Health

Play Episode Listen Later Feb 4, 2026 54:50 Transcription Available


Ever feel like every scroll brings a new rule for your body? We sit down with Dr. Emily Fender, a health communication scientist whose research tracks how women's health messages spread across TikTok, Instagram, and YouTube—and why the loudest claims aren't always the most useful. Together, we break down a simple lens you can use anywhere online: threat versus efficacy. Are you being scared into attention, or actually given steps and resources to act? That distinction shows up in everything from contraception myths to perinatal mental health, where severity gets clicks but supportive guidance often goes missing. We dig into cycle syncing and the difference between evidence, overreach, and personalized training. You'll hear why rigid phase-based rules can backfire, creating shame and cost barriers, and how athletes worry these narratives label women as fragile for half the month. We zoom out to the bigger system: incentives that reward certainty, influencer marketing that sells protocols, and even expertise drift when clinicians post outside their lane. Then we get practical about risk communication—turning relative risk into absolute numbers, spotting absolute statements, and demanding receipts when someone says “studies show.” We also scout the horizon with AI. Some tools can surface studies and highlight exact evidence, but they can't replace synthesis or context. Deepfakes and confident summaries raise the bar for skepticism, so we share a quick checklist to stress-test posts before you share or act: scope, sources, statistics, and a simple “does this make sense” pass. Use social media for community, discovery, and momentum—then ground your choices in evidence, your values, and your lived experience. If you've been craving fewer rules and more clarity, this conversation offers a calmer, smarter way to navigate women's health online.
Subscribe, share with a friend who lifts, and leave a review to tell us the one claim you want decoded next.
___________________________________________________________________________
Don't miss out on any of the TEA coming out of the Barbell Mamas by subscribing to our newsletter. You can also follow us on Instagram and YouTube for all the up-to-date information you need about pelvic health and female athletes. Interested in our programs? Check us out here!

Jamie and Stoney
2/4/26 - Happy Skubal Arbitration Day! Is there another move coming for the Pistons? Have you been fooled by AI deep fakes?

Jamie and Stoney

Play Episode Listen Later Feb 4, 2026 174:44


2/4/26 - Happy Skubal Arbitration Day! Is there another move coming for the Pistons? Have you been fooled by AI deep fakes?

The Community's Conversation
Cybersecurity in the Age of AI

The Community's Conversation

Play Episode Listen Later Feb 4, 2026 53:18


Artificial intelligence is transforming cybersecurity at unprecedented speed. From state government to public transit to global business, leaders are confronting new risks while deploying new tools to defend critical systems. This forum examines how AI is changing cyber threats, what organizations can do to stay ahead, and why cybersecurity has become a leadership issue for every sector in Central Ohio.
Featuring:
Kirk Herath, Cybersecurity Strategic Advisor to Governor Mike DeWine and Chair, CyberOhio
Sophia Mohr, Chief Innovation and Technology Officer, COTA
Michael Wyatt, Global Identity Offering Leader, Cyber and Strategic Risk, Deloitte
The host is Padma Sastry, Adjunct Faculty at The Ohio State University College of Engineering. This forum was sponsored by COTA and Deloitte. The presenting sponsor of the CMC livestream is The Center for Human Kindness at the Columbus Foundation. CMC's livestream partner is The Columbus Dispatch. This forum was also supported by Downtown Columbus Inc. and The National Veterans Memorial and Museum. If you would like to keep exploring this week's forum topic, our partners at The Columbus Metropolitan Library recommend reading "FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI Generated Deceptions," by Perry Carpenter (2025). This forum was recorded before a live audience at The National Veterans Memorial and Museum in Columbus, Ohio, on February 4, 2026.

RESUMIDO
#350 — Authenticity became survival / AI chaos has arrived / Grok generates porn en masse

RESUMIDO

Play Episode Listen Later Feb 3, 2026 62:20


Presented by Bruno Natal.--RESUMIDO store (t-shirts, mugs, jackets, tote bags): https://www.studiogeek.com.br/resumido--Become a subscriber! https://resumido.cc/assinatura--Google is pushing the web toward ready-made answers with no links, and Instagram warns that authenticity is the future. Deepfakes distort evidence in real crimes, and AI-generated songs go viral without credited authors while platforms try to contain the damage. X is flooded with sexualized AI-generated images, and creators are trying to work out why their content disappears. How can you tell what is still real? In RESUMIDO #350: AI chaos has materialized, AI-first search could kill links, authenticity is the new gold on social networks, elderly Brazilians trapped online, AI songs go viral with no one knowing who the author is, ChatGPT follows Facebook's path, Grok generates porn en masse, and much more!--Listen and check out all the links discussed in the episode: https://resumido.cc/podcasts/o-caos-da-ia-se-materializou-autenticidade-e-o-novo-ouro-grok-gera-porno-em-massa/

AP Audio Stories
Paris prosecutors raid X offices as part of investigation into child abuse images and deepfakes

AP Audio Stories

Play Episode Listen Later Feb 3, 2026 0:45


AP correspondent Karen Chammas reports Paris prosecutors raid X offices as part of investigation into child abuse images and deepfakes.

Wintrust Business Lunch
Noon Business Lunch 2/3/26: Job search tips, deepfake risk, Green Attic Roofing, Momento

Wintrust Business Lunch

Play Episode Listen Later Feb 3, 2026


Segment 1: Tom Gimbel, job expert and founder of LaSalle Network, joins John to tell us the best way to apply for a job in 2026. What do you need to do to enhance your accomplishments and help you stand out? Segment 2:  Phillippe Weiss, President, Seyfarth at Work, joins John Williams to talk about the rise in […]

Ninja News Japan
Human Intimate Relations

Ninja News Japan

Play Episode Listen Later Feb 3, 2026 28:55


Lawyers weigh in and we have premium voice acting. Send us a voice message https://www.speakpipe.com/ChunkMcBeefChest Linktree https://linktr.ee/chunkmcbeefchest

WWL First News with Tommy Tucker
Hour 2: Cracking down on deepfakes and breaking down the budget

WWL First News with Tommy Tucker

Play Episode Listen Later Feb 3, 2026 18:30


Hour 2: Cracking down on deepfakes and breaking down the budget. Tommy Tucker takes on the day's breaking headlines, plus weather, sports, traffic and more.

WWL First News with Tommy Tucker
This lawmaker wants to crack down on sexually explicit deepfakes of minors

WWL First News with Tommy Tucker

Play Episode Listen Later Feb 3, 2026 10:04


One of the downsides of AI is sexually explicit deepfakes, something that happened recently at a Louisiana school. State Representative Mike Bayham has prefiled a bill that would add that to what's considered child sex abuse materials. We talk with him about his bill.

Freedomain with Stefan Molyneux
6285 PLEASE DON'T DISAPPEAR! Twitter/X Space

Freedomain with Stefan Molyneux

Play Episode Listen Later Feb 2, 2026 109:32


In this episode of Friday Night Live on 30 January 2026, Stefan Molyneux looks at the Epstein document release and how deepfake tech affects what people accept as real. He talks with a caller about staying skeptical amid all the digital noise, building real connections, and owning up to one's choices. Molyneux pushes the caller to deal with the paralysis tied to family issues, stressing that sharp thinking is key to cutting through media tricks.
GET FREEDOMAIN MERCH! https://shop.freedomain.com/
SUBSCRIBE TO ME ON X! https://x.com/StefanMolyneux
Follow me on Youtube! https://www.youtube.com/@freedomain1
GET MY NEW BOOK 'PEACEFUL PARENTING', THE INTERACTIVE PEACEFUL PARENTING AI, AND THE FULL AUDIOBOOK! https://peacefulparenting.com/
Join the PREMIUM philosophy community on the web for free! Subscribers get 12 HOURS on the "Truth About the French Revolution," multiple interactive multi-lingual philosophy AIs trained on thousands of hours of my material - as well as AIs for Real-Time Relationships, Bitcoin, Peaceful Parenting, and Call-In Shows! You also receive private livestreams, HUNDREDS of exclusive premium shows, early release podcasts, the 22 Part History of Philosophers series and much more! See you soon!
https://freedomain.locals.com/support/promo/UPB2025

The Fit Mess
How Deep Fakes Are Justifying Real Violence

The Fit Mess

Play Episode Listen Later Feb 2, 2026 23:30


AI-generated deep fakes are being used to justify state violence and manipulate public opinion in real time. We're breaking down what's happening in Minneapolis—where federal agents are using altered images and AI-manipulated video to paint victims as threats, criminals, or weak. One woman shot in the face. One male nurse killed while filming. One civil rights attorney's tears added in post. All of it designed to shift the narrative, flood the zone with confusion, and make you stop trusting anything.
What we cover:
Why deep fakes are more dangerous than misinformation — They don't just lie, they manufacture emotion
How the "flood the zone" strategy works — Overwhelm people with so much fake content they give up on truth
What happens when your mom can't tell real from fake — The collapse of shared reality isn't theoretical anymore
Why this breaks institutional trust forever — Once credibility is destroyed, it doesn't come back
How Russia's playbook became America's playbook — PsyOps tactics are now domestic policy
What to do when you can't believe your own eyes — Practical skepticism in an age of slop
Chapters:
00:00 — Intro: The Deep Fake Problem in Minneapolis
02:37 — Why Immigrants Are Being Targeted With Fake Narratives
04:55 — The Renee Goode Shooting: Real Video vs. AI-Altered Version
07:18 — Alex Pretti Killed While Filming ICE Agents
09:44 — Nikita Armstrong's Tears Were Added By AI
11:45 — The Putin Playbook: Flood the Zone With Confusion
14:13 — How Deep Fakes Break Institutional Trust Forever
17:37 — This Isn't Politics—It's Basic Human Decency
19:26 — Trump's 35% Approval Rating and What It Means
22:03 — What You Can Do When You Can't Trust Your Eyes
Safety/Disclaimer Note: This episode contains discussion of state violence, racial profiling, and police shootings.
We approach these topics with the gravity they deserve while analyzing the role of AI manipulation in shaping public perception. The BroBots Podcast is for people who want to understand how AI, health tech, and modern culture actually affect real humans—without the hype, without the guru bullshit, just two guys stress-testing reality.
MORE FROM BROBOTS:
Get the Newsletter!
Connect with us on Threads, Twitter, Instagram, Facebook, and Tiktok
Subscribe to BROBOTS on Youtube
Join our community in the BROBOTS Facebook group

Top-Thema mit Vokabeln | Deutsch lernen | Deutsche Welle

How smart are our cows? – In Austria, a cow named Veronika is showing remarkable behavior: she uses tools, surprising researchers. The discovery could change a great deal. Are some animals perhaps smarter than we think?

The Deep Dive Radio Show and Nick's Nerd News
We Are Never Getting Rid Of Deepfakes

The Deep Dive Radio Show and Nick's Nerd News

Play Episode Listen Later Jan 30, 2026 7:20


We Are Never Getting Rid Of Deepfakes by Nick Espinosa, Chief Security Fanatic

Hysteria
ICE Fission

Hysteria

Play Episode Listen Later Jan 29, 2026 91:01


Erin and Alyssa dig into the latest news from the Twin Cities— the senseless tragedy of Alex Pretti's death, and the inspiring resolve of the Minnesotans who continue to stand up for each other. With Greg Bovino's “demotion,” are things about to take a turn for the better, or is this cynical political window-dressing from Team Trump? Then, Melania Trump's movie premiere at the White House's janky new makeshift room, and Paris Hilton's fight on Capitol Hill to ban AI-generated deep fake porn. And of course, we wrap up with Sani-Petty. Alex Pretti's Friends and Family Denounce ‘Sickening Lies' About His Life (NYT 1/25)Republican calls are growing for a deeper investigation into fatal Minneapolis shooting of Alex Pretti (PBS 1/26)Scoop: Stephen Miller behind misleading claim that Alex Pretti wanted to "massacre" agents (AXIOS 1/27)Trump Defends Noem as She Faces Bipartisan Criticism (WSJ 1/27)Democrats Vow Not to Fund ICE After Shooting, Imperiling Spending Deal (NYT 1/24)Melania's $75 Million Movie Premiered in a Makeshift Theater (The Daily Beast 1/24)‘They sold my pain for clicks': Paris Hilton urges lawmakers to act on nonconsensual deepfakes (The 19th 1/22) Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Tech Won't Save Us
Elon Musk Profits Off Non-Consensual Deepfakes w/ Kat Tenbarge

Tech Won't Save Us

Play Episode Listen Later Jan 29, 2026 66:31


Paris Marx is joined by Kat Tenbarge to discuss the explosion of abusive deepfakes on X, including how Elon Musk is profiting from the sexual exploitation of women and children while his followers use Grok to engage in gender-based violence. Kat Tenbarge is an independent journalist who writes Spitfire News. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson. Also mentioned in this episode: Kat has been thoroughly covering the Grok and XAI deepfake and sexual abuse story Paris wrote about why the Grok scandal shows we need more comprehensive tech regulation The deepfake documentary mentioned was called Another Body You can see the result of Megan Thee Stallion's defamation lawsuit here Grok was blocked in Indonesia and Malaysia in response to its generation of explicit images

Start Making Sense
Elon Musk Profits Off Non-Consensual Deepfakes w/ Kat Tenbarge | Tech Won't Save Us

Start Making Sense

Play Episode Listen Later Jan 29, 2026 66:31 Transcription Available


Paris Marx is joined by Kat Tenbarge to discuss the explosion of abusive deepfakes on X, including how Elon Musk is profiting from the sexual exploitation of women and children while his followers use Grok to engage in gender-based violence. Kat Tenbarge is an independent journalist who writes Spitfire News. Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy

En Casa de Herrero
Report: The torture of the 'deepfake' and Grok, when AI is used to destroy

En Casa de Herrero

Play Episode Listen Later Jan 28, 2026 11:19


Luis Herrero and Àngels Hernández analyze the damage caused by deepfakes and Grok's AI.

Jeff & Jenn Podcasts
Good Morning and E News: Paris Hilton is crusading against deepfakes...

Jeff & Jenn Podcasts

Play Episode Listen Later Jan 26, 2026 25:32


Good Morning and E News: Paris Hilton is crusading against deepfakes, Brad Pitt's biggest movie, Harry Styles and the Pope, and more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Science Friday
Deepfakes Are Everywhere. What Can We Do?

Science Friday

Play Episode Listen Later Jan 22, 2026 22:36


Deepfakes have been everywhere lately, from fake AI images of Venezuelan leader Nicolás Maduro following his (real) capture by the United States, to X's Grok AI generating nonconsensual images of real people in states of undress. And if you missed all that, you've almost certainly had your own deepfake close encounter in your feed: maybe rabbits bouncing on a trampoline or an unlikely animal friendship that seems a little too good to be true. Deepfakes have moved beyond the realm of novelty, and it's more difficult than ever to know what is actually real online. So how did we get here and what is there, if anything, to do about it? Joining host Flora Lichtman are Hany Farid, who's studied digital forensics and how we relate to AI for over 25 years, and Sam Cole, a journalist at 404 Media who's covered deepfakes and their impact since 2017.
Guests:
Dr. Hany Farid is a professor of electrical engineering and computer sciences at University of California, Berkeley.
Sam Cole is a journalist at 404 Media, based in New York, NY.
Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

On with Kara Swisher
Elon's “Nudify” Mess: How X Supercharged Deepfakes

On with Kara Swisher

Play Episode Listen Later Jan 22, 2026 53:36


On Christmas Eve, Elon Musk's X rolled out an in-app tool that lets users alter other people's photos and post the results directly in reply. With minimal safeguards, it quickly became a pipeline for sexualized, non-consensual deepfakes, including imagery involving minors, delivered straight into victims' notifications. Renée DiResta, Hany Farid, and Casey Newton join Kara to dig into the scale of the harm, the failure of app stores and regulators to act quickly, and why the "free speech" rhetoric used to defend the abuse is incoherent. Kara explores what accountability could look like — and what comes next as AI tools get more powerful. Renée DiResta is the former technical research manager at Stanford's Internet Observatory. She researched online CSAM for years and is one of the world's leading experts on online disinformation and propaganda. She's also the author of Invisible Rulers: The People Who Turn Lies into Reality. Hany Farid is a professor of computer sciences and engineering at the University of California, Berkeley. He's been described as the father of digital image forensics and has spent years developing tools to combat CSAM. Casey Newton is the founder of the tech newsletter Platformer and the co-host of The New York Times podcast Hard Fork. This episode was recorded on Tuesday, January 20th. When reached for comment, a spokesperson for X referred us to a statement post on X, which reads in part: We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content. We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary. Questions? Comments?
Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Charlie Kirk Show
THOUGHTCRIME Ep. 111 — Autistic Barbie? Hollywood Deepfakes? British DEI Video Games?

The Charlie Kirk Show

Play Episode Listen Later Jan 17, 2026 91:55 Transcription Available


The ThoughtCrime crew discusses the most essential topics of the week, including:
-What do they make of Mattel's first-ever autistic Barbie doll?
-Does AI mean that Hollywood actors are obsolete forever?
-Who is "Amelia" and why is she the new avatar of European nationalism?
Watch every episode ad-free on members.charliekirk.com! Get new merch at charliekirkstore.com! Support the show: http://www.charliekirk.com/support
See omnystudio.com/listener for privacy information.

Human Events Daily with Jack Posobiec
THOUGHTCRIME Ep. 111 — Autistic Barbie? Hollywood Deepfakes? Jessica Is The New Karen?

Human Events Daily with Jack Posobiec

Play Episode Listen Later Jan 17, 2026 88:28


The ThoughtCrime crew discusses the most essential topics of the week, including:
-What do they make of Mattel's first-ever autistic Barbie doll?
-Does AI mean that Hollywood actors are obsolete forever?
-Who is "Amelia" and why is she the new avatar of European nationalism?
Support the show