Podcasts about deepfakes

Artificial intelligence-based human image synthesis technique

  • 4,371 PODCASTS
  • 7,314 EPISODES
  • 44m AVG DURATION
  • 3 DAILY NEW EPISODES
  • Feb 2, 2026 LATEST

POPULARITY

2019–2026

Best podcasts about deepfakes

Show all podcasts related to deepfakes

Latest podcast episodes about deepfakes

Freedomain with Stefan Molyneux
6285 PLEASE DON'T DISAPPEAR! Twitter/X Space

Freedomain with Stefan Molyneux

Play Episode Listen Later Feb 2, 2026 109:32


In this episode of Friday Night Live on 30 January 2026, Stefan Molyneux looks at the Epstein document release and how deepfake tech affects what people accept as real. He talks with a caller about staying skeptical amid all the digital noise, building real connections, and owning up to one's choices. Molyneux pushes the caller to deal with the paralysis tied to family issues, stressing that sharp thinking is key to cutting through media tricks.
GET FREEDOMAIN MERCH! https://shop.freedomain.com/
SUBSCRIBE TO ME ON X! https://x.com/StefanMolyneux
Follow me on Youtube! https://www.youtube.com/@freedomain1
GET MY NEW BOOK 'PEACEFUL PARENTING', THE INTERACTIVE PEACEFUL PARENTING AI, AND THE FULL AUDIOBOOK! https://peacefulparenting.com/
Join the PREMIUM philosophy community on the web for free! Subscribers get 12 HOURS on the "Truth About the French Revolution," multiple interactive multi-lingual philosophy AIs trained on thousands of hours of my material - as well as AIs for Real-Time Relationships, Bitcoin, Peaceful Parenting, and Call-In Shows! You also receive private livestreams, HUNDREDS of exclusive premium shows, early release podcasts, the 22 Part History of Philosophers series and much more! See you soon!
https://freedomain.locals.com/support/promo/UPB2025

Top-Thema mit Vokabeln | Deutsch lernen | Deutsche Welle

How clever are our cows? – In Austria, a cow named Veronika is showing remarkable behavior: she uses tools and has surprised researchers. The discovery could change a great deal. Are some animals perhaps smarter than we think?

Mysteries of Science
Deepfakes

Mysteries of Science

Play Episode Listen Later Jan 30, 2026 19:48


Can you believe everything you see and hear online? Dan and Michael navigate the fascinating and unsettling world of deepfakes: images, videos, and audio so realistic you might not know they’re fake. With help from tech experts and researchers, they reveal how deepfakes are made, explain why they’re getting better and harder to spot, and challenge you to outsmart the fakes yourself. Along the way, find out how deepfake tech brings movie characters back to life and why we all need to sharpen our digital detective skills for the future. Test your skills, learn the science behind AI trickery, and discover how to stay savvy in a world where seeing (or hearing) isn’t always believing... See omnystudio.com/listener for privacy information.

Hysteria
ICE Fission

Hysteria

Play Episode Listen Later Jan 29, 2026 91:01


Erin and Alyssa dig into the latest news from the Twin Cities — the senseless tragedy of Alex Pretti's death, and the inspiring resolve of the Minnesotans who continue to stand up for each other. With Greg Bovino's “demotion,” are things about to take a turn for the better, or is this cynical political window-dressing from Team Trump? Then, Melania Trump's movie premiere at the White House's janky new makeshift room, and Paris Hilton's fight on Capitol Hill to ban AI-generated deep fake porn. And of course, we wrap up with Sani-Petty.
Alex Pretti's Friends and Family Denounce ‘Sickening Lies' About His Life (NYT 1/25)
Republican calls are growing for a deeper investigation into fatal Minneapolis shooting of Alex Pretti (PBS 1/26)
Scoop: Stephen Miller behind misleading claim that Alex Pretti wanted to "massacre" agents (AXIOS 1/27)
Trump Defends Noem as She Faces Bipartisan Criticism (WSJ 1/27)
Democrats Vow Not to Fund ICE After Shooting, Imperiling Spending Deal (NYT 1/24)
Melania's $75 Million Movie Premiered in a Makeshift Theater (The Daily Beast 1/24)
‘They sold my pain for clicks': Paris Hilton urges lawmakers to act on nonconsensual deepfakes (The 19th 1/22)
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Tech Won't Save Us
Elon Musk Profits Off Non-Consensual Deepfakes w/ Kat Tenbarge

Tech Won't Save Us

Play Episode Listen Later Jan 29, 2026 66:31


Paris Marx is joined by Kat Tenbarge to discuss the explosion of abusive deepfakes on X, including how Elon Musk is profiting from the sexual exploitation of women and children while his followers use Grok to engage in gender-based violence. Kat Tenbarge is an independent journalist who writes Spitfire News. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson.
Also mentioned in this episode:
Kat has been thoroughly covering the Grok and XAI deepfake and sexual abuse story
Paris wrote about why the Grok scandal shows we need more comprehensive tech regulation
The deepfake documentary mentioned was called Another Body
You can see the result of Megan Thee Stallion's defamation lawsuit here
Grok was blocked in Indonesia and Malaysia in response to its generation of explicit images

Start Making Sense
Elon Musk Profits Off Non-Consensual Deepfakes w/ Kat Tenbarge | Tech Won't Save Us

Start Making Sense

Play Episode Listen Later Jan 29, 2026 66:31 Transcription Available


Paris Marx is joined by Kat Tenbarge to discuss the explosion of abusive deepfakes on X, including how Elon Musk is profiting from the sexual exploitation of women and children while his followers use Grok to engage in gender-based violence. Kat Tenbarge is an independent journalist who writes Spitfire News.
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

Disruption / Interruption
Disrupting the Skills Crisis: Mastering Learning with VR and AI with Colin Cooper

Disruption / Interruption

Play Episode Listen Later Jan 29, 2026 33:27


In this episode of Disruption/Interruption, KJ sits down with Colin Cooper, CEO and co-founder of Illuminate XR, to explore the massive skills gap threatening our workforce. With over 100 companies under his belt and thousands of global hires, Colin has witnessed firsthand how our 200-year-old education system is failing to prepare people for today's AI-driven world. Discover how immersive technology, emotional intelligence training, and personalized learning are revolutionizing the way humans learn, and why the next few years will determine whether we step into the "age of humanity" or fall behind forever.
Four Key Takeaways
The Education System Is 200 Years Behind (4:42) Our schools still operate on an industrial-age factory model designed to create compliant workers, not creative thinkers. Classrooms haven't fundamentally changed in over 1,000 years, and curriculum remains rooted in preparing students for a world that no longer exists.
Meet Learners Where They Are (7:40) Real learning happens when you reduce cortisol and increase dopamine by connecting education to personal interests. Whether it's tailoring physics lessons to football or basketball, or using horses to teach emotional intelligence, personalization is the key to engagement and retention.
AI Should Amplify, Not Replace (20:05) The future isn't about AI replacing teachers or workers—it's about using AI as a personal assistant to handle repetitive tasks. Start by identifying one repetitive task in your job and automate it with AI tools like ChatGPT, Claude, or Perplexity.
We're Living Through 25 Years of Progress Every Year (29:20) The convergence of AI and VR—technologies that shouldn't have merged for another 15-20 years—has compressed innovation timelines. What used to take 10-15 years to bring to market now takes weeks. The next 3-4 years will be transformative, and we have one shot to get it right.
Quote of the Show (29:40): "When a year goes by, you normally get one year's worth of progress, but where we're at today, a year goes by and it's like 20 to 25 years of technology growth." – Colin Cooper
Join our Anti-PR newsletter where we’re keeping a watchful and clever eye on PR trends, PR fails, and interesting news in tech so you don't have to. You're welcome. Want PR that actually matters? Get 30 minutes of expert advice in a fast-paced, zero-nonsense session from Karla Jo Helms, a veteran Crisis PR and Anti-PR Strategist who knows how to tell your story in the best possible light and get the exposure you need to disrupt your industry. Click here to book your call: https://info.jotopr.com/free-anti-pr-eval
Ways to connect with Colin Cooper:
LinkedIn: https://www.linkedin.com/in/colinmbcooper/
Company Website: https://illuminatexr.com
How to get more Disruption/Interruption:
Amazon Music - https://music.amazon.com/podcasts/eccda84d-4d5b-4c52-ba54-7fd8af3cbe87/disruption-interruption
Apple Podcast - https://podcasts.apple.com/us/podcast/disruption-interruption/id1581985755
Spotify - https://open.spotify.com/show/6yGSwcSp8J354awJkCmJlD
See omnystudio.com/listener for privacy information.

En Casa de Herrero
Report: The torture of the 'deepfake' and Grok, when AI is used to destroy

En Casa de Herrero

Play Episode Listen Later Jan 28, 2026 11:19


Luis Herrero, together with Àngels Hernández, analyzes the harm caused by "deepfakes" and Grok's AI.

The Lending Link
The Fraud You Can't See Coming: Deepfakes and AI Impersonation

The Lending Link

Play Episode Listen Later Jan 28, 2026 30:43


Why is fraud getting harder to spot just as AI gets better at pretending to be human? In this episode of The Lending Link, host Nathan George sits down with Justin Keene, Ph.D., CEO and Co-Founder of Moveris, to explore one of the fastest-growing threats in digital onboarding and account access: deepfakes and AI impersonation.
They talk about why older ways of spotting fraud, such as checking device information, behavioral patterns, and even video liveness checks, are no longer enough. As AI-powered attacks become smarter and easier to deploy at scale, scammers can now bypass many of the checks that banks and lenders have long relied on. Justin shares how Moveris tackles the problem differently: rather than hunting for signs of trickery, it looks for genuine evidence that a real person is present, using physiological and behavioral signals picked up by an ordinary camera.
They also discuss how AI is changing the way fraud works, putting smaller banks, credit unions, and lenders serving lower-credit-score borrowers at greater risk of being targeted, and why tactics such as using stolen usernames and passwords, recycling old identities, and using fake videos to fool people are becoming more common. They close by looking ahead to a future where proving you are a real person, and the same person each time, could become the primary way to establish identity online, replacing passwords, CAPTCHAs, and other steps that slow things down.
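The episode does not detail Moveris's actual techniques, but the general idea of fusing several camera-derived presence signals into a single confidence score can be sketched in a few lines of Python. Everything here, the signal names, weights, and threshold, is an invented illustration, not Moveris's method or any vendor's API.

from dataclasses import dataclass

@dataclass
class PresenceSignals:
    micro_motion: float       # 0..1, natural involuntary movement detected in the frame
    pulse_consistency: float  # 0..1, plausible heart-rate signal inferred from skin-tone changes
    gaze_variability: float   # 0..1, natural (non-scripted) eye movement
    replay_artifacts: float   # 0..1, likelihood the feed is re-rendered or injected video

def presence_score(s: PresenceSignals) -> float:
    # Weighted positive evidence of a live person, penalized by evidence of replay or synthesis.
    positive = 0.4 * s.micro_motion + 0.3 * s.pulse_consistency + 0.3 * s.gaze_variability
    return max(0.0, positive - 0.5 * s.replay_artifacts)

if __name__ == "__main__":
    live = PresenceSignals(0.8, 0.7, 0.9, 0.1)      # hypothetical genuine session
    suspect = PresenceSignals(0.3, 0.1, 0.2, 0.8)   # hypothetical deepfake/replay session
    print(f"live session score:    {presence_score(live):.2f}")     # well above a 0.5 cutoff
    print(f"suspect session score: {presence_score(suspect):.2f}")  # clamps to zero

A real system would derive such signals from video-analysis models and calibrate weights and thresholds against labeled data; the sketch only shows the shape of the decision, presence evidence minus synthesis evidence.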

Engadget
The EU is investigating Grok and X over potentially illegal deepfakes

Engadget

Play Episode Listen Later Jan 27, 2026 6:20


X's lack of controls potentially 'exposed citizens in the EU to serious harm,' regulators said. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Janett Arceo y La Mujer Actual
Dr. Emilio Maus… When AI tools are used to create sexual deepfakes

Janett Arceo y La Mujer Actual

Play Episode Listen Later Jan 27, 2026 92:42 Transcription Available


NEW PODCAST!
-Dr. Emilio Maus… When AI tools are used to create sexual deepfakes (it is not "content," it is violence)
-Gianco Abundiz… "VAT on Insurance"
-Ivonne Sieck… "Using Adaptogens for Stress"
-Saúl Uribe (El Sastre del Niño Jesús)… "It's not the Niño Dios, it's the Niño Jesús"
-Manoella Torres… Concert: "Celebra el Amor"

Jeff & Jenn Podcasts
Good Morning and E News: Paris Hilton is crusading against deepfakes...

Jeff & Jenn Podcasts

Play Episode Listen Later Jan 26, 2026 25:32


Good Morning and E News: Paris Hilton is crusading against deepfakes, Brad Pitt's biggest movie, Harry Styles and the Pope, and more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Airtalk
Are we at a political turning point after the second Minnesota shooting by a federal agent? AI deepfake detection, and more

Airtalk

Play Episode Listen Later Jan 26, 2026 99:03


Today on AirTalk:
Are we at a political turning point after the second Minnesota shooting by a federal agent? (0:30)
In the AI era, how are deepfake images being detected? (16:01)
SoCal History: New book explores infamous murder of Elizabeth Short (33:59)
Is there legal basis for an administrative warrant granting ICE entry into private spaces? (51:38)
How is ICE impacting businesses in Orange County? (1:05:06)
Are older folks more screen addicted than their younger peers? (1:28:42)
A previous version of this podcast incorrectly ascribed the killing of Alex Pretti to Immigration and Customs Enforcement (ICE). This has been corrected.
Visit www.preppi.com/LAist to receive a FREE Preppi Emergency Kit (with any purchase over $100) and be prepared for the next wildfire, earthquake or emergency.

The 404 Media Podcast
Exposing the People Behind Deepfake Porn Sites with Bellingcat Investigator Kolina Koltai

The 404 Media Podcast

Play Episode Listen Later Jan 26, 2026 52:25


This week, Sam is in conversation with Kolina Koltai. Kolina is an investigator, senior researcher and trainer at Bellingcat. Her investigations focus on the people and systems behind AI companies and platforms that peddle non-consensual deepfake explicit imagery. They discuss how she found herself in this field, her recent investigation uncovering the man behind two deepfake porn sites, and how it feels to watch these sites go down after exposing the people running them.
YouTube Version: https://youtu.be/CbmUwwVGaf4
Stories discussed:
Profiting From Exploitation: How We Found the Man Behind Two Deepfake Porn Sites
Behind a Secretive Global Network of Non-Consensual Deepfake Pornography
Unmasking MrDeepFakes: Canadian Pharmacist Linked to World's Most Notorious Deepfake Porn Site
Learn more about your ad choices. Visit megaphone.fm/adchoices

TechFirst with John Koetsier
Generative Hollywood: E! founder Larry Namer on AI

TechFirst with John Koetsier

Play Episode Listen Later Jan 26, 2026 20:14


AI is hitting entertainment like a sledgehammer ... from algorithmic gatekeepers and AI-written scripts to digital actors and entire movies generated from a prompt.
In this episode of TechFirst, host John Koetsier sits down with Larry Namer, founder of E! Entertainment Television and chairman of the World Film Institute, to unpack what AI really means for Hollywood, creators, and the global media economy.
Larry explains why AI is best understood as a productivity amplifier rather than a creativity killer, collapsing months of work into hours while freeing creators to focus on what only humans can do. He shares how AI is lowering barriers to entry, enabling underserved niches, and accelerating new formats like vertical drama, interactive storytelling, and global-first content.
The conversation also dives into:
• Why AI-generated actors still lack true human empathy
• How studios and IP owners will be forced to license their content to AI companies
• The future of deepfakes, guardrails, and regulation
• Why market fragmentation isn't a threat — it's an opportunity
• How China, Korea, and global platforms are shaping what comes next
• Why writers and storytellers may be entering their best era yet
Larry brings decades of perspective from every major media transition — cable, streaming, global expansion — and makes the case that AI is just the next tool in a long line of transformative technologies.
If you care about the future of movies, television, creators, and culture, this is a conversation you don't want to miss.

The Rebuilt Man
AI Girlfriends, Deepfakes, & The Fight For Masculinity | Ep. 333

The Rebuilt Man

Play Episode Listen Later Jan 26, 2026 11:23


JOIN THE 7 DAY RESET - ▶️ www.therebuiltman.com/7dayreset
What if the porn struggle you're facing today is only the beginning?
In this powerful episode of The Rebuilt Man, Coach Frank Rich confronts one of the most urgent and overlooked threats to modern masculinity: the rise of AI-generated pornography, deepfakes, and artificial intimacy.
Pornography has always been at the forefront of technological adoption — from the internet to streaming to virtual reality. Now, with artificial intelligence capable of generating hyper-realistic sexual content, AI girlfriends, and emotionally responsive digital companions, the stakes have never been higher.
This episode explores:
Why AI porn is fundamentally more dangerous than traditional porn
How deepfakes and AI-generated models are already flooding the internet
Why loneliness and isolation make men uniquely vulnerable to artificial intimacy
How technology is being designed to hijack attention, desire, and identity
Why filters and willpower won't be enough in the future
And how brotherhood, discipline, structure, and identity transformation are the real solution
This isn't a fear-based episode. It's a wake-up call. Because the future will reward disciplined men — and destroy passive ones.
What You'll Learn In This Episode:
Why pornography consistently leads the adoption of new technology
How AI porn differs psychologically from traditional porn
The rise of deepfake porn and AI-generated sexual content
How AI companions and "digital girlfriends" are replacing real connection
Why loneliness is the real driver behind compulsive sexual behavior
Why identity transformation is more powerful than willpower
How brotherhood and accountability rewire the brain
Why The Rebuilt Man framework is designed for the world men are walking into
How to begin your journey with the free 7-Day Porn Reset

The Signal
The global outrage over Musk's Grok AI image abuse

The Signal

Play Episode Listen Later Jan 25, 2026 15:18


Elon Musk's AI tool, Grok, has caused global outrage after it made it really easy to virtually undress people via AI image editing. Musk's competitor to ChatGPT is integrated into the social media platform X and earlier this month users started asking it to do some pretty gross things. X was flooded with images of semi-naked women, with users able to reply to anyone's perfectly innocent picture and ask it to put them in a bikini, for example. While Musk initially downplayed the scandal and was slow to act, he's now put limits on the tool. But is that the end of the story and should X be banned?
Today, Sam Cole, a tech journalist and co-founder of 404 Media on the Grok scandal and how it enabled the exploitation and abuse of women.
Featured: Sam Cole, co-founder of 404 Media

BBW BadGirl With Isabella Martin
Episode #152: The Running Man & Tech Updates

BBW BadGirl With Isabella Martin

Play Episode Listen Later Jan 25, 2026 17:09


AllMyLinks.com/BBWBadgirl
Main site: LosAngelesBBW.com
Email: BellaMartinLA@gmail.com
Premium Onlyfans: onlyfans.com/BBWBadgirl or BBWBadgirl.com
Free Onlyfans: Onlyfans.com/bbwbadgirlbella
The BBWBadgirl podcast where 80's sci fi movie girl and supersize BBW Isabella Martin (LosAngelesBBW.com) discusses sex, life, and sexwork (virtual, escorting, phonesex + BDSM domination etc). She explores kinks and desires and shares her experiences while giving valuable tips to men on topics such as having an affair, fat sex, approaching fat girls, weed & sex, sex toys for men and so much more. For open-minded men who love, lust, and appreciate FAT women.
Episode #152: The Running Man & Tech Updates
Old Topics:
* X crackdown: won't let you unwrap Isabella from 2021
* AI: UK laws are now AOC & Paris Hilton with Defiance Act
* Tale of two clients and prohibited terms: Don't be the mad 420 guy.
New Topics:
* Movies and Isabella's Sexwork Filter: The Running Man would have been caught day 1
Twitter: @BBWBadgirlPOD or @IsabellaBBW or @IsabellaCrush
IG: BBWBadgirlPOD
Contributions to this podcast's development can be made at cashapp $BBWBadgirlPOD

Masters of Privacy
Sam Kaplan: Cybersecurity in the age of agentic AI, deep fakes, and social engineering

Masters of Privacy

Play Episode Listen Later Jan 25, 2026 35:46


Can AI agents be deployed for enhanced protection? What is a “triple extortion”? How is ransomware evolving? Is there hope for SMEs?
Sam Kaplan is a policy, legal, and national security professional with over eighteen years of experience across the public and private sectors. He is currently the Assistant General Counsel for Public Policy & Government Affairs at Palo Alto Networks, providing legal guidance on domestic and international legislative, regulatory, and policy matters, with a focus on cybersecurity, AI governance, privacy, data security, international data flows, and public-private capacity building.
Before Palo Alto Networks, Sam led the global product policy team for Facebook's News Feed and News Tab at Meta Platforms, addressing issues like AI/ML fairness, algorithmic transparency, platform integrity, election security, misinformation, and harmful content.
Prior to his private sector roles, Sam spent over thirteen years in the Federal Government. He held senior leadership positions at the U.S. Department of Homeland Security, including Assistant Secretary for Cyber, Infrastructure, Risk and Resilience Policy and Chief Privacy Officer. Earlier government roles included work at the U.S. Department of Justice (Office of Legal Policy, Bureau of Alcohol, Tobacco, Firearms and Explosives, and U.S. Attorney's Office for the Eastern District of Virginia) and as Counselor to a member of the Privacy and Civil Liberties Oversight Board, focusing on the U.S. Intelligence Community.
References:
* Sam Kaplan on LinkedIn
* Palo Alto Networks
* Unit 42 Research (Palo Alto Networks)
* Cyber Information Sharing and Collaboration Program (CISCP) at CISA (Cybersecurity and Infrastructure Security Agency)
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe

The Munk Debates Podcast
Friday Focus: Trump's Davos TACO, Carney's vision for Canada, and an AI deepfake blurs the lines of reality

The Munk Debates Podcast

Play Episode Listen Later Jan 23, 2026 19:48


Become a Munk Donor ($50 annually) to get 72-hour advanced access to full length editions of the Friday Focus podcast with Janice Stein. Go to www.munkdebates.com to sign up.
Rudyard and Janice begin the show with the remarkable-sized TACO Trump delivered this week, whipping the world into a frenzy over threats to take over Greenland. His tough talk turned into a whimper after a meeting with the NATO Secretary General and a framework of a deal that has not been accepted by European leaders. What happened behind closed doors that made Trump back down from these threats? How did the bond market play into this? Meanwhile Mark Carney's Davos speech was well received by the European elite, but what is his plan for Canada? How will he enact sweeping and fundamental change in this country that matches the substance of his speech?
In the second half of the show Rudyard and Janice discuss an AI-generated video impersonating former UN Ambassador John Bolton discussing Mark Carney's China visit. The convincing video - which has been viewed over 180,000 times - is a reminder of how easily AI can be used in interference and influence campaigns by foreign governments. What if 2026 is the year we can't tell what's real and what's not?

City Cast Pittsburgh
ICE Rapid Response Trainings, Fresh Sharts & Fred Rogers Deepfakes

City Cast Pittsburgh

Play Episode Listen Later Jan 23, 2026 52:11


More than 500 Pittsburghers showed up for a training this week on how to safely witness and respond to ICE activity — including City Cast contributor Meg St-Esprit. She joins host Megan Harris and executive producer Mallory Falk to share what she learned from the newly-formed group Frontline Dignity. Plus, the team discusses AI-generated deepfakes of Mister Rogers, behind-the-scenes scheming to keep the Post-Gazette alive, and a viral KDKA typo.
Get your tickets to The Future of Pittsburgh Journalism, a panel featuring our very own Megan Harris and other local media experts. It's happening next Thursday, Jan. 29 at 5:30 p.m. at the Heinz History Center.
Notes and references from today's show:
PODCAST: Can You Be Charged for Getting in ICE's Way? [City Cast Pittsburgh]
PODCAST: What Actually Happens During an ICE Raid? [City Cast Pittsburgh]
Coraopolis overturns ICE agreement following community pushback and council shakeup [Public Source]
Doubling down, doubling back: local departments diverge on ICE cooperation [Public Source]
Munhall, Stowe police walk back ICE partnerships after quietly inking agreements [Public Source]
Hundreds taken into custody at Pittsburgh ICE office, an emerging regional deportation hub [Post-Gazette]
The crass Fred Rogers of AI deepfakes has some Pittsburghers 'horrified' [Public Source]
Some Post-Gazette workers call for new union leadership [TribLive]
Investor group seeks to make Post-Gazette nonprofit [Axios Pittsburgh]
Pittsburgh school board members to vote next week on whether to reconsider building closures [WESA]
Learn more about the sponsors of this January 23rd episode: Pittsburgh Opera, P3R, The Ascent by Christopher Walker.
Become a member of City Cast Pittsburgh at membership.citycast.fm. Want more Pittsburgh news? Sign up for our daily morning Hey Pittsburgh newsletter. We're also on Instagram @CityCastPgh! Interested in advertising with City Cast? Find more info here.

Agent of Wealth
The AI Scam Wave: Deepfakes, Voice Clones, and Your Bank Account With Robert Siciliano

Agent of Wealth

Play Episode Listen Later Jan 23, 2026 36:43


As wealth transfers across generations, cybercriminals are positioning themselves to intercept it. In this episode of The Agent of Wealth, host Marc Bautis is joined by Robert Siciliano, one of the nation's most trusted authorities on cybersecurity, identity theft, and fraud prevention.
Robert is a private investigator, Certified Speaking Professional, bestselling author, and the creator of The Strategic Human Firewall™. You've likely seen him on CNN, MSNBC, Fox, or The Today Show, helping millions understand how cybercriminals think — and how to stay one step ahead of them.
Together, Marc and Robert explore why affluent families are increasingly targeted by fraudsters, how denial and misplaced trust make people vulnerable, and why AI-driven scams like deepfake voices and impersonation are changing the threat landscape entirely.
In this episode, you will learn:
Why “It won't happen to me” is one of the most dangerous mindsets in cybersecurity — and how denial quietly increases risk.
Why affluent families and older generations are prime targets right now.
How modern scams actually work, including wrong-number texts, romance scams, and long-game trust-building tactics.
What practical steps can be taken to reduce risk immediately.
The role adult children and trusted advisors play in protecting aging parents without taking away independence.
And more!
Tune in for a candid, eye-opening conversation about why situational awareness — not paranoia — is the new baseline for protecting wealth, identity, and family legacy in a digital world.
Resources:
Episode Transcript & Blog | protectnowllc.com | Bautis Financial: 8 Hillside Ave, Suite LL1 Montclair, New Jersey 07042 (862) 205-5000 | Schedule an Introductory Call
Want to be a guest on The Agent of Wealth? Send Marc Bautis a message on PodMatch, here: https://tinyurl.com/mt4z6ywc

Discover Lafayette
Bob Miller, CEO and Founder of IRGame, Gamification for Incident Response Training

Discover Lafayette

Play Episode Listen Later Jan 23, 2026 60:11


Bob Miller, CEO and Founder of IRGame, is a technology entrepreneur with 30+ years of experience across cybersecurity and emerging technologies. He's a pioneer in using AI-powered gamification for incident response (“IR”) training, designed specifically for busy executives who can't spend full days in training but must make high-stakes decisions quickly during real crises. IRGame puts executive teams through realistic scenarios such as ransomware, data breaches, business email compromise, and AI-related incidents, so they can practice decision-making under pressure.
Returning to Lafayette and building startups
Bob graduated in 1988 from University of Louisiana – Monroe in Computer Science and Math. He moved back to Louisiana from San Jose around 2010 and chose Lafayette as home. Almost immediately, the Lafayette Economic Development Authority (LEDA) contacted him about helping build a startup accelerator. With experience across roughly 10 startups, he became founding director of what he named the Opportunity Machine (where his title was “Head Machinist”). Bob later continued mentoring via the Accelerator Board. After three years, engineer and entrepreneur Bill Fenstermaker recruited him to help commercialize products at Fenstermaker & Associates. Bob worked on projects including a custom GIS system and underwater acoustics, following earlier work in areas like satellite systems. Later he became COO at Waitr in its early stage, helping scale from about 300 to 3,000 employees in roughly 12–14 months, the kind of operational scaling challenge he's often brought in to manage. He then joined a local managed service provider and helped transform it into a managed security service provider, an experience that directly led to IRGame.
Why IRGame exists
Bob identified a persistent problem: many organizations resist spending time and money on cybersecurity because they don't understand it and lack an emotional connection because they have never experienced a crisis. Traditional tabletop training exercises meant to train a business team on how to respond during a crisis (paper scenarios, PowerPoint presentations, and sitting around a conference table discussing solutions) have existed for decades, but they're time-consuming (often 80–90 hours to prepare) and require pulling people into a room for a full day, which makes them expensive and hard to scale. If it's hard, many companies simply don't do it. Bob attended a cybersecurity conference and participated in a tabletop designed for managed service providers, an exercise that was “fundamentally terrifying” and eye-opening. A worst-case Managed Service Provider (“MSP”) scenario is when a third-party tool, especially remote monitoring and management (RMM) software, gets compromised. That can lead to ransomware across an MSP's entire customer base simultaneously. The exercise illustrated IRGame's central insight: about 80% of incident response is non-technical in nature: financial consequences, shutdown decisions, customer impact, employee panic, communications, reputational and legal exposure. Bob brought the tabletop back to his company and ran it with 80 of 130 employees, customizing it with real customer names, revenue figures, and tenure. Even with a mature incident response plan and twice-yearly practice, they discovered a dozen needed changes.
That convinced him that if a well-prepared security organization learns that much from a scenario, “everybody can.”
The breakthrough: turning tabletop into an online multiplayer game
During that exercise, a longtime software collaborator of Bob's mentioned he still had a dormant game app framework built years earlier for a high-school project with Bob's daughter. He believed he could convert the paper tabletop into an online multiplayer experience in a weekend. After running the in-person tabletop on Thursday, he demonstrated a working browser-based multiplayer version on Sunday. They showed it to cybersecurity tabletop authors and industry influencers, Matt Lee and Ethan Tancredi, who were shocked by how quickly the tabletop content had been transformed into a functional digital game. Soon after, they invited about 20 people to test it. The early version looked rough, like a 1980s text adventure, but it worked. The response was far stronger than expected: participants reported intense emotional engagement and immediate practical takeaways. One government participant said it left him rattled, with pages of notes and a need for a drink; an MSP in Hawaii asked when he could use it with customers. That became a monthly community practice program: they've run 25+ free games, putting 1,000+ people through the system. As demand grew—especially from providers wanting to use it with customers—IRGame chose to commercialize.
IRGame mirrors tabletop training but compresses it into a high-intensity, guided simulation. A scenario is narrated like scenes in a movie. Participants answer opening questions to get teams communicating quickly, which is critical because incident response requires fast coordination. Players assume roles and must allocate limited resources to tasks. Challenges pile up faster than teams can handle them, forcing prioritization and tradeoffs, just like real incidents. A key design element is pressure: a relentless timer counts down; there's no pause button. This stress reveals the truth: under pressure, people become more honest about gaps in their preparedness. That's valuable because organizations often sugarcoat weaknesses—until a simulation forces real reactions. Bob explained an example crisis scenario: a business email compromise (which he says is currently a dominant incident type). A financial firm discovers a customer wired money to a “new account” on instructions supposedly sent by the CFO, yet the CFO didn't send them. As the story unfolds, participants learn the compromise likely affected many customers, not just one. The game surfaces operational realities executives often miss: internal rumors, uncontrolled communications, legal exposure triggered by words like “breach,” and the need for an “event mode” communications policy that calms the organization and prevents chaos.
AI scenarios and new risks
IRGame also focuses on emerging AI-related risks. Miller says they ran what they described as the first AI incident scenario at a national security conference (IT Nation Secure) and now maintain multiple AI scenarios. The point is not to create fear, but to provide a safe environment to practice decisions around new threat patterns.
Practical cybersecurity guidance for individuals and small businesses
Bob emphasizes that cybersecurity is no longer optional and that AI strengthens attackers as well as defenders.
He predicts that in 2026 smaller businesses will face increased targeting, because automation lets “two dudes and a dog” run campaigns that once required larger teams, making up revenue in volume rather than big single payouts. He also notes that cybercriminal ecosystems now resemble legitimate businesses, including tools, support, and organizational structure. Bob recommends baseline controls that are realistic for small organizations: unique passwords, password managers, multi-factor authentication, training on phishing, cyber insurance, and economical endpoint monitoring (EDR/MDR). These measures raise the cost for attackers so they move on to easier targets, though no control is perfect. On password managers, Bob uses Keeper and mentions 1Password and others. He strongly warns against saving passwords in browsers. He also flags emerging concerns about AI-enabled browsers that maintain a large “context window” across many sites, potentially increasing risk if compromised. On online exposure of information such as emails and staff details on websites, he advises sharing only what's necessary. Data can be scraped and used for phishing and impersonation. Deepfakes and better-written scams are making social engineering harder to detect. He also notes that much personal data is already exposed through breaches, citing Louisiana's DMV breach, in which every licensed driver's Social Security Number was compromised, as an example of widespread data loss.
Incident response planning and insurance pressure
A recurring theme: organizations need an incident response plan and must practice it, especially as cyber insurers increasingly demand proof. In a room of 50+ attorneys he spoke to recently, Miller found only three had a plan, and none practiced it. He warned that future claims could be denied if companies claim they had plans but don't demonstrate practice.
Trying IRGame for free
IRGame offers free public sessions on the last Friday of every month; sign-up is available via their website. Miller notes they also post recordings and content online (LinkedIn and YouTube). Visit https://www.irgame.ai/ for more information and to sign up for a free public session. You can also see how IRGame works by visiting its YouTube channel at https://www.youtube.com/@IRGameify
Personal note: music and creativity
Outside cybersecurity, Miller is a musician, primarily blues/rock, and often appears on video with guitars behind him. He draws a parallel between software development and music: both require creativity within rules. He argues policies and procedures aren't bureaucracy—they're like scales and tempo: structure that enables effective performance under pressure.
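The pressure mechanic described above (challenges arriving faster than a fixed-capacity team can clear them, forcing prioritization as the backlog grows) can be illustrated with a toy loop. This is a minimal sketch with invented numbers; it is not IRGame's actual engine or scenario content.

import random

def run_round(backlog, responders=3, new_challenges=(2, 4)):
    # One timed round: new challenges appear, the team can only work a limited
    # number of them, and the rest carry over as a growing backlog.
    backlog += random.randint(*new_challenges)   # incidents pile up
    resolved = min(responders, backlog)          # limited capacity this round
    return backlog - resolved, resolved

if __name__ == "__main__":
    random.seed(7)   # reproducible demo run
    backlog = 0
    for round_no in range(1, 9):
        backlog, resolved = run_round(backlog)
        print(f"round {round_no}: resolved {resolved}, backlog now {backlog}")

Because arrivals average roughly the team's capacity, a few heavy rounds are enough to build a backlog that is hard to clear, which mirrors the mounting pressure the simulation is designed to create.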

Science Friday
Deepfakes Are Everywhere. What Can We Do?

Science Friday

Play Episode Listen Later Jan 22, 2026 22:36


Deepfakes have been everywhere lately, from fake AI images of Venezuelan leader Nicolás Maduro following his (real) capture by the United States, to X's Grok AI generating nonconsensual images of real people in states of undress. And if you missed all that, you've almost certainly had your own deepfake close encounter in your feed: maybe rabbits bouncing on a trampoline or an unlikely animal friendship that seems a little too good to be true.
Deepfakes have moved beyond the realm of novelty, and it's more difficult than ever to know what is actually real online. So how did we get here and what is there, if anything, to do about it?
Joining Host Flora Lichtman are Hany Farid, who's studied digital forensics and how we relate to AI for over 25 years, and Sam Cole, a journalist at 404 Media who's covered deepfakes and their impact since 2017.
Guests:
Dr. Hany Farid is a professor of electrical engineering and computer sciences at University of California, Berkeley.
Sam Cole is a journalist at 404 Media, based in New York, NY.
Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

On with Kara Swisher
Elon's “Nudify” Mess: How X Supercharged Deepfakes

On with Kara Swisher

Play Episode Listen Later Jan 22, 2026 53:36


On Christmas Eve, Elon Musk's X rolled out an in-app tool that lets users alter other people's photos and post the results directly in reply. With minimal safeguards, it quickly became a pipeline for sexualized, non-consensual deepfakes, including imagery involving minors, delivered straight into victims' notifications.
Renée DiResta, Hany Farid, and Casey Newton join Kara to dig into the scale of the harm, the failure of app stores and regulators to act quickly, and why the "free speech" rhetoric used to defend the abuse is incoherent. Kara explores what accountability could look like — and what comes next as AI tools get more powerful.
Renée DiResta is the former technical research manager at Stanford's Internet Observatory. She researched online CSAM for years and is one of the world's leading experts on online disinformation and propaganda. She's also the author of Invisible Rulers: The People Who Turn Lies into Reality.
Hany Farid is a professor of computer sciences and engineering at the University of California, Berkeley. He's been described as the father of digital image forensics and has spent years developing tools to combat CSAM.
Casey Newton is the founder of the tech newsletter Platformer and the co-host of The New York Times podcast Hard Fork.
This episode was recorded on Tuesday, January 20th. When reached for comment, a spokesperson for X referred us to a statement post on X, which reads in part: We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content. We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary.
Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Business of Tech
Authority Challenges for MSPs: Deepfake Risks, AI Security Shifts, and Vendor Accountability

Business of Tech

Play Episode Listen Later Jan 22, 2026 17:31


Escalating distrust in identity systems and misuse of AI are forcing a shift in security accountability for small and midsize businesses. Recent analysis highlights that the prevalence of deepfake-driven business email compromise and non-human digital identities is eroding confidence in traditional protective solutions. According to Techaisle and supporting reports referenced by Dave Sobel, the ratio of non-human to human identities in organizations is now 144:1, further complicating authority and responsibility for managed service providers (MSPs). As trust in exclusive third-party control disintegrates, co-managed security models are becoming standard, repositioning decision-making and liability.
The rise of AI-generated data—described as “AI slop”—has prompted increased adoption of zero trust models, with 84% of CIOs reportedly increasing funding for generative AI initiatives. However, as rogue AI agents are recognized as a significant insider threat, current security services are often ill-equipped to manage these new vulnerabilities. Regulatory bodies, including CISA, have issued guidance noting that the integration of AI into critical infrastructure introduces greater risk of outages and security breaches, particularly when governance remains ambiguous. High-profile vulnerabilities in open-source AI platforms used within cloud environments further highlight the persistence of operational risks.
Adjacent technology updates include new releases from vendors such as 1Password, WatchGuard, JumpCloud, and ControlUp. These offerings focus on enhancing phishing prevention, expanding managed detection and response, and automating endpoint management for MSPs. However, Dave Sobel emphasizes that these tools introduce additional layers of automation and integration without adequately clarifying who ultimately holds authority and accountability when failures or breaches occur. There is a consistent warning that stacking solutions or outsourcing core functions without redefining operational control creates gaps between action and oversight.
For MSPs and IT leaders, the key takeaway is that security risk is no longer defined by missing technology but by unclear governance, undefined authority, and misaligned incentives. Without explicit contractual and operational delineation of responsibility when deploying AI and automation, service providers are increasingly exposed to liability by default. The advice is to move beyond tool-centric strategies and focus on process clarity: define who authorizes, audits, and terminates non-human identities; establish which parties approve automation actions; and ensure clients understand shared responsibilities to mitigate silent risk accumulation.
Four things to know today:
00:00 TechAisle Warns SMB Security Will Shift in 2026 as Identity Attacks and AI Agents Redefine Risk
05:44 AI Moves Deeper Into Critical Infrastructure as Open-Source and Human Weaknesses Expand the Attack Surface
09:35 MSP Security Platforms Automate Phishing Prevention and MDR—Outpacing Governance and Control Models
12:12 AI-Powered MSP Tools Promise Control and Efficiency, But Shift Responsibility by Default
This is the Business of Tech.
Supported by: https://scalepad.com/dave/
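As a rough illustration of the governance point above (explicitly naming who authorizes, audits, and terminates each non-human identity), a minimal registry might look like the sketch below. The field names, roles, and review rule are assumptions for illustration only, not a standard, a product feature, or anything referenced in the episode.

from dataclasses import dataclass
from datetime import date

@dataclass
class NonHumanIdentity:
    name: str            # service account, API token, or AI agent
    purpose: str
    authorized_by: str   # person accountable for its existence
    audited_by: str      # person or team that reviews its activity
    terminated_by: str   # role empowered to revoke it
    review_due: date     # next scheduled governance review

def overdue_reviews(registry, today):
    # Return identities whose scheduled review date has passed.
    return [i for i in registry if i.review_due < today]

if __name__ == "__main__":
    registry = [
        NonHumanIdentity("backup-agent", "nightly backups", "CTO", "SecOps", "SecOps", date(2026, 3, 1)),
        NonHumanIdentity("ticket-triage-bot", "AI ticket triage", "Service Desk Lead", "SecOps", "CTO", date(2026, 1, 15)),
    ]
    for ident in overdue_reviews(registry, date(2026, 1, 22)):
        print(f"review overdue: {ident.name} (authorized by {ident.authorized_by})")

Even a list this simple makes the accountability gaps the episode warns about visible: any identity without a named authorizer, auditor, and terminator is, by definition, a silent risk.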

Influencer Confidential
AI Is About to Change Brand Deals FOREVER (Creators, Watch This ASAP) #281

Influencer Confidential

Play Episode Listen Later Jan 22, 2026 11:32 Transcription Available


Whether you love AI or hate it, one thing is clear: IT'S HERE TO STAY. We've all seen the AI and deepfake videos floating around the internet lately and they're equal parts fascinating and terrifying! And no, the creator industry is not exempt from this shift. In fact… it's already showing up inside brand deal contracts! In this episode, I'm walking you through what I'm actually seeing right now when it comes to AI and brand partnerships from both the brand side and the creator side so you can think bigger about how you negotiate, protect yourself, and future-proof your business. Because here's the part most creators don't realize yet: If it's not in the contract, it can arguably be allowed.

Night Attack Audio Feed
Great Night #238: Bob Ross of Deep Fakes

Night Attack Audio Feed

Play Episode Listen Later Jan 21, 2026


Justin calls in from home on baby-duty while Ashley recovers; meanwhile, Brian becomes the Bob Ross of Deep Fakes. Get an extra episode every week only at https://www.patreon.com/greatnight!

Using the Whole Whale Podcast
Grok's Deepfake Crisis Is Forcing a Global Reckoning on AI, Platform Safety, and Accountability (news)

Using the Whole Whale Podcast

Play Episode Listen Later Jan 21, 2026 17:14


This week on the Nonprofit News Feed, the hosts dive into pressing digital safety issues and uplifting philanthropic news. The episode covers the alarming rise of deep fakes and their implications for nonprofits, and celebrates a significant donation to support LGBTQ+ youth.
Main Topics:
Global Crisis of Deep Fakes: The episode highlights the disturbing trend of deep fakes, particularly on the AI platform Grok, which is generating non-consensual and sexualized images. The hosts stress the urgent need for nonprofits working with vulnerable groups to advocate for stricter regulations and educate communities on digital safety.
Implications for Nonprofits: The conversation emphasizes the critical role of nonprofits in combating AI-enabled cyberbullying and gender-based violence. Organizations are called to update internet safety protocols and engage in advocacy to protect affected communities.
Mackenzie Scott's Transformative Gift: In a beacon of hope, Mackenzie Scott's $45 million unrestricted donation to the Trevor Project is celebrated. This lifeline comes after federal funding cuts and highlights the importance of substantial, unrestricted support for nonprofits in crisis.
Dissolution of the Corporation for Public Broadcasting: The episode also covers the dissolution of this vital organization due to funding cuts, impacting local journalism and broadcasting. The hosts discuss the potential repercussions on local news and the need for new revenue streams.

Night Attack Video Feed
Great Night #238: Bob Ross of Deep Fakes

Night Attack Video Feed

Play Episode Listen Later Jan 21, 2026


Justin calls in from home on baby-duty while Ashley recovers; meanwhile, Brian becomes the Bob Ross of Deep Fakes. Get an extra episode every week only at https://www.patreon.com/greatnight!

The Human Risk Podcast
Becky Holmes on Romance Scams

The Human Risk Podcast

Play Episode Listen Later Jan 21, 2026 68:28


What lies behind Romance Fraud? Romance fraud is one of the fastest-growing forms of fraud worldwide, and one of the most emotionally devastating. It's also one of the most misunderstood.
On this episode, I'm speaking to Becky Holmes, author of the bestselling book Keanu Reeves Is Not in Love With You. Becky didn't become interested in romance fraud through victimhood or research. She stumbled into it during the pandemic after being approached by scammers online — and instead of ignoring them, she decided to wind them up. What began as a joke — sending absurd messages, inventing ridiculous scenarios, and pushing scam scripts to breaking point — turned into something much more serious. Through humour, Becky uncovered the psychological mechanics of romance fraud: how trust is built, how isolation and gaslighting work, and why believing you're “too smart to fall for it” is often the most dangerous belief of all.
In this conversation, we explore why laughing at scammers is not the same as blaming victims, why romance fraud closely mirrors patterns seen in abusive relationships, and why shame — not stupidity — keeps people trapped. We also talk about humour as a gateway to learning, the limits of victim-focused storytelling, and the uncomfortable truth that none of us are immune. This is a funny conversation in places. And then it isn't.
This is not the first time the Human Risk Podcast has explored romance fraud. On a previous episode, I spoke with Anna Rowe, a victim of romance fraud, about the profound emotional and psychological impact of being deceived by someone you believed you loved.
In this episode, we discuss:
Why romance fraud is a psychological scam, not a technical one
How humour can expose manipulation without mocking victims
The striking parallels between romance fraud and abusive relationships
Isolation, gaslighting, and shame as tools of control
Why “it would never happen to me” is such a dangerous belief
The role of AI, deepfakes, and evolving scam tactics
Why fraud literacy matters — and why people don't seek it out until it's too late
The emotional cost of online exposure and harassment
What institutions, platforms, and society still get wrong about fraud
Guest Profile
Becky Holmes is an author, speaker, and writer specialising in fraud, online manipulation, and digital harm. Her first book, Keanu Reeves Is Not in Love With You, explores the world of romance fraud through humour, storytelling, and lived experience. Her second book, The Future of Fraud, examines how scams are evolving in a world shaped by AI and digital identity.
Links and resources
Becky's first book Keanu Reeves Is Not in Love With You - https://share.google/fKQ6qCL1l8Ygl1ey2
The Future of Fraud, her second (out April 2026) - https://share.google/fKQ6qCL1l8Ygl1ey2
Becky on LinkedIn: https://www.linkedin.com/in/beckyholmeshatesspinach/
Becky on Instagram: Becky Holmes (@deathtospinach)
Becky on Twitter/X: https://x.com/deathtospinach
Becky's book agent profile: https://www.curtisbrown.co.uk/client/becky-holmes
Previous Human Risk Podcast episode with Anna Rowe on being a victim of romance fraud: https://www.humanriskpodcast.com/anna-rowe-on-romance-scams/
AI-Generated Timestamped Summary
00:00 – Why romance fraud matters: Christian explains why the podcast is returning to romance fraud, linking this episode to an earlier conversation with victim Anna Rowe (linked in the show notes).
02:00 – How Becky Holmes got into romance fraud: Becky describes how being approached by scammers during lockdown — and deciding to wind them up — accidentally turned into deep expertise.
05:00 – When jokes expose the script: Absurd replies, fake crime scenes, and the moment Becky realised scammers weren't reading messages, just following scripts.
09:00 – Laughing at scammers, not victims: Why humour can highlight manipulation without blaming those who fall victim — and how the book shifts from comedy to something much darker.
14:00 – Romance fraud as psychological abuse: The parallels with abusive relationships: isolation, gaslighting, shame, and why people stay, return, or fall again.
21:00 – “It would never happen to me”: Why believing you're too smart to fall for romance fraud is often the biggest risk of all.
28:00 – What the media gets wrong: Victim-focused storytelling, ignored systems, and why AI, deepfakes, and scam scripts matter more than headlines.
36:00 – Fraud literacy and prevention: Why people don't seek out information about fraud until it's too late — and how humour can be a gateway to awareness.
45:00 – The personal cost of online exposure: Online harassment, cyberflashing, and the emotional toll of spending years inside the systems you're critiquing.
55:00 – What's next for Becky: Upcoming books, speaking work, and where to find her online.

Nonprofit News Feed Podcast
Grok's Deepfake Crisis Is Forcing a Global Reckoning on AI, Platform Safety, and Accountability (news)

Nonprofit News Feed Podcast

Play Episode Listen Later Jan 21, 2026 17:14


This week on the Nonprofit News Feed, the hosts dive into pressing digital safety issues and uplifting philanthropic news. The episode covers the alarming rise of deep fakes and their implications for nonprofits, and celebrates a significant donation to support LGBTQ+ youth.
Main Topics:
Global Crisis of Deep Fakes: The episode highlights the disturbing trend of deep fakes, particularly on the AI platform Grok, which is generating non-consensual and sexualized images. The hosts stress the urgent need for nonprofits working with vulnerable groups to advocate for stricter regulations and educate communities on digital safety.
Implications for Nonprofits: The conversation emphasizes the critical role of nonprofits in combating AI-enabled cyberbullying and gender-based violence. Organizations are called to update internet safety protocols and engage in advocacy to protect affected communities.
Mackenzie Scott's Transformative Gift: In a beacon of hope, Mackenzie Scott's $45 million unrestricted donation to the Trevor Project is celebrated. This lifeline comes after federal funding cuts and highlights the importance of substantial, unrestricted support for nonprofits in crisis.
Dissolution of the Corporation for Public Broadcasting: The episode also covers the dissolution of this vital organization due to funding cuts, impacting local journalism and broadcasting. The hosts discuss the potential repercussions on local news and the need for new revenue streams.

Law and Chaos
Ep 197 — OBBBA the Hut

Law and Chaos

Play Episode Listen Later Jan 20, 2026 55:19


Breaking! The president is bugf*** insane! But first … docket alerts. Dun-dun-DUNNNNN.
We've reached the “yelling at chestnuts for being lazy” stage of the presidency, as Trump barks out ever-nuttier orders. Meanwhile in Texas, Judge Matthew Kacsmaryk managed to moot the appeal of an LGBTQ+ student group at West Texas A&M which sued in 2023 to enjoin their school's ban on drag shows. All he had to do was rush out an opinion in 48 hours saying that drag shows are not expressive conduct covered by the First Amendment. And lie about Fifth Circuit precedent!
TG we have at least one good legal development, this time out of Minnesota!
For the main show, we'll break down the legal implications of Elon Musk building a chatbot in his own abusive troll image. Will the Take It Down Act put a stop to AI generation of deepfake CSAM and NCII? Meanwhile over at the Department of Homeland Security, Secretary Kristi Noem and her minions are a wee smidge confused about the Fourth Amendment. If you can even believe it! But even a blind squirrel gets a nut once in a while. DHS wins a round against legislators seeking to inspect ICE detention facilities thanks to some creative accounting. But the victory may be short-lived.
And subscribers will get another visit with that Overstock.com weirdo Patrick Byrne, who managed to lose a defamation case against the most defamation-proof defendant in America. Guest starring: Stefanie Lambert.
Show Links:
https://www.lawandchaospod.com/
BlueSky: @LawAndChaosPod
Threads: @LawAndChaosPod
Twitter: @LawAndChaosPod
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

NZ Tech Podcast
Hornet Trackers, Siri upgrades + Grok's deepfake fiasco

NZ Tech Podcast

Play Episode Listen Later Jan 20, 2026 53:26


Hear from host Paul Spain and Seeby Woodhouse (Voyager Internet) as they unpack this week's tech news, including One NZ kicking off its 3G shutdown in Dunedin, radio trackers boosting Auckland's hornet hunt, EB Games' nationwide exit, and the government's new AI advisory pilot for SMEs. They explore Apple's choice to use Gemini AI for Siri and Grok's deepfake controversy. Plus, Seeby shares candid reflections on leadership, work-life balance, and why coming back to hands-on management has been both energizing and inspiring.
A big thank you to our show partners One NZ, Spark, Workday, 2degrees and Gorilla Technology.

Richard Syrett's Strange Planet
1308 A.I. Digital Demons, Deepfakes, and the Crime Wave No One Will Stop

Richard Syrett's Strange Planet

Play Episode Listen Later Jan 19, 2026 66:29


FOLLOW RICHARD
Website: https://www.strangeplanet.ca
YouTube: @strangeplanetradio
Instagram: @richardsyrettstrangeplanet
TikTok: @therealstrangeplanet

EP. #1308 A.I. Digital Demons, Deepfakes, and the Crime Wave No One Will Stop
Artificial intelligence is sold as neutral, helpful, inevitable. I don't buy it. On this episode of Strange Planet, I sit down with privacy pioneer Dr. Katherine Albrecht to examine the real-world crimes of A.I.—deepfakes, impersonation, psychological manipulation, and synthetic fraud already unfolding at scale. We expose systems that deceive with confidence, diffuse responsibility, and quietly reshape human behavior. Then we go further, asking whether modern technology has crossed from tool to temptation—outsourcing truth, judgment, and conscience itself. No hype. No corporate filters. Just a hard audit of a system learning us, using us, and normalizing harm in real time today.

GUEST: Dr. Katherine Albrecht is an internationally recognized privacy expert, bestselling author, and early critic of the digital surveillance state. With a doctorate from Harvard University, she has testified before the U.S. Congress and the European Commission and helped expose RFID tracking and corporate data harvesting years before they went mainstream. She is the co-founder of StartMail, a privacy-first encrypted email service, and the author of Spychips. Known for being early—and right—Albrecht now warns that A.I. poses not just technical risks, but profound moral and spiritual ones.

BOOKS:
Spychips: How Major Corporations and Government Plan to Track Your Every Move with RFID
The Spychips Threat: Why Christians Should Resist RFID and Electronic Surveillance
I Won't Take the Mark: A Bible Book and Contract for Children

WEBSITES:
https://katherine-albrecht.com/
https://www.realisgood.org/

SUPPORT OUR SPONSORS!!!
CARGURUS
CarGurus is the #1 rated car shopping app in Canada on the Apple App and Google Play store. CarGurus has hundreds of thousands of cars from top-rated dealers, plus those deal ratings, price history, and dealer reviews on every listing so you can shop with confidence. Their advanced search tools and easy-to-use app put you in control, with real-time alerts for price drops and new listings so you'll never miss a great deal. And when you're ready, CarGurus connects you with trusted dealerships for a transparent and hassle-free buying process. Buy your next car today with CarGurus at cargurus dot ca.
GHOSTBED
Every GhostBed mattress is designed with premium materials, proven cooling technology, and their exclusive ProCore™ layer—a targeted support system that reinforces the center of the mattress where your body's heaviest. It helps keep your spine aligned and your back supported while you sleep. Right now, during GhostBed's Holiday Sale, you can get 25% off sitewide for a limited time. Just go to GhostBed.com/strangeplanet and use promo code STRANGEPLANET at checkout.
FOUND – Smarter banking for your business
Take back control of your business today. Open a Found account for FREE at Found dot com. That's F-O-U-N-D dot com. Found is a financial technology company, not a bank. Banking services are provided by Lead Bank, Member FDIC. Join the hundreds of thousands who've already streamlined their finances with Found.
HIMS - Making Healthy and Happy Easy to Achieve
Sexual Health, Hair Loss, Mental Health, Weight Management
START YOUR FREE ONLINE VISIT TODAY - HIMS dot com slash STRANGE
https://www.HIMS.com/strange
MINT MOBILE
Premium Wireless - $15 per month. No Stores. No Salespeople. JUST SAVINGS. Ready to say yes to saying no? Make the switch at MINT MOBILE dot com slash STRANGEPLANET. That's MINT MOBILE dot com slash STRANGEPLANET.

BECOME A PREMIUM SUBSCRIBER!!!
https://strangeplanet.supportingcast.fm
Three monthly subscriptions to choose from. Commercial Free Listening, Bonus Episodes and a Subscription to my monthly newsletter, InnerSanctum. Visit https://strangeplanet.supportingcast.fm

Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://strangeplanet.supportingcast.fm/

The Necessary Conversation
Trump's Most Authoritarian Week Yet

The Necessary Conversation

Play Episode Listen Later Jan 18, 2026 74:32


⚠️ NOTE: Haley is not on this episode — she's out gallivanting — but Mary Lou brings in her MAGA friend Jules as a special guest, making for one of the most intense and revealing conversations we've had yet. This week on The Necessary Conversation, we break down a terrifying escalation of federal power, shocking allegations tied to the Epstein network, and Trump openly flirting with the end of elections and international war — all while ICE violence spirals out of control inside the United States.

The Charlie Kirk Show
THOUGHTCRIME Ep. 111 — Autistic Barbie? Hollywood Deepfakes? British DEI Video Games?

The Charlie Kirk Show

Play Episode Listen Later Jan 17, 2026 91:55 Transcription Available


The ThoughtCrime crew discusses the most essential topics of the week, including: -What do they make of Mattel's first-ever autistic Barbie doll? -Does AI mean that Hollywood actors are obsolete forever? -Who is "Amelia" and why is she the new avatar of European nationalism? Watch every episode ad-free on members.charliekirk.com! Get new merch at charliekirkstore.com! Support the show: http://www.charliekirk.com/support See omnystudio.com/listener for privacy information.

Human Events Daily with Jack Posobiec
THOUGHTCRIME Ep. 111 — Autistic Barbie? Hollywood Deepfakes? Jessica Is The New Karen?

Human Events Daily with Jack Posobiec

Play Episode Listen Later Jan 17, 2026 88:28


The ThoughtCrime crew discusses the most essential topics of the week, including: -What do they make of Mattel's first-ever autistic Barbie doll? -Does AI mean that Hollywood actors are obsolete forever? -Who is "Amelia" and why is she the new avatar of European nationalism? Support the show

PBS NewsHour - Segments
Musk's Grok AI faces more scrutiny after generating sexual deepfake images

PBS NewsHour - Segments

Play Episode Listen Later Jan 16, 2026 6:30


Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence. PBS News is supported by - https://www.pbs.org/newshour/about/funders. Hosted on Acast. See acast.com/privacy for more information.

Danny In The Valley
Grok, deepfakes and who should police AI

Danny In The Valley

Play Episode Listen Later Jan 16, 2026 38:56


What happens when AI gets it wrong? After a backlash over the misuse of Elon Musk's AI tool Grok, new restrictions have been imposed on editing images of real people. Is this a sign that AI regulation is lagging, and who should be in charge – governments or Silicon Valley? This week, Danny and Katie are joined by AI computer scientist Kate Devlin from King's College London to discuss why this moment could be a turning point for global AI rules. Image: Getty. Hosted on Acast. See acast.com/privacy for more information.

TODAY
TODAY News, January 16: Winter Weather on the Move | NFL Playoff Weekend Preview | Weight Loss Deepfake Drug Scams

TODAY

Play Episode Listen Later Jan 16, 2026 32:54


Brutal cold and blinding snow sweep across the Midwest and Northeast. Plus, a look ahead at a packed NFL playoff weekend. Also, the latest on AI weight loss drug scams. And, a closer look at the 2016 social media trend.  Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Culture Translator
Roundtable: Trade School, Grok Deepfakes, Fiber vs. Protein

The Culture Translator

Play Episode Listen Later Jan 16, 2026 58:01


Three Big Conversations:
Gen Z heads to trade school en masse - 05:14
The controversy around Grok's creation of deepfake images - 21:26
Fiber is the new protein - 42:00
Slang of the Week: "Choppelganger" - 20:20 --> Check out the definition in our resource The Parent Guide to Teen Slang
In Other News: - 55:06
Harry Styles just dropped a mysterious website called WeBelongTogether.co filled with crowd footage and the words "We belong together," and fans are losing their minds trying to decode what it means.
The year in anime is kicking off with a bang, with new episodes of popular ongoing shows like Jujutsu Kaisen, Frieren: Beyond Journey's End, and Fire Force, and premieres of highly anticipated newcomers like Sentenced to Be a Hero.
The Golden Globes declared KPop Demon Hunters 2025's best animated film, and Gen Z favorite Timothée Chalamet won best actor for his turn in Marty Supreme.
Survey data from Talker Research found that 63% of Gen Z now intentionally carves out screen-free windows in their days, a higher percentage than any other generation.
Popular comedian and influencer Druski has been bouncing around the internet this week because of his skit poking fun at (or perhaps critiquing) the over-the-top production and performative pastors of some mega-churches.
Become a monthly donor today, join the Table. Check out the podcast now on our YouTube Channel! Click to check out The Pour Over. Get your question on Ask Axis! Send in your questions to ask@axis.org. For more Axis resources, go to axis.org.

Stuff Mom Never Told You
How Grok is Hurting Women and Children

Stuff Mom Never Told You

Play Episode Listen Later Jan 15, 2026 60:14 Transcription Available


X's AI chatbot Grok has recently been involved in some high-profile cases of generating damaging and disturbing images of young girls and women. However, this is not new; it is part of a pattern across several tech platforms, one rooted in broader societal issues. Bridget Todd breaks it down for us. See omnystudio.com/listener for privacy information.

The Culture Translator
CT: Trade School, Fiber vs. Protein, and Grok Deepfakes

The Culture Translator

Play Episode Listen Later Jan 15, 2026 10:49


Gen Z heads to trade school en masse, fiber is the new protein, and the controversy around Grok's creation of deepfake images.
Slang of the Week: "Choppelganger" --> Check out the definition in our resource The Parent Guide to Teen Slang
In Other News:
Harry Styles just dropped a mysterious website called WeBelongTogether.co filled with crowd footage and the words "We belong together," and fans are losing their minds trying to decode what it means.
The year in anime is kicking off with a bang, with new episodes of popular ongoing shows like Jujutsu Kaisen, Frieren: Beyond Journey's End, and Fire Force, and premieres of highly anticipated newcomers like Sentenced to Be a Hero.
The Golden Globes declared KPop Demon Hunters 2025's best animated film, and Gen Z favorite Timothée Chalamet won best actor for his turn in Marty Supreme.
Survey data from Talker Research found that 63% of Gen Z now intentionally carves out screen-free windows in their days, a higher percentage than any other generation.
Popular comedian and influencer Druski has been bouncing around the internet this week because of his skit poking fun at (or perhaps critiquing) the over-the-top production and performative pastors of some mega-churches.
Become a monthly donor today, join the Table. Check out the podcast now on our YouTube Channel! Click to check out The Pour Over. Get your question on Ask Axis! Send in your questions to ask@axis.org. For more Axis resources, go to axis.org.

Marketplace All-in-One
AI deepfakes could lead to fines and country-wide bans

Marketplace All-in-One

Play Episode Listen Later Jan 13, 2026 8:44


From the BBC World Service: There's mounting pressure on Grok — the AI chatbot that's part of Elon Musk's social media platform, X — over the generation of explicit images. The U.K. is rushing through a new law making it illegal to create nonconsensual intimate images; meanwhile, Indonesia and Malaysia have already blocked access to Grok. Then, the race is heating up to build the most powerful quantum computers. We visit one of the biggest players, Google's quantum lab in California.

Marketplace Morning Report
AI deepfakes could lead to fines and country-wide bans

Marketplace Morning Report

Play Episode Listen Later Jan 13, 2026 8:44


From the BBC World Service: There's mounting pressure on Grok — the AI chatbot that's part of Elon Musk's social media platform, X — over the generation of explicit images. The U.K. is rushing through a new law making it illegal to create nonconsensual intimate images; meanwhile, Indonesia and Malaysia have already blocked access to Grok. Then, the race is heating up to build the most powerful quantum computers. We visit one of the biggest players, Google's quantum lab in California.

Trumpcast
How Grok Filled X With Deepfake Porn

Trumpcast

Play Episode Listen Later Jan 11, 2026 33:48


To kick off the new year, users on X have been asking Grok to nudify people - and kids - at an astonishing rate. And X leadership is egging them on. Guest: Jason Koebler, founder of 404 Media and host of the 404 Media Podcast. Want more What Next TBD? Subscribe to Slate Plus to access ad-free listening to the whole What Next family and all your favorite Slate podcasts. Subscribe today on Apple Podcasts by clicking “Try Free” at the top of our show page. Sign up now at slate.com/whatnextplus to get access wherever you listen. Podcast production by Evan Campbell and Patrick Fort. Hosted on Acast. See acast.com/privacy for more information.

What Next | Daily News and Analysis
What Next: TBD | Tech, power, and the future - How Grok Filled X With Deepfake Porn

What Next | Daily News and Analysis

Play Episode Listen Later Jan 11, 2026 33:48


To kick off the new year, users on X have been asking Grok to nudify people - and kids - at an astonishing rate. And X leadership is egging them on. Guest: Jason Koebler, founder of 404 Media and host of the 404 Media Podcast. Want more What Next TBD? Subscribe to Slate Plus to access ad-free listening to the whole What Next family and all your favorite Slate podcasts. Subscribe today on Apple Podcasts by clicking “Try Free” at the top of our show page. Sign up now at slate.com/whatnextplus to get access wherever you listen. Podcast production by Evan Campbell and Patrick Fort. Hosted on Acast. See acast.com/privacy for more information.