Artificial intelligence-based human image synthesis technique
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM AI is transforming identity security and giving cybercriminals superpowers. This episode explores the rise of deepfakes, social engineering, and why traditional MFA and recovery flows fail. Learn how businesses can protect digital trust and secure remote work in an AI-driven world.
AI-generated videos of the Royal Family are flooding the internet, featuring everything from rapping Queens to wrestling Dianas, as British intelligence warns of foreign meddling. Meanwhile, King Charles earns praise for his newfound warmth with the public, Queen Camilla's son jokes that a royal title would spark a revolution, and "Supernanny" hails William and Kate's down-to-earth parenting. Plus, meet Gen-Z's new royal heartthrob Albert Windsor, and find out why Diana once had tabloids smuggled into Kensington Palace. Hear our new show "Crown and Controversy: Prince Andrew" here. Check out "Palace Intrigue Presents: King William" here.
Storm Technology, a Littlefish company, has announced survey findings which reveal that 27% of IT leaders are concerned about their ability to detect deepfake attacks over the next 12 months. This concern was felt by more respondents in larger enterprises (33%) than SMBs (23%). The research - conducted by Censuswide and involving 200 IT decision-makers and leaders across Ireland and the UK (100 in each market) - found that the biggest concerns around AI and security over the next year are data breaches (34%), data protection (33%), and increased risk of adversarial or cyber-attacks (31%). Meanwhile, a quarter (25%) consider shadow AI (use of unsanctioned or unpermitted tools) among their biggest concerns. This is not necessarily surprising given that half of respondents (50%) know that people in their organisation are using such tools and some 55% admitted to using unsanctioned or unpermitted tools themselves. Forty-two per cent of IT leaders also opined that company data is not safe for input into these platforms. Perhaps exacerbating this issue, just 60% of companies have been specific about which AI tools are sanctioned or permitted. More broadly, over a fifth (21%) of IT leaders do not have a high degree of trust in AI tools and almost a third (32%) of companies do not have a strategy in place to address any AI risks that arise. The research showed that 79% of IT leaders in Ireland and the UK agree their organisation needs to focus more on the regulation of AI tools and 28% do not believe their governance around AI tools is adequate. This rose to more than a third (35%) among Irish respondents. When it comes to AI and data, 24% of IT leaders do not think their business data is ready for AI, with a similar proportion (23%) of the opinion that their data governance policies are not robust enough to support secure AI adoption. This could explain why 78% believe a data readiness project is required to ensure successful AI adoption in their company. Sean Tickle, Cyber Services Director, Littlefish, said: "AI is rapidly reshaping the enterprise landscape, but the speed of adoption is outpacing the maturity of governance. When nearly a third of organisations lack a strategy to manage AI risk, and over half of IT leaders admit to using unsanctioned tools, it's clear that shadow AI isn't just a user issue - it's a leadership one. "Deepfake threats, data governance gaps, and a lack of trust in AI platforms are converging into a perfect storm. To stay secure and competitive, businesses must invest in visibility, policy clarity, and data readiness - because without those, AI becomes a liability, not a differentiator." See more stories here. More about Irish Tech News Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No.1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
You ever have the feeling what you see on the news and socials is exactly and only what they want you to hear? Yeah, me too. Turns out it's true. Very rarely do you get the full story so you can make your own informed decisions. And, this practice is skewing and distorting the way we view and interact with the world. My guest today, Jeremy Carey – Founder of IRONCLAD, knows exactly what this is about and in his new series (partnered with Shawn Ryan), Target Intelligence: Psyop, they unpack exactly how this is happening in every aspect of life. We talk about what he calls the "Forgotten Audience" and the "Doer Class," the power of telling good stories – accurately, why authenticity is the key to success in an inauthentic world, and understanding the "Education-to-Entertainment" ratio in consuming content. SHOW HIGHLIGHTS 00:00 - Opening 02:03 - Why Ironclad Exists 03:41 - The Content Attention Crisis 05:03 - The Forgotten Audience 06:31 - Storytelling Without Agendas 08:04 - Authenticity in Modern Media 09:10 - Working with Elite Creators 10:28 - Advice for Creators & Entrepreneurs 11:42 - Early Failures and Lessons 14:01 - God's Will, Hard Work, and Obstacles 17:20 - Getting Reps & Evolving Your Craft 18:17 - Inside the Target Intelligence Series 21:16 - Psychological Operations in Everyday Life 22:53 - Useful Idiots & Hidden Influence 24:26 - Conspiracies, Credibility & "They" 27:05 - The Future of Media Consumption 29:19 - AI Disruption in Film & Branding 31:27 - Masculinity Content & Market Saturation 32:37 - Tools, AI & Adding Value 34:00 - Advertising, Trust & Audience Alignment 36:42 - Serving the Right Audience 39:32 - On-Demand Society & Tech Shifts 41:26 - Upcoming Ironclad Projects 43:51 - Turning Podcasts Into Film Series 45:39 - How to Know What's Real Anymore 48:13 - Deepfakes, Trust & Reputation 50:19 - The Importance of Authenticity 53:27 - Where to Find the Series Battle Planners: Pick yours up today! Order Ryan's new book, The Masculinity Manifesto. For more information on the Iron Council brotherhood. Want maximum health, wealth, relationships, and abundance in your life? Sign up for our free course, 30 Days to Battle Ready
One thing I've learned after years of covering cybersecurity is that the “state of the threat landscape” rarely sits still long enough to fit neatly into a headline. Every time you think you've understood the latest trend, something shifts under […] The post From Polymorphic Attacks to Deepfakes: The Shifting Threat Landscape appeared first on TechSpective.
Think your kid's phone is just a screen? Think again. It's an always-open door. A Pandora's box you'll regret opening and can't shut. We sit down with Ben, The Family IT Guy, a 30-year cybersecurity veteran and dad, to map the new terrain of digital parenting: algorithm-driven feeds, anonymous chats, sextortion, deepfakes, and the quiet ways addictive design erodes sleep, focus, and safety. No panic, no fluff—just the playbook families need right now.
From there, we tackle platform myths. Roblox's parental controls still let young kids into explicit spaces. These dopamine-driven app designs and their origins make it a magnet for predators and sextortion. The rule of thumb is clear: avoid algorithms and open DMs. If a product can scroll forever or message anyone, treat it as high risk.
To watch Episode #87, like and subscribe: https://youtu.be/1Syc0LbBInE
Sextortion gets the spotlight it deserves. Criminal networks now use AI to forge nude images from everyday photos and clone voices from short clips. The guidance is firm: don't pay, tell a trusted adult immediately, file with the National Center for Missing and Exploited Children, and use Take It Down to remove images. Build a family tech agreement that includes a "free pass" when kids report harm—predators rely on shame and silence. We also dig into schools' devices, expanding surveillance tech, and why privacy can't be assumed.
AI isn't the villain or the savior—it's a power tool. Used unsupervised, it can short-circuit thinking. Used well, it helps kids ask better questions and break problems into first principles. Our stance: kids don't use AI alone; adults learn it first and model critical thinking. By the end, you'll have clear rules, smart tools, and scripts to talk with your kids without fear or lectures.
If this helped, follow, share with a parent who needs it, and leave a review so more families can find these safety tools. Your next best step: set a family device drop-off time tonight and talk about a free pass.
Send us a text
Support the show
Follow Wild Chaos on Social Media:
Apple iTunes: https://podcasts.apple.com/us/podcast/the-wild-chaos-podcast/id1732761860
Spotify: https://open.spotify.com/show/5KFGZ6uABb1sQlfkE2TIoc?si=8ff748aa4fc64331
Instagram: https://www.instagram.com/wildchaospodcast
Bam's Instagram: https://www.instagram.com/bambam0069
Youtube: https://youtube.com/@wildchaospod
TikTok: https://www.tiktok.com/@thewildchaospodcast
Meta (Facebook): https://www.facebook.com/TheWildChaosPodcast
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome to AI Unraveled (November 21, 2025): Your daily strategic briefing on the business impact of AI. Today's Highlights: Foxconn brings OpenAI's hardware ambitions to US soil; internal leaks reveal OpenAI's deep anxiety over Google's Gemini 3; Google begins monetizing AI search with ads; and the new Nano Banana Pro model redefines on-device efficiency. Strategic Pillars & Topics:
Bridget is joined by producer Joey Patt to cover the week's tech news that you might have missed. Megan Thee Stallion targeted by racist, sexist deepfakes: https://www.nbcnews.com/tech/tech-news/megan-thee-stallion-responds-sexually-explicit-deepfake-x-rcna156424 Trump Administration planning executive order preventing AI regulation by states: https://www.politico.com/news/2025/11/19/white-house-prepares-executive-order-to-block-state-ai-laws-00660719 Women are under-represented in the building trades, where they often face harassment or worse. A young welder's murder highlights the risks: https://www.linkedin.com/posts/sarah-kotzen_tradeswomen-workplacesafety-nawic-activity-7395254105088323584-NNd4/ Youth sports are costing parents a fortune, and private equity firms are making it worse: https://www.levernews.com/wall-street-is-paywalling-your-kids-sports/ AI-powered teddy bear talks to kids about knives, bondage, and starting fires: https://www.cnn.com/2025/11/19/tech/folotoy-kumma-ai-bear-scli-intl If you're listening on Spotify, you can leave a comment there to let us know what you thought about these stories, or email us at hello@tangoti.com Follow Bridget and TANGOTI on social media! || instagram.com/bridgetmarieindc/ || tiktok.com/@bridgetmarieindc || youtube.com/@ThereAreNoGirlsOnTheInternet See omnystudio.com/listener for privacy information.
It is that time of year. Time to reflect… to look back… to realize how little we've learned, and shake our heads in sadness. 123456 is still the most used password according to a new study - and people still think that Taylor Swift is trying to sell them stuff on social media. We're looking at weak passwords and deepfake celebrities for 2025 on this edition of The Checklist, brought to you by SecureMac. Check out our show notes: SecureMac.com/Checklist And get in touch with us: Checklist@Securemac.com
Hour 1 of https://RushToReason.com launches with a powerful roundtable featuring Dr. Kelly Victory (https://www.rushtoreason.com/dr-kelly-victory-2), trauma and public-health expert, and Steve House, former health-care executive and longtime analyst. Together, they take listeners into a fast-moving, unfiltered look at free speech, medical censorship, and the disturbing global trends shaping public policy. Why did honest medical debate vanish during COVID? Why is the CDC now admitting gaps in vaccine-autism research after years of denial? And how far are today's Malthusian-minded leaders willing to go—climate mandates, 15-minute cities, population control agendas—to gain power over your daily life? The conversation goes deeper as Steve and Dr. Kelly expose a growing crisis of trust: a collapse in confidence in doctors, the media, the legal system, and government. What happens when truth becomes subjective, experts contradict each other, and critical thinking disappears? And what does it cost to stand against the narrative? Packed with raw honesty, firsthand experiences, and piercing questions, this hour asks listeners to decide: will you choose freedom over force… and can America recover before the window closes?
HOUR 2
Hour 2 opens with a powerful, no-nonsense interview featuring Alvin Lui, President of Courage Is a Habit (https://courageisahabit.org), who exposes what he calls the real mission of today's public schools. Is K–12 education still focused on academics—or has it become a pipeline for social-justice indoctrination, emotional manipulation, and Marxist ideology disguised as "empathy" and "mental health"? Alvin breaks down how language contamination, restorative justice, and culturally responsive teaching reshape children's worldview by teaching them to rely on the government, not their parents. How do you protect your kids when teachers get the "best hours" of your child's day… and you only get the leftovers? Then the hour pivots to government overreach on a broader scale—from California's jaw-dropping proposal that could force teens up to age sixteen into booster seats, to new credit-card surcharge schemes that threaten to nickel-and-dime consumers into submission. Are these policies really about safety and business costs… or part of a growing trend where personal freedom shrinks and regulation expands? Packed with insight, candor, and urgency, this hour dares listeners to ask: How much control are we willing to give away—and who benefits when we do?
HOUR 3
Hour 3 explodes out of the gate with Jersey Joe (https://www.youtube.com/@jerzeejoe3145) and John taking on one of the biggest battles of our time: truth vs. deception. What happens when people build their worldview on memes, bad data, and AI-generated political fakes that look and sound like the real thing? From Thomas Sowell's warnings about moral superiority to a deepfake Bernie Sanders quote that even Joe fell for, the hour raises a chilling question: how do you find truth in a world where lies are easier than ever to manufacture? Then the tone shifts to opportunity and grit, as Mike Rowe and NVIDIA's Jensen Huang predict that America's next millionaires won't come from Silicon Valley—they'll come from the trades. Plumbers, electricians, mechanics, and young entrepreneurs willing to take risks are building wealth the old-fashioned way: sweat, skill, and guts. But the hour doesn't stay calm for long.
Joe and John rip through outrageous real-world stories—dangerous crime policies, ICE operations, exploding government costs, and EVs that fail spectacularly. Finally, Phil Kerpen, president of American Commitment (https://www.americancommitment.org), exposes the stunning truth about AARP and its billion-dollar ties to UnitedHealth. Is the nation's most famous “senior advocacy group” really advocating for seniors… or something far more lucrative? Hour Three is packed with facts, fire, and the kind of questions that make you rethink everything.
Taylor Swift has been named the most dangerous celebrity based on the deepfakes made of her. Crazy rumor that Timothee Chalamet is a secret rapper in the UK. Going to see Wicked for Good tomorrow? Here's what to watch. Make sure to also keep up to date with ALL our podcasts we do below that have new episodes every week: The Thought Shower Let's Get Weird Crisis on Infinite Podcasts See omnystudio.com/listener for privacy information.
Kopfsalat - The "Freunde fürs Leben" podcast about depression
In this episode of the "Kopfsalat" podcast from Freunde fürs Leben e.V., host Sven Haeusler once again speaks with HateAid co-founder Anna-Lena von Hodenberg, this time about the systemic forces behind digital violence. The media expert explains how algorithms, disinformation, and the logic of social platforms amplify hate, why right-wing extremist actors use digital attacks strategically, and how AI-powered deepfakes further unsettle our perception. For HateAid, digital violence is no accident but part of an economic and political system that profits from these developments. At the same time, Anna-Lena von Hodenberg makes clear why she remains hopeful despite these trends and how we can make digital spaces safer by design. Short description: The organisation HateAid supports those affected by digital violence with counselling, evidence preservation, and help with legal costs, and campaigns for platforms to stop amplifying hate and disinformation. Anna-Lena von Hodenberg stresses how important it is to take incidents seriously, not to face them alone, and to report illegal content. Bystanders can also help by publicly backing those affected. The sixth season is supported by the Deutsche DepressionsLiga e.V. and DAK Gesundheit. Host and editorial: Sven Haeusler - http://www.svenson-suite.de/cms/ Audio editor: Julius Hofstädter Music and jingle: Bela Brauckmann - https://www.belabrauckmann.com/ Production and editorial: Diana Doko, Jasmin Hollatz, and Vera Colditz of Freunde fürs Leben e.V. - https://www.frnd.de/ Idea: Sonja Koppitz - https://www.sonjakoppitz.de/ Photo: Benno Kraehahn - https://kraehahn.com/portfolio.html Cover design: Stefanie Butscheidt - http://www.grafische-requisite.de/kontakt/
In this episode I sit down with Zain Zaidi, co-founder and CEO of TransCrypts, to explore his personal story and the mission driving his startup. He shares how a misplaced grad-school transcript sparked a blockchain solution for identity verification, how his company now serves millions of users and hundreds of enterprises, and why the deepfake era makes what they do more critical than ever. We dive into the tech, the regulatory hurdles, Web2 clients, Web3 opportunities, and what's ahead for verified credentials across employment, health and education.
Key Learnings (with timestamps)
00:00:00 – Mark Cuban's warning about digital misinformation and deepfakes; the trust crisis we face online.
00:02:00 – Zain's personal trigger: grad-school transcript error led to founding TransCrypts.
00:04:00 – What TransCrypts solves: giving individuals ownership of verified credentials (job, school, income).
00:06:00 – How the system works: integration with HR systems, issuance of on-chain credentials.
00:07:00 – Focus and expansion: employment verification is central, health and education credentials coming.
00:12:00 – Technical and enterprise challenges: selling Web3 identity solutions to Web2 businesses.
00:14:00 – The synergy of AI + crypto: identity verification as the defense against AI-driven fraud.
00:17:00 – Pitching and fundraising: cold-emailing Mark Cuban, building traction, $15 M seed round.
00:18:00 – Humanitarian use-case: helping Ukrainian refugees access medical records via their platform.
00:23:00 – If he were starting again today: focus on the solution not the tech; time your raise relative to market cycle.
00:34:00 – Roadmap and ambitions: becoming a major employment/income data provider, potential token launch.
Connect
https://www.transcrypts.com/
https://www.linkedin.com/company/transcrypts/
https://x.com/transcrypts_
https://www.linkedin.com/in/zainzaidi99/
Disclaimer
Nothing mentioned in this podcast is investment advice and please do your own research. It would mean a lot if you can leave a review of this podcast on Apple Podcasts or Spotify and share this podcast with a friend. Be a guest on the podcast or contact us - https://www.web3pod.xyz/
Ransomware isn't a lone hacker in a hoodie. It's an entire criminal industry complete with developers, brokers, and money launderers working together like a dark tech startup. And while these groups constantly evolve, so do the tools and partnerships aimed at stopping them before they strike. My guest today is Cynthia Kaiser, former Deputy Assistant Director of the FBI's Cyber Division and now the Head of the Ransomware Research Center at Halcyon. After two decades investigating global cyber threats and briefing top government leaders, she's now focused on prevention and building collaborations across government and industry to disrupt ransomware actors at their source. We talk about how ransomware groups operate, why paying a ransom rarely solves the problem, and what layered defense really means for organizations and individuals. Cynthia also shares how AI is reshaping both sides of the cyber arms race and why she believes hope, not fear, is the most powerful tool for defenders. Show Notes: [01:04] Cynthia Kaiser had a 20-year FBI career and has now transitioned from investigation to prevention at Halcyon. [03:58] The true scale of cyber threats is far larger than most people realize, even within the government. [04:19] Nation-state and criminal activity now overlap, making attribution increasingly difficult. [06:45] Cynthia outlines how ransomware spreads through phishing, credential theft, and unpatched systems. [08:08] Ransomware is an ecosystem of specialists including developers, access brokers, money launderers, and infrastructure providers. [09:55] Discussion of how many ransomware groups exist and the estimated cost of attacks worldwide. [11:37] Ransom payments dropped in 2023, but total business recovery costs remain enormous. [12:24] Paying a ransom can mark a company as an easy target and doesn't guarantee full decryption. [13:11] Example of a decryptor that failed completely and how Halcyon helped a victim recover. [14:35] The so-called "criminal code of ethics" among ransomware gangs has largely disappeared. [16:48] Hospitals continue to be targeted despite claims of moral restraint among attackers. [18:44] Prevention basics still matter including strong passwords, multi-factor authentication, and timely patching. [19:18] Cynthia explains the value of layered defense and incident-response practice drills. [21:22] Even individuals need cyber hygiene like unique passwords, MFA, and updated antivirus protection. [23:32] Deepfakes are becoming a major threat vector, blurring trust in voice and video communications. [25:17] Always verify using a separate communication channel when asked to send money or change payment info. [27:40] Real-world example: credential-stuffing attack against MLB highlights the need for two-factor authentication. [29:55] What to do once ransomware hits includes containment, external counsel, and calling trusted law-enforcement contacts. [32:44] Cynthia recounts being impersonated online and how she responded to protect others from fraud. [34:28] Many victims feel ashamed to report cybercrime, especially among older adults. [36:45] Scams often succeed because they align with real-life timing or emotional triggers. [38:32] Children and everyday users are also at risk from deceptive links and push-fatigue attacks. [39:26] Overview of Halcyon's Ransomware Research Center and its educational, collaborative goals. [42:15] The importance of public-private partnerships in defending hospitals and critical infrastructure. 
[43:38] How AI-driven behavioral detection gives defenders a new advantage. [44:48] Cynthia shares optimism that technology can reduce ransomware's impact. [45:43] Closing advice includes practicing backups, building layered defenses, and staying hopeful. Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. Links and Resources: Podcast Web Page Facebook Page whatismyipaddress.com Easy Prey on Instagram Easy Prey on Twitter Easy Prey on LinkedIn Easy Prey on YouTube Easy Prey on Pinterest Halcyon Cynthia Kaiser - LinkedIn
In this episode, Dave Chatterjee, Ph.D. sits down with Sandy Kronenberg, Founder and CEO of Netarx, an AI-driven platform designed to detect and prevent synthetic impersonation across video, voice, and email. With deepfake fraud incidents skyrocketing by 3,000 percent and costing organizations an average of $500,000 per attack, Kronenberg and Chatterjee unpack how AI can now help defeat AI, turning defense innovation into a frontline imperative.
Together, they explore the evolution of deepfake technology, the psychology of digital deception, and how organizations can safeguard their people and data from real-time manipulation. Through the Commitment–Preparedness–Discipline (CPD) framework, Dr. Chatterjee emphasizes the importance of leadership discipline, continuous monitoring, and technology integration in establishing a high-performance cybersecurity culture in the era of generative AI threats.
Time Stamps
• 00:49 — Dave introduces the topic and deepfake threat surge.
• 02:37 — Sandy shares his professional journey and early exposure to cyber fraud.
• 07:28 — Discussion on the human layer and OSI model limitations.
• 09:55 — Integrating deepfake detection within enterprise security architecture.
• 13:01 — How AI models ingest 50+ signals for real-time identity validation.
• 17:48 — Zoom and video call trust issues in remote business settings.
• 19:40 — Why siloed tools fail and the importance of cross-channel correlation.
• 23:30 — Continuous learning loops: retraining AI models against new deepfake generators.
• 26:59 — The rise of Trust Officers and Trust Operations in corporate governance.
• 32:15 — HR, finance, and brand use cases for disinformation security.
• 35:18 — Balancing training and AI automation.
• 37:16 — Expanding defense to email and multimodal verification.
• 41:18 — Closing takeaways on readiness and adoption strategy.
To access and download the entire podcast summary with discussion highlights - https://www.dchatte.com/episode-95-defending-digital-trust-battling-the-deepfake-surge-with-ai-powered-detection/
Connect with Host Dr. Dave Chatterjee
LinkedIn: https://www.linkedin.com/in/dchatte/
Website: https://dchatte.com/
Books Published
The DeepFake Conspiracy
Cybersecurity Readiness: A Holistic and High-Performance Approach
Articles Published
Ramasastry, C. and Chatterjee, D. (2025). Trusona: Recruiting For The Hacker Mindset, Ivey Publishing, Oct 3, 2025.
Chatterjee, D. and Leslie, A. (2024). "Ignorance is not bliss: A human-centered whole-of-enterprise approach to cybersecurity preparedness," Business Horizons, Accepted on Oct 29, 2024.
What do the laws say about deepfakes – especially sexually explicit ones, like what happened in Lafourche Parish? How does Title IX fit into it? We'll break it down with Loyola law professor Dane Ciolino.
* Ronal Serpas gives his perspective as a former NOPD chief on a Border Patrol task force coming to New Orleans * What do the laws say about deepfakes – especially sexually explicit ones, like what happened in Lafourche Parish?
Fr. Ripperger's Press: https://sentradpress.com/ Fr. Ripperger's order is found at https://dolorans.org/ For more by Fr. Ripperger, please visit http://sensustraditionis.org/
"If the computer gets smarter than me… is that a problem?" Joe Rockey and Father Boniface Hicks tackle the fear narrative around AI by comparing it to other powerful tools (cars, planes) that required strong safeguards—not panic. They explore why AI lacks moral intuition, how optimization without ethics can harm, and why deepfakes and spoofing demand new habits of verification. The conversation lands in the three lenses: honesty about our fears (self), charity through wiser trust and presence (others), under a living relationship with God that anchors what's real.
Key Ideas
Power needs guardrails: like aviation checklists and redundancies, AI calls for safety, oversight, and clear human control.
Limits of machines: AI optimizes; it doesn't intuit, repent, or take responsibility—persons do.
Edge cases matter: "no-win" moments (e.g., deer vs. car) reveal why human moral criteria must shape algorithms.
Deception risk: voice/video/text imitation raises the bar for validation; adopt healthy skepticism and confirm identity more often.
Back to reality: prioritize embodied relationships and parish life; let the Church help form attention, virtue, and trust.
Links & References (none explicitly cited in this episode)
CTA: If this helped, please leave a review or share this episode with a friend.
Questions or thoughts? Email FatherAndJoe@gmail.com
Tags: Father and Joe, Joe Rockey, Father Boniface Hicks, artificial intelligence, AI fear, safety and safeguards, aviation analogy, self-driving cars, edge cases, moral intuition, ethics, deepfakes, identity verification, phishing, deception, truth, discernment, prudence, attention, presence, relationships, parish life, Church, spiritual formation, responsibility, human dignity, technology as tool, relationship with God, relationship with self, relationship with others, Benedictine spirituality, Catholic podcast, practical spirituality
On this episode of After Earnings, Ann sits down with Beth Gaspich, the Chief Financial Officer of NiCE - a decades-old enterprise software company looking to transform itself into an AI-driven customer experience platform. They discuss how NiCE aims to redefine contact centers through automation and conversational AI, NiCE's nearly $1 billion acquisition of Cognigy, and the company's approach to fraud detection and deepfake prevention. 00:00 — Beth Gaspich joins 02:00 — The history of NiCE 05:00 — AI and automation in contact centers 06:00 — Deepfake detection and data privacy 10:00 — Market perception vs. fundamentals 11:30 — Inside NiCE's nearly $1 B Cognigy acquisition 13:00 — Balancing profitability and growth 15:20 — Acquisition and capital allocation strategy 18:00 — How NiCE evaluates AI tuck-ins 19:00 — Beth's career and leadership path 22:00 — How a CFO keeps up with AI 23:40 — The next generation of finance talent $NICE Learn more about your ad choices. Visit megaphone.fm/adchoices
Katelyn Chedraoui, CNET's AI reporter, joins Megan Lynch. They discuss OpenAI's new video generator, Sora, which produces remarkably convincing deepfake videos.
Welcome to the Pastillas de Nación Combi. Here we re-post segments from past shows that we think are still relevant. All of a sudden we reached the point where we realise that anyone with an ordinary computer can generate fake videos and photos of real people, and that this is a serious problem. Here we discuss a specific case that happened at a school in Lima. If you'd like to hear the full episodes, look for them on our Ivoox account or on Spotify.
Mark Schaefer begins the show with a significant announcement about the future of The Marketing Companion. Co-host Sandy Carter kicks off a discussion about how deepfakes are overwhelming even the most famous brands and company leaders. Mark and Sandy also cover the impact of AI speed on marketing, the rise of the humanoids and more. Mark Schaefer is a strategy consultant, college educator, keynote speaker, and the author of 10 books including "KNOWN," "Belonging to the Brand," and "Audacious: How Humans Win in an AI Marketing World." His annual marketing retreat is The Uprising. For more of Mark's insights every week, subscribe to his award-winning blog. Mark also offers classes in personal branding and professional speaking. Sandy Carter is COO of Unstoppable Domains and has held marketing leadership positions at IBM and AWS. She is the author of AI First, Human Always.
Can the bright, young minds of Gen Z's future doctors, lawyers, and teachers read clocks? Could an AI-generated video destroy your marriage? Today, the guys answer these questions by testing their Gen Z interns' ability to read the time and watching AI video pranks online. Spoiler alert: the future is looking rather dim. In other news, Sawyer is back from his friend's bachelor party with a gnarly hangover and some concerning stories to tell. Support the show: https://www.klbjfm.com/mattandbobfm/ See omnystudio.com/listener for privacy information.
Ex-BBC journalist Alan Kasujja unpacks 13 years inside one of the world's most influential newsrooms. From the impact of deepfakes on democracy, to the hidden biases shaping Western coverage of Africa, to his personal journey back home after more than a decade abroad, Kasujja speaks with honesty and clarity. Consider supporting this podcast via our Momo code 95462 or directly to our phone number: +250 795462739 Visit Sanny Ntayombya's Official Website: https://sannyntayombya.com
Cybersecurity experts warn about the impact of Meta's new AI in WhatsApp: "Artificial intelligence can lie." Artificial intelligence has definitively arrived in the apps we use every day, and according to cybersecurity experts, it hasn't come just to help us. The rollout of Meta's assistants across WhatsApp, Facebook, and Instagram opens the door to a new level of personal data collection that many users may not even notice. Rafael López, spokesperson for the security firm Check Point Software, warns that integrating the LLaMA model into WhatsApp is a significant change in how conversations, images, and behaviour patterns are processed within the platform. "Meta knows where you are, how you feel, and what interests you. Its AI interprets your conversations and acts accordingly," he explains. Since April, WhatsApp has included a blue icon that activates the AI assistant. Its main purpose is to help the user, but Rafael López insists that its ability to analyse messages, photos, locations, and emotional habits also poses a privacy risk.
"They don't take our data: we hand it over without realising it"
The specialist points out that many users don't know what information they are really sharing when they interact with these systems. It isn't only messages or files that are collected: the AI can learn from the tone, urgency, and interests detected in each conversation. Economic consequences have already been seen in some sectors. According to Check Point, some companies, including airlines, adjust prices based on the level of need they perceive after analysing the user's language.
Deepfakes and scams: the other side of artificial intelligence
Beyond corporate use, experts warn of a growing risk: cybercriminals are also using advanced artificial intelligence for impersonation, scams, and the creation of fake content. "AI will always give an answer, even if it isn't true. That's why you have to verify everything," says López. Attacks are no longer limited to fraudulent emails. Deepfakes of well-known figures, including digital creators popular with young people, are starting to appear in scams designed to build instant trust.
Meta and privacy: an increasingly thin line
The business model of the big platforms is once again under scrutiny. Meta has offered paid options to limit the use of personal data, a practice questioned by privacy experts. Although European regulation provides safeguards, López says it isn't enough to stop fraudulent ads or abusive systems from remaining active for weeks before being removed. He also notes that the WhatsApp assistant can be turned off, but many users don't know how.
Advice for users: information, not fear
The closing message is not a call to abandon these platforms, but to understand what they involve: "We don't narrate our lives out loud in the street, but we do on social networks. The key isn't to stop using them: it's to know what we share and with whom."
Margherita Bassi - Regular contributor for Smithsonian Magazine, Gizmodo, Popular Science - AI videos of animals could be dangerous. Did you fall for the deepfake of NVIDIA's CEO? My insurance company used AI to estimate my car's damage. Going on-prem saved a company a ton of money! Can I add USB ports to my computer? Firewall dangers, laptop battery issues, upgraded from cable to fiber and my network name is the same.
This week's full broadcast of Computer Talk Radio includes - 00:00 - Nerdy news for non-nerds - Windows, Microsoft, AI, New York Times, B of A, Toy Story - 11:00 - Listener Q&A - underrated trick - Julie asks for underrated ideas to improve daily computing - 22:00 - Deepfake voices and lifeless AI - Keith and Benjamin freak as ElevenLabs introduces new voices - 31:00 - Marty Winston's Wisdom - Marty dives into deeper discussion on projectors with Benjamin - 39:00 - Scam Series - handling fake links - Glenn's wife clicked fake giveaway link, Benjamin gives options - 44:00 - Keske on innovations for the blind - Steve and Benjamin cover expanding technology for blind - 56:00 - Dr Doreen Galli - Put Data First - Dr Doreen Galli speaks about experience at Put Data First - 1:07:00 - General ergonomics tips - Benjamin details out general ergonomics across workspace - 1:16:00 - IT Professional Series - 354 - Benjamin says laid-off workers are not alone, gives guidance - 1:24:00 - Listener Q&A - website security - Steve asks why some secure websites give insecure warnings
In this episode of Crime Wire Weekly, hosts Jim Chapman and Kelly Jennings discuss a series of trending crime stories, including the outrage after a 13-year-old victim of a deepfake image gets expelled from school. In Miami an 18-year-old is found mysteriously dead, sparking an FBI investigation aboard a Carnival Cruise Ship. In Las Vegas a trial ensues for a pornstar accused of beheading the father of her (2) children. In Illinois a chiropractor is caught with 180 videos of patients in the nude on his computer. These stories and more today on Crime Wire Weekly!
*This is a preview; links to listen to the full podcast by following "Crime Wire Weekly" are below.
Timestamps
04:25 Illinois Chiropractor Found with 180 Different Patients' Nude Videos.
09:00 Mysterious Death of 18 Year Old Aboard Carnival Cruise Ship.
14:30 Former Local Fox News Anchor Kills Mother in Kansas.
19:10 Louisiana 13 Year Old Deepfake Victim Expelled From School.
37:00 Sex Offender Caught After Over a Decade on the Run.
42:00 Las Vegas Pornstar Trial Underway for Beheading Partner.
46:00 Louisiana Man Fakes Limp to Rob Bank.
49:55 Texas Woman Tortured for Months Leads to Arrest of (5) People.
54:08 Indiana Man Shoots (2) Dogs While Deer Hunting.
56:15 Texas Woman Blames Designer High Heels on Wreck That Kills a Man on Date.
Links to Follow Crime Wire Weekly: https://linktr.ee/crimewireweekly
Kelly Jennings is host of "Unspeakable: A True Crime Podcast by Kelly Jennings" https://open.spotify.com/show/3n7BUzKRtMhAEuIuu7f031?si=c98fcf5b7e6848c8
Jim Chapman is host of "Exposed: Scandalous Files of the Elite" https://open.spotify.com/show/3ePQYSPp5oSPDeue8otH1n?si=39142df6e0ed4f77
Teaser ... What is Bob doing in Qatar? ... "Things fall apart; the centre cannot hold." Why Yeats felt this way ... Peter Thiel's Antichrist theory ... Nikita: COVID was a weird time, and nothing has been the same since ... Why Bob feels things are falling apart ... Were things ever in order? ... "Inner emigration": tuning out the world as self-care ... A reaction to a drone strike: "Loud noises don't wake me up" ... Young people are turning away from social media ... When Spain lost all power and cellular coverage ... War as a "major inconvenience" ... Bob's prediction about Ukraine's future retaliation for Pokrovsk ... Nikita's impressions from Europe ... Nonzero Reading Club THIS SATURDAY: Norbert Wiener's God & Golem, Inc. ... Deepfakes and the future of news ... Russia's first humanoid robot falls down ...
Images have become the new digital language. In this episode I reflect on how emojis, stickers, filters, and deepfakes are changing the way we communicate and understand reality. A journey through the new visual codes that define contemporary conversation.
Block 1 – Where we come from: the image as a universal language. For centuries, the word was the measure of thought; today, the image is the measure of emotion.
Block 2 – Where we are now: instant communication and infinite visuality. Today the image doesn't document: it performs. It doesn't show what we are, but what we want to be seen.
Block 3 – Where we're heading: new codes of visual communication. The future of visual language will be hybrid: half human, half algorithm. Will we communicate with symbols generated by artificial intelligence, or will we keep looking for emotion and meaning in what only a real gaze can convey?
Become a supporter of this podcast: https://www.spreaker.com/podcast/tecnoap21--4507454/support. You can contact me with your opinions and comments, as well as suggestions and requests, at: contacto@tecnoap21.com You can also follow TecnoAp21 on social media: X, Threads, Mastodon, BlueSky, LinkedIn, Post.News, Facebook, Instagram
Tara explores the mind-bending rise of AI in entertainment and media. From top-charting country songs by entirely fictional artists like “Caine Walker” and “Breaking Rust,” to viral animal and influencer videos that aren't real, AI is blurring the line between reality and fabrication. They discuss the unsettling implications for art, culture, and personal trust, and examine how AI-generated content could soon shape politics, advertising, and daily life. As technology increasingly imitates reality, Tara asks: what can we truly believe anymore?
Deepfake technology seems to be flooding online spaces, and as AI-generated content gets more sophisticated, the challenges will grow. Dr Yuki Watanabe from the University of Otago draws on interdisciplinary perspectives to examine the profound impacts of deepfake technology. She says we should be proactively shaping the future of this technology with a human-centered approach. Dr Watanabe speaks to Emile Donovan.
This B-Side episode discusses the threat of AI-driven fraud with Dominic Forest, chief technology officer at iProov, a London-based technology company specializing in biometric identity verification.
Interview by Patricia Mirasol
Audio editing by Jayson Mariñas
#AIDrivenFraud #DeepFakes #BiometricAuthentication #BankingSecurity #BWorldPH
In this episode of Builders Wanted, we're joined by Filip Verley, Chief Innovation Officer at Liminal. Filip sheds light on the challenges companies face with increasing fraud due to advances in generative AI and deepfakes, and the importance of balancing security with customer experience. The conversation covers practical strategies for unifying data across teams, leveraging behavioral signals, and investing in converged identity platforms.
-------------------
Key Takeaways:
Digital fraud is rapidly evolving due to generative AI and deepfakes, making it harder for companies to distinguish between real and fake interactions.
Unifying data and aligning company goals around trust and security is essential for effective fraud prevention.
Balancing customer experience with security requires smart, context-aware friction and continuous monitoring.
-------------------
"The best teams or organizations don't think in an either-or, it's the balance. They always are able to balance and they design these systems to adapt to what they need. It's not just about reducing the fraud, it's making sure that users are protected without slowing them down. Smart friction." – Filip Verley
-------------------
Episode Timestamps:
*(01:52) - How generative AI and deepfakes are making fraud detection harder
*(04:07) - Insights from Liminal's Seminal Report
*(16:19) - Why behavioral intent is a game changer for fraud detection
*(22:54) - The 4 layers of defense every company needs
*(25:52) - Where companies are investing for the biggest impact
*(35:13) - Quick hits
-------------------
Links:
Connect with Filip on LinkedIn
Read Liminal's Seminal Report
Connect with Kailey on LinkedIn
Learn more about Caspian Studios
-------------------
Sponsor
Builders Wanted is brought to you by Twilio – the Customer Engagement Platform that helps builders turn real-time data into meaningful customer experiences. More than 320,000 businesses trust Twilio to transform signals into connections—and connections into revenue. Ready to build what's next? Learn more at twilio.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The most dangerous AI attacks don't just break your systems, they break your trust in reality. From deepfakes that fooled a company into losing an enormous amount of money to data poisoning that silently corrupts AI models, today's threats are unlike anything cybersecurity teams have faced before. Don't wait for these threats to hit your organisation, get ahead of them now.
In this episode, Monica, a hacker turned CISO, keynote speaker, and founder of Monica Talks Cyber, reveals the dark side of AI and how you can protect your organisation while accelerating your security career.
What you'll learn:
- Why deepfakes are just the beginning of AI-based attacks
- How data poisoning works (and why it's so dangerous)
- Practical defences, from AI supply chain security to human-in-the-loop protocols
- How to position yourself as the AI security leader your company needs
Looking to become an influential and effective security leader? Don't know where to start or how to go about it? Follow Monica Verma (LinkedIn) and Monica Talks Cyber (YouTube) for more content on cybersecurity, technology, leadership and innovation, and 10x your career. Subscribe to The Monica Talks Cyber newsletter at https://www.monicatalkscyber.com.
Eric O'Neill, former FBI ghost and author of “Spies, Lies & Cybercrime,” joins host David Puner to take a deep dive into the mindset and tactics needed to defend against today's sophisticated cyber threats. Drawing on O'Neill's experience catching spies and investigating cybercriminals, the conversation explains how thinking like an attacker can help organizations and individuals stay ahead. The episode covers actionable frameworks, real-world stories, and practical advice for building cyber resilience in an age of AI-driven scams and industrialized ransomware.
A middle schooler in Lafourche Parish was arrested for allegedly creating nude deepfakes of his classmates. We get the details on what happened from WWL Louisiana reporter Alyssa Curtis.
* A middle schooler was arrested for making nude deepfakes of classmates * Brian Kelly is suing LSU over his big buyout * Could a Border Patrol task force be coming to New Orleans? * Will voters even care about the shutdown when the midterms finally happen? * What's cheaper: renting or buying? * How the Archdiocese is moving to resolve its long-running bankruptcy case
* The Archdiocese is putting over 150 parishes, charities, and other entities in bankruptcy. What does that actually mean? * A Lafourche middle schooler was arrested for allegedly creating nude deepfakes of his classmates
In today's Tech3 from Moneycontrol, Groww steals the spotlight with a blockbuster market debut, as shares jump 31% on listing day. Capillary trims its IPO size amid improving cash flows. India's tech industry pushes back against MeitY's draft deepfake rules, warning of overbroad definitions. The EV rivalry heats up with Ather outperforming Ola Electric on key metrics, and Figma opens its first India office in Bengaluru, joining global tech majors expanding their footprint here.
When a social media video makes you go "Wow," pause. That shock factor is a big clue it could be an AI fake. Learn more about your ad choices. Visit megaphone.fm/adchoices
AI can replicate faces, voices, and personalities—but it can't duplicate the image of God. In this conversation, Dr. Kathy Koch and Wayne Stender explore Denmark's bold new proposal to give citizens ownership of their likeness and what it means for parents raising kids in a world where identity can be digitally faked. Discover how to help children stay grounded in truth, not technology.
Recent advances in artificial intelligence have drawn a lot of media attention. But little of this has focused on how this new technology may affect democracy. Spencer Overton joins host Alex Lovit to discuss how AI may push the United States away from becoming an inclusive, multiracial democracy—or toward it. Spencer Overton is the Patricia Roberts Harris Research Professor of Law and the director of the Multiracial Democracy Project at George Washington University Law School. https://ssrn.com/abstract=4754903 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5196382 Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.
Have you ever wondered if the person you are arguing with in a social media comment section is really a person at all? Perry Carpenter is a cybersecurity expert, author of FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions, and host of The Faik Files podcast. I asked Perry to explain the phenomenon of social media bots and troll farms; who might be commanding these faceless humans and robots? What might their goal be? And how do we tell the difference between a genuine comment or post and one manufactured with a particular objective in mind? Listen to THE FAIK FILES podcast Get a copy of FAIK Become a Patron to support our show and get early ad-free episodes and bonus content Or subscribe to American Hysteria on Apple Podcasts Get some of our new merch at americanhysteria.com, all profits go to The Sameer Project, a Palestinian-led mutual aid group who are on the ground in Gaza delivering food and supplies to displaced families. Leave us a message on the Urban Legends Hotline Producer and Editor: Miranda Zickler Associate Producer: Riley Swedelius-Smith Additional editing by AJ McKinley Hosted by Chelsey Weber-Smith Learn more about your ad choices. Visit megaphone.fm/adchoices
Live from Authenticate 2025, Jeff Steadman and Jim McDonald sit down with Dr. Tina Srivastava, an IDPro board member and co-founder of Badge Inc., for a crucial discussion on the rapidly evolving landscape of identity and authentication.
Tina shares her insights on the conference, the evolution from physical hacks to sophisticated AI-driven threats like supercharged phishing, and the current challenges facing the industry. The conversation delves into the complexities of synced Passkeys, the critical vulnerability of account recovery processes, and the slow pace of regulation in keeping up with technology.
As a board member for IDPro, Tina highlights the immense value of the practitioner-focused community, the supportive culture within its Slack channels, and makes an exciting announcement about the creation of new member-driven committees to shape the future of the organization. They explore the concept of the "AI arms race" and why identity professionals cannot afford to wait for the next big thing, emphasizing that collaboration and information sharing through communities like IDPro are essential to staying ahead of adversaries.
Connect with Tina: https://www.linkedin.com/in/tina-s-8291438a/
Find out more about IDPro: https://www.idpro.org/
Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/
Visit the show on the web at http://idacpodcast.com
Chapters
00:00 Introduction and Greetings
00:16 Highlights from Authenticate 2025
01:39 FIDO Feud Rematch Discussion
03:17 Guest Introduction: Tina Srivastava
03:46 Conference Insights and AI Challenges
06:16 Regulatory Environment and Passkeys
09:11 Phishing and AI Supercharged Attacks
12:28 QR Codes and Accessibility Issues
13:09 The Importance of Phishing Resistant Authentication
22:24 IDPro Community and Practitioner Support
25:18 Community Support and Engagement
26:26 IDPro's Role in Identity Events
27:48 Future Directions for IDPro
29:19 Introducing Committees in IDPro
30:39 AI and Identity Verification
37:07 The Importance of Information Sharing
45:35 Public Speaking and Personal Growth
50:58 Conclusion and Final Thoughts
Keywords
IDAC, Identity at the Center, Jeff Steadman, Jim McDonald, Tina Srivastava, IDPro, Authenticate 2025, Passkeys, AI, Artificial Intelligence, Cybersecurity, Phishing, Deepfakes, Authentication, Account Recovery, Biometrics, Identity and Access Management, IAM, NIST, Regulation, Identity Verification, Synced Passkeys, FIDO Alliance
The Hidden Lightness with Jimmy Hinton – A teenage girl in New Jersey has filed a lawsuit against an artificial intelligence company after discovering fake nude images of herself circulating online—images she never took, and moments that never happened. The cruel irony is that much of this technology is publicly available—easy to access, easy to use, and largely unregulated...
After spending years tracking Al Qaeda, former analyst Gina Bennett talks to SpyTalk about her storied career, her new spy novel and the rising threat of "politicized misogyny."
Gina Bennett
If Two of Them Are Dead
Follow Jeff Stein on Twitter: https://twitter.com/SpyTalker
Follow Karen Greenberg on Twitter: https://x.com/KarenGreenberg3
Follow Michael Isikoff on Twitter: https://twitter.com/isikoff
Follow SpyTalk on Twitter: https://twitter.com/talk_spy
Subscribe to SpyTalk on Substack: https://www.spytalk.co/
Take our listener survey where you can give us feedback. http://survey.podtrac.com/start-survey.aspx?pubid=BffJOlI7qQcF&ver=short
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Is there anything real left on the internet? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly explore deepfakes, scams, and cybercrime with the Director of Threat Research at Bitdefender, Bogdan Botezatu. Scams are a trillion-dollar industry; keep your loved ones safe with Bitdefender: https://bitdefend.me/90-StarTalkNOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/deepfakes-and-the-war-on-truth-with-bogdan-botezatu/Thanks to our Patrons Bubbalotski, Oskar Yazan Mellemsether, Craig A, Andrew, Liagadd, William ROberts, Pratiksha, Corey Williams, Keith, anirao, matthew, Cody T, Janna Ladd, Jen Richardson, Elizaveta Nikitenko, James Quagliariello, LA Stritt, Rocco Ciccolini, Kyle Jones, Jeremy Jones, Micheal Fiebelkorn, Erik the Nerd, Debbie Gloom, Adam Tobias Lofton, Chad Stewart, Christy Bradford, David Jirel, e4e5Nf3, John Rost, cluckaizo, Diane Féve, Conny Vigström, Julian Farr, karl Lebeau, AnnElizabeth, p johnson, Jarvis, Charles Bouril, Kevin Salam, Alex Rzem, Joseph Strolin, Madelaine Bertelsen, noel jimenez, Arham Jain, Tim Manzer, Alex, Ray Weikal, Kevin O'Reilly, Mila Love, Mert Durak, Scrubbing Bubblez, Lili Rose, Ram Zaidenvorm, Sammy Aleksov, Carter Lampe, Tom Andrusyna, Raghvendra Singh Bais, ramenbrownie, cap kay, B Rhodes, Chrissi Vergoglini, Micheal Reilly, Mone, Brendan D., Mung, J Ram, Katie Holliday, Nico R, Riven, lanagoeh, Shashank, Bradley Andrews, Jeff Raimer, Angel velez, Sara, Timothy Criss, Katy Boyer, Jesse Hausner, Blue Cardinal, Benjamin Kedwards, Dave, Wen Wei LOKE, Micheal Sacher, Lucas, Ken Kuipers, Alex Marks, Amanda Morrison, Gary Ritter Jr, Bushmaster, thomas hennigan, Erin Flynn, Chad F, fro drick, Ben Speire, Sanjiv VIJ, Sam B, BriarPatch, and Mario Boutet for supporting us this week. Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early.Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.