Artificial intelligence-based human image synthesis technique
Bridget is joined by producer Joey Patt to cover the week's tech news that you might have missed.
Megan Thee Stallion targeted by racist, sexist deepfakes: https://www.nbcnews.com/tech/tech-news/megan-thee-stallion-responds-sexually-explicit-deepfake-x-rcna156424
Trump administration planning executive order preventing AI regulation by states: https://www.politico.com/news/2025/11/19/white-house-prepares-executive-order-to-block-state-ai-laws-00660719
Women are under-represented in the building trades, where they often face harassment or worse. A young welder's murder highlights the risks: https://www.linkedin.com/posts/sarah-kotzen_tradeswomen-workplacesafety-nawic-activity-7395254105088323584-NNd4/
Youth sports are costing parents a fortune, and private equity firms are making it worse: https://www.levernews.com/wall-street-is-paywalling-your-kids-sports/
AI-powered teddy bear talks to kids about knives, bondage, and starting fires: https://www.cnn.com/2025/11/19/tech/folotoy-kumma-ai-bear-scli-intl
If you're listening on Spotify, you can leave a comment there to let us know what you thought about these stories, or email us at hello@tangoti.com
Follow Bridget and TANGOTI on social media! || instagram.com/bridgetmarieindc/ || tiktok.com/@bridgetmarieindc || youtube.com/@ThereAreNoGirlsOnTheInternet
See omnystudio.com/listener for privacy information.
It is that time of year. Time to reflect… to look back… to realize how little we've learned, and shake our heads in sadness. 123456 is still the most used password according to a new study - and people still think that Taylor Swift is trying to sell them stuff on social media. We're looking at weak passwords and deepfake celebrities for 2025 on this edition of The Checklist, brought to you by SecureMac. Check out our show notes: SecureMac.com/Checklist And get in touch with us: Checklist@Securemac.com
Taylor Swift has been named the most dangerous celebrity based on deepfakes made of her. A crazy rumor claims that Timothee Chalamet is a secret rapper in the UK. Going to see Wicked: For Good tomorrow? Here's what to watch. Make sure to also keep up to date with ALL the podcasts we do below, which have new episodes every week: The Thought Shower, Let's Get Weird, Crisis on Infinite Podcasts. See omnystudio.com/listener for privacy information.
Kopfsalat - The "Freunde fürs Leben" podcast about depression
In this episode of the podcast "Kopfsalat" by Freunde fürs Leben e.V., host Sven Haeusler once again speaks with HateAid co-founder Anna-Lena von Hodenberg, this time about the systemic background of digital violence. The media expert explains how algorithms, disinformation, and the logic of social platforms amplify hate, why far-right actors use digital attacks strategically, and how AI-powered deepfakes further unsettle our perception of reality. For HateAid, digital violence is no accident but part of an economic and political system that profits from these developments. At the same time, Anna-Lena von Hodenberg makes clear why she remains optimistic despite these trends and how we can make digital spaces safer. Summary: The organization HateAid supports those affected with counseling, evidence preservation, and legal-cost assistance, and works to ensure that platforms no longer amplify hate and disinformation. Anna-Lena von Hodenberg stresses how important it is to take incidents seriously, not to face them alone, and to report illegal content. Bystanders can also help by publicly backing those affected. The sixth season is sponsored by Deutsche DepressionsLiga e.V. and DAK Gesundheit. Host and editor: Sven Haeusler - http://www.svenson-suite.de/cms/ Audio editor: Julius Hofstädter Music and jingle: Bela Brauckmann - https://www.belabrauckmann.com/ Production and editing: Diana Doko, Jasmin Hollatz, and Vera Colditz of Freunde fürs Leben e.V. - https://www.frnd.de/ Concept: Sonja Koppitz - https://www.sonjakoppitz.de/ Photo: Benno Kraehahn - https://kraehahn.com/portfolio.html Cover design: Stefanie Butscheidt - http://www.grafische-requisite.de/kontakt/
In this episode I sit down with Zain Zaidi, co-founder and CEO of TransCrypts, to explore his personal story and the mission driving his startup. He shares how a misplaced grad-school transcript sparked a blockchain solution for identity verification, how his company now serves millions of users and hundreds of enterprises, and why the deepfake era makes what they do more critical than ever. We dive into the tech, the regulatory hurdles, Web2 clients, Web3 opportunities, and what's ahead for verified credentials across employment, health, and education.
Key Learnings (with timestamps)
00:00:00 - Mark Cuban's warning about digital misinformation and deepfakes; the trust crisis we face online.
00:02:00 - Zain's personal trigger: a grad-school transcript error led to founding TransCrypts.
00:04:00 - What TransCrypts solves: giving individuals ownership of verified credentials (job, school, income).
00:06:00 - How the system works: integration with HR systems, issuance of on-chain credentials.
00:07:00 - Focus and expansion: employment verification is central, with health and education credentials coming.
00:12:00 - Technical and enterprise challenges: selling Web3 identity solutions to Web2 businesses.
00:14:00 - The synergy of AI + crypto: identity verification as the defense against AI-driven fraud.
00:17:00 - Pitching and fundraising: cold-emailing Mark Cuban, building traction, $15 M seed round.
00:18:00 - Humanitarian use case: helping Ukrainian refugees access medical records via their platform.
00:23:00 - If he were starting again today: focus on the solution, not the tech; time your raise relative to the market cycle.
00:34:00 - Roadmap and ambitions: becoming a major employment/income data provider, potential token launch.
Connect
https://www.transcrypts.com/
https://www.linkedin.com/company/transcrypts/
https://x.com/transcrypts_
https://www.linkedin.com/in/zainzaidi99/
Disclaimer
Nothing mentioned in this podcast is investment advice; please do your own research.
It would mean a lot if you could leave a review of this podcast on Apple Podcasts or Spotify and share it with a friend. Be a guest on the podcast or contact us - https://www.web3pod.xyz/
Ransomware isn't a lone hacker in a hoodie. It's an entire criminal industry complete with developers, brokers, and money launderers working together like a dark tech startup. And while these groups constantly evolve, so do the tools and partnerships aimed at stopping them before they strike. My guest today is Cynthia Kaiser, former Deputy Assistant Director of the FBI's Cyber Division and now the Head of the Ransomware Research Center at Halcyon. After two decades investigating global cyber threats and briefing top government leaders, she's now focused on prevention and building collaborations across government and industry to disrupt ransomware actors at their source. We talk about how ransomware groups operate, why paying a ransom rarely solves the problem, and what layered defense really means for organizations and individuals. Cynthia also shares how AI is reshaping both sides of the cyber arms race and why she believes hope, not fear, is the most powerful tool for defenders. Show Notes: [01:04] Cynthia Kaiser had a 20-year FBI career and has now transitioned from investigation to prevention at Halcyon. [03:58] The true scale of cyber threats is far larger than most people realize, even within the government. [04:19] Nation-state and criminal activity now overlap, making attribution increasingly difficult. [06:45] Cynthia outlines how ransomware spreads through phishing, credential theft, and unpatched systems. [08:08] Ransomware is an ecosystem of specialists including developers, access brokers, money launderers, and infrastructure providers. [09:55] Discussion of how many ransomware groups exist and the estimated cost of attacks worldwide. [11:37] Ransom payments dropped in 2023, but total business recovery costs remain enormous. [12:24] Paying a ransom can mark a company as an easy target and doesn't guarantee full decryption. [13:11] Example of a decryptor that failed completely and how Halcyon helped a victim recover. 
[14:35] The so-called "criminal code of ethics" among ransomware gangs has largely disappeared. [16:48] Hospitals continue to be targeted despite claims of moral restraint among attackers. [18:44] Prevention basics still matter including strong passwords, multi-factor authentication, and timely patching. [19:18] Cynthia explains the value of layered defense and incident-response practice drills. [21:22] Even individuals need cyber hygiene like unique passwords, MFA, and updated antivirus protection. [23:32] Deepfakes are becoming a major threat vector, blurring trust in voice and video communications. [25:17] Always verify using a separate communication channel when asked to send money or change payment info. [27:40] Real-world example: credential-stuffing attack against MLB highlights the need for two-factor authentication. [29:55] What to do once ransomware hits includes containment, external counsel, and calling trusted law-enforcement contacts. [32:44] Cynthia recounts being impersonated online and how she responded to protect others from fraud. [34:28] Many victims feel ashamed to report cybercrime, especially among older adults. [36:45] Scams often succeed because they align with real-life timing or emotional triggers. [38:32] Children and everyday users are also at risk from deceptive links and push-fatigue attacks. [39:26] Overview of Halcyon's Ransomware Research Center and its educational, collaborative goals. [42:15] The importance of public-private partnerships in defending hospitals and critical infrastructure. [43:38] How AI-driven behavioral detection gives defenders a new advantage. [44:48] Cynthia shares optimism that technology can reduce ransomware's impact. [45:43] Closing advice includes practicing backups, building layered defenses, and staying hopeful. Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 
Links and Resources: Podcast Web Page Facebook Page whatismyipaddress.com Easy Prey on Instagram Easy Prey on Twitter Easy Prey on LinkedIn Easy Prey on YouTube Easy Prey on Pinterest Halcyon Cynthia Kaiser - LinkedIn
What do the laws say about deepfakes – especially sexually explicit ones, like what happened in Lafourche Parish? How does Title IX fit into it? We'll break it down with Loyola law professor Dane Ciolino.
* Ronal Serpas gives his perspective as a former NOPD chief on a Border Patrol task force coming to New Orleans * What do the laws say about deepfakes – especially sexually explicit ones, like what happened in Lafourche Parish?
Emojis, stickers, filters, and deepfakes are changing the way we communicate and the way we understand reality. A tour through the new visual codes that define contemporary conversation. Part 1 - Where we come from: the image as a universal language. For centuries, the word was the measure of thought; today, the image is the measure of emotion. Part 2 - Where we are now: instant communication and infinite visuality. Today the image does not document: it performs. It shows not what we are, but what we want to be seen. Part 3 - Where we are headed: new codes of visual communication. The future of visual language will be hybrid: half human, half algorithm. Will we communicate with symbols generated by artificial intelligence, or will we keep seeking emotion and meaning in what only a real gaze can convey?
Fr. Ripperger's Press: https://sentradpress.com/ Fr. Ripperger's order is found at https://dolorans.org/ For more by Fr. Ripperger, please visit http://sensustraditionis.org/
“If the computer gets smarter than me… is that a problem?” Joe Rockey and Father Boniface Hicks tackle the fear narrative around AI by comparing it to other powerful tools (cars, planes) that required strong safeguards, not panic. They explore why AI lacks moral intuition, how optimization without ethics can harm, and why deepfakes and spoofing demand new habits of verification. The conversation lands in the three lenses: honesty about our fears (self), charity through wiser trust and presence (others), under a living relationship with God that anchors what's real.
Key Ideas
Power needs guardrails: like aviation checklists and redundancies, AI calls for safety, oversight, and clear human control.
Limits of machines: AI optimizes; it doesn't intuit, repent, or take responsibility. Persons do.
Edge cases matter: “no-win” moments (e.g., deer vs. car) reveal why human moral criteria must shape algorithms.
Deception risk: voice/video/text imitation raises the bar for validation; adopt healthy skepticism and confirm identity more often.
Back to reality: prioritize embodied relationships and parish life; let the Church help form attention, virtue, and trust.
Links & References: none explicitly cited in this episode.
CTA: If this helped, please leave a review or share this episode with a friend. Questions or thoughts? Email FatherAndJoe@gmail.com
Tags: Father and Joe, Joe Rockey, Father Boniface Hicks, artificial intelligence, AI fear, safety and safeguards, aviation analogy, self-driving cars, edge cases, moral intuition, ethics, deepfakes, identity verification, phishing, deception, truth, discernment, prudence, attention, presence, relationships, parish life, Church, spiritual formation, responsibility, human dignity, technology as tool, relationship with God, relationship with self, relationship with others, Benedictine spirituality, Catholic podcast, practical spirituality
On this episode of After Earnings, Ann sits down with Beth Gaspich, the Chief Financial Officer of NiCE, a decades-old enterprise software company looking to transform itself into an AI-driven customer experience platform. They discuss how NiCE aims to redefine contact centers through automation and conversational AI, NiCE's nearly $1 billion acquisition of Cognigy, and the company's approach to fraud detection and deepfake prevention.
00:00 — Beth Gaspich joins
02:00 — The history of NiCE
05:00 — AI and automation in contact centers
06:00 — Deepfake detection and data privacy
10:00 — Market perception vs. fundamentals
11:30 — Inside NiCE's nearly $1 B Cognigy acquisition
13:00 — Balancing profitability and growth
15:20 — Acquisition and capital allocation strategy
18:00 — How NiCE evaluates AI tuck-ins
19:00 — Beth's career and leadership path
22:00 — How a CFO keeps up with AI
23:40 — The next generation of finance talent
$NICE Learn more about your ad choices. Visit megaphone.fm/adchoices
Katelyn Chedraoui, CNET's AI reporter, joins Megan Lynch. They discuss OpenAI's new video generator, Sora, which produces remarkably good deepfake videos.
Welcome to the Pastillas de Nación Combi. Here we re-post segments from past shows that we think are still relevant. At some point we suddenly realized that anyone with an ordinary computer can generate fake videos and photos of real people, and that this poses a serious problem. Here we discuss a concrete case that occurred at a school in Lima. If you want to listen to the full episodes, look for them on our Ivoox or Spotify accounts.
Mark Schaefer begins the show with a significant announcement about the future of The Marketing Companion. Co-host Sandy Carter kicks off a discussion about how deepfakes are overwhelming even the most famous brands and company leaders. Mark and Sandy also cover the impact of AI speed on marketing, the rise of the humanoids, and more. Mark Schaefer is a strategy consultant, college educator, keynote speaker, and the author of 10 books including "KNOWN," "Belonging to the Brand," and "Audacious: How Humans Win in an AI Marketing World." His annual marketing retreat is The Uprising. For more of Mark's insights every week, subscribe to his award-winning blog. Mark also offers classes in personal branding and professional speaking. Sandy Carter is COO of Unstoppable Domains and has held marketing leadership positions at IBM and AWS. She is the author of AI First, Human Always.
Can the bright, young minds of Gen Z's future doctors, lawyers, and teachers read clocks? Could an AI-generated video destroy your marriage? Today, the guys answer these questions by testing their Gen Z interns' ability to read the time and watching AI video pranks online. Spoiler alert: the future is looking rather dim. In other news, Sawyer is back from his friend's bachelor party with a gnarly hangover and some concerning stories to tell. Support the show: https://www.klbjfm.com/mattandbobfm/ See omnystudio.com/listener for privacy information.
Ex-BBC journalist Alan Kasujja unpacks 13 years inside one of the world's most influential newsrooms. From the impact of deepfakes on democracy, to the hidden biases shaping Western coverage of Africa, to his personal journey back home after more than a decade abroad, Kasujja speaks with honesty and clarity. Consider supporting this podcast via our Momo code 95462 or directly to our phone number: +250 795462739 Visit Sanny Ntayombya's Official Website: https://sannyntayombya.com
Cybersecurity experts warn of the impact of Meta's new AI in WhatsApp: "Artificial intelligence can lie." Artificial intelligence has definitively arrived in the apps we use every day. And according to cybersecurity experts, it hasn't come only to help us. The rollout of Meta's assistants in WhatsApp, Facebook, and Instagram opens the door to a new level of personal data collection that, for many users, can go unnoticed. Rafael López, spokesperson for the security firm Check Point Software, warns that the integration of the LLaMA model into WhatsApp marks a significant change in how conversations, images, and behavior patterns are processed within the platform. "Meta knows where you are, how you feel, and what interests you. Its AI interprets your conversations and acts accordingly," he explains. Since April, WhatsApp has included a blue icon that activates the AI assistant. Its main purpose is to help the user, but Rafael López insists that its ability to analyze messages, photos, locations, and emotional habits also poses a privacy risk.
"They don't take our data: we hand it over without realizing it." The specialist notes that many users don't know what information they actually share when interacting with these systems. It isn't just messages or files that are collected: the AI can learn from the tone, urgency, and interests detected in each conversation. In some sectors, economic consequences have already been observed. According to Check Point, there are companies, including airlines, that adjust prices based on the level of need they perceive after analyzing the user's language.
Deepfakes and scams: the other side of artificial intelligence. Beyond corporate use, experts warn of a growing risk: cybercriminals are also using advanced artificial intelligence for impersonation, scams, and the creation of fake content. "The AI will always answer, even if it isn't true. That's why you have to verify everything," López says. Attacks are no longer limited to fraudulent emails. Deepfakes of well-known figures, including digital creators popular among young people, are starting to appear in scams designed to generate instant trust.
Meta and privacy: an ever-thinner line. The business model of the big platforms is once again under scrutiny. Meta has offered paid options to limit the use of personal data, a practice questioned by privacy experts. Although European regulation offers guarantees, López says it is not enough to prevent fraudulent ads or abusive systems from remaining active for weeks before being removed. He also notes that the WhatsApp assistant can be disabled, but many users don't know how.
A tip for users: information, not fear. The final message is not a call to abandon the platforms, but to understand what they entail: "We don't narrate our lives out loud in the street, but we do on social media. The key is not to stop using them: it's knowing what we share and with whom."
Margherita Bassi - regular contributor for Smithsonian Magazine, Gizmodo, and Popular Science - AI videos of animals could be dangerous. Did you fall for the deepfake of NVIDIA's CEO? My insurance company used AI to estimate my car's damage. Going on-prem saved a company a ton of money! Can I add USB ports to my computer? Firewall dangers, laptop battery issues, upgraded from cable to fiber and my network name is the same.
This week's full broadcast of Computer Talk Radio includes - 00:00 - Nerdy news for non-nerds - Windows, Microsoft, AI, New York Times, B of A, Toy Story - 11:00 - Listener Q&A - underrated trick - Julie asks for underrated ideas to improve daily computing - 22:00 - Deepfake voices and lifeless AI - Keith and Benjamin freak as ElevenLabs introduces new voices - 31:00 - Marty Winston's Wisdom - Marty dives into deeper discussion on projectors with Benjamin - 39:00 - Scam Series - handling fake links - Glenn's wife clicked fake giveaway link, Benjamin gives options - 44:00 - Keske on innovations for the blind - Steve and Benjamin cover expanding technology for blind - 56:00 - Dr Doreen Galli - Put Data First - Dr Doreen Galli speaks about experience at Put Data First - 1:07:00 - General ergonomics tips - Benjamin details out general ergonomics across workspace - 1:16:00 - IT Professional Series - 354 - Benjamin says laid-off workers are not alone, gives guidance - 1:24:00 - Listener Q&A - website security - Steve asks why some secure websites give insecure warnings
In this episode of Crime Wire Weekly, hosts Jim Chapman and Kelly Jennings discuss a series of trending crime stories, including the outrage after a 13-year-old victim of a deepfake image gets expelled from school. In Miami, an 18-year-old is found mysteriously dead, sparking an FBI investigation aboard a Carnival cruise ship. In Las Vegas, a trial ensues for a porn star accused of beheading the father of her (2) children. In Illinois, a chiropractor is caught with 180 videos of patients in the nude on his computer. These stories and more today on Crime Wire Weekly!
*This is a preview; links to listen to the full podcast by following "Crime Wire Weekly" are below.
Timestamps
04:25 Illinois Chiropractor Found with 180 Different Patients' Nude Videos.
09:00 Mysterious Death of 18-Year-Old Aboard Carnival Cruise Ship.
14:30 Former Local Fox News Anchor Kills Mother in Kansas.
19:10 Louisiana 13-Year-Old Deepfake Victim Expelled From School.
37:00 Sex Offender Caught After Over a Decade on the Run.
42:00 Las Vegas Porn Star Trial Underway for Beheading Partner.
46:00 Louisiana Man Fakes Limp to Rob Bank.
49:55 Texas Woman Tortured for Months Leads to Arrest of (5) People.
54:08 Indiana Man Shoots (2) Dogs While Deer Hunting.
56:15 Texas Woman Blames Designer High Heels for Wreck That Kills a Man on a Date.
Links to Follow Crime Wire Weekly: https://linktr.ee/crimewireweekly
Kelly Jennings is host of "Unspeakable: A True Crime Podcast by Kelly Jennings": https://open.spotify.com/show/3n7BUzKRtMhAEuIuu7f031?si=c98fcf5b7e6848c8
Jim Chapman is host of "Exposed: Scandalous Files of the Elite": https://open.spotify.com/show/3ePQYSPp5oSPDeue8otH1n?si=39142df6e0ed4f77
Teaser ... What is Bob doing in Qatar? ... "Things fall apart; the centre cannot hold." Why Yeats felt this way ... Peter Thiel's Antichrist theory ... Nikita: COVID was a weird time, and nothing has been the same since ... Why Bob feels things are falling apart ... Were things ever in order? ... "Inner emigration": tuning out the world as self-care ... A reaction to a drone strike: "Loud noises don't wake me up" ... Young people are turning away from social media ... When Spain lost all power and cellular coverage ... War as a "major inconvenience" ... Bob's prediction about Ukraine's future retaliation for Pokrovsk ... Nikita's impressions from Europe ... Nonzero Reading Club THIS SATURDAY: Norbert Wiener's God & Golem, Inc. ... Deepfakes and the future of news ... Russia's first humanoid robot falls down ...
Images have become the new digital language. In this episode I reflect on how emojis, stickers, filters, and deepfakes are changing the way we communicate and the way we understand reality. A tour through the new visual codes that define contemporary conversation.
Part 1 - Where we come from: the image as a universal language. For centuries, the word was the measure of thought; today, the image is the measure of emotion.
Part 2 - Where we are now: instant communication and infinite visuality. Today the image does not document: it performs. It shows not what we are, but what we want to be seen.
Part 3 - Where we are headed: new codes of visual communication. The future of visual language will be hybrid: half human, half algorithm. Will we communicate with symbols generated by artificial intelligence, or will we keep seeking emotion and meaning in what only a real gaze can convey?
Become a supporter of this podcast: https://www.spreaker.com/podcast/tecnoap21--4507454/support. You can contact me with your opinions and comments, as well as suggestions and requests, at: contacto@tecnoap21.com
You can also follow TecnoAp21 on social media: X, Threads, Mastodon, BlueSky, LinkedIn, Post.News, Facebook, Instagram
Tara explores the mind-bending rise of AI in entertainment and media. From top-charting country songs by entirely fictional artists like “Caine Walker” and “Breaking Rust,” to viral animal and influencer videos that aren't real, AI is blurring the line between reality and fabrication. They discuss the unsettling implications for art, culture, and personal trust, and examine how AI-generated content could soon shape politics, advertising, and daily life. As technology increasingly imitates reality, Tara asks: what can we truly believe anymore?
This B-Side episode discusses the threat of AI-driven fraud with Dominic Forest, chief technology officer at iProov, a London-based technology company specializing in biometric identity verification.
Interview by Patricia Mirasol
Audio editing by Jayson Mariñas
#AIDrivenFraud #DeepFakes #BiometricAuthentication #BankingSecurity #BWorldPH
Deepfake technology seems to be flooding online spaces, and as AI-generated content gets more sophisticated, the challenges will grow. Dr Yuki Watanabe from the University of Otago draws on interdisciplinary perspectives to examine the profound impacts of deepfake technology. She says we should be proactively shaping the future of this technology with a human-centered approach. Dr Watanabe speaks to Emile Donovan.
In this episode of Builders Wanted, we're joined by Filip Verley, Chief Innovation Officer at Liminal. Filip sheds light on the challenges companies face with increasing fraud due to advances in generative AI and deepfakes, and the importance of balancing security with customer experience. The conversation covers practical strategies for unifying data across teams, leveraging behavioral signals, and investing in converged identity platforms.
-------------------
Key Takeaways:
Digital fraud is rapidly evolving due to generative AI and deepfakes, making it harder for companies to distinguish between real and fake interactions.
Unifying data and aligning company goals around trust and security is essential for effective fraud prevention.
Balancing customer experience with security requires smart, context-aware friction and continuous monitoring.
-------------------
“The best teams or organizations don't think in an either-or; it's the balance. They always are able to balance, and they design these systems to adapt to what they need. It's not just about reducing the fraud, it's making sure that users are protected without slowing them down. Smart friction.” – Filip Verley
-------------------
Episode Timestamps:
(01:52) - How generative AI and deepfakes are making fraud detection harder
(04:07) - Insights from Liminal's Seminal Report
(16:19) - Why behavioral intent is a game changer for fraud detection
(22:54) - The 4 layers of defense every company needs
(25:52) - Where companies are investing for the biggest impact
(35:13) - Quick hits
-------------------
Links:
Connect with Filip on LinkedIn
Read Liminal's Seminal Report
Connect with Kailey on LinkedIn
Learn more about Caspian Studios
-------------------
Sponsor
Builders Wanted is brought to you by Twilio, the Customer Engagement Platform that helps builders turn real-time data into meaningful customer experiences. More than 320,000 businesses trust Twilio to transform signals into connections, and connections into revenue.
Ready to build what's next? Learn more at twilio.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The most dangerous AI attacks don't just break your systems; they break your trust in reality. From deepfakes that fooled a company into losing an enormous amount of money to data poisoning that silently corrupts AI models, today's threats are unlike anything cybersecurity teams have faced before. Don't wait for these threats to hit your organisation; get ahead of them now.
In this episode, Monica, a hacker turned CISO, keynote speaker, and founder of Monica Talks Cyber, reveals the dark side of AI and how you can protect your organisation while accelerating your security career.
What you'll learn:
Why deepfakes are just the beginning of AI-based attacks
How data poisoning works (and why it's so dangerous)
Practical defences: from AI supply chain security to human-in-the-loop protocols
How to position yourself as the AI security leader your company needs
Looking to become an influential and effective security leader? Don't know where to start or how to go about it? Follow Monica Verma (LinkedIn) and Monica Talks Cyber (YouTube) for more content on cybersecurity, technology, leadership and innovation, and 10x your career. Subscribe to The Monica Talks Cyber newsletter at https://www.monicatalkscyber.com.
Eric O'Neill, former FBI ghost and author of “Spies, Lies & Cybercrime,” joins host David Puner to take a deep dive into the mindset and tactics needed to defend against today's sophisticated cyber threats. Drawing on O'Neill's experience catching spies and investigating cybercriminals, the conversation explains how thinking like an attacker can help organizations and individuals stay ahead. The episode covers actionable frameworks, real-world stories, and practical advice for building cyber resilience in an age of AI-driven scams and industrialized ransomware.
A middle schooler in Lafourche Parish was arrested for allegedly creating nude deepfakes of his classmates. We get the details on what happened from WWL Louisiana reporter Alyssa Curtis.
* The Archdiocese is putting over 150 parishes, charities, and other entities in bankruptcy. What does that actually mean?
* A Lafourche middle schooler was arrested for allegedly creating nude deepfakes of his classmates
* A middle schooler was arrested for making nude deepfakes of classmates
* Brian Kelly is suing LSU over his big buyout
* Could a Border Patrol task force be coming to New Orleans?
* Will voters even care about the shutdown when the midterms finally happen?
* What's cheaper: renting or buying?
* How the Archdiocese is moving to resolve its long-running bankruptcy case
In today's Tech3 from Moneycontrol, Groww steals the spotlight with a blockbuster market debut, as shares jump 31% on listing day. Capillary trims its IPO size amid improving cash flows. India's tech industry pushes back against MeitY's draft deepfake rules, warning of overbroad definitions. The EV rivalry heats up with Ather outperforming Ola Electric on key metrics, and Figma opens its first India office in Bengaluru, joining global tech majors expanding their footprint here.
An artificial intelligence-generated deepfake of Chinese actress Wen Zhengrong's face and voice was used by unscrupulous merchants to impersonate her in livestream sales, prompting increased calls for stronger and more tailored regulation and penalties from internet platforms and the law.
The discovery was made last week when Wen appeared to simultaneously host three different early morning livestream rooms on social media, wearing different outfits and promoting different products. The Wen "clones" looked and sounded strikingly similar to Wen, a feat that quickly ignited online discussion.
According to a China Media Group report on Wednesday, the forged images were produced either by clipping past videos and screen recordings or by taking earlier livestream footage of Wen and running it through AI-based deep synthesis, including voice alteration.
"These AI tactics confuse the public. My image and likeness have been infringed, and it is deeply hurtful," Wen said in the video report. She added that if viewers who trust her were misled into buying counterfeit goods, "I would feel truly sad."
Li Ya, a partner at Zhongwen Law Firm in Beijing, told China Daily that such conduct was suspected of violating Wen's right of portrait and may also harm her right of reputation. Using someone's image for profit without authorization infringes on portrait rights, he said. "If sellers speak in her name and make false or exaggerated claims, that will negatively impact a public figure's reputation."
Wen's team said that once the fake clips began circulating, they filed reports around the clock, flagging about 50 impersonation accounts in one day, according to CMG. Some livestreaming accounts were taken down, they said, but others quickly reappeared in new forms. Wen's staff noted that certain merchants can fabricate content by extracting brief footage and relying on AI functions built into video-editing apps, while the team faces a much higher burden to preserve evidence and defend their rights.
Li said it is unrealistic to expect victims alone to safeguard their rights. "Rule-breaking merchants can open new accounts at will and face almost no cost for infringement," he added. He noted that social platforms have a duty to deploy technology to detect whether AI tools are being used improperly in livestreams or short videos, and to penalize offending accounts as well as the operating companies and teams behind them, in order to prevent harm to third parties.
In September, new regulations on labeling AI-generated synthetic content, released by the Cyberspace Administration of China and other agencies, took effect. The rules require clear "AI-generated" labels on synthetic faces and videos. In practice, however, some merchants hide labels in obscure locations or mask them, allowing infringements to slip through reviews from the social platforms.
"If a video lacks the necessary authorization and fails to show an AI-generation label, using someone else's face or voice in a livestream is a textbook case of infringement," Li said. "Victims can sue the merchant and even the platform, which bears responsibility for keeping online order."
On Thursday, Douyin's e-commerce safety and trust center said on its official social account that it had launched a special campaign in October targeting infringements. Those included cases where merchants or content creators, without permission, spliced videos, blended text, or used AI and digital effects to mimic celebrities for sales. The short-video platform's response to these actions included cutting livestreams, removing or banning involved products, freezing transaction proceeds, and ordering business suspensions, it said. Since the campaign began, the platform has acted against 11,000 creator accounts involved in impersonations and taken down more than 6,700 products, it said. In addition, more than 10,000 infringing videos faking Wen's likeness and voice for marketing were removed.
Li Liang, a deputy president of Douyin Group, said on his account on Thursday that such AI-involved impersonation "seriously undermines the credibility of creators, merchants, and the platform in the eyes of consumers, and runs counter to our long-term interests". Li denied that the livestreams impersonating Wen took place on Douyin, adding that AI content infringement detection is an "industry-wide challenge, and malicious impersonators constantly engage in a technical cat-and-mouse with platforms." "We will keep investing to meet the challenge and protect the lawful rights of creators, merchants, and consumers," he said.
Li Wei, deputy dean of the School of New Media at Peking University, said AI-enabled deepfakes are "the most typical trap" in AI-assisted content production, involving infringement, copyright, ethics, and online abuse, with serious consequences. She called for more systematic, multistakeholder cyberspace governance, especially stronger legal safeguards and platform oversight, and urged more tailored penalty tools to curb such practices.
In this episode, Norman Müller and Christian Bennefeld talk about the dark side of the AI transformation. Topics include digital authenticity, prompt injections in everyday documents, and the risks of networked agent systems. Christian explains how hiring processes can be manipulated through seemingly harmless PDFs, and why the Model Context Protocol becomes an attack vector without strong authentication. For companies, that means: take Zero Trust seriously, put guard models in front, separate contexts, and strictly secure agent access. We also discuss the societal erosion of trust caused by deepfakes, watermarking and provenance standards, and the limits of regulation. Small and midsize businesses are caught between hype and liability: sensible agent use cases need clear ROI logic and a robust security design. The episode closes with reading recommendations from practice and a personal answer to the question of what AI, at its best, should change. Time to think.
You can find the show notes, the video version of the podcast, and access to our podcast community here: www.bundesverband.ai
Support the show. Want more? Subscribe to the podcast on Apple or Spotify, follow our podcast community for even more exclusive content (such as video recordings and background information) at https://geniusalliance.substack.com, and connect with Norman on LinkedIn: https://www.linkedin.com/in/muellernorman. Please support our work and leave us a podcast review on Apple and Spotify. That helps us keep inviting exciting guests we can all learn from.
When a social media video makes you go "Wow," pause. That shock factor is a big clue it could be an AI fake. Learn more about your ad choices. Visit megaphone.fm/adchoices
Subscribe to our channel: / @optispan
Get Our Newsletter (It's Free): https://www.optispan.life/
DISCLAIMER: The information provided on the Optispan podcast is intended solely for general educational purposes and is not meant to be, nor should it be construed as, personalized medical advice. No doctor-patient relationship is established by your use of this channel. The information and materials presented are for informational purposes only and are not a substitute for professional medical advice, diagnosis, or treatment. We strongly advise that you consult with a licensed healthcare professional for all matters concerning your health, especially before undertaking any changes based on content provided by this channel. The hosts and guests on this channel are not liable for any direct, indirect, or other damages or adverse effects that may arise from the application of the information discussed. Medical knowledge is constantly evolving; therefore, the information provided should be verified against current medical standards and practices.
More places to find us:
Twitter: https://x.com/Optispan_Inc
Twitter: / mkaeberlein
LinkedIn: / optispan
Instagram: / optispan_
TikTok: / optispan
https://www.optispan.life/
Hi, I'm Matt Kaeberlein. I spent the first few decades of my career doing scientific research into the biology of aging, trying to understand the finer details of how humans age in order to facilitate translational interventions that promote healthspan and improve quality of life. Now I want to take some of that knowledge out of the lab and into the hands of people who can really use it. On this podcast I talk about all things aging and healthspan, from supplements and nutrition to the latest discoveries in longevity research. My goal is to lift the veil on the geroscience and longevity world and help you apply what we know to your own personal health trajectory. I care about quality science and will always be honest about what I don't know. 
I hope you'll find these episodes helpful!
AI can replicate faces, voices, and personalities—but it can't duplicate the image of God. In this conversation, Dr. Kathy Koch and Wayne Stender explore Denmark's bold new proposal to give citizens ownership of their likeness and what it means for parents raising kids in a world where identity can be digitally faked. Discover how to help children stay grounded in truth, not technology.
Recent advances in artificial intelligence have drawn a lot of media attention. But little of this has focused on how this new technology may affect democracy. Spencer Overton joins host Alex Lovit to discuss how AI may push the United States away from becoming an inclusive, multiracial democracy—or toward it. Spencer Overton is the Patricia Roberts Harris Research Professor of Law and the director of the Multiracial Democracy Project at George Washington University Law School. https://ssrn.com/abstract=4754903 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5196382 Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.
Keywords: cybersecurity, technology, AI, IoT, Intel, startups, security culture, talent development, career advice
Summary: In this episode of No Password Required, host Jack Clabby and Kayleigh Melton engage with Steve Orrin, the federal CTO at Intel, discussing the evolving landscape of cybersecurity, the importance of diverse teams, and the intersection of technology and security. Steve shares insights from his extensive career, including his experiences in the startup scene, the significance of AI and IoT, and the critical blind spots in cybersecurity practices. The conversation also touches on nurturing talent in technology and offers valuable advice for young professionals entering the field.
Takeaways:
* IoT is now referred to as the Edge in technology.
* Diverse teams bring unique perspectives and solutions.
* Experience in cybersecurity is crucial for effective team building.
* The startup scene in the 90s was vibrant and innovative.
* Understanding both biology and technology can lead to unique career paths.
* AI and IoT are integral to modern cybersecurity solutions.
* Organizations often overlook the importance of security in early project stages.
* Nurturing talent involves giving them interesting projects and autonomy.
* Young professionals should understand the hacker mentality to succeed in cybersecurity.
* Customer feedback is essential for developing effective security solutions.
Titles: The Edge of Cybersecurity: Insights from Steve Orrin; Navigating the Intersection of Technology and Security
Sound bites: "IoT is officially called the Edge." "We're making mainframe sexy again." "Surround yourself with people smarter than you."
Chapters:
00:00 Introduction to Cybersecurity and the Edge
01:48 Steve Orrin's Role at Intel
04:51 The Evolution of Security Technology
09:07 The Startup Scene in the 90s
13:00 The Intersection of Biology and Technology
15:52 The Importance of AI and IoT
20:30 Blind Spots in Cybersecurity
25:38 Nurturing Talent in Technology
28:57 Advice for Young Cybersecurity Professionals
32:10 Lifestyle Polygraph: Fun Questions with Steve
Have you ever wondered if the person you are arguing with in a social media comment section is really a person at all? Perry Carpenter is a cybersecurity expert, author of FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions, and host of The Faik Files podcast. I asked Perry to explain the phenomenon of social media bots and troll farms; who might be commanding these faceless humans and robots? What might their goal be? And how do we tell the difference between a genuine comment or post and one manufactured with a particular objective in mind? Listen to THE FAIK FILES podcast Get a copy of FAIK Become a Patron to support our show and get early ad-free episodes and bonus content Or subscribe to American Hysteria on Apple Podcasts Get some of our new merch at americanhysteria.com, all profits go to The Sameer Project, a Palestinian-led mutual aid group who are on the ground in Gaza delivering food and supplies to displaced families. Leave us a message on the Urban Legends Hotline Producer and Editor: Miranda Zickler Associate Producer: Riley Swedelius-Smith Additional editing by AJ McKinley Hosted by Chelsey Weber-Smith Learn more about your ad choices. Visit megaphone.fm/adchoices
Live from Authenticate 2025, Jeff Steadman and Jim McDonald sit down with Dr. Tina Srivastava, an IDPro board member and co-founder of Badge Inc., for a crucial discussion on the rapidly evolving landscape of identity and authentication.
Tina shares her insights on the conference, the evolution from physical hacks to sophisticated AI-driven threats like supercharged phishing, and the current challenges facing the industry. The conversation delves into the complexities of synced Passkeys, the critical vulnerability of account recovery processes, and the slow pace of regulation in keeping up with technology.
As a board member for IDPro, Tina highlights the immense value of the practitioner-focused community, the supportive culture within its Slack channels, and makes an exciting announcement about the creation of new member-driven committees to shape the future of the organization. They explore the concept of the "AI arms race" and why identity professionals cannot afford to wait for the next big thing, emphasizing that collaboration and information sharing through communities like IDPro are essential to staying ahead of adversaries.
Connect with Tina: https://www.linkedin.com/in/tina-s-8291438a/
Find out more about IDPro: https://www.idpro.org/
Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/
Visit the show on the web at http://idacpodcast.com
Chapters:
00:00 Introduction and Greetings
00:16 Highlights from Authenticate 2025
01:39 FIDO Feud Rematch Discussion
03:17 Guest Introduction: Tina Srivastava
03:46 Conference Insights and AI Challenges
06:16 Regulatory Environment and Passkeys
09:11 Phishing and AI Supercharged Attacks
12:28 QR Codes and Accessibility Issues
13:09 The Importance of Phishing Resistant Authentication
22:24 IDPro Community and Practitioner Support
25:18 Community Support and Engagement
26:26 IDPro's Role in Identity Events
27:48 Future Directions for IDPro
29:19 Introducing Committees in IDPro
30:39 AI and Identity Verification
37:07 The Importance of Information Sharing
45:35 Public Speaking and Personal Growth
50:58 Conclusion and Final Thoughts
Keywords: IDAC, Identity at the Center, Jeff Steadman, Jim McDonald, Tina Srivastava, IDPro, Authenticate 2025, Passkeys, AI, Artificial Intelligence, Cybersecurity, Phishing, Deepfakes, Authentication, Account Recovery, Biometrics, Identity and Access Management, IAM, NIST, Regulation, Identity Verification, Synced Passkeys, FIDO Alliance
The Hidden Lightness with Jimmy Hinton – A teenage girl in New Jersey has filed a lawsuit against an artificial intelligence company after discovering fake nude images of herself circulating online—images she never took, and moments that never happened. The cruel irony is that much of this technology is publicly available—easy to access, easy to use, and largely unregulated...
After spending years tracking Al Qaeda, former analyst Gina Bennett talks to SpyTalk about her storied career, her new spy novel and the rising threat of "politicized misogyny."
Gina Bennett, If Two of Them Are Dead
Follow Jeff Stein on Twitter: https://twitter.com/SpyTalker
Follow Karen Greenberg on Twitter: https://x.com/KarenGreenberg3
Follow Michael Isikoff on Twitter: https://twitter.com/isikoff
Follow SpyTalk on Twitter: https://twitter.com/talk_spy
Subscribe to SpyTalk on Substack: https://www.spytalk.co/
Take our listener survey where you can give us feedback: http://survey.podtrac.com/start-survey.aspx?pubid=BffJOlI7qQcF&ver=short
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
How effective is the government's AI-fueled agitation and numbers war against reality? And who are the "super-mayors" Fidesz tried, in vain, to set against László Botka in Szeged? And how can the must-win Csongrád 2 district be taken from Fidesz? Bálint Ruff and Kamilla Vida assess all this in the Szeged edition of Vétó.
Live Vétó in Kaposvár: https://partizan.funcode.hu/events/136480
Community is an opportunity; community is a responsibility. Support Partizán! https://www.partizan.hu/tamogatas
Join the Partizán community, stay in touch, and be the first to hear about our events and campaigns: https://csapat.partizanmedia.hu/forms/maradjunk-kapcsolatban
Become a volunteer! Join Partizán's volunteer team: https://csapat.partizanmedia.hu/forms/csatlakozz-te-is-a-partizan-onkenteseihez
Subscribe to our themed newsletters!
Editorial newsletter: https://csapat.partizanmedia.hu/forms/iratkozz-fel-a-partizan-szerkesztoinek-hirlevelere
Weekly Feledy: https://csapat.partizanmedia.hu/forms/partizan-heti-feledy
Vétó: https://csapat.partizanmedia.hu/forms/iratkozz-fel-a-veto-hirlevelere
Write to us! If you have a story, a tip, or an idea: szerkesztoseg@partizan.hu
For confidential information: partizanbudapest@protonmail.com (To write to us encrypted, register a Protonmail address yourself.)
Donations, events, webshop, and other matters: info@partizan.hu
Olivia and Shauna devote their last Spooky Season episode to the modern horror that is Generative AI, the inescapable technology that is not only replacing working writers and artists, but also must steal their work in order to do so. Recently, an AI "actress" going by the name Tilly Norwood stirred up a lot of controversy in Hollywood, so for this episode, the Junkies are joined by Alaina Lightfoot, an actor, writer, director, and producer based in Los Angeles who has appeared in commercials, web series, and several films which have been featured in festivals including Studio City Film Fest and The Great Film Festival 2025. Alaina and the Junkies discuss the impact AI performers, should the technology ever become truly viable, will have on show business and the people working in it, especially actors just getting started in their careers. Is AI inevitable, or will humanity win in the end?
Follow Alaina Lightfoot on Instagram: @alainalightfoot
We have affordable and rewarding Patreon tiers! Be the first to hear new and uncensored content, if you dare! Click here: https://www.patreon.com/popculturejunkiepodcast/posts
Apple Podcast: https://podcasts.apple.com/us/podcast/pop-culture-junkie/id1536737728
Spotify: https://open.spotify.com/show/7k2pUxzNDBXNCHzFM7EL8W
Website: www.popculturejunkie.com
Facebook: PopCultureJunkiePodcast
Instagram: @pop.culturejunkie
Threads: @pop.culturejunkie
Bluesky: @pop-culture-junkie.bsky.social
Email: junkies@popculturejunkie.com
Shauna on Instagram: @shaunatrinidad
Shauna on Threads: @shaunatrinidad
Olivia on Instagram: @livimariez
In this episode of Crazy Wisdom, host Stewart Alsop talks with Richard Easton, co-author of GPS Declassified: From Smart Bombs to Smartphones, about the remarkable history behind the Global Positioning System and its ripple effects on technology, secrecy, and innovation. They trace the story from Roger Easton's early work on time navigation and atomic clocks to the 1973 approval of the GPS program, the Cold War's influence on satellite development, and how civilian and military interests shaped its evolution. The conversation also explores selective availability, the Gulf War, and how GPS paved the way for modern mapping tools like Google Maps and Waze, as well as broader questions about information, transparency, and the future of scientific innovation. Learn more about Richard Easton's work and explore early GPS documents at gpsdeclassified.com, or pick up his book GPS Declassified: From Smart Bombs to Smartphones.
Check out this GPT we trained on the conversation
Timestamps:
00:00 – Stewart Alsop introduces Richard Easton, who explains the origins of GPS, its 12-hour satellite orbits, and his father Roger Easton's early time navigation work.
05:00 – Discussion on atomic clocks, the hydrogen maser, and how technological skepticism drove innovation toward the modern GPS system.
10:00 – Miniaturization of receivers, the rise of smartphones as GPS devices, and early mapping tools like Google Maps and Waze.
15:00 – The Apollo missions' computer systems and precision landings lead back to GPS development and the 1973 approval of the joint program office.
20:00 – The Gulf War's use of GPS, selective availability, and how civilian receivers became vital for soldiers and surveyors.
25:00 – Secrecy in satellite programs, from GRAB and POPPY to Eisenhower's caution after the U-2 incident, and the link between intelligence and innovation.
30:00 – The myth of the Korean airliner sparking civilian GPS, Reagan's policy, and the importance of declassified documents.
35:00 – Cold War espionage stories like Gordievsky's defection, the rise of surveillance, and early countermeasures to GPS jamming.
40:00 – Selective availability ends in 2000, sparking geocaching and a civilian boom, with GPS enabling agriculture and transport.
45:00 – Conversation shifts to AI, deepfakes, and the reliability of digital history.
50:00 – Reflections on big science, decentralization, and innovation funding from John Foster to SpaceX and Starlink.
55:00 – Universities' bureaucratic bloat, the future of research education, and Richard's praise for the University of Chicago's BASIC program.
Key Insights:
GPS was born from competing visions within the U.S. military. Richard Easton explains that the Navy and Air Force each had different ideas for navigation satellites in the 1960s. The Navy wanted mid-Earth orbits with autonomous atomic clocks, while the Air Force preferred ground-controlled repeaters in geostationary orbit. The eventual compromise in 1973 created the modern GPS structure—24 satellites in six orbital planes—which balanced accuracy, independence, and resilience.
Atomic clocks made global navigation possible. Roger Easton's early insight was that improving atomic clock precision would one day enable real-time positioning. The hydrogen maser, developed in 1960, became the breakthrough technology that made GPS feasible. This innovation turned a theoretical idea into a working global system and also advanced timekeeping for scientific and financial applications.
Civilian access to GPS was always intended. Contrary to popular belief, GPS wasn't a military secret turned public after the Korean airliner tragedy in 1983. Civilian receivers, such as TI's 4100 model, were already available in 1981. Reagan's 1983 announcement merely reaffirmed an existing policy that GPS would serve both military and civilian users.
The Gulf War proved GPS's strategic value. During the 1991 conflict, U.S. and coalition forces used mostly civilian receivers after the Pentagon lifted "selective availability," which intentionally degraded accuracy. GPS allowed troops to coordinate movement and strikes even during sandstorms, changing modern warfare.
Secrecy and innovation were deeply intertwined. Easton recounts how classified projects like GRAB and POPPY—satellites disguised as scientific missions—laid technical groundwork for navigation systems. The crossover between secret defense projects and public science fueled breakthroughs but also obscured credit and understanding.
Ending selective availability unleashed global applications. When the distortion feature was turned off in May 2000, GPS accuracy improved instantly, leading to new industries—geocaching, precision agriculture, logistics, and smartphone navigation. This marked GPS's shift from a defense tool to an everyday utility.
Innovation's future may rely on decentralization. Reflecting on his father's era and today's landscape, Easton argues that bureaucratic "big science" has grown sluggish. He sees promise in smaller, independent innovators—helped by AI, cheaper satellites, and private space ventures like SpaceX—continuing the cycle of technological transformation that GPS began.
The Hidden Lightness with Jimmy Hinton – Deepfakes and AI video tools like Sora 2 are reshaping reality, blurring the line between truth and illusion. As hyper-realistic videos spread online, children and families face new dangers of exploitation and deception. Yet, within the darkness, AI also helps catch predators and protect the innocent—reminding us that real connection still defeats artificial reality...
Is there anything real left on the internet? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly explore deepfakes, scams, and cybercrime with the Director of Threat Research at Bitdefender, Bogdan Botezatu. Scams are a trillion-dollar industry; keep your loved ones safe with Bitdefender: https://bitdefend.me/90-StarTalkNOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/deepfakes-and-the-war-on-truth-with-bogdan-botezatu/Thanks to our Patrons Bubbalotski, Oskar Yazan Mellemsether, Craig A, Andrew, Liagadd, William ROberts, Pratiksha, Corey Williams, Keith, anirao, matthew, Cody T, Janna Ladd, Jen Richardson, Elizaveta Nikitenko, James Quagliariello, LA Stritt, Rocco Ciccolini, Kyle Jones, Jeremy Jones, Micheal Fiebelkorn, Erik the Nerd, Debbie Gloom, Adam Tobias Lofton, Chad Stewart, Christy Bradford, David Jirel, e4e5Nf3, John Rost, cluckaizo, Diane Féve, Conny Vigström, Julian Farr, karl Lebeau, AnnElizabeth, p johnson, Jarvis, Charles Bouril, Kevin Salam, Alex Rzem, Joseph Strolin, Madelaine Bertelsen, noel jimenez, Arham Jain, Tim Manzer, Alex, Ray Weikal, Kevin O'Reilly, Mila Love, Mert Durak, Scrubbing Bubblez, Lili Rose, Ram Zaidenvorm, Sammy Aleksov, Carter Lampe, Tom Andrusyna, Raghvendra Singh Bais, ramenbrownie, cap kay, B Rhodes, Chrissi Vergoglini, Micheal Reilly, Mone, Brendan D., Mung, J Ram, Katie Holliday, Nico R, Riven, lanagoeh, Shashank, Bradley Andrews, Jeff Raimer, Angel velez, Sara, Timothy Criss, Katy Boyer, Jesse Hausner, Blue Cardinal, Benjamin Kedwards, Dave, Wen Wei LOKE, Micheal Sacher, Lucas, Ken Kuipers, Alex Marks, Amanda Morrison, Gary Ritter Jr, Bushmaster, thomas hennigan, Erin Flynn, Chad F, fro drick, Ben Speire, Sanjiv VIJ, Sam B, BriarPatch, and Mario Boutet for supporting us this week. 
Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.