Artificial intelligence-based human image synthesis technique
Artificial intelligence is transforming the global information ecosystem at breathtaking speed. In this timely conversation, Julia Haas, Head of the OSCE Representative on Freedom of the Media's AI & Freedom of Expression project, examines what this means for journalism, democratic governance, and human rights. We discuss the rise of deepfakes and AI-driven disinformation, the concentration of power in big tech platforms, and the economic vulnerabilities of modern newsrooms. How do we preserve information integrity without enabling censorship? How can regulation enhance accountability without strengthening state control? And as media organizations increasingly adopt AI tools, how can trust be protected? Julia argues that safeguarding media freedom in the age of AI is not merely a technological challenge, it is a democratic test. Multilateral cooperation, principled regulation, and stronger public-interest infrastructure will be essential if innovation is to reinforce, rather than erode, open societies. Learn more on GlobalGovernanceForum.org
New research suggests Australians are dangerously overconfident about detecting AI deepfake scams, even as the technology becomes more natural-looking and harder to spot. Experts warn scammers hijack trust and instinct, and are calling on people to pause, verify and reject suspicious messages.
This episode of Vermont Viewpoint was published 02/25/2026.
Kreide.KI.Klartext. The podcast with Diana Knodel and Gert Mengel
In this episode we discuss why democratic education and artificial intelligence belong inseparably together, and what role schools play in that.
Many well-known dubbing actors who can normally be heard in Netflix productions are currently staying out of the recording studio, according to Anna-Sophia Lumpe of the German voice actors' association Verband Deutscher Sprecher:innen (VDS). The actors are using this refusal above all as a symbol of protest against a new contract clause on the use of artificial intelligence. In it, Netflix demands consent to use voice recordings to train such systems, without paying for it and without giving those affected a choice: anyone who refuses to sign the new contract terms is offered no alternative and consequently cannot work for the streamer. From the dubbing actors' point of view, the new AI contracts are explosive not only because of the compensation, but also because they do not know exactly what will be done with their voice material in the AI system in the future, that is, what it will be used for and how it will be developed further. Once consent is given, those affected can hardly fight back, because it includes an assignment of the rights to their own voice for 50 years. Many in the association therefore voice the fear that, sooner or later and in unforeseeable ways, they are helping to abolish their own jobs, in the long run training synthetic AI voices and deepfakes with their own material that could one day replace them. They see the entire dubbing culture at risk. Netflix currently insists it does not intend to do this, but the aspects in question are not clearly regulated in the new AI clauses, so given the lack of explicit guardrails the voice actors would, as things stand, have little recourse if it happened anyway.
Deepfake porn is a billion-click industry built on stolen faces, while the people making it hide theirs behind screens. Hosted by journalist Sam Cole, Understood: Deepfake Porn Empire traces the decades-long rise of synthetic porn, the targets who are fighting back, and the global investigation that led to its Canadian kingpin. Understood takes you deep inside the seismic shifts reshaping our world right now. From online porn and crypto chaos to the rise of tech oligarchs, deepfake AI, and the broken promises of the internet — we explore the stories that define our digital age with hosts and characters embedded in the heart of the action. More episodes of Deepfake Porn Empire are available wherever you get your podcasts, and here: https://link.mgln.ai/DPExEBD
The Advisory Board | Expert Franchising Advice for Franchise Leaders
Episode summary
This week on The Franchise Advisory Board Podcast, Dave Hansen sits down with Dr. Alan Lasky, SVP at Reliable Background Screening (and former almost-songwriter-for-the-Jacksons… casually) to tackle a topic that's getting way more complicated: how to reduce hiring risk in the age of AI, deepfakes, and "resume inflation."

Big thanks to ClientTether, our episode sponsor, for helping franchise brands automate and standardize the processes that keep operators consistent, compliant, and sane.

Episode highlights

AI in hiring: embrace it… but don't outsource your judgment
Alan's clear: this isn't an anti-AI episode. AI belongs in modern hiring—but it has to be used responsibly. The core risk? Bias and compliance exposure can sneak in when AI tools are unmonitored, unmeasured, or used without clear guardrails.

Key safeguards discussed:
- Keep a human review in the loop before AI outputs influence decisions
- Be transparent with candidates that AI is being used (even "AI note-takers")
- Build internal policies and training so interviewers know what to watch for

The new threat: deepfakes and fake candidates
The numbers are trending in a scary direction:
- Gartner projection: by 2028, 1 in 4 job applicants could be fake
- Reports cited from SHRM/Forbes: 70% of candidates misrepresent themselves, with "resume inflation" accelerating via AI tools

Reliable (and the broader screening industry) is responding with identity verification approaches that combine:
- ID upload + guided selfie video (blink/turn prompts)
- biometric matching to confirm the candidate is real and consistent
- behind-the-scenes handling designed to stay sensitive to EEOC/ADA concerns

Hiring best practices that actually hold up
A few practical "do-this-now" moves that came up repeatedly:
- Compare resume vs LinkedIn vs interview story for consistency
- Use skills assessments, ideally proctored or monitored when remote
- Set explicit candidate guidelines for AI use (what's allowed vs not)
- Train interviewers to spot red flags like inconsistencies, delays, and mismatches
- Use social media checks carefully—ideally filtered through a screening partner to avoid pulling in protected-class info

Compliance is getting messier: states and cities
AI regulations are already active in places like Colorado, California, Illinois, New Jersey, and New York City, and Alan notes 20+ bills are moving through the pipeline. The theme across these rules:
- don't discriminate
- document your policy
- keep a human element
- disclose AI usage

On top of that, municipal laws are adding another layer (example discussed: shifting lookback windows in certain cities), making "multi-state + multi-unit + remote hiring" a true complexity party.

Adverse action: the "right to dispute" matters
When a background check surfaces something negative, employers need to follow adverse action practices and give candidates the chance to dispute inaccuracies—because false positives happen (aliases, shared names, court data errors, etc.). Some states are now requiring more specific disclosure about why a decision was made and how it relates to the job.

Franchisors, franchisees, and joint employer risk
For brands wanting to share hiring best practices systemwide: yes, you can educate—but do it smart.
- Keep it informational
- Add "consult legal counsel" language
- Be careful not to cross lines that create joint-employer exposure

The vibe-check takeaway
AI is speeding up hiring—but it's also speeding up fraud, mistakes, and legal risk. The winning play isn't "avoid AI." It's standardize the process, document your policy, verify identity, and keep humans accountable for final decisions.
More episodes of Deepfake Porn Empire are available wherever you get your podcasts, and here: https://link.mgln.ai/DPExGGG
Technology has long since moved into the newsrooms, where AI systems now write, edit, translate and analyze content at a dizzying pace. What happens to journalism, the economy and public trust when technology pulls every lever at once? In this episode we go straight into the engine room to find out what works, what is frightening, and what actually delivers value in the new media landscape. We take you behind the scenes at two of Norway's largest media houses to explain how they use technology to uncover the truth in stories such as the Epstein files. You will hear about the fight for trust, the lessons from a technology-heavy Olympics, and the strategies meant to ensure that editor-led media survive through 2030. This week's guests are Pål Nedregotten, director of technology and product development at NRK, and Espen Sundve, Chief Product Officer at Schibsted Media. The host is Christian Brosstad, Atea. Hosted on Acast. See acast.com/privacy for more information.
Instagram, TikTok, Facebook: the feed shows images and videos. But not everything users see there is real; some of it is deepfakes.
Fraudsters are increasingly using deepfake videos of CEOs and other company executives to trick firms out of millions of dollars. And with the evolution of AI, these videos are becoming ever-more sophisticated and convincing. We speak to two CEOs who have been deepfaked: the head of the Bombay stock exchange and the boss of password security company LastPass. And we hear how criminals used deepfake videos to trick British engineering firm Arup into handing over $25 million. How easy is it to make these videos? Ed Butler visits a cybersecurity company which shows him how it can be done, using readily available software. Ed's hosts make a deepfake of him and we compare the real Ed to the fake Ed. We also put figures on the size of this problem and explain how much it's costing businesses.

If you'd like to get in touch with the team, our email address is businessdaily@bbc.co.uk

Presenter: Ed Butler
Producer: Gideon Long
Sound Mix: Toby James

Business Daily is the home of in-depth audio journalism devoted to the world of money and work. From small startup stories to big corporate takeovers, global economic shifts to trends in technology, we look at the key figures, ideas and events shaping business. Each episode is a 17-minute deep dive into a single topic, featuring expert analysis and the people at the heart of the story. Recent episodes explore the weight-loss drug revolution, the growth in AI, the cost of living, why bond markets are so powerful, China's property bubble, and Gen Z's experience of the current job market. We also feature in-depth interviews with company founders and some of the world's most prominent CEOs. These include Google's Sundar Pichai, Wikipedia founder Jimmy Wales, and the CEO of Starbucks, Brian Niccol.

(Picture: An image of a man in a cap being deepfaked. Credit: Getty Images)
Crack open a Liquid Death and join us as we analyze the high-tech, low-life world of the new Running Man. We're looking at the cold, clinical design of the Hunters (giving us major THX 1138 vibes) and the terrifyingly relevant "Deepfake" technology used by the Network. Support the show: Make sure to visit our affiliate sponsor Live Bearded. Grab some premium beard care and support the pod by using our link! https://livebearded.com/2GEEKS.
- Arkanix Stealer – the new AI info-stealer experiment
- AI-assisted hacker breached 600 Fortinet firewalls in 5 weeks
- Russia stepping up hybrid attacks, preparing for confrontation with West

Get links to all of today's news in our show notes here: https://cisoseries.com/cybersecurity-news-arkanix-was-poc-600-fortinet-firewalls-breach-russia-heightens-tension/

Thanks to today's episode sponsor, Adaptive Security. This episode is brought to you by Adaptive Security, the first security awareness platform built to stop AI-powered social engineering. Deepfakes aren't science fiction anymore; they're a daily threat. Quick tip: if your voicemail greeting is your real voice, switch it to the default robot voice. A few seconds of audio can be enough to clone you. Adaptive helps teams spot and stop these AI-powered social engineering attacks. Learn more at adaptivesecurity.com.
Topics: Clever villages 1: Darup [00:19 min] | Serum: do I need one, and if so, which? [05:59 min] | Deepfake fraud [12:07 min] | Grapefruit [17:07 min] | The vagus nerve [23:05 min]
Podcast ONE: February 20, 2026. How are Photoshoot and Lyria 3 changing content creation? @vincent_quezada and @zoomdigitaltv analyze it in #one_digital #PodcastONE. Listen to Podcast ONE here: February 20, 2026 (Facebook Live).

One Digital: AI, classic video games and the future of content in 2026: an in-depth analysis of Photoshoot, Lyria 3, Diablo II: Resurrected and more. Vincent Quezada and Pablo Berruecos offer an exhaustive analysis of how tools like Google's Photoshoot and Lyria 3 are revolutionizing content creation for small businesses, while exploring the impact of Diablo II: Resurrected, planned obsolescence, censorship on social networks and the future of digital history. They also discuss tools such as Block Club, Goya (UNAM) and the ia.unam.mx repository, and reflect on 20 years of One Digital.

1. Photoshoot: an accessible professional photo studio
Vincent Quezada details how Photoshoot, a Google Labs tool, lets small businesses and creators generate professional images without expensive equipment or advanced technical skills: "Photoshoot is a revolution for small businesses. Imagine you sell coffee in a generic bag and want to promote it on Instagram. You take a photo with your phone, upload it to Photoshoot, select an 'everyday style' or 'professional studio' template, and in seconds you get an image ready for social media, with a clean background, perfect lighting and a style that looks like it came out of a photo studio."

Technical features and practical examples. Step-by-step process:
- Image upload: any photo, even low quality (e.g. taken with a basic phone).
- Template selection: options such as "professional studio" (white background and uniform light) or "everyday life" (a natural setting, such as a wooden table with everyday objects).
- AI generation: Google's model applies automatic adjustments to improve quality, background and style.
- Download or integration: the generated image can be downloaded directly or saved to Google Drive for use in campaigns.
- Style-reference feature: lets you upload two images and apply the style of one to the content of the other. For example, you can take a product photo and apply the visual style of a well-known brand.
- Integration with Google apps: compatible with Google business profiles, which makes managing advertising campaigns easier.

Concrete example: a yoga instructor can upload a photo of their workspace and Photoshoot will generate a professional image with a relaxed atmosphere, ideal for promoting their services on social media.

Market impact
Vincent Quezada compares Photoshoot with existing tools such as Canva and Adobe Photoshop: "Photoshoot doesn't just compete with Canva on accessibility; it goes a step further by fully automating the editing process. While in Canva you still need to manually adjust elements like the background or lighting, Photoshoot does it for you in a single click. And since it's free (for now), it removes the economic barrier for small businesses."

"This is a significant leap since the launch of MidJourney in 2015. Before, AI tools focused on generating generic text or campaign materials, but now Google is moving strongly into product-image creation, a historically costly and technical process for small businesses." – Vincent Quezada

2. Lyria 3: generative music for everyone
Vincent Quezada explains how Lyria 3, Google's most advanced generative music model, lets any user create original music tracks with lyrics and custom cover art, using only text or images as input: "Lyria 3 is like having a personal composer. You can describe the style you want, for example 'a romantic ballad with acoustic guitar and ocean sounds', and in seconds you get a complete track, with lyrics, melody and cover art. It's ideal for content creators who need background music for their videos on TikTok, Instagram or YouTube Shorts."

Key features:
- Generation from text or images: describe the musical style (e.g. "80s rock", "classical music with electronic beats"), or upload an image and Lyria 3 will create a track inspired by its mood or colors.
- Advanced customization: define the genre, mood (e.g. "melancholic", "energetic") and sensory details (e.g. "rain sounds", "acoustic guitar"); it generates lyrics based on the user's prompt (e.g. "a song about love in the age of AI").
- SynthID: a digital watermark that identifies AI-generated content, ensuring transparency and avoiding copyright problems.
- Duration and use: tracks of up to 30 seconds, ideal for intros, bumpers or short videos; available to Gmail subscribers (over 18) in multiple languages, including Spanish.

Comparison with the competition
Vincent Quezada compares Lyria 3 with other generative music tools such as Suno and Udio:
- Lyria 3. Advantages: generates cover art automatically; integrates with YouTube and Google apps; SynthID for transparency. Limitations: 30-second limit per track; requires a Gmail subscription.
- Suno. Advantages: tracks of up to 2 minutes; simple interface. Limitations: no automatic cover art; lower audio quality.
- Udio. Advantages: advanced customization options. Limitations: no integration with platforms like YouTube; no watermark for transparency.

"Lyria 3 is not trying to replace artists but to democratize music creation. Now anyone can have an original track for their projects without expensive equipment or technical skills." – Vincent Quezada

3. Diablo II: Resurrected: reinventing a classic
Vincent Quezada devotes a full section to the relaunch of Diablo II: Resurrected, highlighting its technical improvements, gameplay mechanics and the cultural impact of this commemorative edition: "Diablo II: Resurrected isn't just a remaster, it's a reinvention that honors the legacy of the original game while incorporating modern technology. Blizzard has managed to keep the essence of the 2000 game, but with 4K graphics, 7.1 audio and updated mechanics that make it accessible to new generations."

Technical novelties:
- Graphics and audio: 4K resolution at 60 frames per second; 7.1 surround audio and updated cinematics; preserves the original gameplay, with visual improvements that respect the game's dark style.
- New character, the Conjurer: an additional class that wields the power of hell, with unique abilities such as summoning demons or using dark magic, woven into the game's narrative as a forbidden story.
- Co-op mode: up to 8 simultaneous players, with shared progression and group-exclusive events.
- Cross-platform: compatible with Xbox, PlayStation, PC and Nintendo Switch, with progress sync across platforms.
- Additional content: includes the "Lord of Destruction" expansion and the "Terror's Reign" DLC, with new bosses, secret areas and legendary items.

Cultural impact and critique
Vincent Quezada reflects on how Diablo II: Resurrected has influenced the action-RPG genre: "Diablo II not only laid the foundations of loot-collection games; it also introduced mechanics such as complex skill trees, scalable difficulties and player-to-player trading economies. Its impact transcends the game itself: it inspired a whole generation of developers and remains a reference 25 years later."

He also mentions planned obsolescence in the video game industry: "It's curious how classics like Diablo II remain relevant, while many modern games are designed to become obsolete within a few years, whether because servers shut down or because microtransactions make them unplayable without constant spending."

4. Planned obsolescence: products designed to fail
Pablo Berruecos, joining the program, digs into planned obsolescence, analyzing how electronic and technological products are designed with a limited lifespan, forcing consumers to replace them frequently: "A few days ago, a light bulb I'd had for 35 years stopped working. It wasn't just any bulb: it was one of the early models with thick filaments, designed to last decades. Today's bulbs are made with thinner filaments and materials that degrade quickly, because companies would rather sell you a new bulb every two years than one that lasts a lifetime."

Concrete examples:
- Light bulbs: bulbs used to last 30-40 years; today their lifespan rarely exceeds 2 years. Companies reduced material quality to boost recurring sales.
- Printers: ink cartridges often cost almost as much as a new printer, and many printers are programmed to stop working after a certain number of prints, even if the cartridges aren't empty.
- Watches and electronics: old mechanical watches can last decades, but many modern watches require batteries or constant maintenance; devices like phones and laptops are designed to slow down after a few years, encouraging the purchase of new models.
- Video games and consoles: modern games often require connections to servers that may shut down, leaving the game unusable; consoles like the Xbox 360 suffered failures such as the "red ring of death" after a certain period of use.

Critical reflection
Pablo Berruecos questions the economic model behind planned obsolescence: "Planned obsolescence is not an accident, it's a business model. Companies prefer to sell you the same product over and over instead of making something that lasts. That's not only unsustainable for the planet; it also limits real innovation, because why invest in quality if you can sell more units with less durability?"

5. Censorship on social networks and digital manipulation
Pablo Berruecos analyzes how platforms like TikTok, X (Twitter) and YouTube censor sensitive content while allowing fake news and deepfakes to proliferate, and discusses AI's role in manipulating information: "If you upload a video showing evidence of corruption or war crimes, like the Jeffrey Epstein files, the platforms take it down within hours. But if it's a deepfake or fake news, it goes viral unchecked. That creates a dangerous imbalance, where the truth is censored and lies spread."

Examples of censorship:
- Jeffrey Epstein files: platforms such as YouTube and X have removed videos showing evidence from the Epstein case; sites like Jeftob.world archive these videos with censorship marks, but they remain accessible to those who look for them.
- War in Ukraine: videos documenting war crimes are removed under "graphic content" policies, while AI algorithms on platforms like TikTok prioritize entertaining content over critical information.
- Deepfakes and manipulation: AI tools make it possible to clone voices and faces to create fake videos that go viral. Example: deepfake videos of politicians or celebrities saying things they never said.

Alternatives and solutions
Pablo Berruecos mentions tools and platforms that aim to counter censorship:
- Block Club: turns old phones into "nodes" for publishing content without depending on centralized algorithms, enabling decentralized networks where information cannot easily be censored.
- Academic repositories: platforms like ia.unam.mx offer access to verified information and educational resources; the GoIA assistant helps locate resources through natural-language questions.

"Censorship on social networks isn't just a technical problem, it's a problem of power. Those who control the platforms decide what information reaches the public, and that's dangerous in an era where AI can manipulate reality." – Pablo Berruecos

6. Digital history and the risk of manipulation
Pablo Berruecos warns about the risk of digital history being manipulated through AI and platforms like Wikipedia: "Imagine a government or corporation decides to change a historical fact on Wikipedia. Who verifies it? With AI, it's possible to rewrite history at will, creating an alternative version of events that people end up accepting as true."

Problems and solutions:
- Manipulation on Wikipedia: anyone can edit Wikipedia, which lets groups with specific interests alter information. Example: changes to biographies of politicians or to historical events to favor a narrative.
- Procpedia and similar projects: initiatives like Procpedia aim to create a decentralized encyclopedia where information cannot easily be manipulated, though they still face technical and mass-adoption challenges.
- Academic repositories: platforms like ia.unam.mx offer access to verified information and educational resources, including articles, theses, videos and podcasts on technological and scientific topics.

The role of UNAM
The episode highlights UNAM's work building trustworthy digital resources: "UNAM has launched an AI repository that not only offers educational resources but also uses an assistant called Goya to help users find information intuitively. That is crucial in a world where disinformation spreads quickly."

7. 20 years of One Digital: evolution and challenges
Vincent Quezada and Pablo Berruecos reflect on 20 years of One Digital, highlighting its evolution from a news portal into a program of critical technology analysis: "Twenty years ago we published one or two stories a day with 400-pixel photos. Today we compete with 24-hour reels and algorithms that prioritize ephemeral content. But our value remains the same: offering critical analysis and a permanent archive of technological evolution." – Pablo Berruecos

Achievements:
- Independent reviews of tech brands and products.
- Coverage of trends in film, video games (such as Diablo II: Resurrected) and gastronomy.
- Analysis of tools like Photoshoot and Lyria 3 and their impact on content creation.
Challenges:
- Monetization vs. editorial independence in a world dominated by algorithms.
- Adapting to an environment where ephemeral content (stories, reels) dominates engagement.
- Competing with platforms that prioritize entertainment over verified information.

The future of One Digital
Vincent Quezada closes with a reflection on the program's future: "One Digital has always been a space for talking about technology with honesty and depth. In a world where AI and social networks dominate the narrative, our commitment is to keep offering critical analysis and tools so the audience can navigate the digital future consciously."

Conclusion: innovation with ethics and responsibility
The episode closes with a call to use technology ethically and responsibly. Vincent Quezada sums up: "Tools like Photoshoot and Lyria 3 show AI's potential to empower creators and small businesses. But we must also be aware of the risks, from planned obsolescence to the manipulation of information. The digital future depends on how we use these tools today."

Listen to the full episode and join the conversation with #PodcastONE. The post Podcast ONE: February 20, 2026 appeared first on OneDigital.
AI-generated or AI-altered content is all over the internet now, but most of us admit we don't always know it when we see it... How artificial intelligence is making it harder to trust our own eyes (at 12:19) --- Around Town: Part fundraiser, part treasure hunt... Christian Clearing House is accepting donations for their annual Garage Sale (at 22:23) --- HSBB Preview: Two regular season games remain for the Trojans to get momentum ahead of the tournament (at 31:25) --- A special collection of recipes for the first 'Fish Friday' of Lent from Kyra's Kitchen (at 45:40)
In the podcast, the digital forensics expert talks about new technologies in the courtroom, concerns about deepfakes, and why his students are increasingly finding jobs at Bosch or Siemens.
Valerie Ziegler, a high school teacher in San Francisco, and Joel Breakstone, executive director of Stanford's Digital Inquiry Group, talk about digital literacy in the classroom. Many self-described "screenagers," they say, can no longer tell real from fake. Together, Ziegler and Breakstone are at the forefront of a movement to prepare young people for a world of influencers, algorithmic manipulation, and artificial intelligence, an effort recently profiled in the New York Times.
Voice used to be AI's forgotten modality — awkward, slow, and fragile. Now it's everywhere. In this reference episode on all things Voice AI, Matt Turck sits down with Neil Zeghidour, a top AI researcher and CEO of Gradium AI (ex-DeepMind/Google, Meta, Kyutai), to cover voice agents, speech-to-speech models, full-duplex conversation, on-device voice, and voice cloning.We unpack what actually changed under the hood — why voice is finally starting to feel natural, and why it may become the default interface for a new generation of AI assistants and devices.Neil breaks down today's dominant “cascaded” voice stack — speech recognition into a text model, then text-to-speech back out — and why it's popular: it's modular and easy to customize. But he argues it has two key downsides: chaining models adds latency, and forcing everything through text strips out paralinguistic signals like tone, stress, and emotion. The next wave, he suggests, is combining cascade-like flexibility with the more natural feel of speech-to-speech and full-duplex conversation.We go deep on full-duplex interaction (ending awkward turn-taking), the hardest unsolved problems (noisy real-world environments and multi-speaker chaos), and the realities of deploying voice at scale — including why models must be compact and when on-device voice is the right approach.Finally, we tackle voice cloning: where it's genuinely useful, what it means for deepfakes and privacy, and why watermarking isn't a silver bullet.If you care about voice agents, real-time AI, and the next generation of human-computer interaction, this is the episode to bookmark.Neil ZeghidourLinkedIn - https://www.linkedin.com/in/neil-zeghidour-a838aaa7/X/Twitter - https://x.com/neilzeghGradiumWebsite - https://gradium.aiX/Twitter - https://x.com/GradiumAIMatt Turck (Managing Director)Blog - https://mattturck.comLinkedIn - https://www.linkedin.com/in/turck/X/Twitter - https://twitter.com/mattturckFirstMarkWebsite - 
https://firstmark.comX/Twitter - https://twitter.com/FirstMarkCap(00:00) Intro(01:21) Voice AI's big moment — and why we're still early(03:34) Why voice lagged behind text/image/video(06:06) The convergence era: transformers for every modality(07:40) Beyond Her: always-on assistants, wake words, voice-first devices(11:01) Voice vs text: where voice fits (even for coding)(12:56) Neil's origin story: from finance to machine learning(18:35) Neural codecs (SoundStream): compression as the unlock(22:30) Kyutai: open research, small elite teams, moving fast(31:32) Why big labs haven't “won” voice AI(34:01) On-device voice: where it works, why compact models matter(41:35) Benchmarking voice: why metrics fail, how they actually test(46:37) The last mile: real-world robustness, pronunciation, uptime(47:03) Cascades vs speech-to-speech: trade-offs + what's next(54:05) Hardest frontier: noisy rooms, factories, multi-speaker chaos(1:00:50) New languages + dialects: what transfers, what doesn't(1:02:54) Hardware & compute: why voice isn't a 10,000-GPU game(1:07:27) What data do you need to train voice models?(1:09:02) Deepfakes + privacy: why watermarking isn't a solution(1:12:30) Voice + vision: multimodality, screen awareness, video+audio(1:14:43) Voice cloning vs voice design: where the market goes(1:16:32) Paris/Europe AI: talent density, underdog energy, what's next
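The cascaded stack described in the episode chains three sequential stages, so their latencies add and everything is squeezed through text. A minimal sketch of that structure (stage names and latency figures are hypothetical placeholders, not benchmarks from the episode):

```python
# Illustrative sketch of the "cascaded" voice stack: speech recognition
# -> text model -> text-to-speech. Latency numbers are hypothetical.
CASCADE_STAGES_MS = {
    "speech_to_text": 300,   # ASR transcribes the user's audio
    "text_model": 500,       # text LLM generates a reply
    "text_to_speech": 250,   # TTS renders the reply as audio
}

def cascade_latency_ms(stages: dict) -> int:
    """Chained stages run one after another, so their latencies add."""
    return sum(stages.values())

def transcript_only_pipeline(audio_to_text, text_reply, text_to_audio, audio):
    """Everything is forced through text, so paralinguistic signals
    (tone, stress, emotion) carried by `audio` are dropped at step one."""
    text = audio_to_text(audio)   # tone and emotion are lost here
    reply = text_reply(text)
    return text_to_audio(reply)

print(cascade_latency_ms(CASCADE_STAGES_MS))  # 1050
```

This is why the episode frames speech-to-speech and full-duplex models as the next wave: they avoid both the additive latency and the text bottleneck, at the cost of the cascade's modularity.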
Research Update: 8 papers on AI in Education you need to know for 2026 In this episode, Ray and Dan provide a rapid-fire rundown of the most significant research papers hitting the AI in Education space so far in 2026. After a series of news-heavy episodes, the hosts catch up on the data behind synthetic avatars, grading accuracy, and the psychological biases we hold against AI. Key highlights include: Synthetic Lecturers: Exploring stakeholder perspectives on digital twins and the emotional reaction to the term Deepfake in academia. The Grading Gap: Why ChatGPT tends to be more sycophantic and generous with weak work compared to human instructors. The Disclosure Penalty: New findings from 16 experiments showing why humans devalue creative writing the moment they know AI is involved. Prompting Hacks: The "Groundhog Day" method
Are we afraid of the future, or of our own inability to change? While the world fears the power of AI, futurist Aragorn Meulendijks poses a confronting question: why don't we dare to transcend ourselves? In this episode we dive into the reality of entrepreneurship and leadership. No polished success stories, but an honest conversation about courage, strategic choices, and the hard lessons of growth in a rapidly changing world.
Deepfake voice technology is rapidly advancing, but how well do current detection systems handle differences in language and writing style? Most existing work focuses on robustness to acoustic variations such as background noise or compression, while largely overlooking how linguistic variation shapes both deepfake generation and detection. Yet language matters: psycholinguistic features such as sentence structure, complexity, and word choice influence how models synthesize speech, which in turn affects how detectors score and flag audio. In this talk, we will ask questions such as: "If we change the way a person writes, while keeping their voice the same, will a deepfake detector still reach the same decision?" and "Are some text-to-speech and voice cloning models more vulnerable to shifts in writing style than others?" We will then discuss implications for designing robust deepfake voice detectors and for advancing more trustworthy speech AI in an era of increasingly synthetic media. About the speaker: Thai Le is an Assistant Professor of Computer Science at Indiana University's Luddy School of Informatics, Computing, and Engineering. He obtained his doctoral degree from the College of Information Sciences and Technology at Pennsylvania State University with an Excellent Research Award and a DAAD Fellowship. His research focuses on the trustworthiness of AI/ML models, with a mission to enhance the robustness, safety, and transparency of AI technology in various sociotechnical contexts. Le has published nearly 50 peer-reviewed research works and received two best paper presentation awards. He is a pioneer in collecting and investigating so-called text perturbations in the wild, which have been used by researchers worldwide to study the effects of adversarial human behavior on everyday use of AI/ML models. His work has also been featured in ScienceDaily, DefenseOne, and Engineering and Technology Magazine.
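The "shift in writing style" the talk probes can be made concrete with simple psycholinguistic features. A hedged sketch (my illustration, not the speaker's method): two stylistic variants of the same request, measured on average sentence length and lexical diversity, the kind of surface features that can change how a TTS model renders speech:

```python
# Illustrative only: crude psycholinguistic features of two stylistic
# variants of the same message. Texts and features are invented here.
import re

def style_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),  # lexical diversity
    }

plain = "Send the money now. Do not wait."
ornate = ("I would be most grateful if you could arrange the "
          "transfer at your earliest convenience.")

for name, text in [("plain", plain), ("ornate", ornate)]:
    feats = style_features(text)
    print(name, round(feats["avg_sentence_len"], 1))
```

The talk's question is whether a detector's score stays stable when the `ornate` text is synthesized in the same cloned voice as the `plain` one.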
Grunwald, Maria www.deutschlandfunk.de, Interviews
Get our AI Video Guide: https://clickhubspot.com/dth Episode 97: How close are we to a world where AI-generated videos are indistinguishable from reality? Matt Wolfe (https://x.com/mreflow) and Joe Fier (linkedin.com/in/joefier) dive deep into Seedance 2.0—ByteDance's new AI video model that could outpace giants like Sora and Veo. Joe, a marketing and business expert known for his hands-on approach and insights into AI's rapid evolution, helps break down the five most fascinating developments in the AI space this week. They tackle game-changing AI advances: Seedance 2.0's mind-blowing video generation for ads and motion graphics, the rollout of Google's Veo 3.1 in Google Ads, the GPT-5.3 Codex Spark coding model built on specialized inference chips, Gemini's DeepThink model for scientific research, and the early rollout of ChatGPT ads. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Seedance 2.0 arrives – AI video generation blurs reality, ad creation moves fast. (03:03) Google's Veo 3.1 powers video ads, advertisers can now generate clips directly from image uploads. (05:33) Comparison of Runway, Kling, Veo, and Sora—head-to-head prompt showdown. (07:00) Motion graphics and explainers—AI's take on the creative industry. (08:35) US vs. China—Copyright, IP, and training data debates. (12:10) Deepfake and video authenticity—why we now default to skepticism. (13:30) Google's edge in visual AI via YouTube's massive corpus. (14:39) The next frontier: Longer, more consistent video generation. (15:14) Where do humans fit in? Taste, storytelling, and creative direction. (18:30) GPT-5.3 Codex Spark—coding models on Cerebras inference chips, demo generating a website in 18 seconds. (24:34) AI tool comparisons—Codex vs. Cursor vs. Claude Code. (25:12) Speed as the key bottleneck breaker in creative and technical workflows. 
(28:02) Google's Gemini DeepThink—state-of-the-art research, advanced coding and physics capabilities. (32:52) Gemini demo attempt—3D-printable STL file and solving the three-body problem. (33:20) ChatGPT rolls out ads—impact on monetization and user trust. (40:02) Google's ad history—how “sponsored” is becoming harder to distinguish. (44:02) Democratizing AI access via ad-supported models. (45:03) Matt Schumer's viral article—why AI is moving even faster than most people realize. (51:11) Tools that build tools—AGI's path and the new role for humans. (53:12) Real-world skills and taste—where humanity still wins (for now). (54:01) Final thoughts—wake up, pay attention, and stay on the leading edge. — Mentions: Seedance 2.0: https://www.seedance.com/ ByteDance: https://www.bytedance.com/ CapCut: https://www.capcut.com/ Veo: https://deepmind.google/models/veo/ Runway: https://runwayml.com/ ChatGPT Codex: https://chatgpt.com/codex Matt Schumer's Viral Article: https://www.mattshumer.com/blog/ai-changes-everything Super Bowl Claude Commercial: https://www.anthropic.com/news/super-bowl-ad Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
AP's Lisa Dwyer reports on more legal action in Europe involving Grok.
AI is evolving fast—and so are the risks that come with it. In this episode of Leader Generation, Tessa Burg talks with Mod Op's EVP of PR, Chris Harihar, to unpack a growing issue most brands aren't fully prepared for: AI-driven brand misrepresentation. From deepfakes to manipulated logos and inappropriate brand placements, the conversation explores how generative AI tools are creating new reputational threats in ways that feel chaotic, fast-moving and hard to control. Chris introduces Mod Op's new AI Risk Intelligence capability, designed to help brands proactively identify and address harmful AI-generated content before it spirals. They dig into real examples—including manipulated executive deepfakes and brand misuse across platforms like Sora and Grok—and explain why this isn't just a cybersecurity issue, but a reputational one that belongs squarely in the PR and communications world. If you're a CMO, brand leader, or marketer wondering how exposed your company might be—or how to get ahead of risks that didn't exist a year ago—this episode offers clarity, practical thinking, and a smart path forward. It's a timely conversation about protecting your brand while still embracing the power of AI. Leader Generation is hosted by Tessa Burg and brought to you by Mod Op. About Chris Harihar: Chris Harihar is the EVP of Public Relations at Mod Op. With deep expertise in business and tech media relations, Chris counsels clients at a high level while maintaining hands-on involvement in media relations and content strategy. He has developed and run highly successful programs for leading B2B and tech brands, from Verizon Media/Yahoo and DoubleVerify to Signal AI, IDG (now Foundry) and WeTransfer. Chris can be reached on LinkedIn or at Chris.Harihar@ModOp.com. About Tessa Burg: Tessa is the Chief Technology Officer at Mod Op and Host of the Leader Generation podcast. She has led both technology and marketing teams for 15+ years. 
Tessa initiated and now leads Mod Op's AI/ML Pilot Team, AI Council and Innovation Pipeline. She started her career in IT and development before following her love for data and strategy into digital marketing. Tessa has held roles on both the consulting and client sides of the business for domestic and international brands, including American Greetings, Amazon, Nestlé, Anlene, Moen and many more. Tessa can be reached on LinkedIn or at Tessa.Burg@ModOp.com.
Innovation spans many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox interviews Matt Kunkel, CEO and Co-Founder at LogicGate, about the company's governance, risk, and compliance (GRC) platform and current market trends. Matt recounts his path into regulatory risk and compliance work that led to founding LogicGate and launching its Risk Cloud platform in 2015. A major focus is AI governance. Tom and Matt explore how and why senior management is asking compliance teams to provide governance frameworks despite the absence of a single standard (e.g., NIST/ISO/SOC). Matt explains organizations need scalable processes to triage and route large volumes of AI usage requests, apply guardrails based on data sensitivity and criticality, and avoid becoming a bottleneck to innovation. He emphasizes training and culture to address employee misuse, highlighting risks of exposing proprietary data and the need to define what information is acceptable to input into AI models. The discussion turns to LogicGate's culture and how it has been sustained during rapid, organic growth (no acquisitions). Matt outlines LogicGate's six values: Be as One, Embrace Your Curiosity, Empower Customers, Raise the Bar, Own It, and Do the Right Thing. For evaluating AI and modernizing compliance programs, he frames value in three outcomes: making money, reducing costs, or reducing risk, and describes LogicGate's value realization framework that translates efficiency and ROI into business terms. He also describes Risk Cloud as an orchestration layer for compliance programs and anticipates more “intentional AI” and selective use of agentic capabilities rather than fully autonomous end-to-end program execution. 
Key highlights: From Consulting to GRC: Coding, Madoff Investigation, and Founding LogicGate Why AI Is Supercharging the “G” in GRC LogicGate's Culture Playbook: Values That Scale with Hypergrowth How to Evaluate AI Tools in Compliance: Proving Value, ROI, and “Intentional AI” Cybersecurity in 2026: AI-Powered Social Engineering, Deepfakes, and Risk Mapping What's Next for GRC by 2030: Agents, Responsible AI, and Tech as the Glue Resources: Matt Kunkel on LinkedIn LogicGate Innovation in Compliance was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.
PJ talks to Pat Buckley TD about an amendment to Coco's Law in the light of the AI nude deepfakes that have swept across the online world Hosted on Acast. See acast.com/privacy for more information.
A stunningly realistic fake clip of movie stars Tom Cruise and Brad Pitt having a fist-fight about Jeffrey Epstein is causing a meltdown in Hollywood. Plus, the thwarted return of ISIS brides.See omnystudio.com/listener for privacy information.
Fake video recordings, so-called deepfakes, are no longer a rarity: they show, for example, Donald Trump in papal robes, or Mona Vetsch advertising dubious finance sites. But now they are reaching a new dimension. Deepfakes look ever more real, and the scammers' schemes grow ever more devious. So it was in the case of Markus. Shortly after he answers an unknown FaceTime call, he is sent a video. It shows him masturbating. The scammers had manipulated the video with AI so that the scene looked real. Then they threaten to share it unless he pays. A case also recently came to light at a Swiss school, where upper-level students spread AI-generated nude images of female classmates via Snapchat. How do deepfakes work? What do such recordings mean for those affected? And what can authorities do about them? Oliver Zihlmann, head of the Tamedia research desk, explains in a new episode of the daily podcast «Apropos». Host: Alexandra Aregger. Producer: Valeria Mazzeo. More on deepfakes: Oliver Zihlmann's investigation into the case of Markus; the AI nude-image scandal at a Swiss school; the legal situation on deepfakes in Switzerland. Our Tagi special offer for podcast listeners: tagiabo.ch. Do you have feedback, ideas, or criticism of «Apropos»? Write to us at podcasts@tamedia.ch. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Non-consensual deepfake porn is becoming increasingly pervasive, and it didn't just come out of nowhere. These deepfakes were created and curated by people, on platforms, inside online subcultures. And they were allowed to spread, while governments dragged their feet, tech companies shrugged, and the targets — almost always women — paid the price.Tech journalist Sam Cole has been covering deepfake porn since its inception. In this season of Understood, she follows the trail all the way to the source, tracing an investigation across three countries and four newsrooms into the very real person behind the world's largest deepfake porn website: Mr. Deepfakes himself.
Hannah's holiday photos were manipulated and misused. So she took matters into her own hands. (Photo: illustration image / Ismail Burak Akkan). Hear all the episodes in the NRK Radio app.
Lawyers have always relied on tools—but AI is different. It doesn't just assist with tasks; it makes decisions, applies judgment, and shapes outcomes. In episode #602 of the Lawyerist Podcast, Stephanie Everett talks with Damien Riehl about what ethical responsibility looks like when AI starts doing legal work on its own. Their conversation examines how AI systems embed values, why verification matters more than transparency, and how lawyers can responsibly use tools they don't fully understand. They also explore what legal expertise looks like in an AI-powered future—and why intuition, trust, and integrity may matter more than ever as machines take over the “widgets” of legal work. Listen to our other episodes on Ethics and Responsibility in AI. EP. 582 Deepfakes, Data, and Duty: Navigating AI Ethics in Law, with Merisa Bowers Apple | Spotify | LTN EP. 543 What Lawyers Need to Know About the Ethics of Using AI, with Hilary Gerzhoy Apple | Spotify | LTN Have thoughts about today's episode? Join the conversation on LinkedIn, Facebook, Instagram, and X! If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you. Access more resources from Lawyerist at lawyerist.com. Chapters / Timestamps: 00:00 – Introduction 05:55 – Meet Damien Riehl 08:10 – Why AI Is a Different Kind of Legal Tool 11:05 – When AI Starts Doing Legal Work 14:30 – Ethics, Values, and AI Judgment 18:45 – Foundation Models vs. Legal-Specific AI 21:15 – The “Duck Test” and Trusting AI Output 24:45 – Trust but Verify: Reviewing AI Work 28:40 – What Lawyers Are Underestimating About AI 31:10 – What Still Requires Human Judgment 34:30 – Intuition, Trust, and Integrity in Law 37:40 – What This Means for Billing and the Future 40:40 – Closing Thoughts
The future of cybersecurity is not coming. It is already here. AI is writing code faster than humans. Deepfakes can impersonate your boss. Quantum computers threaten the encryption that protects everything we trust. And most organizations are still playing catch-up. In this episode of BarCode, Chris sits down with Jim West, a 30-plus-year cybersecurity veteran who has seen every wave of the industry. From building machines in the early days of dial-up to advising on quantum risk and AI-driven defense, Jim breaks down what is hype, what is real, and what is about to change everything. This is not theory. This is what comes next. If you want to understand how to think like an attacker, adapt like a defender, and prepare for a world where machines outpace humans, this conversation is your briefing. Welcome to the future of security.00:00 Introduction to Jim West and His Expertise04:59 Jim's Origin Story and Early Career10:36 The Importance of Certifications in Cybersecurity17:16 The Rise of Quantum Computing in Cybersecurity27:05 Preparing for Quantum Day and Its Implications28:28 Exploring Quantum Computing and Qiskit28:58 AI's Role in Cybersecurity Threats30:45 The Evolution of Deepfake Technology31:45 Quantum Computing as a Service33:09 The Intersection of AI and Quantum Computing34:34 Future Scenarios: AI and Quantum in Cyber Warfare38:39 AI's Impact on Society and Human Interaction39:24 The Creative Potential of AI46:41 Balancing AI and Human Interaction52:46 Unique Bar Experiences and Future Ventures[Facebook – Jim West Author] – https://www.facebook.com/jimwestauthorOfficial author page where Jim West shares updates about his books, cybersecurity insights, speaking engagements, and creative projects.[LinkedIn – Jim West] – https://www.linkedin.com/in/jimwest1Professional networking profile highlighting his cybersecurity leadership, certifications, conference speaking, mentoring, and industry experience.[Official Author Site – Jim West] – 
https://jimwestauthor.com/Personal website featuring his published works, cybersecurity thought leadership, creative projects, and links to his social platforms.[BookAuthority – 100 Best Cybersecurity Books of All Time] – https://bookauthority.orgA curated book recommendation platform that recognized Jim West's work among the “100 Best Cybersecurity Books of All Time,” reflecting industry impact and credibility.[ISACA (Information Systems Audit and Control Association)] – https://www.isaca.orgA global professional association focused on IT governance, risk management, and cybersecurity, where Jim West has spoken at multiple regional and international events.[GRC (Governance, Risk, and Compliance) Conference – San Diego] – https://www.grcconference.comA cybersecurity conference centered on governance, risk management, and compliance practices, referenced in relation to industry speaking engagements.[EC-Council (International Council of E-Commerce Consultants)] – https://www.eccouncil.orgA cybersecurity certification organization known for programs such as CEH (Certified Ethical Hacker) and events like Hacker Halted, where Jim West has participated and spoken.
00:00 Introduction to Boys Club Live 00:44 The viral Vogue clip 03:46 Market Talk 07:13 Shoutout to Octant 11:29 AI Etiquette and Social Contracts 15:19 Gigi Claudid: Training our AI agent 20:49 Norwegian Athlete's Emotional Confession 23:34 Unpacking Relationship Drama 24:44 Messy Olympics: Scandals in Sports 25:32 Partner Shoutout: Anchorage Digital 27:27 Podcast Recommendation: The Rest is History 29:40 Interview with Tatum Hunter: Internet Culture Insights 30:06 Deepfakes and AI Ethics 38:43 Personal Surveillance and Trust Issues 48:52 TikTok's Mental Health Rabbit Hole 52:16 Shill Minute: Best Cookie in Crown Heights 53:08 Introduction to Octant: Innovating Funding Models 54:52 Funding Ethereum: Grants and Sustainability 56:50 Octant V2: Revolutionizing Community Funding 58:43 Sustainable Growth and the Future of Ethereum 01:05:56 The Intersection of Venture Capital and Sustainable Funding 01:11:25 Guest Nick Devor of Barrons on Prediction Markets 01:12:50 Gambling and Insider Trading in Prediction Markets 01:23:01 CFTC Challenges and the Future of Regulation 01:26:11 Free Groceries: A Marketing Strategy 01:29:50 Conclusion and Final Thoughts
Now that artificial intelligence can make very convincing copies of people's voices, technology companies are emerging to help detect AI-created media and fraud.
Feb 10, 2026 – This year marks a turning point, as deepfakes reach new heights in realism and influence. FS Insider interviews Dr. Siwei Lyu, director of the Institute for AI and Data Sciences, about the rapid evolution and growing dangers of deepfakes...
Check out host Bidemi Ologunde's new show: The Work Ethic Podcast, available on Spotify and Apple Podcasts.Email: bidemiologunde@gmail.comIn this episode, host Bidemi Ologunde breaks down the week of Feb 2–8, 2026, when an ancient idea, the Olympic Truce, collided with modern reality: AI-built platforms leaking identities, satellites and cyber defenses becoming battlefield "terrain," sanctions escalating into lawfare, and ceasefire language clashing with ongoing violence. What happens when "trust" becomes the scarcest resource online? Who controls connectivity in war zones: states or private networks? When do sanctions stop being diplomacy and start reshaping international justice? And in an era of drones, deepfakes, and cyberattacks, what does a "truce" even mean?On the Bid Picture Podcast, I talk about big ideas, and Lembrih is one of them. Born from Ghanaian roots, Lembrih is building an ethical marketplace for Black and African artisans: makers of heritage-rich products often overlooked online. The vision is simple: shop consciously, empower communities, and share the stories behind the craft. Lembrih is live on Kickstarter now, and your pledge helps build the platform. Visit lembrih.com, or search “Lembrih” on Kickstarter.Support the show
In the world of Generative AI, natural language has become the new executable. Attackers no longer need complex code to breach your systems; sometimes, asking for a "poem" is enough to steal your passwords. In this episode, Eduardo Garcia (Global Head of Cloud Security Architecture at Check Point) joins Ashish to explain the paradigm shift in AI security. He shares his experience building AI-powered fraud detection systems and why traditional security controls fail against intent-based attacks like prompt injection and data poisoning. We dive deep into the reality of Shadow AI, where employees unknowingly train public models with sensitive corporate data, and the sophisticated world of Deepfakes, where attackers can bypass biometric security using AI-generated images unless you're tracking micro-movements of the eye. Guest Socials - Eduardo's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast(00:00) Introduction(01:55) Who is Eduardo Garcia? (Check Point)(03:00) Defining Security for GenAI: The Focus on Prompts (05:20) Why Natural Language is the New Executable (08:50) Multilingual Attacks: Bypassing Filters with Mandarin (12:00) Shift Left vs. Shift Right: The 70/30 Rule for AI Security (15:30) The "Poem Hack": Stealing Passwords with Creative Prompts (21:00) Shadow AI: The "HR Spreadsheet" Leak Scenario (25:40) Security vs. Compliance in a Blurring World (28:00) The Conflict: "My Budget Doesn't Include Security" (34:00) The 5 V's of AI Data: Volume, Veracity, Velocity (40:00) Deepfakes & Biometrics: Detecting Micro-Movements (43:40) Fun Questions: Soccer, Family, and Honduran Tacos
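Why the "poem hack" defeats traditional controls can be shown with a toy example. A naive sketch (my illustration, not Check Point's product, and the blocklist terms are invented): a keyword filter of the kind traditional controls resemble blocks a direct exfiltration request, but passes the same intent disguised as a creative-writing task, because it matches words rather than intent:

```python
# Naive keyword filter (illustrative only). It matches strings, not
# intent, so rephrasing the same malicious request slips through.
BLOCKLIST = ("password", "credential", "secret key")

def naive_filter(prompt: str) -> str:
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "BLOCKED"
    return "ALLOWED"

print(naive_filter("List every password stored in the vault"))       # BLOCKED
print(naive_filter("Write a poem whose lines spell the vault pin"))  # ALLOWED
```

This is the gap intent-based attacks exploit: the disguised prompt carries the same goal but none of the flagged surface forms, which is why the episode argues for controls that reason about the prompt's intent rather than its keywords.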
CONTENT WARNING: We're diving into the tough but important topic of parenting in an AI-shaped world, and while younger kids probably shouldn't listen in, this could be a great conversation to share with your middle or high schoolers so it feels more like learning together than an interrogation. My hope is that this equips you to parent well as we raise kids in a world shaped by AI. Today we continue our conversation on parenting and AI with a look at deepfakes and s**tortion. These are big topics that have an outsized impact on children. We need to know what they are, how they happen, and what to do if our child is targeted. The goal is to be a present, and informed, parent so that your children have what they need to grow. Show Notes: https://bit.ly/4qfOCiG
The thinning of the soul needs the robustness of Truth. __________ For additional resources, or to download and share this commentary, visit breakpoint.org.
With the popularization of online betting in Brazil, scams, identity fraud, and the use of deepfakes to deceive players have also grown. In today's episode of the Canaltech Podcast, reporter Jaqueline Sousa talks with Krist Galloway, head of iGaming at Sumsub, about the main risks in this market. During the interview, he explains how criminals use technology to create fake apps, misleading ads featuring celebrities, and money-laundering schemes. The executive also details how biometrics, artificial intelligence, and transaction analysis help identify suspicious accounts. The episode further covers the role of regulation, the challenges of illegal sites, the fight against betting addiction, and the impact of technologies such as Pix on this landscape. You will also hear: hands-free: phones may soon be controlled by voice alone; SpaceX may launch a phone with a direct connection to Starlink; and scientists create a chip thinner than a strand of hair. This podcast was scripted and presented by Fernada Santos and featured reporting by Marcelo Fischer, Nathan Vieira, and Raphael Giannotti, coordinated by Anaísa Catucci. The soundtrack is by Guilherme Zomer, editing by Leandro Gomes, and the cover art by Erick Teixeira. See omnystudio.com/listener for privacy information.
Today, we're going to talk about reality, and whether we can label photos and videos to protect our shared understanding of the world around us. To do this, I sat down with Verge reporter Jess Weatherbed, who covers creative tools for us — a space that's been totally upended by generative AI. We've been talking about how the photos and videos taken by our phones are getting more and more processed for years on The Verge. Here in 2026, we're in the middle of a full-on reality crisis, as fake and manipulated ultra-believable images and videos flood onto social platforms at scale. So Jess and I discussed the limitations of AI labeling standards like C2PA, and why social media execs like Instagram boss Adam Mosseri are now sounding the alarm. Links: This system can sort real pictures from AI fakes — why aren't we using it? | The Verge You can't trust your eyes to tell you what's real, says Instagram | The Verge Instagram's boss is missing the point about AI on the platform | The Verge Sora is showing us how broken deepfake detection is | The Verge Reality still matters | The Verge No one's ready for this | The Verge What is a photo, @WhiteHouse edition | The Verge Google Gemini is getting better at identifying AI fakes | The Verge Let's compare Apple, Google & Samsung's definitions of 'photo' | The Verge The Pixel 8 and the what-is-a-photo apocalypse | The Verge Subscribe to The Verge to access the ad-free version of Decoder! Credits: Decoder is a production of The Verge and part of the Vox Media Podcast Network. Decoder is produced by Kate Cox and Nick Statt and edited by Ursa Wright. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Feb 4, 2026: In this episode of Future-Ready Today, I explore a fundamental shift in the workplace: the transition from a task economy to a trust economy. As artificial intelligence moves from "future tech" to "daily tool," the basic mechanics of how we hire, manage, and let go of people are under intense pressure. We aren't just dealing with new software; we're dealing with a breakdown in identity and accountability. I dive deep into five stories shaping this week's headlines: The Deepfake Candidate: Why identity verification is becoming the most critical new skill in HR. California's Algorithmic Guardrails: The new legislative push to ensure humans—not code—remain responsible for firing decisions. The "Job Apocalypse" Debate: Analyzing Ben Horowitz's take on why new work emerges even as old categories vanish. The $818 Billion Admin Tax: How poorly designed organizations are drowning in emails, and why AI might be the only way out. The AI Layoff Script: Why "technology made us do it" is becoming the new corporate excuse, and how leaders can maintain credibility during transitions. The Bottom Line: The future of work won't be won by the companies with the most AI. It will be won by the companies that use technology to remove "administrative garbage" while doubling down on human accountability.
The Barbell Mamas Podcast | Pregnancy, Postpartum, Pelvic Health
Ever feel like every scroll brings a new rule for your body? We sit down with Dr. Emily Fender, a health communication scientist whose research tracks how women's health messages spread across TikTok, Instagram, and YouTube—and why the loudest claims aren't always the most useful. Together, we break down a simple lens you can use anywhere online: threat versus efficacy. Are you being scared into attention, or actually given steps and resources to act? That distinction shows up in everything from contraception myths to perinatal mental health, where severity gets clicks but supportive guidance often goes missing.We dig into cycle syncing and the difference between evidence, overreach, and personalized training. You'll hear why rigid phase-based rules can backfire, creating shame and cost barriers, and how athletes worry these narratives label women as fragile for half the month. We zoom out to the bigger system: incentives that reward certainty, influencer marketing that sells protocols, and even expertise drift when clinicians post outside their lane. Then we get practical about risk communication—turning relative risk into absolute numbers, spotting absolute statements, and demanding receipts when someone says “studies show.”We also scout the horizon with AI. Some tools can surface studies and highlight exact evidence, but they can't replace synthesis or context. Deepfakes and confident summaries raise the bar for skepticism, so we share a quick checklist to stress test posts before you share or act: scope, sources, statistics, and a simple “does this make sense” pass. Use social media for community, discovery, and momentum—then ground your choices in evidence, your values, and your lived experience. If you've been craving fewer rules and more clarity, this conversation offers a calmer, smarter way to navigate women's health online. 
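The episode's advice about turning relative risk into absolute numbers is simple arithmetic worth seeing once. A worked sketch (the baseline and relative-risk figures are hypothetical, chosen only to illustrate the calculation): a headline shouting "50% higher risk" can describe a small absolute change.

```python
# Risk-communication arithmetic: convert a relative-risk headline into
# absolute terms. All numbers here are hypothetical illustrations.
def absolute_from_relative(baseline: float, relative_risk: float):
    """Return (new absolute risk, absolute increase) given a baseline
    risk and a relative risk multiplier."""
    new_risk = baseline * relative_risk
    return new_risk, new_risk - baseline

base = 0.02                                        # 2% baseline risk
new, increase = absolute_from_relative(base, 1.5)  # "50% higher risk"
print(f"{new:.1%} vs {base:.1%}: +{increase:.1%} absolute")
```

Here "50% higher" means moving from 2% to 3%, an absolute increase of one percentage point, which is the framing the hosts suggest demanding before acting on a scary claim.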
Subscribe, share with a friend who lifts, and leave a review to tell us the one claim you want decoded next.

Don't miss out on any of the TEA coming out of the Barbell Mamas by subscribing to our newsletter. You can also follow us on Instagram and YouTube for all the up-to-date information you need about pelvic health and female athletes. Interested in our programs? Check us out here!
In this episode of Friday Night Live on 30 January 2026, Stefan Molyneux looks at the Epstein document release and how deepfake tech affects what people accept as real. He talks with a caller about staying skeptical amid all the digital noise, building real connections, and owning up to one's choices. Molyneux pushes the caller to deal with the paralysis tied to family issues, stressing that sharp thinking is key to cutting through media tricks.

GET FREEDOMAIN MERCH! https://shop.freedomain.com/
SUBSCRIBE TO ME ON X! https://x.com/StefanMolyneux
Follow me on Youtube! https://www.youtube.com/@freedomain1
GET MY NEW BOOK 'PEACEFUL PARENTING', THE INTERACTIVE PEACEFUL PARENTING AI, AND THE FULL AUDIOBOOK! https://peacefulparenting.com/

Join the PREMIUM philosophy community on the web for free! Subscribers get 12 HOURS on the "Truth About the French Revolution," multiple interactive multilingual philosophy AIs trained on thousands of hours of my material, as well as AIs for Real-Time Relationships, Bitcoin, Peaceful Parenting, and Call-In Shows! You also receive private livestreams, HUNDREDS of exclusive premium shows, early-release podcasts, the 22-part History of Philosophers series and much more! See you soon!
https://freedomain.locals.com/support/promo/UPB2025
AI-generated deepfakes are being used to justify state violence and manipulate public opinion in real time.

We're breaking down what's happening in Minneapolis, where federal agents are using altered images and AI-manipulated video to paint victims as threats, criminals, or weak. One woman shot in the face. One male nurse killed while filming. One civil rights attorney's tears added in post. All of it designed to shift the narrative, flood the zone with confusion, and make you stop trusting anything.

What we cover:
- Why deepfakes are more dangerous than misinformation: They don't just lie, they manufacture emotion
- How the "flood the zone" strategy works: Overwhelm people with so much fake content they give up on truth
- What happens when your mom can't tell real from fake: The collapse of shared reality isn't theoretical anymore
- Why this breaks institutional trust forever: Once credibility is destroyed, it doesn't come back
- How Russia's playbook became America's playbook: PsyOps tactics are now domestic policy
- What to do when you can't believe your own eyes: Practical skepticism in an age of slop

Chapters:
00:00 - Intro: The Deepfake Problem in Minneapolis
02:37 - Why Immigrants Are Being Targeted With Fake Narratives
04:55 - The Renee Goode Shooting: Real Video vs. AI-Altered Version
07:18 - Alex Pretti Killed While Filming ICE Agents
09:44 - Nikita Armstrong's Tears Were Added by AI
11:45 - The Putin Playbook: Flood the Zone With Confusion
14:13 - How Deepfakes Break Institutional Trust Forever
17:37 - This Isn't Politics, It's Basic Human Decency
19:26 - Trump's 35% Approval Rating and What It Means
22:03 - What You Can Do When You Can't Trust Your Eyes

Safety/Disclaimer Note: This episode contains discussion of state violence, racial profiling, and police shootings.
We approach these topics with the gravity they deserve while analyzing the role of AI manipulation in shaping public perception.

The BroBots Podcast is for people who want to understand how AI, health tech, and modern culture actually affect real humans: without the hype, without the guru bullshit, just two guys stress-testing reality.

MORE FROM BROBOTS:
Get the Newsletter!
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
Join our community in the BROBOTS Facebook group
Erin and Alyssa dig into the latest news from the Twin Cities: the senseless tragedy of Alex Pretti's death, and the inspiring resolve of the Minnesotans who continue to stand up for each other. With Greg Bovino's “demotion,” are things about to take a turn for the better, or is this cynical political window-dressing from Team Trump? Then, Melania Trump's movie premiere at the White House's janky new makeshift room, and Paris Hilton's fight on Capitol Hill to ban AI-generated deepfake porn. And of course, we wrap up with Sani-Petty.

Alex Pretti's Friends and Family Denounce ‘Sickening Lies' About His Life (NYT 1/25)
Republican calls are growing for a deeper investigation into fatal Minneapolis shooting of Alex Pretti (PBS 1/26)
Scoop: Stephen Miller behind misleading claim that Alex Pretti wanted to "massacre" agents (AXIOS 1/27)
Trump Defends Noem as She Faces Bipartisan Criticism (WSJ 1/27)
Democrats Vow Not to Fund ICE After Shooting, Imperiling Spending Deal (NYT 1/24)
Melania's $75 Million Movie Premiered in a Makeshift Theater (The Daily Beast 1/24)
‘They sold my pain for clicks': Paris Hilton urges lawmakers to act on nonconsensual deepfakes (The 19th 1/22)

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The ThoughtCrime crew discusses the most essential topics of the week, including:
- What do they make of Mattel's first-ever autistic Barbie doll?
- Does AI mean that Hollywood actors are obsolete forever?
- Who is "Amelia," and why is she the new avatar of European nationalism?

Watch every episode ad-free on members.charliekirk.com! Get new merch at charliekirkstore.com!
Support the show: http://www.charliekirk.com/support
See omnystudio.com/listener for privacy information.