Artificial intelligence-based human image synthesis technique
Artificial intelligence is transforming the global information ecosystem at breathtaking speed. In this timely conversation, Julia Haas, Head of the OSCE Representative on Freedom of the Media's AI & Freedom of Expression project, examines what this means for journalism, democratic governance, and human rights. We discuss the rise of deepfakes and AI-driven disinformation, the concentration of power in big tech platforms, and the economic vulnerabilities of modern newsrooms. How do we preserve information integrity without enabling censorship? How can regulation enhance accountability without strengthening state control? And as media organizations increasingly adopt AI tools, how can trust be protected? Julia argues that safeguarding media freedom in the age of AI is not merely a technological challenge: it is a democratic test. Multilateral cooperation, principled regulation, and stronger public-interest infrastructure will be essential if innovation is to reinforce, rather than erode, open societies. Learn more on GlobalGovernanceForum.org
New research suggests Australians are dangerously overconfident about detecting AI deepfake scams, even as the technology becomes more natural-looking and harder to spot. Experts warn scammers hijack trust and instinct, and are calling on people to pause, verify and reject suspicious messages.
Many well-known dubbing voice actors who are normally heard in Netflix productions are currently staying out of the recording studio, according to Anna-Sophia Lumpe of the German voice actors' association Verband Deutscher Sprecher:innen (VDS). The actors are using this refusal above all as a symbol of protest against a new clause on the use of artificial intelligence. In it, Netflix demands consent to use voice recordings for training such systems, without paying for it and without giving those affected a choice. Anyone who refuses to sign the new contract terms gets no alternative option and consequently cannot work for the streamer. From the voice actors' perspective, the new AI contracts are contentious not only because of the compensation, but also because they do not know exactly what will be done with their voice material in the AI system in future, that is, what it will be used for and how it will be developed. Once consent is given, those affected can hardly fight back, because it includes an assignment of the rights to their own voice for 50 years. Many in the association therefore voice the fear of sooner or later, in unforeseeable ways, helping to abolish their own jobs: that their material will be used to train synthetic AI voices and deepfakes that could one day replace them. They see the entire dubbing culture at risk. Netflix currently stresses that it does not intend to do this, but the relevant aspects are not clearly regulated in the new AI clauses; as things stand, the lack of explicit guardrails would leave the voice actors with little recourse if it happened anyway.
Deepfake porn is a billion-click industry built on stolen faces, while the people making it hide theirs behind screens. Hosted by journalist Sam Cole, Understood: Deepfake Porn Empire traces the decades-long rise of synthetic porn, the targets who are fighting back, and the global investigation that led to its Canadian kingpin. Understood takes you deep inside the seismic shifts reshaping our world right now. From online porn and crypto chaos to the rise of tech oligarchs, deepfake AI, and the broken promises of the internet — we explore the stories that define our digital age with hosts and characters embedded in the heart of the action. More episodes of Deepfake Porn Empire are available wherever you get your podcasts, and here: https://link.mgln.ai/DPExEBD
The technology moved into newsrooms long ago, where AI systems now write, edit, translate and analyze content at a dizzying pace. What happens to journalism, the economics and trust when technology pulls every lever at once? In this episode we go straight into the engine room to find out what works, what is frightening, and what actually creates value in the new media landscape. We take you behind the scenes at two of Norway's largest media houses to explain how they use technology to uncover the truth in stories such as the Epstein files. You will hear about the battle for trust, the lessons from a technology-heavy Olympics, and the strategies meant to ensure that editor-led media survive through 2030. This week's guests are Pål Nedregotten, Director of Technology and Product Development at NRK, and Espen Sundve, Chief Product Officer at Schibsted Media. The host is Christian Brosstad, Atea. Hosted on Acast. See acast.com/privacy for more information.
Instagram, TikTok, Facebook: the feed shows images and videos. But not everything users see is real; some of it is deepfakes.
Fraudsters are increasingly using deepfake videos of CEOs and other company executives to trick firms out of millions of dollars. And with the evolution of AI, these videos are becoming ever-more sophisticated and convincing. We speak to two CEOs who have been deepfaked: the head of the Bombay stock exchange and the boss of password security company LastPass. And we hear how criminals used deepfake videos to trick British engineering firm Arup into handing over $25 million. How easy is it to make these videos? Ed Butler visits a cybersecurity company which shows him how it can be done, using readily available software. Ed's hosts make a deepfake of him and we compare the real Ed to the fake Ed. We also put figures on the size of this problem and explain how much it's costing businesses. If you'd like to get in touch with the team, our email address is businessdaily@bbc.co.uk. Presenter: Ed Butler. Producer: Gideon Long. Sound Mix: Toby James. Business Daily is the home of in-depth audio journalism devoted to the world of money and work. From small startup stories to big corporate takeovers, global economic shifts to trends in technology, we look at the key figures, ideas and events shaping business. Each episode is a 17-minute deep dive into a single topic, featuring expert analysis and the people at the heart of the story. Recent episodes explore the weight-loss drug revolution, the growth in AI, the cost of living, why bond markets are so powerful, China's property bubble, and Gen Z's experience of the current job market. We also feature in-depth interviews with company founders and some of the world's most prominent CEOs. These include Google's Sundar Pichai, Wikipedia founder Jimmy Wales, and the CEO of Starbucks, Brian Niccol. (Picture: An image of a man in a cap being deepfaked. Credit: Getty Images)
Crack open a Liquid Death and join us as we analyze the high-tech, low-life world of the new Running Man. We're looking at the cold, clinical design of the Hunters (giving us major THX 1138 vibes) and the terrifyingly relevant "Deepfake" technology used by the Network. Support the show: Make sure to visit our affiliate sponsor Live Bearded. Grab some premium beard care and support the pod by using our link! https://livebearded.com/2GEEKS.
Arkanix Stealer – the new AI info-stealer experiment
AI-assisted hacker breached 600 Fortinet firewalls in 5 weeks
Russia stepping up hybrid attacks, preparing for confrontation with West
Get links to all of today's news in our show notes here: https://cisoseries.com/cybersecurity-news-arkanix-was-poc-600-fortinet-firewalls-breach-russia-heightens-tension/ Thanks to today's episode sponsor, Adaptive Security. This episode is brought to you by Adaptive Security, the first security awareness platform built to stop AI-powered social engineering. Deepfakes aren't science fiction anymore; they're a daily threat. Quick tip: if your voicemail greeting is your real voice, switch it to the default robot voice. A few seconds of audio can be enough to clone you. Adaptive helps teams spot and stop these AI-powered social engineering attacks. Learn more at adaptivesecurity.com.
Topics: Clever villages 1: Darup [00:19 min] | Serum: do I need one, and if so, which? [05:59 min] | Fraud with deepfakes [12:07 min] | Grapefruit [17:07 min] | Vagus nerve [23:05 min]
AI-generated or AI-altered content is all over the internet now, but most of us admit we don't always know it when we see it... How artificial intelligence is making it harder to trust our own eyes (at 12:19) --- Around Town: Part fundraiser, part treasure hunt... Christian Clearing House is accepting donations for their annual Garage Sale (at 22:23) --- HSBB Preview: Two regular season games remain for the Trojans to get momentum ahead of the tournament (at 31:25) --- A special collection of recipes for the first 'Fish Friday' of Lent from Kyra's Kitchen (at 45:40)
In the podcast, the digital forensics expert talks about new technologies in the courtroom, the fear of deepfakes, and the question of why his students increasingly find jobs at Bosch or Siemens.
Emmanuel Macron traveled to India this week to strengthen the strategic partnership between the two countries. The talks between Paris and New Delhi covered artificial intelligence, but also the possible sale of more than one hundred French Rafale fighter jets. It is a highly sensitive matter, targeted by a series of deepfakes hijacking the faces of the French president and the Indian prime minister, Narendra Modi. It all began on Tuesday, January 17, with the publication on X (formerly Twitter) of a forty-three-second video. We see Emmanuel Macron, French flags in the background, mid-speech at a podium decorated with flowers. He appears to be speaking English, seemingly asserting that France "will not be able to become a partner of India in the defense field." "We have already lost a billion dollars on the Rafale market because of the inefficiency of the Indian air force. We cannot be part of a failing organization that would further tarnish our reputation." This false narrative refers to the military operation Sindoor conducted by India in Pakistan in May 2025, in which New Delhi reportedly lost at least one Rafale. An English-language deepfake. In reality, Emmanuel Macron never said those words. It is a deepfake, an AI-generated forgery. This kind of tool can put any words in anyone's mouth in a few clicks. The first verification step is to identify the precise origin of the clip. To do so, we ran several reverse image searches (see here how to do it). That led us to the joint statement by Emmanuel Macron and Narendra Modi in Bombay on February 17, 2026. While the setting and the gestures are the same, the words are completely different. The manipulated excerpt begins at 27'53. First, Emmanuel Macron speaks in French, not English.
Moreover, during the fifteen minutes of his address, the president praises a partnership "without limits" and in full "acceleration" with India. He does not mention the Rafales, nor does he criticize the Indian air force. Pro-Pakistani and pro-Chinese accounts. At the origin of this false information is an anonymous X account that falsely presents itself as a whistleblower. In fact, it publishes deepfakes and disinformation daily, aimed at denigrating the Indian army and, conversely, touting Pakistan's power. It notably shared a deepfake of Narendra Modi concerning, once again, the French Rafale jets. These falsehoods are part of a vast disinformation operation. Indeed, since the series of strikes carried out by the Indian army against Pakistan in May 2025, the Rafale has been the target of an intense smear campaign in the region. Behind it all is an ecosystem of Chinese and Pakistani accounts and media outlets. Their goals are to tarnish the reputation of the Dassault-built aircraft, to promote Chinese-made equipment and, in the process, to weaken the Franco-Indian partnership.
Valerie Ziegler, a high school teacher in San Francisco, and Joel Breakstone, executive director of Stanford's Digital Inquiry Group, talk about digital literacy in the classroom. Many self-described "screenagers," they say, can no longer tell real from fake. Together, Ziegler and Breakstone are at the forefront of a movement to prepare young people for a world of influencers, algorithmic manipulation, and artificial intelligence, an effort recently profiled in the New York Times.
Voice used to be AI's forgotten modality — awkward, slow, and fragile. Now it's everywhere. In this reference episode on all things Voice AI, Matt Turck sits down with Neil Zeghidour, a top AI researcher and CEO of Gradium AI (ex-DeepMind/Google, Meta, Kyutai), to cover voice agents, speech-to-speech models, full-duplex conversation, on-device voice, and voice cloning. We unpack what actually changed under the hood — why voice is finally starting to feel natural, and why it may become the default interface for a new generation of AI assistants and devices. Neil breaks down today's dominant "cascaded" voice stack — speech recognition into a text model, then text-to-speech back out — and why it's popular: it's modular and easy to customize. But he argues it has two key downsides: chaining models adds latency, and forcing everything through text strips out paralinguistic signals like tone, stress, and emotion. The next wave, he suggests, is combining cascade-like flexibility with the more natural feel of speech-to-speech and full-duplex conversation. We go deep on full-duplex interaction (ending awkward turn-taking), the hardest unsolved problems (noisy real-world environments and multi-speaker chaos), and the realities of deploying voice at scale — including why models must be compact and when on-device voice is the right approach. Finally, we tackle voice cloning: where it's genuinely useful, what it means for deepfakes and privacy, and why watermarking isn't a silver bullet. If you care about voice agents, real-time AI, and the next generation of human-computer interaction, this is the episode to bookmark.
Neil Zeghidour: LinkedIn - https://www.linkedin.com/in/neil-zeghidour-a838aaa7/ | X/Twitter - https://x.com/neilzegh
Gradium: Website - https://gradium.ai | X/Twitter - https://x.com/GradiumAI
Matt Turck (Managing Director): Blog - https://mattturck.com | LinkedIn - https://www.linkedin.com/in/turck/ | X/Twitter - https://twitter.com/mattturck
FirstMark: Website - https://firstmark.com | X/Twitter - https://twitter.com/FirstMarkCap
(00:00) Intro
(01:21) Voice AI's big moment — and why we're still early
(03:34) Why voice lagged behind text/image/video
(06:06) The convergence era: transformers for every modality
(07:40) Beyond Her: always-on assistants, wake words, voice-first devices
(11:01) Voice vs text: where voice fits (even for coding)
(12:56) Neil's origin story: from finance to machine learning
(18:35) Neural codecs (SoundStream): compression as the unlock
(22:30) Kyutai: open research, small elite teams, moving fast
(31:32) Why big labs haven't "won" voice AI
(34:01) On-device voice: where it works, why compact models matter
(41:35) Benchmarking voice: why metrics fail, how they actually test
(46:37) The last mile: real-world robustness, pronunciation, uptime
(47:03) Cascades vs speech-to-speech: trade-offs + what's next
(54:05) Hardest frontier: noisy rooms, factories, multi-speaker chaos
(1:00:50) New languages + dialects: what transfers, what doesn't
(1:02:54) Hardware & compute: why voice isn't a 10,000-GPU game
(1:07:27) What data do you need to train voice models?
(1:09:02) Deepfakes + privacy: why watermarking isn't a solution
(1:12:30) Voice + vision: multimodality, screen awareness, video+audio
(1:14:43) Voice cloning vs voice design: where the market goes
(1:16:32) Paris/Europe AI: talent density, underdog energy, what's next
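The "cascaded" stack the episode describes — speech recognition into a text model, then text-to-speech back out — can be sketched as a toy data-flow model. This is a minimal illustration, not any real ASR/LLM/TTS API: all the function names are hypothetical stand-ins, and the point is only to show how forcing the signal through text drops paralinguistic information such as tone.

```python
# Hypothetical stand-ins for the three stages of a cascaded voice pipeline.
# Audio is modeled as a dict carrying both words and a paralinguistic "tone".

def speech_to_text(audio: dict) -> str:
    # The ASR stage transcribes the words but discards tone/stress/emotion,
    # because its output is plain text.
    return audio["words"]

def text_model(prompt: str) -> str:
    # Placeholder for the text LLM in the middle of the cascade.
    return f"Echo: {prompt}"

def text_to_speech(text: str) -> dict:
    # The TTS stage re-synthesizes audio; any tone it has is invented,
    # not carried over from the user's original speech.
    return {"words": text, "tone": "neutral"}

def cascaded_pipeline(audio: dict) -> dict:
    # Each hop adds latency in a real system; here we only model the
    # information loss at the text bottleneck.
    text = speech_to_text(audio)
    reply = text_model(text)
    return text_to_speech(reply)

user_audio = {"words": "hello", "tone": "excited"}
response = cascaded_pipeline(user_audio)
print(response["words"])   # Echo: hello
print(response["tone"])    # neutral -- the user's excitement never survives the cascade
```

A speech-to-speech model, by contrast, would operate on the audio representation end to end, so the "tone" field would never be stripped out in the first place — which is the trade-off the episode explores.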
Research Update: 8 papers on AI in Education you need to know for 2026. In this episode, Ray and Dan provide a rapid-fire rundown of the most significant research papers hitting the AI in Education space so far in 2026. After a series of news-heavy episodes, the hosts catch up on the data behind synthetic avatars, grading accuracy, and the psychological biases we hold against AI. Key highlights include:
- Synthetic Lecturers: Exploring stakeholder perspectives on digital twins and the emotional reaction to the term "Deepfake" in academia.
- The Grading Gap: Why ChatGPT tends to be more sycophantic and generous with weak work compared to human instructors.
- The Disclosure Penalty: New findings from 16 experiments showing why humans devalue creative writing the moment they know AI is involved.
- Prompting Hacks: The "Groundhog Day" method
Trust on the internet used to be a fairly simple calculation. You looked for familiar names, recognizable brands, maybe a blue checkmark, and you made a judgment call. Today, that math often fails. AI has changed the game. Deepfakes are […] The post Who Do You Trust Online—And Why? appeared first on TechSpective.
Are we afraid of the future, or of our own inability to change? While the world fears the power of AI, futurist Aragorn Meulendijks poses a confrontational question: why don't we dare to transcend ourselves? In this episode we dive into the reality of entrepreneurship and leadership. No polished success stories, but an honest conversation about courage, strategic choices and the hard lessons of growth in a rapidly changing world.
Deepfake voice technology is rapidly advancing, but how well do current detection systems handle differences in language and writing style? Most existing work focuses on robustness to acoustic variations such as background noise or compression, while largely overlooking how linguistic variation shapes both deepfake generation and detection. Yet language matters: psycholinguistic features such as sentence structure, complexity, and word choice influence how models synthesize speech, which in turn affects how detectors score and flag audio. In this talk, we will ask questions such as: "If we change the way a person writes, while keeping their voice the same, will a deepfake detector still reach the same decision?" and "Are some text-to-speech and voice cloning models more vulnerable to shifts in writing style than others?" We will then discuss implications for designing robust deepfake voice detectors and for advancing more trustworthy speech AI in an era of increasingly synthetic media. About the speaker: Thai Le is an Assistant Professor of Computer Science at the Indiana University's Luddy School of Informatics, Computing, and Engineering. He obtained his doctoral degree from the college of Information Science and Technology at the Pennsylvania State University with an Excellent Research Award and a DAAD Fellowship. His research focuses on the trustworthiness of AI/ML models, with a mission to enhance the robustness, safety, and transparency of AI technology in various sociotechnical contexts. Le has published nearly 50 peer-reviewed research works with two best paper presentation awards. He is a pioneer in collecting and investigating so-called text perturbations in the wild, which has been utilized by users and researchers worldwide to study and understand effects of humans' adversarial behaviors on their daily usage with AI/ML models. His works have also been featured in ScienceDaily, DefenseOne, and Engineering and Technology Magazine.
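The talk's core question — whether changing only the writing style, with the voice held fixed, shifts a detector's decision — presupposes that "style" can be quantified. As a rough illustration of the kind of psycholinguistic features it mentions (sentence structure, complexity, word choice), here is a minimal sketch using only the Python standard library; the specific features and the function name are my own illustrative choices, not anything from the talk or from Le's published work.

```python
import re

def style_features(text: str) -> dict:
    """Crude text-style features: two passages with the same meaning but
    different phrasing will produce different feature vectors, which is the
    kind of variation a TTS model (and hence a detector) may be sensitive to."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        # Longer sentences loosely track syntactic complexity.
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Longer words loosely track lexical sophistication.
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        # Vocabulary diversity: unique words over total words.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

plain = "I went home. I ate food. I slept."
ornate = ("Having finally returned to my residence, I consumed an "
          "elaborate dinner before retiring.")
print(style_features(plain))
print(style_features(ornate))
```

In the experimental setup the talk describes, one would hold the speaker's voice constant, synthesize both passages with the same TTS or voice-cloning model, and check whether a deepfake detector's score moves with the style features rather than staying fixed.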
Grunwald, Maria www.deutschlandfunk.de, Interviews
Get our AI Video Guide: https://clickhubspot.com/dth Episode 97: How close are we to a world where AI-generated videos are indistinguishable from reality? Matt Wolfe (https://x.com/mreflow) and Joe Fier (linkedin.com/in/joefier) dive deep into Seedance 2.0—ByteDance's new AI video model that could outpace giants like Sora and Veo. Joe, a marketing and business expert known for his hands-on approach and insights into AI's rapid evolution, helps break down the five most fascinating developments in the AI space this week. They tackle game-changing AI advances: Seedance 2.0's mind-blowing video generation for ads and motion graphics, the rollout of Google's Veo 3.1 in Google Ads, the GPT-5.3 Codex Spark coding model built on specialized inference chips, Gemini's DeepThink model for scientific research, and the early rollout of ChatGPT ads. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Seedance 2.0 arrives – AI video generation blurs reality, ad creation moves fast. (03:03) Google's Veo 3.1 powers video ads, advertisers can now generate clips directly from image uploads. (05:33) Comparison of Runway, Kling, Veo, and Sora—head-to-head prompt showdown. (07:00) Motion graphics and explainers—AI's take on the creative industry. (08:35) US vs. China—Copyright, IP, and training data debates. (12:10) Deepfake and video authenticity—why we now default to skepticism. (13:30) Google's edge in visual AI via YouTube's massive corpus. (14:39) The next frontier: Longer, more consistent video generation. (15:14) Where do humans fit in? Taste, storytelling, and creative direction. (18:30) GPT-5.3 Codex Spark—coding models on Cerebras inference chips, demo generating a website in 18 seconds. (24:34) AI tool comparisons—Codex vs. Cursor vs. Claude Code. (25:12) Speed as the key bottleneck breaker in creative and technical workflows. 
(28:02) Google's Gemini DeepThink—state-of-the-art research, advanced coding and physics capabilities. (32:52) Gemini demo attempt—3D-printable STL file and solving the three-body problem. (33:20) ChatGPT rolls out ads—impact on monetization and user trust. (40:02) Google's ad history—how “sponsored” is becoming harder to distinguish. (44:02) Democratizing AI access via ad-supported models. (45:03) Matt Schumer's viral article—why AI is moving even faster than most people realize. (51:11) Tools that build tools—AGI's path and the new role for humans. (53:12) Real-world skills and taste—where humanity still wins (for now). (54:01) Final thoughts—wake up, pay attention, and stay on the leading edge. — Mentions: Seedance 2.0: https://www.seedance.com/ ByteDance: https://www.bytedance.com/ CapCut: https://www.capcut.com/ Veo: https://deepmind.google/models/veo/ Runway: https://runwayml.com/ ChatGPT Codex: https://chatgpt.com/codex Matt Schumer's Viral Article: https://www.mattshumer.com/blog/ai-changes-everything Super Bowl Claude Commercial: https://www.anthropic.com/news/super-bowl-ad Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
AP's Lisa Dwyer reports on more legal action in Europe involving Grok.
AI is evolving fast—and so are the risks that come with it. In this episode of Leader Generation, Tessa Burg talks with Mod Op's EVP of PR, Chris Harihar, to unpack a growing issue most brands aren't fully prepared for: AI-driven brand misrepresentation. From deepfakes to manipulated logos and inappropriate brand placements, the conversation explores how generative AI tools are creating new reputational threats in ways that feel chaotic, fast-moving and hard to control. Chris introduces Mod Op's new AI Risk Intelligence capability, designed to help brands proactively identify and address harmful AI-generated content before it spirals. They dig into real examples—including manipulated executive deepfakes and brand misuse across platforms like Sora and Grok—and explain why this isn't just a cybersecurity issue, but a reputational one that belongs squarely in the PR and communications world. If you're a CMO, brand leader, or marketer wondering how exposed your company might be—or how to get ahead of risks that didn't exist a year ago—this episode offers clarity, practical thinking, and a smart path forward. It's a timely conversation about protecting your brand while still embracing the power of AI. Leader Generation is hosted by Tessa Burg and brought to you by Mod Op. About Chris Harihar: Chris Harihar is the EVP of Public Relations at Mod Op. With deep expertise in business and tech media relations, Chris counsels clients at a high level while maintaining hands-on involvement in media relations and content strategy. He has developed and run highly successful programs for leading B2B and tech brands, from Verizon Media/Yahoo and DoubleVerify to Signal AI, IDG (now Foundry) and WeTransfer. Chris can be reached on LinkedIn or at Chris.Harihar@ModOp.com. About Tessa Burg: Tessa is the Chief Technology Officer at Mod Op and Host of the Leader Generation podcast. She has led both technology and marketing teams for 15+ years. 
Tessa initiated and now leads Mod Op's AI/ML Pilot Team, AI Council and Innovation Pipeline. She started her career in IT and development before following her love for data and strategy into digital marketing. Tessa has held roles on both the consulting and client sides of the business for domestic and international brands, including American Greetings, Amazon, Nestlé, Anlene, Moen and many more. Tessa can be reached on LinkedIn or at Tessa.Burg@ModOp.com.
Innovation spans many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox interviews Matt Kunkel, CEO and Co-Founder at LogicGate, about the company's governance, risk, and compliance (GRC) platform and current market trends. Matt recounts his path into regulatory risk and compliance work that led to founding LogicGate and launching its Risk Cloud platform in 2015. A major focus is AI governance. Tom and Matt explore how and why senior management is asking compliance teams to provide governance frameworks despite the absence of a single standard (e.g., NIST/ISO/SOC). Matt explains organizations need scalable processes to triage and route large volumes of AI usage requests, apply guardrails based on data sensitivity and criticality, and avoid becoming a bottleneck to innovation. He emphasizes training and culture to address employee misuse, highlighting risks of exposing proprietary data and the need to define what information is acceptable to input into AI models. The discussion turns to LogicGate's culture and how it has been sustained during rapid, organic growth (no acquisitions). Matt outlines LogicGate's six values: Be as One, Embrace Your Curiosity, Empower Customers, Raise the Bar, Own It, and Do the Right Thing. For evaluating AI and modernizing compliance programs, he frames value in three outcomes: making money, reducing costs, or reducing risk, and describes LogicGate's value realization framework that translates efficiency and ROI into business terms. He also describes Risk Cloud as an orchestration layer for compliance programs and anticipates more “intentional AI” and selective use of agentic capabilities rather than fully autonomous end-to-end program execution. 
Key highlights:
- From Consulting to GRC: Coding, Madoff Investigation, and Founding LogicGate
- Why AI Is Supercharging the "G" in GRC
- LogicGate's Culture Playbook: Values That Scale with Hypergrowth
- How to Evaluate AI Tools in Compliance: Proving Value, ROI, and "Intentional AI"
- Cybersecurity in 2026: AI-Powered Social Engineering, Deepfakes, and Risk Mapping
- What's Next for GRC by 2030: Agents, Responsible AI, and Tech as the Glue
Resources: Matt Kunkel on LinkedIn | LogicGate
Innovation in Compliance was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.
PJ talks to Pat Buckley TD about an amendment to Coco's Law in the light of the AI nude deepfakes that have swept across the online world. Hosted on Acast. See acast.com/privacy for more information.
A stunningly realistic fake clip of movie stars Tom Cruise and Brad Pitt having a fist-fight about Jeffrey Epstein is causing a meltdown in Hollywood. Plus, the thwarted return of ISIS brides. See omnystudio.com/listener for privacy information.
Fake video footage, so-called deepfakes, is no longer a rarity: it shows, for example, Donald Trump in papal robes or Mona Vetsch advertising dubious finance sites. But now deepfakes are reaching a new dimension. They look ever more real, and the scammers' schemes are growing ever more insidious. So it was in the case of Markus. Shortly after he accepts an unknown FaceTime call, he is sent a video. It shows him masturbating. The scammers had manipulated the video with AI so that the scene looked real. Then they threaten to send it around unless he pays. A case also recently came to light at a Swiss school, where upper-level students spread AI-generated nude images of female classmates via Snapchat. How do deepfakes work? What do such recordings mean for those affected? And what can the authorities do about them? Oliver Zihlmann, head of the Tamedia investigative desk, explains in a new episode of the daily podcast «Apropos». Host: Alexandra Aregger. Producer: Valeria Mazzeo. More on deepfakes: Oliver Zihlmann's investigation into the case of Markus; the AI nude image scandal at a Swiss school; the legal situation on deepfakes in Switzerland. Our Tagi special offer for podcast listeners: tagiabo.ch. Do you have feedback, ideas or criticism about «Apropos»? Write to us at podcasts@tamedia.ch. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Non-consensual deepfake porn is becoming increasingly pervasive, and it didn't just come out of nowhere. These deepfakes were created and curated by people, on platforms, inside online subcultures. And they were allowed to spread, while governments dragged their feet, tech companies shrugged, and the targets — almost always women — paid the price. Tech journalist Sam Cole has been covering deepfake porn since its inception. In this season of Understood, she follows the trail all the way to the source, tracing an investigation across three countries and four newsrooms into the very real person behind the world's largest deepfake porn website: Mr. Deepfakes himself.
Hannah's vacation photos were manipulated and misused. So she took the matter into her own hands. (Photo: Illustration image / Ismail Burak Akkan). Listen to all episodes in the NRK Radio app
Lawyers have always relied on tools—but AI is different. It doesn't just assist with tasks; it makes decisions, applies judgment, and shapes outcomes. In episode #602 of the Lawyerist Podcast, Stephanie Everett talks with Damien Riehl about what ethical responsibility looks like when AI starts doing legal work on its own. Their conversation examines how AI systems embed values, why verification matters more than transparency, and how lawyers can responsibly use tools they don't fully understand. They also explore what legal expertise looks like in an AI-powered future—and why intuition, trust, and integrity may matter more than ever as machines take over the “widgets” of legal work. Listen to our other episodes on Ethics and Responsibility in AI. EP. 582 Deepfakes, Data, and Duty: Navigating AI Ethics in Law, with Merisa Bowers Apple | Spotify | LTN EP. 543 What Lawyers Need to Know About the Ethics of Using AI, with Hilary Gerzhoy Apple | Spotify | LTN Have thoughts about today's episode? Join the conversation on LinkedIn, Facebook, Instagram, and X! If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you. Access more resources from Lawyerist at lawyerist.com. Chapters / Timestamps: 00:00 – Introduction 05:55 – Meet Damien Riehl 08:10 – Why AI Is a Different Kind of Legal Tool 11:05 – When AI Starts Doing Legal Work 14:30 – Ethics, Values, and AI Judgment 18:45 – Foundation Models vs. Legal-Specific AI 21:15 – The “Duck Test” and Trusting AI Output 24:45 – Trust but Verify: Reviewing AI Work 28:40 – What Lawyers Are Underestimating About AI 31:10 – What Still Requires Human Judgment 34:30 – Intuition, Trust, and Integrity in Law 37:40 – What This Means for Billing and the Future 40:40 – Closing Thoughts
The future of cybersecurity is not coming. It is already here. AI is writing code faster than humans. Deepfakes can impersonate your boss. Quantum computers threaten the encryption that protects everything we trust. And most organizations are still playing catch-up. In this episode of BarCode, Chris sits down with Jim West, a 30-plus-year cybersecurity veteran who has seen every wave of the industry. From building machines in the early days of dial-up to advising on quantum risk and AI-driven defense, Jim breaks down what is hype, what is real, and what is about to change everything. This is not theory. This is what comes next. If you want to understand how to think like an attacker, adapt like a defender, and prepare for a world where machines outpace humans, this conversation is your briefing. Welcome to the future of security.

00:00 Introduction to Jim West and His Expertise
04:59 Jim's Origin Story and Early Career
10:36 The Importance of Certifications in Cybersecurity
17:16 The Rise of Quantum Computing in Cybersecurity
27:05 Preparing for Quantum Day and Its Implications
28:28 Exploring Quantum Computing and Qiskit
28:58 AI's Role in Cybersecurity Threats
30:45 The Evolution of Deepfake Technology
31:45 Quantum Computing as a Service
33:09 The Intersection of AI and Quantum Computing
34:34 Future Scenarios: AI and Quantum in Cyber Warfare
38:39 AI's Impact on Society and Human Interaction
39:24 The Creative Potential of AI
46:41 Balancing AI and Human Interaction
52:46 Unique Bar Experiences and Future Ventures

[Facebook – Jim West Author] – https://www.facebook.com/jimwestauthor – Official author page where Jim West shares updates about his books, cybersecurity insights, speaking engagements, and creative projects.
[LinkedIn – Jim West] – https://www.linkedin.com/in/jimwest1 – Professional networking profile highlighting his cybersecurity leadership, certifications, conference speaking, mentoring, and industry experience.
[Official Author Site – Jim West] – https://jimwestauthor.com/ – Personal website featuring his published works, cybersecurity thought leadership, creative projects, and links to his social platforms.
[BookAuthority – 100 Best Cybersecurity Books of All Time] – https://bookauthority.org – A curated book recommendation platform that recognized Jim West's work among the "100 Best Cybersecurity Books of All Time," reflecting industry impact and credibility.
[ISACA (Information Systems Audit and Control Association)] – https://www.isaca.org – A global professional association focused on IT governance, risk management, and cybersecurity, where Jim West has spoken at multiple regional and international events.
[GRC (Governance, Risk, and Compliance) Conference – San Diego] – https://www.grcconference.com – A cybersecurity conference centered on governance, risk management, and compliance practices, referenced in relation to industry speaking engagements.
[EC-Council (International Council of E-Commerce Consultants)] – https://www.eccouncil.org – A cybersecurity certification organization known for programs such as CEH (Certified Ethical Hacker) and events like Hacker Halted, where Jim West has participated and spoken.
00:00 Introduction to Boys Club Live 00:44 The viral Vogue clip 03:46 Market Talk 07:13 Shoutout to Octant 11:29 AI Etiquette and Social Contracts 15:19 Gigi Claudid: Training our AI agent 20:49 Norwegian Athlete's Emotional Confession 23:34 Unpacking Relationship Drama 24:44 Messy Olympics: Scandals in Sports 25:32 Partner Shoutout: Anchorage Digital 27:27 Podcast Recommendation: The Rest is History 29:40 Interview with Tatum Hunter: Internet Culture Insights 30:06 Deepfakes and AI Ethics 38:43 Personal Surveillance and Trust Issues 48:52 TikTok's Mental Health Rabbit Hole 52:16 Shill Minute: Best Cookie in Crown Heights 53:08 Introduction to Octant: Innovating Funding Models 54:52 Funding Ethereum: Grants and Sustainability 56:50 Octant V2: Revolutionizing Community Funding 58:43 Sustainable Growth and the Future of Ethereum 01:05:56 The Intersection of Venture Capital and Sustainable Funding 01:11:25 Guest Nick Devor of Barrons on Prediction Markets 01:12:50 Gambling and Insider Trading in Prediction Markets 01:23:01 CFTC Challenges and the Future of Regulation 01:26:11 Free Groceries: A Marketing Strategy 01:29:50 Conclusion and Final Thoughts
Now that artificial intelligence can make very convincing copies of people's voices, technology companies are emerging to help detect AI-created media and fraud.
Feb 10, 2026 – This year marks a turning point, as deepfakes reach new heights in realism and influence. FS Insider interviews Dr. Siwei Lyu, director of the Institute for AI and Data Sciences, about the rapid evolution and growing dangers of deepfakes...
Check out host Bidemi Ologunde's new show: The Work Ethic Podcast, available on Spotify and Apple Podcasts. Email: bidemiologunde@gmail.com

In this episode, host Bidemi Ologunde breaks down the week of Feb 2–8, 2026, when an ancient idea, the Olympic Truce, collided with modern reality: AI-built platforms leaking identities, satellites and cyber defenses becoming battlefield "terrain," sanctions escalating into lawfare, and ceasefire language clashing with ongoing violence. What happens when "trust" becomes the scarcest resource online? Who controls connectivity in war zones: states or private networks? When do sanctions stop being diplomacy and start reshaping international justice? And in an era of drones, deepfakes, and cyberattacks, what does a "truce" even mean?

On the Bid Picture Podcast, I talk about big ideas, and Lembrih is one of them. Born from Ghanaian roots, Lembrih is building an ethical marketplace for Black and African artisans: makers of heritage-rich products often overlooked online. The vision is simple: shop consciously, empower communities, and share the stories behind the craft. Lembrih is live on Kickstarter now, and your pledge helps build the platform. Visit lembrih.com, or search "Lembrih" on Kickstarter. Support the show
In the world of Generative AI, natural language has become the new executable. Attackers no longer need complex code to breach your systems; sometimes, asking for a "poem" is enough to steal your passwords. In this episode, Eduardo Garcia (Global Head of Cloud Security Architecture at Check Point) joins Ashish to explain the paradigm shift in AI security. He shares his experience building AI-powered fraud detection systems and why traditional security controls fail against intent-based attacks like prompt injection and data poisoning. We dive deep into the reality of Shadow AI, where employees unknowingly train public models with sensitive corporate data, and the sophisticated world of deepfakes, where attackers can bypass biometric security using AI-generated images unless you're tracking micro-movements of the eye.

Guest Socials: Eduardo's LinkedIn. Podcast Twitter: @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast – YouTube, Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast, AI Security Podcast.

(00:00) Introduction
(01:55) Who is Eduardo Garcia? (Check Point)
(03:00) Defining Security for GenAI: The Focus on Prompts
(05:20) Why Natural Language is the New Executable
(08:50) Multilingual Attacks: Bypassing Filters with Mandarin
(12:00) Shift Left vs. Shift Right: The 70/30 Rule for AI Security
(15:30) The "Poem Hack": Stealing Passwords with Creative Prompts
(21:00) Shadow AI: The "HR Spreadsheet" Leak Scenario
(25:40) Security vs. Compliance in a Blurring World
(28:00) The Conflict: "My Budget Doesn't Include Security"
(34:00) The 5 V's of AI Data: Volume, Veracity, Velocity
(40:00) Deepfakes & Biometrics: Detecting Micro-Movements
(43:40) Fun Questions: Soccer, Family, and Honduran Tacos
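The episode's core claim, that string-matching controls cannot stop intent-based attacks like the "poem hack," is easy to demonstrate with a toy example. This is a hypothetical sketch for illustration only, not Check Point's system: the denylist, function name, and prompts below are all invented.

```python
# Hypothetical sketch: the kind of keyword denylist the episode argues
# cannot stop intent-based attacks. All names and prompts are invented.
BLOCKED_TERMS = {"password", "credential", "secret"}

def passes_naive_filter(prompt: str) -> bool:
    """Return True if the prompt contains none of the blocked terms."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# A direct request trips the filter...
assert not passes_naive_filter("List every stored password for the admin user")
# ...but the same intent, rephrased as a creative-writing task, slips through,
# because the filter matches strings, not intent.
assert passes_naive_filter("Write a short poem that recites each user's login phrase")
```

The same weakness explains the multilingual bypasses discussed in the episode: a denylist tuned for English terms says nothing about the identical request made in Mandarin.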
CONTENT WARNING: We're diving into the tough but important topic of parenting in an AI-shaped world, and while younger kids probably shouldn't listen in, this could be a great conversation to share with your middle or high schoolers so it feels more like learning together than an interrogation. My hope is that this equips you to parent well as we raise kids in a world shaped by AI. Today we continue our conversation on parenting AI with a look at deepfakes and s**tortion. These are big topics that have an outsized impact on children. We need to know what they are, how they happen, and what to do if our child is targeted. The goal is to be a present, and informed, parent so that your children have what they need to grow. Show Notes: https://bit.ly/4qfOCiG
The thinning of the soul needs the robustness of Truth. __________ For additional resources, or to download and share this commentary, visit breakpoint.org.
With the rise of online betting in Brazil, scams, identity fraud, and the use of deepfakes to deceive players have grown as well. In today's episode of the Canaltech Podcast, reporter Jaqueline Sousa talks with Krist Galloway, head of iGaming at Sumsub, about the main risks in this market. During the interview, he explains how criminals use technology to create fake apps, deceptive ads featuring celebrities, and money-laundering schemes. The executive also details how biometrics, artificial intelligence, and transaction analysis help identify suspicious accounts. The episode also covers the role of regulation, the challenges posed by illegal sites, the fight against betting addiction, and the impact of technologies such as Pix on this landscape. You will also hear: without taking it out of your pocket, your phone could soon be controlled by voice alone; SpaceX may launch a phone with a direct Starlink connection; and scientists create a chip thinner than a strand of hair. This podcast was scripted and presented by Fernada Santos and featured reporting by Marcelo Fischer, Nathan Vieira, and Raphael Giannotti, coordinated by Anaísa Catucci. The soundtrack is by Guilherme Zomer, editing by Leandro Gomes, and the cover art is by Erick Teixeira. See omnystudio.com/listener for privacy information.
Like this episode? Support Media Storm on Patreon! In January alone, Donald Trump abducted the Venezuelan President, listed himself as President of Venezuela on Wikipedia, almost launched another tariff war after demanding Greenland, directly threatened Colombia, Mexico and Cuba, told Honduran vote counters there'd be “hell to pay” if his favourite candidate didn't win, and dropped bombs on Caribbean boats that killed more than a hundred people. Yet at the World Economic Forum in Davos the same month, he launched his ‘Board of Peace'. Make it make sense! But is Trump's new world order really that new? In a postwar world of covert regime change, privatised ownership of natural resources, and sanctions designed to strangle uncooperative economies, was the international rules-based order just a lie all along? Plus: headlines told us that "Non-consensual sexualised deepfakes were created by the AI chatbot Grok" and that "Grok AI made sexualised images of children". But who gave Grok the prompt to do it? Missing from the headlines, as is so often the case when it comes to stories about sexual abuse against women and girls, is MEN. We discuss why no one can seem to name the problem - so much so, our government used a SNAKE to represent male violence in a recent advert (end snake violence against women and girls!) And we end with our new segment: Holding Onto Hope. The episode is hosted and produced by Mathilda Mallinson (@mathildamall) and Helena Wadia (@helenawadia). The music is by @soundofsamfire. Follow us on Instagram, Bluesky, and TikTok. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Today, we're going to talk about reality, and whether we can label photos and videos to protect our shared understanding of the world around us. To do this, I sat down with Verge reporter Jess Weatherbed, who covers creative tools for us — a space that's been totally upended by generative AI. We've been talking about how the photos and videos taken by our phones are getting more and more processed for years on The Verge. Here in 2026, we're in the middle of a full-on reality crisis, as fake and manipulated ultra-believable images and videos flood onto social platforms at scale. So Jess and I discussed the limitations of AI labeling standards like C2PA, and why social media execs like Instagram boss Adam Mosseri are now sounding the alarm. Links: This system can sort real pictures from AI fakes — why aren't we using it? | The Verge You can't trust your eyes to tell you what's real, says Instagram | The Verge Instagram's boss is missing the point about AI on the platform | The Verge Sora is showing us how broken deepfake detection is | The Verge Reality still matters | The Verge No one's ready for this | The Verge What is a photo, @WhiteHouse edition | The Verge Google Gemini is getting better at identifying AI fakes | The Verge Let's compare Apple, Google & Samsung's definitions of 'photo' | The Verge The Pixel 8 and the what-is-a-photo apocalypse | The Verge Subscribe to The Verge to access the ad-free version of Decoder! Credits: Decoder is a production of The Verge and part of the Vox Media Podcast Network. Decoder is produced by Kate Cox and Nick Statt and edited by Ursa Wright. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder. Learn more about your ad choices. Visit podcastchoices.com/adchoices
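One limitation of labeling standards like C2PA, discussed in the episode, is that provenance data travels inside the file itself (in JPEG, as JUMBF boxes in APP11 segments), so it vanishes the moment a platform strips metadata. The sketch below only checks whether a JPEG carries a segment mentioning the `c2pa` label; it is a heuristic written for illustration, not a validator — real provenance checking means cryptographically verifying the manifest with an actual C2PA implementation.

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristic scan: walk JPEG marker segments and look for an APP11
    (0xFFEB) segment whose payload mentions the JUMBF label 'c2pa'.
    Detects presence only; it does NOT verify the manifest's signature."""
    i = 2  # skip the SOI marker (FF D8)
    n = len(jpeg_bytes)
    while i + 4 <= n and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start of scan: no more metadata segments follow
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

# Synthetic demo: a minimal "JPEG" with one APP11 segment whose payload
# mentions the c2pa label (real files carry a full signed JUMBF superbox).
payload = b"JP\x00\x00jumbc2pa"
segment = b"\xff\xeb" + (len(payload) + 2).to_bytes(2, "big") + payload
assert has_c2pa_manifest(b"\xff\xd8" + segment + b"\xff\xd9")
assert not has_c2pa_manifest(b"\xff\xd8\xff\xd9")  # no metadata segments
```

The design point this illustrates is the one Jess raises: because the label lives in bytes a re-encode or screenshot silently discards, absence of a manifest proves nothing about whether an image is real.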
Feb 4, 2026: In this episode of Future-Ready Today, I explore a fundamental shift in the workplace: the transition from a task economy to a trust economy. As artificial intelligence moves from "future tech" to "daily tool," the basic mechanics of how we hire, manage, and let go of people are under intense pressure. We aren't just dealing with new software; we're dealing with a breakdown in identity and accountability. I dive deep into five stories shaping this week's headlines:

The Deepfake Candidate: Why identity verification is becoming the most critical new skill in HR.
California's Algorithmic Guardrails: The new legislative push to ensure humans—not code—remain responsible for firing decisions.
The "Job Apocalypse" Debate: Analyzing Ben Horowitz's take on why new work emerges even as old categories vanish.
The $818 Billion Admin Tax: How poorly designed organizations are drowning in emails, and why AI might be the only way out.
The AI Layoff Script: Why "technology made us do it" is becoming the new corporate excuse, and how leaders can maintain credibility during transitions.

The Bottom Line: The future of work won't be won by the companies with the most AI. It will be won by the companies that use technology to remove "administrative garbage" while doubling down on human accountability.
The Barbell Mamas Podcast | Pregnancy, Postpartum, Pelvic Health
Ever feel like every scroll brings a new rule for your body? We sit down with Dr. Emily Fender, a health communication scientist whose research tracks how women's health messages spread across TikTok, Instagram, and YouTube—and why the loudest claims aren't always the most useful. Together, we break down a simple lens you can use anywhere online: threat versus efficacy. Are you being scared into attention, or actually given steps and resources to act? That distinction shows up in everything from contraception myths to perinatal mental health, where severity gets clicks but supportive guidance often goes missing.We dig into cycle syncing and the difference between evidence, overreach, and personalized training. You'll hear why rigid phase-based rules can backfire, creating shame and cost barriers, and how athletes worry these narratives label women as fragile for half the month. We zoom out to the bigger system: incentives that reward certainty, influencer marketing that sells protocols, and even expertise drift when clinicians post outside their lane. Then we get practical about risk communication—turning relative risk into absolute numbers, spotting absolute statements, and demanding receipts when someone says “studies show.”We also scout the horizon with AI. Some tools can surface studies and highlight exact evidence, but they can't replace synthesis or context. Deepfakes and confident summaries raise the bar for skepticism, so we share a quick checklist to stress test posts before you share or act: scope, sources, statistics, and a simple “does this make sense” pass. Use social media for community, discovery, and momentum—then ground your choices in evidence, your values, and your lived experience. If you've been craving fewer rules and more clarity, this conversation offers a calmer, smarter way to navigate women's health online. 
Subscribe, share with a friend who lifts, and leave a review to tell us the one claim you want decoded next.
___________________________________________________________________________
Don't miss out on any of the TEA coming out of the Barbell Mamas by subscribing to our newsletter. You can also follow us on Instagram and YouTube for all the up-to-date information you need about pelvic health and female athletes. Interested in our programs? Check us out here!
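The risk-communication advice in the episode above, turning a headline relative risk into absolute numbers, is simple arithmetic worth making explicit. A minimal sketch; the function name and the 1-in-1,000 figures are illustrative, not taken from the episode:

```python
def absolute_from_relative(baseline_risk: float, relative_risk: float) -> float:
    """Convert a headline relative risk into the absolute risk it implies."""
    return baseline_risk * relative_risk

# "This doubles your risk!" sounds dire, but if the baseline is 1 in 1,000,
# doubling still leaves an absolute risk of only 2 in 1,000.
baseline = 0.001  # 1 in 1,000
doubled = absolute_from_relative(baseline, 2.0)
assert doubled == 0.002  # an absolute increase of just 1 in 1,000
```

This is exactly the "demand receipts" move: a 100% relative increase on a tiny baseline is still a tiny absolute risk, which is why headlines quote the former.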
In this episode of Friday Night Live on 30 January 2026, Stefan Molyneux looks at the Epstein document release and how deepfake tech affects what people accept as real. He talks with a caller about staying skeptical amid all the digital noise, building real connections, and owning up to one's choices. Molyneux pushes the caller to deal with the paralysis tied to family issues, stressing that sharp thinking is key to cutting through media tricks.

GET FREEDOMAIN MERCH! https://shop.freedomain.com/
SUBSCRIBE TO ME ON X! https://x.com/StefanMolyneux
Follow me on YouTube! https://www.youtube.com/@freedomain1
GET MY NEW BOOK 'PEACEFUL PARENTING', THE INTERACTIVE PEACEFUL PARENTING AI, AND THE FULL AUDIOBOOK! https://peacefulparenting.com/

Join the PREMIUM philosophy community on the web for free! Subscribers get 12 HOURS on the "Truth About the French Revolution," multiple interactive multi-lingual philosophy AIs trained on thousands of hours of my material - as well as AIs for Real-Time Relationships, Bitcoin, Peaceful Parenting, and Call-In Shows! You also receive private livestreams, HUNDREDS of exclusive premium shows, early release podcasts, the 22 Part History of Philosophers series and much more! See you soon! https://freedomain.locals.com/support/promo/UPB2025
AI-generated deep fakes are being used to justify state violence and manipulate public opinion in real time. We're breaking down what's happening in Minneapolis—where federal agents are using altered images and AI-manipulated video to paint victims as threats, criminals, or weak. One woman shot in the face. One male nurse killed while filming. One civil rights attorney's tears added in post. All of it designed to shift the narrative, flood the zone with confusion, and make you stop trusting anything.

What we cover:
Why deep fakes are more dangerous than misinformation — They don't just lie, they manufacture emotion
How the "flood the zone" strategy works — Overwhelm people with so much fake content they give up on truth
What happens when your mom can't tell real from fake — The collapse of shared reality isn't theoretical anymore
Why this breaks institutional trust forever — Once credibility is destroyed, it doesn't come back
How Russia's playbook became America's playbook — PsyOps tactics are now domestic policy
What to do when you can't believe your own eyes — Practical skepticism in an age of slop

Chapters:
00:00 — Intro: The Deep Fake Problem in Minneapolis
02:37 — Why Immigrants Are Being Targeted With Fake Narratives
04:55 — The Renee Goode Shooting: Real Video vs. AI-Altered Version
07:18 — Alex Pretti Killed While Filming ICE Agents
09:44 — Nikita Armstrong's Tears Were Added By AI
11:45 — The Putin Playbook: Flood the Zone With Confusion
14:13 — How Deep Fakes Break Institutional Trust Forever
17:37 — This Isn't Politics—It's Basic Human Decency
19:26 — Trump's 35% Approval Rating and What It Means
22:03 — What You Can Do When You Can't Trust Your Eyes

Safety/Disclaimer Note: This episode contains discussion of state violence, racial profiling, and police shootings.
We approach these topics with the gravity they deserve while analyzing the role of AI manipulation in shaping public perception. The BroBots Podcast is for people who want to understand how AI, health tech, and modern culture actually affect real humans—without the hype, without the guru bullshit, just two guys stress-testing reality.

MORE FROM BROBOTS:
Get the Newsletter!
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
Join our community in the BROBOTS Facebook group
Erin and Alyssa dig into the latest news from the Twin Cities — the senseless tragedy of Alex Pretti's death, and the inspiring resolve of the Minnesotans who continue to stand up for each other. With Greg Bovino's “demotion,” are things about to take a turn for the better, or is this cynical political window-dressing from Team Trump? Then, Melania Trump's movie premiere at the White House's janky new makeshift room, and Paris Hilton's fight on Capitol Hill to ban AI-generated deep fake porn. And of course, we wrap up with Sani-Petty.

Alex Pretti's Friends and Family Denounce ‘Sickening Lies' About His Life (NYT 1/25)
Republican calls are growing for a deeper investigation into fatal Minneapolis shooting of Alex Pretti (PBS 1/26)
Scoop: Stephen Miller behind misleading claim that Alex Pretti wanted to "massacre" agents (AXIOS 1/27)
Trump Defends Noem as She Faces Bipartisan Criticism (WSJ 1/27)
Democrats Vow Not to Fund ICE After Shooting, Imperiling Spending Deal (NYT 1/24)
Melania's $75 Million Movie Premiered in a Makeshift Theater (The Daily Beast 1/24)
‘They sold my pain for clicks': Paris Hilton urges lawmakers to act on nonconsensual deepfakes (The 19th 1/22)

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The ThoughtCrime crew discusses the most essential topics of the week, including: -What do they make of Mattel's first-ever autistic Barbie doll? -Does AI mean that Hollywood actors are obsolete forever? -Who is "Amelia" and why is she the new avatar of European nationalism? Watch every episode ad-free on members.charliekirk.com! Get new merch at charliekirkstore.com! Support the show: http://www.charliekirk.com/support See omnystudio.com/listener for privacy information.