Artificial intelligence-based human image synthesis technique
Federal government moves to ban apps that create nude deepfake images of children. Search for alleged police killer in Victoria's high country enters its second week. Drug kingpin Tony Mokbel returns to court to overturn convictions. And Gen Z battle refuel anxiety at the pump.
The decision of the Regional Court (LG) Berlin II of 20 August 2025 (Urt. v. 20.08.2025, 2 O 202/24) concerned an AI-generated voice that sounded confusingly similar to a well-known voice. The court dealt with the questions of admissibility, the notional licence fee, and whether the use of AI software can give rise to any entitlement of its own. Although the decision concerns a rather unusual set of facts, its core findings may be of general relevance; at the very least, the decision gives food for thought. The comprehensive all-in-one package, including forms on the GDPR/TTDSG/BDSG, in the Datenschutzrecht (data protection law) advisory module. Try it free for 4 weeks! ottosc.hm/dsgvo
An open letter signed by over 20 AI experts has been sent to the government, expressing the urgent need for AI regulation in Aotearoa New Zealand. The letter points to low trust, as well as potential harms of AI, as serious issues that need to be addressed. The experts have called for regulations and guardrails to support regulatory confidence and innovation, and to reduce harm from deepfakes, fraud, and environmental costs, among others. Producer Alex spoke to The University of Canterbury's Dr Cassandra Mudgway about how unregulated AI enables gendered harm against women and vulnerable demographics through deepfakes and other AI tools. Content Warning: This story mentions Image Based Sexual Abuse.
ICYMI: Hour Three of ‘Later, with Mo'Kelly' Presents – Chris Merrill filling in for Mo'Kelly with a look at the bevy of professional athletes turning to OnlyFans to make a living in-between training AND the sad story of a California woman who lost her home after falling victim to an AI deepfake scam - on KFI AM 640…Live everywhere on the iHeartRadio app & YouTube @MrMoKelly
AI deepfakes of soap opera star Steve Burton help swindlers net $81k from a vulnerable woman, as well as her home! A teen's tragic death is being blamed on ChatGPT's "suicide coaching" in a bombshell lawsuit filed by his family. Plus, a husband is given a heavy sentence for letting his wife die in the tub because "she was too fat" to lift. Jennifer Gould reports.
Chris Cuomo responds to a new round of listener calls and viewer comments, including allegations of Donald Trump's ties to Jeffrey Epstein, questions about sanctions on Russia, and concerns from veterans about the state of the VA. He also addresses faith-driven messages, criticism over an AOC deepfake post, and comments on his interviews with Tucker Carlson, Benny Johnson, and Matt Taibbi. Cuomo reflects on charges of bias, the misuse of the word “woke,” and why civil dialogue matters in today's politics. He also revisits his longstanding interest in UFO transparency, weighing in on government secrecy and the push for accountability. Follow and subscribe to The Chris Cuomo Project on Apple Podcasts, Spotify, and YouTube for new episodes every Tuesday and Thursday: https://linktr.ee/cuomoproject Join Chris Ad-Free On Substack: http://thechriscuomoproject.substack.com Support our sponsors: http://www.kalshi.com?utm_source=chriscuomo Go to https://surfshark.com/cuomo and use code cuomo at checkout to get 4 extra months of Surfshark VPN!
German journalist Patrizia Schlosser is not afraid of topics that others would rather avoid: she has tracked down former members of the far-left terrorist group Red Army Faction, mapped the workings of the neo-Nazi scene in Germany, and investigated an online community of men who film women without their knowledge in order to share the footage on pornographic platforms.

When she began investigating abuse and so-called revenge porn on xHamster, Patrizia Schlosser came across a global network that, beyond publishing footage of unsuspecting women, also makes enormous profits. To reach the people who systematically film their victims, she decided to upload her own "material".

Patrizia speaks openly about what it means to be a journalist in a patriarchal world, how hard it is to approach victims, and what it was like when her own face appeared in deepfake pornography. About shame, anger, and the hunt for the founders of the biggest porn platforms, whose profits disappear into offshore structures.

A powerful, uncomfortable, but important episode about the abuse of power, and about what to do when you yourself appear in the very material you are professionally investigating. An episode about an investigation that also touches on what was meant to stay private.

Protagonista is a podcast with Pavla Holcová. Czech narration by Petr Gojda and Jiří Slavičinský.

Subscribe to the Protagonista newsletter here (https://investigace.ecomailapp.cz/public/form/135-944c4287a69f4094fc099a7cf7add962) and be among the first to hear about new episodes and the launch of the English version.

The Protagonista podcast series was created in 2025 as a co-production of the Czech newsroom investigace.cz, the international network of investigative journalists OCCRP, the Danish company Dark Riviera, and the French film production and distribution company Sciapode. The Protagonista series is part of the War Room Content project supported by the European Union.
August 28, 2025 ~ Todd Flood, managing partner of Flood Law, talks with Chris, Lloyd, and Jamie about a new Michigan law criminalizing the creation and distribution of "deep fakes."
Utah Attorney General Derek Brown is leading the fight to ask search engines and payment platforms to do more to fight deepfake pornography. Greg and Holly discuss.
Greetings from Osnabrück and welcome to episode 36 of the Update. Artificial intelligence is not just a tool for innovation; it has long been part of the digital arms race. Cyber attackers use AI to carry out attacks faster and with greater precision, while defenders rely on AI to detect and repel threats in real time. In conversation with Ulf, Marcel Bensmann offers fascinating insights into the question of whether AI is the attackers' weapon or the defenders' shield.
This episode features an interview with Gaurav Misra, CEO of Captions, an AI video-generation company that lets you create and edit talking videos with AI. Gaurav dives into the practical applications and future implications of AI in video, and how these tools can enhance marketing efforts for businesses of all sizes.

Key Takeaways:
- Video capabilities are improving rapidly and are now at the point where spinning up an AI-generated version of you speaking is likely better quality than anything you could deliver to camera.
- These capabilities allow marketers to spin up and test content very quickly, at far less expense than in the past.
- How people will react to content moving forward, when it becomes less and less clear what is real, remains to be seen.

Quote: "Spun up a video and it's like me wearing like a suit… I'm delivering this emotional message, but I'm delivering it so fluently with all these words that I would probably never use actually… and I'm looking at this like, shit, I couldn't be like this on camera. This is such a good delivery, such a good presentation… It just isn't actually physically possible. And I think we are at that point where I can look at that and be like, wow, I just couldn't do this. It's better than what I could do."

Episode Timestamps:
(03:13) Challenges and Opportunities in Video Content
(08:01) The Future of AI Tools in Creative Work
(24:11) Innovations in Video Generation
(28:28) Real-World Applications and Feedback
(35:27) The Future of Deep Fakes and Content Authenticity

Sponsor: Pipeline Visionaries is brought to you by Qualified.com. Qualified helps you turn your website into a pipeline generation machine with PipelineAI. Engage and convert your most valuable website visitors with live chat, chatbots, meeting scheduling, intent data, and Piper, your AI SDR. Visit Qualified.com to learn more.

Links:
- Connect with Ian on LinkedIn
- Connect with Gaurav on LinkedIn
- Learn more about Captions
- Learn more about Caspian Studios
Forget the Eiffel Tower, kids—we're climbing the recruitment rollercoaster instead. Upwork's shopping spree in Holland (Bupty? Buptie? Bupkis?), Denmark's going full Face/Off to keep Nic Cage off Viggo's jawline, and the UK is suddenly allergic to Fridays. Joel's out dropping Cole at college, so Chad is joined by Belgium's royal pain Lieven and Scotland's deep-fried-pizza poet Stephen McGrath. Loud Americans, entitled tourists, and the four-day work week—this one's got more punch than a Glasgow nightclub at 2 a.m.
In this episode of CISO Tradecraft, host G Mark Hardy engages in an insightful conversation with Dave Lewis, Global Advisory CISO from 1Password, about AI governance and its importance in cybersecurity. They discuss AI policy and its implications, the evolving nature of AI and cybersecurity, and the critical need for governance frameworks to manage AI safely and securely. The discussion delves into the visibility challenges, shadow AI, the role of credentials, and the importance of maintaining fundamental security practices amidst rapid technological advancements. They also touch on the potential risks associated with AI, the misconceptions about its impact on jobs, and the need for a balanced approach to leveraging AI in a beneficial manner while safeguarding against its threats. This episode provides valuable guidance for cybersecurity professionals and organizations navigating the complexities of AI governance.

Chapters
00:00 Introduction to AI Governance
00:30 Guest Introduction: Dave Lewis
00:49 The Importance of AI Governance
01:42 Challenges in AI Implementation
03:20 AI in the Modern Enterprise
03:49 Shadow AI and Security Concerns
04:49 AI's Impact on Jobs and Industry
05:27 The Gartner Hype Cycle and AI
05:43 AI's Influence on the Stock Market
06:14 Historical Context of AI
06:32 AI and Credential Security
08:29 The Role of Governance in AI
12:47 The Future of AI and Security
18:36 Governance and Policy Recommendations
19:26 AI Governance and Ethical Concerns
20:01 AI Self-Preservation and Human Safety
20:18 Uncontrollable AI Applications
21:17 Vectors of AI Trouble
21:58 AI Hallucinations and Data Security
22:53 AI Vulnerabilities and Exploits
26:29 Deepfakes and AI Misuse
27:33 Historical Cybersecurity Incidents
29:04 Future of AI and Job Security
33:47 Managing AI Identities and Credentials
34:21 Conclusion and Final Thoughts
What happens when your next hire isn't who they claim to be? In this eye-opening episode of The Audit, we dive deep into the alarming world of AI-powered hiring fraud with Justin Marciano and Paul Vann from Validia. From North Korean operatives using deepfakes to infiltrate Fortune 500 companies to proxy interviews becoming the new normal, this conversation exposes the security crisis hiding in plain sight.

Key Topics Covered:
- North Korean operatives stealing US salaries to fund nuclear programs
- How Figma had to re-verify their entire workforce after infiltration
- Live demonstrations of deepfake technology (Pickle AI, DeepLiveCam)
- Why 80-90% of engineers believe interview cheating is rampant
- Validia's "Truly" tool vs. Cluely's AI interview assistance
- The future of identity verification in remote work
- Why behavioral biometrics might be our last defense

This isn't just about hiring fraud—it's about the fundamental breakdown of digital trust in an AI-first world. Whether you're a CISO, talent leader, or anyone involved in remote hiring, this episode reveals threats you didn't know existed and solutions you need to implement today. Don't let your next hire be your biggest security breach. Subscribe for more cutting-edge cybersecurity insights that you won't find anywhere else.

#deepfakes #cybersecurity #hiring #AI #infosec #northkorea #fraud #identity #remote #validia
SBS Finance Editor Ricardo Gonçalves speaks with George Boubouras from K2 Asset Management about the day's sharemarket action, including why investors are increasingly confident of a September interest rate cut in the US. Plus Hannah Kwon finds out more about the way AI is being used to scam people with Jeannie Paterson from the Melbourne Law School.
Voice clones. Fake video meetings. KYC spoofing. Here's what detection can do today, where it fails, and how to tune thresholds so security doesn't kill conversion.

Deepfakes moved from novelty to business risk. Reality Defender CEO Ben Colman joins us to break down the current state of deepfake detection—and how enterprises deploy real‑time defense across video, voice, and images without crushing user experience. We dig into detection accuracy, false positives/negatives, latency targets, and when to escalate to human review. If you're defending payments, KYC/liveness, or exec impersonation scenarios, this is a pragmatic playbook you can ship now.
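As a rough illustration of the threshold-tuning idea mentioned above (a minimal sketch, not Reality Defender's actual product or API), a two-threshold routing policy is one common way to balance security against conversion: low scores pass, very high scores block, and the band in between escalates to human review. All names, scores, and threshold values below are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    PASS = "pass"            # let the session continue
    REVIEW = "human_review"  # escalate to a human analyst
    BLOCK = "block"          # stop the transaction / force step-up auth


@dataclass
class Thresholds:
    review: float  # scores at or above this go to human review
    block: float   # scores at or above this are blocked outright


def route(deepfake_score: float, t: Thresholds) -> Action:
    """Map a detector's 0-1 deepfake score to an action.

    The gap between t.review and t.block is the grey zone where a
    human decides, trading a little latency for fewer false blocks
    on legitimate users.
    """
    if deepfake_score >= t.block:
        return Action.BLOCK
    if deepfake_score >= t.review:
        return Action.REVIEW
    return Action.PASS


# Example: a conservative KYC flow keeps the review band wide so that
# borderline liveness checks don't hard-fail real customers.
kyc_thresholds = Thresholds(review=0.55, block=0.90)
print(route(0.62, kyc_thresholds))  # Action.REVIEW
print(route(0.95, kyc_thresholds))  # Action.BLOCK
print(route(0.10, kyc_thresholds))  # Action.PASS
```

Widening the review band trades analyst workload and latency for fewer false blocks on legitimate users; narrowing it does the opposite, which is essentially the security-versus-conversion trade the episode describes.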
DailyCyber The Truth About Cyber Security with Brandon Krieger
Global Threats, Deepfakes & Quantum Risk | DailyCyber 273 with Evgueni Erchov ~ Watch Now ~

In this episode of DailyCyber, I'm joined by Evgueni Erchov, Sr. Director of Research & Threat Intelligence at Cypfer. With more than 25 years of experience in IT security, forensics, blockchain, and cybercrime investigations, Evgueni shares his perspective on the ever-evolving global cyber threat landscape.
In this week's Let's Talk About This, Father McTeigue dives into the new fad of creating chatbots and videos of the dead, talking to AI versions of our ancestors, and what this means for our spiritual lives. He finishes with Weekend Readiness.

Show Notes:
- AI Resurrection: Grief and Digital Life After Death
- Deepfakes of your dead loved ones are a booming Chinese business | MIT
- 'I love you robo-dad': Meet a family using AI to preserve loved ones after death
- Dying man spends final weeks creating AI version of himself to keep his wife company
- World's first robot able to give birth to human baby
- Twenty-first Sunday in Ordinary Time | USCCB
- These Trees Survived Hiroshima: Group Plants Their Seeds Worldwide to Preserve Their Memory
- iCatholic Mobile
- The Station of the Cross Merchandise - Use Coupon Code 14STATIONS for 10% off | Catholic to the Max
- Read Fr. McTeigue's Written Works! "Let's Take A Closer Look" with Fr. Robert McTeigue, S.J. | Full Series Playlist
- Listen to Fr. McTeigue's Preaching! | Herald of the Gospel Sermons Podcast on Spotify
- Visit Fr. McTeigue's Website | Herald of the Gospel
- Questions? Comments? Feedback? Ask Father!
This week's cybersecurity updates cover three critical stories: Workday discloses a data breach connected to ongoing Salesforce compromises by the Shiny Hunters group, CEO impersonation scams using deepfake technology surge past $200 million in Q1 losses, and transcription service Otter AI faces a class action lawsuit over alleged mishandling of sensitive meeting data. Drex emphasizes the importance of security awareness training, multi-factor authentication, and establishing "trust but verify" cultures that protect employees who take extra verification steps.

Remember, Stay a Little Paranoid

X: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
Recently, a false story and AI-created image of Eminem and a student he promised to let rap with him went viral. Now, this story wasn't real, but it was shared all over the "local" internet, with other variations featuring other artists. It really showed how many people, when emotionally motivated, looked right past it. This isn't the first time (and it won't be the last) that AI, deepfakes, and false information have spread like wildfire. This got us thinking: maybe we should talk about it? And how do we navigate this new era? AI-generated stuff is here to stay, but is AI overall a bubble? Nick Mattar, an expert in how people interact with digital media and AI, joins us to discuss. Nick is part-time faculty at the Mike Ilitch School of Business at Wayne State and the founder of Marketing 1080. More: https://marketing1080.io/ On LinkedIn: https://www.linkedin.com/in/nick-mattar/ Feedback as always - dailydetroit -at- gmail -dot- com or leave a voicemail 313-789-3211. Follow Daily Detroit on Apple Podcasts: https://podcasts.apple.com/us/podcast/daily-detroit/id1220563942 Or sign up for our newsletter: https://www.dailydetroit.com/newsletter/
To celebrate International Youth Day (August 12), this special compilation episode of AI and the Future of Work brings together inspiring voices with wisdom for both young people starting out and the leaders, parents, and mentors guiding them.In this episode, we revisit key moments from four remarkable guests who share timeless lessons on navigating change, finding meaning in work, embracing vulnerability, and developing the human-centered skills that will matter most in the future.Featuring Guests:
Senator Amy Klobuchar speaks out after a vulgar deepfake video of her goes viral, a Southwest Airlines passenger attacks a gate agent after missing three standby flights, and French President Emmanuel Macron defends his lawsuit against Candace Owens. Plus, two video bloggers are struck when a vehicle crashes into a restaurant, and Florida schools test armed drones designed to respond to school shootings.
We had the chance to sit down with Senator Amy Klobuchar and chat about a variety of topics, from her recent New York Times op-ed and a recent viral and vulgar deepfake video to whether she thinks Walz will run again for governor, and more!
We kick off with a chat between Jon and current interim stand-in host Melanie Ellis [0:00 – 21:08] which takes in lots of things: tax, sponsors, and the glow of a cow to name just a few.

Then we meet episode guest Ofer Friedman, Chief Business Development Officer at AU10TIX. We discuss the evolving landscape of identity verification, particularly in the context of gambling and the rise of deepfakes; the challenges posed by organized fraud, the importance of regulatory compliance, and the future of digital identities. The discussion also touches on the need for better user experience in verification processes and the role of technology in combating fraud. And Ofer answers a key question for the industry, authentication and fakes: what if someone has an evil twin? [21:09 – 1:26:40]

Choice quotes: "The death of trust." "It's a strategy, not a weapon." "Bovine luminescence."

Ofer Friedman on LinkedIn: https://www.linkedin.com/in/ofriedman/

As ever, we thank all of our sponsors for their vibrant and excellent support. In no particular order they are: the mighty OddsMatrix Sports Betting Software Solutions – the industry go-to for sportsbook platforms and data feeds. EveryMatrix's coverage is so damn good, they're gaining tier-1 operators all the time. The proof really is in the pudding, and OddsMatrix is so, so sweet… Optimove, who turn customer data into something special, with tools that make businesses just plain work better. Optimove, your support helps us make things that bring people sunshine… Well, I say sunshine. I may mean rain, but it's all weather, am I right? Oh, and tell them you came via us and you get your first month free!

Then of course there is Clarion Gaming, providers of the magnificent ICE expo (January '26 in Barcelona) and iGB Live! in London. We love you guys BIGGER THAN THE SKY.

The Gambling Files podcast delves into the business side of the betting world. Each week, join Jon Bruford and Fintan Costello as they discuss current hot topics with world-leading gambling experts.

Website: https://www.thegamblingfiles.com/
Subscribe on Apple Podcasts: https://apple.co/3A57jkR
Subscribe on Spotify: https://spoti.fi/4cs6ReF
Subscribe on YouTube: https://www.youtube.com/@TheGamblingFilesPodcast
Fintan Costello on LinkedIn: https://www.linkedin.com/in/fintancostello/
Jon Bruford on LinkedIn: https://www.linkedin.com/in/jon-bruford-84346636/
Follow the podcast on LinkedIn: https://www.linkedin.com/company/the-gambling-files-podcast/

Sponsorship enquiries:
Identity theft, fraud, and manipulation: deepfakes are causing growing problems, because the digital forgeries keep getting better and easier to produce. Laws, vigilance, and artificial intelligence help against them - but only within limits. Brose, Maximilian; Schroeder, Carina
Dr Iga Bałos of the Chair of Intellectual Property Law at the Faculty of Law of Andrzej Frycz Modrzewski Kraków University, author of the blog sprawnaedukacja.pl, discusses the Danish government's legal proposals for changes to copyright law.
Denmark Says: You Own Your Face. Denmark just handed its citizens a legal copyright on themselves — your face, voice, and body are officially yours. Instead of clamping down on deepfake tech like a digital killjoy, the law flips the script and makes it an ownership game: if someone makes a fake of you, you can force them to take it down. Derek and Mike explore how this clever move stacks up against EU and French rules, the wild range of deepfake risks from silly to sinister, and why keeping up with AI might be the ultimate whack-a-mole.

What Are You Doing in Denmark podcast:
https://instagram.com/waydidpod
https://facebook.com/waydidpod
hello@robe-trotting.com
https://www.youtube.com/playlist?list=PLFCSH6KqKooZmSx1GJu9CWZYjX8esjl2F

Derek Hartman:
https://www.instagram.com/derekhartmandk
https://youtube.com/c/robetrotting
https://tiktok.com/@derekhartmandk
www.facebook.com/robetrotting

Mike Walsh:
https://instagram.com/phillymike999
In this episode of the podcast, the hosts discuss various topics related to cybersecurity, including the rise of deepfake technology and its implications for fraud, recent law enforcement actions against ransomware groups, and the importance of cybersecurity guidance for operational technology. They also share personal updates and reflections on their experiences in the field of cybersecurity.

Article 1: Deepfake detectors are slowly coming of age, at a time of dire need
https://www.theregister.com/2025/08/11/deepfake_detectors_fraud/?fbclid=IwY2xjawMQvfZleHRuA2FlbQIxMABicmlkETFoNm9rWWJNWkV1dUtPSnRkAR66jc0jsJa-HYp8G1s5RKkdhFQxKT6w-AE9U4RIHrmaxM2nb8PEjsqu-28ZRQ_aem_iLr1biCxA0aJQsjHzgimIw

Article 2: Justice Department Announces Coordinated Disruption Actions Against BlackSuit (Royal) Ransomware Operations
https://www.justice.gov/opa/pr/justice-department-announces-coordinated-disruption-actions-against-blacksuit-royal?fbclid=IwZXh0bgNhZW0CMTAAYnJpZBExaDZva1liTVpFdXVLT0p0ZAEe3InrOmHxL_LBD2QIzW6E_iI_LUkVEWorhy_rMaDCk8QGBYR3XXCbAXkBDrM_aem_s62yC0cBsYnk1tkI2TQajQ

Article 3: Latest CISA cyber guidance urges organizations to inventory OT assets
https://federalnewsnetwork.com/cybersecurity/2025/08/latest-cisa-cyber-guidance-urges-organizations-to-inventory-ot-assets/?fbclid=IwZXh0bgNhZW0CMTAAYnJpZBExaDZva1liTVpFdXVLT0p0ZAEezhibU0LTk5EH_k9zSMVN0wwWFk0okTg9neUT0j2pXJ2a2kLKtDhvR3qJloM_aem_GdXuQ8Kaupr9u4uLC6zlfg

Please LISTEN
* Clancy Dubos on Mayor Cantrell only having herself to blame
* Is there a leader in the Saints QB race?
* Mayor Cantrell's rise to fame and fall from grace
* States are cracking down on kratom. What does the science say about it?
* Grocery prices remain high...and tariffs could make them worse
* Deepfakes of doctors are becoming more common, harming people
* More states are cracking down on kratom. So what actually is it? What does the science say?
* Deepfake videos impersonating real doctors are being used to push fake medical advice. How big of a problem is this becoming?
Deepfake videos impersonating real doctors are being used to push fake medical advice. How big of a problem is this becoming? Tony Anscombe, chief Security Evangelist for cybersecurity company ESET, joins us.
Former economist and investment manager Gareth Morgan was caught off-guard by an AI deepfake using his voice and likeness, and he's warned people to be careful. The investment scam made the rounds on Facebook and Instagram and encouraged Kiwis to invest in a vaguely-defined US-based scheme. Gareth Morgan says his daughter showed him the scam - and it almost had him fooled. "The only giveaway is the backdrop, I don't recognise the house behind me. But everything else - the face, the lip movements, the voice, obviously - I can't tell." LISTEN ABOVE
Reimagining Intellectual Property in the Age of Luxury Tech: I'm curating this exclusive side event in Geneva on September 1 during the Luxury Innovation Summit. Limited seats, apply now to join the conversation.

On this episode, we discuss how the explosive growth of the influencer economy has created a fascinating new frontier in intellectual property law, where personal brands clash with corporate interests and digital avatars raise unprecedented legal questions.

This episode unpacks the high-stakes IP battles reshaping the $20 billion influencer industry, revealing how savvy creators protect their most valuable asset, their identity. Through compelling case studies like Charli D'Amelio's strategic trademark registrations and the legendary "Battle of the Kylies" between Jenner and Minogue, we explore how influencers transform fleeting social media fame into lasting, legally-protected brand equity.

But the legal landscape doesn't just apply to human influencers. We venture into the uncanny valley of virtual personalities like Lil Miquela and Noonoouri, examining how these digital beings, composed entirely of intellectual property, navigate contracts, licensing, and disclosure requirements. As luxury brands increasingly embrace these pixel-perfect ambassadors who never age and never sleep, the boundaries between creative assets and personas continue to blur.

The global response to these challenges reveals fascinating cultural and legal differences. From Tennessee's groundbreaking AI-ELVIS Act protecting voice rights to China's comprehensive regulations on "deep synthesis" content, we witness how legal frameworks worldwide are evolving to address deepfakes, digital cloning, and the ownership of virtual identities.

Whether you're an influencer building your personal brand, a marketer navigating partnership agreements, or simply curious about the legal infrastructure behind social media fame, this episode offers crucial insights into who truly owns your digital presence—and how to protect it. Remember: in the high-stakes world of influence, the law isn't here to rain on your parade; it's here to ensure you own the parade itself.

Subscribe now to explore the intersection of intellectual property and digital influence, and join us at the Luxury Innovation Summit 2025 in Geneva this September for our special event on IP in the age of luxury technology.
Some Catholics no doubt took offense, but no one was seriously harmed by the Pope-in-a-puffer-jacket meme. Far more sinister deepfakes are on the rise, however, with scammers now frequently using widely available technology to bilk the unwary, and political campaigns marshaling AI to sow lies about their opponents. Perhaps the greatest threat is not the deepfakes themselves but that their mere existence can cause us to question the veracity of nearly everything we see and hear online and in the media. So, how concerned should we be about the proliferation of fake media? And what, if anything, is being done to staunch the bleeding? UC Berkeley Professor Hany Farid, a pioneer in the field of digital forensics, joins Editor-in-Chief Pat Joseph live onstage to discuss what the growing onslaught of mis- and disinformation portends for our society and what we can do to manage it.

Further reading:
- Watch the full live conversation with Hany Farid on YouTube
- Find out more about California Live! events

This episode was produced by Coby McDonald. Special thanks to Hany Farid, Pat Joseph, and Nat Alcantara. Art by Michiko Toki and original music by Mogli Maureal. Additional music from Blue Dot Sessions.
We're back! Pen Tester and Team Ambush member Morgan Trust walks us through his journey into the cybersecurity field. With a can-do approach, Morgan discusses how he has developed professionally, expanding his expertise across public speaking and competitive hacking. His presentation, "The New Era of Deception: AI, Deep Fakes, and The Dark Web," has hit many a stage with these essential points to keep in mind:
- AI is increasingly being used in sophisticated phishing attacks.
- Cybersecurity practices should be proactive; be prepared for a situation.
- Understanding the evolving nature of cyber threats is vital.
Enjoy this episode featuring a balance of hobby pursuits, shared experiences in security, and informative points.

We want to hear from you! Contact us at unsecurity@frsecure.com and follow us for more!
LinkedIn: https://www.linkedin.com/company/frsecure/
Instagram: https://www.instagram.com/frsecureofficial/
Facebook: https://www.facebook.com/frsecure/
BlueSky: https://bsky.app/profile/frsecure.bsky.social
About FRSecure: https://frsecure.com/

FRSecure is a mission-driven information security consultancy headquartered in Minneapolis, MN. Our team of experts is constantly developing solutions and training to assist clients in improving the measurable fundamentals of their information security programs. These fundamentals are lacking in our industry, and while progress is being made, we can't do it alone. Whether you're wondering where to start, or looking for a team of experts to collaborate with you, we are ready to serve.
Artie Intel and Micheline Learning report on the latest in artificial intelligence and robotics for The AI Report. In this episode, groundbreaking new tools and medical advances, global policy moves, and pop culture controversies are discussed. First Up, LeBron James’ fight against bizarre deepfake videos, Ghana’s push to lead Africa’s AI race, and new allegations about AI-generated Taylor Swift content linked to Elon Musk’s AI platform. Plus, we explore Alien: Earth, a haunting new sci-fi series tackling the AI threat. Stay tuned for the trends, breakthroughs, and debates shaping our AI-powered future. This report comes from program sponsor Apple. Discover the innovative world of Apple and shop everything iPhone, iPad, Apple Watch, Mac, and Apple TV, plus explore accessories, entertainment, and expert device support at https://www.apple.com/ This is The AI Report.
Fraud just had its AI upgrade. It's faster than your payments and industrialised at a global scale. Welcome to the Scamdemic — a global wave of scams and financial crime growing faster than almost any asset class.

We're joined by Simon Taylor, Head of Strategy at Sardine and Co-Founder of 11:FS. Together, we unpack how fraud has evolved, why it's so hard to stop, and what financial institutions, fintechs, and regulators must do now.

Inside the episode:
- Why faster payments = faster fraud
- Deepfakes, fraud-as-a-service, and industrialised scam call centres
- Why liability shifts and reimbursements can fuel crime instead of stopping it
- The missed open banking opportunity: building a global "fraud utility"
- Stablecoins, AI agents, and the next wave of attack surfaces
- How to protect customers without killing innovation

Whether you work in banking, fintech, payments, or policy, this episode will change how you think about fraud prevention in the age of AI and real-time payments.
Reading Plan: Old Testament - Nehemiah 1-3Psalms - Psalm 94:8-15Gospels - Luke 11:1-13New Testament - Colossians 2:1-15Visit https://www.revivalfromthebible.com/ for more information.
Your face unlocks your phone, animates your emoji, and verifies your identity, but who actually owns the digital rights to your unique features? In this deep dive into biometric data law, we explore the high-stakes legal battles reshaping how technology interacts with our most personal physical characteristics.

When Facebook paid $650 million to settle a class action lawsuit over facial recognition, it signaled a seismic shift in how companies must approach biometric data collection. We break down the landmark cases—from White Castle's potential $17 billion fingerprint scanning liability to Clearview AI's global legal troubles for scraping billions of public photos without consent. These aren't just American concerns; we journey from China, where a professor successfully sued a wildlife park over mandatory facial scans, to India's Supreme Court ruling on the world's largest biometric ID system.

Beyond privacy concerns, fierce patent wars are erupting over who owns the methods for collecting and using biometric data. Companies battle over facial authentication patents worth billions while "liveness detection" technology becomes crucial in a world of deepfakes and digital impersonation. The stakes couldn't be higher as these technologies become embedded in everything from banking to border control.

We untangle the global patchwork of regulations emerging to govern facial recognition, from Illinois' pioneering BIPA law to Europe's strict GDPR protections and China's surprising new limits on private biometric collection. Throughout it all, a clear trend emerges: your face isn't just data, it's your identity, and increasingly, the law recognizes that distinction.

Whether you're concerned about your rights, curious about the future of facial recognition, or simply want to understand why your social media filters might be collecting more than just likes, this episode offers essential insights into the legal frameworks shaping our biometric future. Listen now to discover how to protect your digital identity in a world that increasingly wants to scan it.
Tax season scams are nothing new, but David Maimon is tracking a worrying evolution. As head of Fraud Insights at SentiLink and a professor of fraud intelligence at Georgia State University, David has been studying how organised crime groups are now blending stolen identities with generative AI and deepfake technology to outpace traditional security measures. In this conversation, he explains how identities from some of the least likely victims, including death row inmates, are being exploited to open neobank accounts, set up fake businesses, and run sophisticated bust-out schemes with a low risk of detection. David breaks down how these operations work, from creating synthetic identities using stolen Social Security numbers to manufacturing convincing documents and faces that can pass liveness checks. He reveals the telltale signs his team uncovered, such as shared physical addresses, legacy email domains, and consistent digital fingerprints that point to coordinated fraud rings. With tools like DeepFaceLive, Avatarify, and cloned voices now being deployed to bypass authentication, he warns that the gap between criminal innovation and institutional defences can be as wide as 7 to 12 months. We also explore why financial institutions struggle to detect these scams early, and why layered verification, combining real-time checks with historical identity analysis, is essential. David shares the threats on the horizon, from increasingly realistic AI-generated images to voice cloning attacks, and stresses the need for both technological solutions and public awareness to slow the momentum of these schemes. Whether you work in banking, cybersecurity, or simply want to protect your own identity, this episode offers a rare look inside the tactics, tools, and vulnerabilities shaping the next wave of financial fraud. And yes, there is still time at the end for a great book recommendation and a classic Tom Petty track.
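As a purely illustrative sketch of the kind of ring-detection signal described above, grouping applications that share an address, an email domain, and a device fingerprint can surface clusters worth investigating. The records, field names, and the min_size cutoff below are hypothetical, not SentiLink's actual pipeline.

```python
from collections import defaultdict

# Hypothetical application records; field names and values are illustrative only.
applications = [
    {"id": "A1", "address": "12 Elm St", "email_domain": "oldmail.example", "device_fp": "fp-77"},
    {"id": "A2", "address": "12 Elm St", "email_domain": "oldmail.example", "device_fp": "fp-77"},
    {"id": "A3", "address": "9 Oak Ave", "email_domain": "gmail.com",       "device_fp": "fp-12"},
    {"id": "A4", "address": "12 Elm St", "email_domain": "oldmail.example", "device_fp": "fp-77"},
]

def flag_clusters(apps, keys=("address", "email_domain", "device_fp"), min_size=3):
    """Group applications that share the same combination of attributes
    and flag groups large enough to look like a coordinated ring."""
    groups = defaultdict(list)
    for app in apps:
        groups[tuple(app[k] for k in keys)].append(app["id"])
    return [ids for ids in groups.values() if len(ids) >= min_size]

print(flag_clusters(applications))  # [['A1', 'A2', 'A4']]
```

In practice this kind of real-time clustering would be one layer, combined with the historical identity analysis the episode emphasizes, rather than a standalone check.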
This week, Bridget is joined by Producer Mike to break down the tech stories you might have missed.

Lynx coach Cheryl Reeve had some eloquent, powerful things to say about the sexist crypto bros who threw dildos at WNBA players as a publicity stunt for their new meme coin. It's less than a minute and worth a listen: https://www.reddit.com/r/Fauxmoi/comments/1mkdusa/lynx_coach_cheryl_reeve_livid_over_sex_toy/

Disgraced journalist Chris Cuomo fell for an obvious AOC deepfake video about Sydney Sweeney, then demanded she answer for it even after he knew it was fake: https://www.independent.co.uk/news/world/americas/us-politics/chris-cuomo-aoc-sydney-sweeney-jeans-b2803523.html

A jury has ruled that Meta illegally collected Flo users’ menstrual data: https://www.theverge.com/news/753469/meta-flo-period-tracker-lawsuit-verdict

TeaOnHer, a rival Tea app for men, sprang up overnight and is already leaking users’ personal data and driver’s licenses: https://techcrunch.com/2025/08/06/a-rival-tea-app-for-men-is-leaking-its-users-personal-data-and-drivers-licenses/

Grok generates fake Taylor Swift nudes without being asked: https://arstechnica.com/tech-policy/2025/08/grok-generates-fake-taylor-swift-nudes-without-being-asked/

ICYMI: Taylor Swift Twitter deep fakes are everyone’s problem: https://podcasts.apple.com/us/podcast/taylor-swift-twitter-deep-fakes-are-everyones-problem/id1520715907?i=1000643579343

Grok Imagine's 'Spicy' mode lacks basic guardrails for sexual deepfakes: https://mashable.com/article/xai-grok-imagine-sexual-deepfakes?test_uuid=003aGE6xTMbhuvdzpnH5X4Q&test_variant=b

Meta illegally collected data from Flo period and pregnancy app, jury finds: https://arstechnica.com/tech-policy/2025/08/jury-finds-meta-broke-wiretap-law-by-collecting-data-from-period-tracker-app/

Hackers Clown Trump Education Secretary With ‘Curb Your Enthusiasm’ Music and ‘Corrupt Billionaire’ Heckles: https://www.thedailybeast.com/hackers-clown-trump-education-secretary-with-circus-music-and-corrupt-billionaire-heckles/

If you’re listening on Spotify, you can leave a comment there or email us at hello@tangoti.com!

Follow Bridget and TANGOTI on social media! Many vids each week.
instagram.com/bridgetmarieindc/
tiktok.com/@bridgetmarieindc
youtube.com/@ThereAreNoGirlsOnTheInternet
Lionel expresses grave concern about existential threats to free speech, particularly the intersection of artificial intelligence (AI) and the First Amendment, and emphasizes the importance of public education on these complex legal and technological issues to safeguard fundamental freedoms. He begins by discussing freedom of speech, societal changes in attire at public places like airports, and the perceived limitations on what can be said publicly by certain demographics. He analyzes mayoral candidate Zohran Mamdani's policies through the lens of globalist agendas, the Hegelian dialectic (problem-reaction-solution), and the potential for increased surveillance and control through seemingly benevolent programs like free transit and housing. Lionel critiques the "bumper sticker sloganeering" and "incuriosity" of people who use labels like "communist," "wokester," "law and order," and "Sharia law" without understanding their true meaning or nuances. He turns to Andrew Cuomo's controversial policies, and he and callers discuss historical hoaxes. Lionel then recalls filming a scene for House of Cards with Robin Wright: while on set, he experienced excruciating, alien-like abdominal pain that led to an ambulance being called and to his being given morphine at the hospital, which instantly and completely alleviated his pain. He closes with economic and political theories, discussing figures like Marx and Engels in the context of communism's ideal of a "classless, stateless society."
Lionel expresses grave concern about existential threats to free speech, particularly focusing on the intersection of artificial intelligence (AI) and the First Amendment. Lionel emphasizes the importance of public education on these complex legal and technological issues to safeguard fundamental freedoms.
In this episode of The Healthier Tech Podcast, we dive headfirst into the uncanny rise of AI-generated influencers—and what it means for your health, your identity, and your sanity. If you've ever scrolled past a flawless face on social media and felt just a little worse about your own, this episode is for you. We break down how synthetic influencers like Lil Miquela and AI video tools like HeyGen and Sora are reshaping not only marketing and media—but also our perception of what's real, what's desirable, and what's even possible.

Highlights you won't want to miss:
- How the digital influencer economy is being infiltrated by perfect, programmable personas
- Why our brains struggle to tell the difference between reality and AI-generated content
- The psychological toll of comparing ourselves to flawless fakes
- How ideal self distortion is warping mental health, especially in teens
- Why authenticity is becoming the new luxury in the age of AI
- 5 real-world ways to protect your mental clarity and digital wellness starting today

This isn't just a tech trend—it's a cultural shift. And it's happening right now, on your feed, in your head, and across every scroll of your screen. If you care about digital wellness, tech-life balance, and protecting your mental health in a synthetic world, hit play.

This episode is brought to you by Shield Your Body—a global leader in EMF protection and digital wellness. Because real wellness means protecting your body, not just optimizing it. If you found this episode eye-opening, leave a review, share it with someone tech-curious, and don't forget to subscribe to Shield Your Body on YouTube for more insights on living healthier with technology.
Indiana State Rep. Craig Haggard, whose wife is allegedly the victim of an AI topless deepfake video, joins Kendall and Casey to discuss the controversy.