Artificial intelligence-based human image synthesis technique
In this week's Crime Roundup, Sheryl McCollum and Joshua Schiffer discuss one of the most urgent and misunderstood frontiers in criminal law: who owns your identity in the age of AI? From celebrities digitally resurrected without consent to student-athletes blocked from profiting off their own names, Sheryl and Joshua expose the legal chaos surrounding name, image, and likeness (NIL). They share real cases involving image theft, fraud, and stalking to show just how deep this rabbit hole goes, including how NIL rights intersect with immigration law, gambling, and even organized crime. Whether it’s a deepfake of Robin Williams or a repurposed viral video, the message is clear: your face, your voice, and your story... they’re assets. And they’re up for grabs. Highlights: (0:00) Welcome to Crime Roundup with Sheryl McCollum and Joshua Schiffer (1:00) The power of presence: from Michael Jackson to Jennifer Aniston (3:30) Robin Williams’ image and the dangers of AI-generated deepfakes (5:30) What is NIL, and why it matters in crime, sports, and identity (9:00) Real-world image theft: fraud, fake agents, and misused images in advertising (10:45) When student visas block NIL deals (12:00) Who owns the school logos in NIL endorsements? (14:30) Why you cannot always use your own sports photos (17:00) Free speech vs. image rights: parody, politics, and the First Amendment (18:30) Public figures and stalking: where the law draws the line (24:45) Illegal gambling, NIL, and the risk of game-fixing (29:30) Protecting yourself in the digital age (34:45) This week’s message: “The capacity for love knows no bounds.” About the Hosts Joshua Schiffer is a veteran trial attorney and one of the Southeast’s most respected legal voices. He is a founding partner at ChancoSchiffer P.C., where he has litigated high-stakes criminal, civil rights, and personal injury cases for over two decades. 
Known for his bold courtroom presence and ability to clearly explain complex legal issues, Schiffer is a frequent media contributor and a fearless advocate for accountability. Sheryl “Mac” McCollum is an Emmy Award-winning CSI, a writer for CrimeOnline, a forensic and crime scene expert for Crime Stories with Nancy Grace, and co-author of the textbook Cold Case: Pathways to Justice. She is the founder and director of the Cold Case Investigative Research Institute, a national collaboration that advances techniques for solving cold cases and assists families and law enforcement with unsolved homicides, missing persons, and kidnappings.
What if protecting your digital twin becomes the new cyber hygiene? In this week's episode, Ron welcomes back cybersecurity leader Jason Rebholz, CEO of Evoke, to discuss how AI is reshaping the fundamentals of cyber hygiene. From data breaches and deepfakes to everyday habits that protect our digital lives, Jason shares how small actions and smarter use of AI can make all the difference. Together, they uncover how our growing digital footprints are giving rise to digital twins, AI replicas that can mirror our behaviors, voices, and even decisions, and what that means for the future of trust, identity, and security. Impactful Moments: 00:00 - Introduction 01:00 - The Neon app data leak story 03:00 - Why our voices are the new passwords 05:00 - How AI can strengthen cyber hygiene 07:00 - Jason's mission to secure AI systems 09:00 - AI as a force multiplier for defenders 11:00 - Deepfakes and the new social engineering playbook 13:00 - Attackers' use of AI and what it means for us 15:00 - The rise of digital twins and identity threats 19:00 - How to defend against “yourself” online 20:00 - Final reflection: Trust in the AI age Links: Connect with Jason on LinkedIn: https://www.linkedin.com/in/jrebholz/ Check out the TechCrunch article on the Neon app data leak story: https://techcrunch.com/2025/09/25/viral-call-recording-app-neon-goes-dark-after-exposing-users-phone-numbers-call-recordings-and-transcripts/ Check out our upcoming events: https://www.hackervalley.com/livestreams Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com Continue the conversation by joining our Discord: https://hackervalley.com/discord Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Ben can't figure out if he is more scared of sassy Jake Paul being real or AI. Go to https://zocdoc.com/MAMA to find and instantly book a top-rated doctor today. Watch the video version of the episode here: https://youtu.be/onAO0MnGozw Follow my Instagram: https://www.instagram.com/benoftheweek Business Inquiries: benoftheweek@night.co Originally produced by Studio71. But now it's produced by meee :) Learn more about your ad choices. Visit megaphone.fm/adchoices
In episode 582 of Lawyerist Podcast, Zack Glaser talks with Merisa Bowers, Loss Prevention and Outreach Counsel at the Ohio Bar Liability Insurance Company, about how artificial intelligence is reshaping lawyers' ethical duties. Merisa explains how deepfakes and realistic scams are creating new challenges for diligence and verification, why unregulated chatbots can accidentally create attorney-client relationships, and what disclosures lawyers should make when using AI tools. She also shares practical steps to maintain confidentiality, protect client data, and apply long-standing ethics rules to fast-changing technologies.

Links from the episode: ABA Formal Opinion 512 - Generative AI | ABA Formal Opinion 510 - Prospective Clients & Rule 1.18

Listen to our previous episodes about non-lawyer ownership: #354: A Look at the New Non-lawyer Firm Ownership Reform, with Lori Gonzalez: Apple Podcasts | Spotify | Lawyerist #355: A Look at the New Non-lawyer Firm Ownership Reform, Pt. 2, with Allen Rodriguez: Apple Podcasts | Spotify | Lawyerist #221: The State of the Legal Profession, with ABA President Robert M. Carlson: Apple Podcasts | Spotify | Lawyerist

Have thoughts about today's episode? Join the conversation on LinkedIn, Facebook, Instagram, and X! If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you. Access more resources from Lawyerist at lawyerist.com.

Chapters / Timestamps:
0:00 – ClioCon
4:45 – Meet Merisa Bowers
6:50 – Tech Shifts & New Ethics Risks
9:10 – Deepfakes & Diligence
13:40 – AI Scams & Fake Clients
18:30 – Chatbots Creating Clients
23:40 – Ethical Chatbot Models
26:45 – Should Lawyers Disclose AI?
29:40 – Don't Let AI Think for You
34:20 – Protecting Client Data
36:10 – Staying Ethical with AI
37:40 – Wrap-Up & Final Thoughts
Imagine algorithms that feel more attentive than people – always patient, never tired, and ready with the perfect follow-up question. That illusion of empathy is already shaping how buyers research, decide, and stay loyal. We sit down with Mark Schaefer to unpack the human side of AI: cognitive offloading at scale, de-skilling that changes how customers think, and what it means when assistants default to a single recommendation instead of a page of links.

We dig into the new rules of discoverability and why text-rich content, precise transcripts, and entity clarity now carry outsized weight with large language models. Mark explains why PR is having a renaissance as AI shifts from counting backlinks to interpreting authoritative mentions, and how urgency matters because early movers will lock in the default answers. We also explore the power of overrides – those human factors that still trump machine suggestions, like word-of-mouth, brand preference, and community trust that carries through to the moment of choice.

The conversation moves beyond tactics to tackle a looming trust crisis. Deepfakes, synthetic media, and frictionless misinformation demand visible transparency: provenance for assets, clear disclosures, and rapid response playbooks. Yet there's optimism here. With daily experimentation and a strong learning community, founders and teams can use AI to compress go-to-market cycles, scale real empathy in service, and design products faster – without losing the human touch that makes brands memorable and resilient.

If you're ready to future-proof your marketing while staying deeply human, this one's for you. Listen, subscribe, and leave a review to help others find the show – and tell us: what's your smartest move to stay findable and trusted in an AI-first world?

This episode was recorded through a Descript call on October 7, 2025. Read the blog article and show notes here: https://webdrie.net/how-ai-changes-your-customers-mark-schaefers-new-book
P&C drink and review Festifall, a Märzen lager by Southcounty Brewing, then discuss the prospect of a world without cash. Deepfakes, fake documents, and AI-generated "evidence" are becoming harder to spot — and that might be what finally pushes governments and businesses to demand trusted digital sources. But that trust comes at a price: a digital ID system that could track who you are, what you do, and even what you buy.

In this episode, we explore:
* How fake videos and online deception could accelerate the rollout of digital IDs.
* The battle between centralized systems (government-controlled) and decentralized ones (blockchain-based).
* The alleged benefits for individuals, businesses, and governments: convenience, security, efficiency, fraud prevention.
* The hidden dangers: surveillance, data breaches, social control, and the loss of financial and personal freedom.
* Real-world examples from China's social credit system to Canada's Freedom Convoy — and what they reveal about where this might lead.
* Whether we're even ready for this kind of technology — or if policymakers are racing ahead of reality.

And yes, the biblical echoes in Revelation 13: the "mark of the beast" that decides who can buy or sell.

Bottom line: Digital ID could make life easier — or it could make freedom optional. The difference lies in how it's built, who controls it, and whether we keep cash, privacy, and choice alive.
AI video creation software is advancing rapidly and some of its output is very alarming. OpenAI's Sora, currently the most downloaded app in the App Store, allows users to create incredibly realistic deepfake videos with minimal effort. One viral example? A fake video of OpenAI CEO Sam Altman shoplifting in a department store. With technology this convincing, how can we trust what we see online? And what kind of destabilizing impact could this have on our society? Guests: Max Read, journalist, screenwriter, editor, former editor at Gawker and Select All Alice Marwick, director of research, Data & Society Jason Koebler, co-founder, 404 Media Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of TechMagic, hosts Cathy Hackl and Lee Kebler unpack the week's biggest tech stories, from OpenAI's latest announcements and Sora's deepfake potential to Meta's sold-out Ray-Ban smart glasses. They explore how AI integrations with platforms like Coursera, Canva, and Zillow could reshape how we access and trust information. The hosts also dive into the implications of skyrocketing AI startup valuations and reflect on the end of AOL dial-up, closing a chapter in internet history. Balancing enthusiasm with caution, they challenge listeners to consider the ethical, social, and creative consequences of an increasingly AI-driven world.

Come for the tech and stay for the magic!

Episode Highlights:

The Privacy Implications of AI Video Generation – Cathy Hackl's hands-on test of OpenAI's Sora exposes major privacy challenges around biometric data and deepfake creation. By simply reading numbers and moving one's head, users can generate hyper-realistic videos of themselves or others, without consent. While the visuals are striking, voice replication remains flawed. Cathy and Lee caution that such tools blur ethical lines, raising urgent questions about consent, authenticity, and data ownership in creative AI adoption.

OpenAI's Vision for Information Control – Lee Kebler dissects OpenAI's ambition to centralise online experiences through deep integrations with Coursera, Canva, and Zillow. By keeping users inside ChatGPT's ecosystem, OpenAI could redefine how information flows, but at a cost. Cathy and Lee warn that algorithmic filtering may restrict perspective diversity, amplify bias, and erode the open web's foundational freedom. Their advice: diversify digital sources to maintain balanced, transparent information access.

The Rise of Consumer AR Hardware – Meta's Ray-Ban smart glasses sell-out reveals the next leap in wearable adoption. Cathy and Lee analyse how fashion-forward design, practical utility, and accessible pricing have made AR more approachable. Positioned between novelty and necessity, these wearables hint at future accessibility, productivity, and communication uses. They stress that long-term success will depend on ergonomic design, battery life, and balancing innovation with real-world wearability.

The AI Investment Bubble Warning Signs – Cathy and Lee discuss mounting worries from industry giants like Sam Altman and Jeff Bezos about inflated AI valuations. Unlike past bubbles driven by retail hype, today's surge stems from institutional overconfidence and misunderstanding of AI fundamentals. Many investors conflate general AI with LLMs, fuelling unrealistic expectations.

Key Discussion Topics:
00:00 Intro: Welcome to Tech Magic with Cathy Hackl and Lee Kebler
01:44 Testing OpenAI's Sora: Deep Fake Capabilities and Privacy Concerns
15:02 OpenAI's Dev Day: Impressive Tools and Integration Announcements
18:09 The Dark Side of AI Integration: Information Control and Filter Bubbles
25:23 Apple's Vision Pro Updates and Meta's Ray-Ban Success
32:41 Farewell to AOL: The End of Dial-Up Internet Era
34:17 AI Startup Valuations: Warning Signs of a Tech Bubble
38:05 Final Thoughts: Music Recommendations and Show Wrap-Up

Hosted on Acast. See acast.com/privacy for more information.
Learn more about Blackbird: https://www.blackbird.vc/get-investment?utm_source=centennial Fanfic form: https://docs.google.com/forms/d/e/1FAIpQLScF1m6gAh3gWMjpOPdDHIamwJLcOSLawoTBTjq2C9dIv7hDog/viewform?usp=preview Please consider buying us a coffee or subscribing to a membership to help keep Centennial World's weekly podcasts going! Every single dollar goes back into this business.
He started small, swiping cards, buying gift cards, and cashing out. It spiraled into a full-blown criminal enterprise. Dozens of co-conspirators, stacks of stolen plastic, and a lifestyle built on chaos.

Meet Nathan Michael, leader of the Oak Cliff Swipers.

Sponsors

Support for this show comes from ThreatLocker®. ThreatLocker® is a Zero Trust Endpoint Protection Platform that strengthens your infrastructure from the ground up. With ThreatLocker® Allowlisting and Ringfencing™, you gain a more secure approach to blocking exploits of known and unknown vulnerabilities. ThreatLocker® provides Zero Trust control at the kernel level that enables you to allow everything you need and block everything else, including ransomware! Learn more at www.threatlocker.com.

Support for this show comes from Pantheon. Pantheon keeps your site fast, secure, and always on. That means better SEO, more conversions, and no lost sales from downtime. But this isn't just a business win; it's a developer win too. Your team gets automated workflows, isolated test environments, and zero-downtime deployments. Visit Pantheon.io, and make your website your unfair advantage.

Support for this show comes from Adaptive Security. Deepfake voices on a Zoom call. AI-written phishing emails that sound exactly like your CFO. Synthetic job applicants walking through the front door. Adaptive is built to stop these attacks. They run real-time simulations, exposing your teams to what these attacks look like to test and improve your defences. Learn more at adaptivesecurity.com.
The lives of several young university students fell apart when they discovered photos of themselves edited with artificial intelligence: the faces were theirs, but the naked bodies were not. And so began an odyssey to have this aggression recognized as a crime that would not remain unpunished. You can find a transcript of the episode on our website, as well as an English translation.

♥ We live in difficult times. We are a nonprofit outlet, and our survival depends on listeners like you. If you value our work, join Deambulantes, our membership program. Help us elevate Latino voices and tell the stories of our communities. Your contribution goes directly into our journalism and makes all the difference.

★ If you don't want to miss an episode, subscribe to our newsletter and get an email every Tuesday. On Fridays, we'll also send you five inspiring recommendations from the team for the weekend.

✓ Listening to improve your Spanish? We have something extra for you: try our app Jiveworld, designed for intermediate learners who want to study with our episodes.

See omnystudio.com/listener for privacy information.
Trump Blocked from Deploying National Guard to Oregon – Court says you can't "borrow" another state's troops; politics meets civics class.
Gaza Ceasefire Talks Begin in Egypt – Hostages, aid, airstrikes on the table; optimism on backorder.
Spotify CEO Daniel Ek Steps Down – Co-CEO era begins; artists still want real payouts.
Mark Sanchez Felony Charge (Parking Lot Fight) – From color commentary to charge sheet; "Parking Lot Rumble" replaces "Butt Fumble."
OpenAI + AMD Mega Deal – AI arms race picks teams; GPUs choose sides.
Americans Don't Trust AI in Elections – Deepfakes, disinfo, and regulation wish-casting.
Tech Firms Crack Down on AI Nudity – Platforms promise fewer deepfakes; enforcement question marks.
Everest Blizzard Rescue – Hundreds guided to safety in Tibet; climate chaos meets adventure tourism.
MIT: Musicians Focus Better – Selective attention advantage; practice > procrastination.
Diddy Sentenced to 50 Months – Accountability message from the bench; appeal incoming. TikTok: (search) diddy sentenced 50 months, judge subramanian diddy sentencing
Louis C.K. Performs in Saudi Arabia – "Art vs. optics" debate ignites, again.
Supreme Court Rejects Ghislaine Maxwell Appeal – 20-year sentence stands; unanswered questions linger.

LINKS
https://instagram.com/itsnewstous
https://tiktok.com/@itsnewstous

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Scammers are trying to swindle people out of money by creating fake online profiles of Flemish celebrities. Fatma Taspinar, Peter Van de Veire, Jacotte Brokken... they all seem to be selling something, but none of it is real. So how can you tell real from fake? Deepfake videos, for example, are now so good that you can no longer see that it is a fake Fatma/Peter/Jacotte speaking. Computer scientist Dr. Hannes Mareen (UGent) warns that we must look ever more critically at what we see.

Guest speaker: Hannes Mareen
Presented by: Lotte De Caluwé
Research: Helene Vanlathem
Final editing: Katleen Bracke
Editing: Alexander Van Vlierberghe

This podcast is made possible with the cooperation of KU Leuven, UAntwerpen, UGent, UHasselt, VUB and the Jonge Academie, and is produced with the support of VRT, the National Lottery and the Flemish government.
Duration: 00:04:11 - Un monde connecté - by François Saltiel - Sam Altman's company has just released Sora 2, a social network that allows the unrestrained production of short AI-generated videos, a product with very thin guardrails.
Deepfake attacks are exploding, and your company is probably not ready. In this episode of The Backup Wrap-up, we dive into how cybercriminals are using AI to clone voices and create fake videos to authorize fraudulent wire transfers and reset credentials. With nearly 50% of businesses already experiencing deepfake attacks, this isn't a future problem – it's happening right now. We break down the two main attack vectors: authorization fraud (where fake CEOs trick employees into wiring money) and credential theft (where attackers reset passwords and MFA tokens). More importantly, we give you actionable defense strategies: multi-channel verification protocols, callback procedures for sensitive transactions, employee training programs, and break-glass scenarios. You'll learn what not to rely on (spoiler: caller ID is worthless) and why policy and procedure matter more than technology alone. This is a must-listen for anyone responsible for security or financial controls.
Every day, in just a few minutes, a recap of cultural news. Quick, easy, accessible.

Our Instagram account

LINKS TO LEARN MORE
OPENAI - SORA: Le Figaro, Frandroid
TAYLOR SWIFT: Franceinfo, Le Monde
ELON MUSK - NETFLIX: Le Huff Post, Libération
SPIN-OFF PEAKY BLINDERS: Allociné, Le Parisien
ZAHO DE SAGAZAN: Le Télégramme, Télérama
MISS FRANCE 2026: RTL, Le Parisien

Written by: Enzo Bruillot
Voiced by: Samy Rabbat

Hosted by Acast. See acast.com/privacy for more information.
Today we're going to talk about deep fakes. Wow. Deep fakes. You see them all the time. Sometimes they're funny and sometimes they're too real and can be troublesome. Screw The Commute Podcast Show Notes Episode 1042 How To Automate Your Business - https://screwthecommute.com/automatefree/ Internet Marketing Training Center - https://imtcva.org/ Higher Education Webinar – https://screwthecommute.com/webinars See Tom's Stuff – https://linktr.ee/antionandassociates 00:23 Tom's introduction to Deep Fakes 02:07 Current deep fakes have a lot of uses 04:07 Used for entertainment and for nefarious purposes 07:00 Play with this technology and get to understand it Entrepreneurial Resources Mentioned in This Podcast Higher Education Webinar - https://screwthecommute.com/webinars Screw The Commute - https://screwthecommute.com/ Screw The Commute Podcast App - https://screwthecommute.com/app/ College Ripoff Quiz - https://imtcva.org/quiz Know a young person for our Youth Episode Series? Send an email to Tom! - orders@antion.com Have a Roku box? Find Tom's Public Speaking Channel there! 
- https://channelstore.roku.com/details/267358/the-public-speaking-channel How To Automate Your Business - https://screwthecommute.com/automatefree/ Internet Marketing Retreat and Joint Venture Program - https://greatinternetmarketingtraining.com/ Kartra - https://screwthecommute.com/kartra/ Copywriting901 - https://copywriting901.com/ Become a Great Podcast Guest - https://screwthecommute.com/greatpodcastguest Training - https://screwthecommute.com/training Disabilities Page - https://imtcva.org/disabilities/ Tom's Patreon Page - https://screwthecommute.com/patreon/ Tom on TikTok - https://tiktok.com/@digitalmultimillionaire/ Email Tom: Tom@ScrewTheCommute.com Internet Marketing Training Center - https://imtcva.org/ Related Episodes No Bumper Stickers - https://screwthecommute.com/1041/ More Entrepreneurial Resources for Home Based Business, Lifestyle Business, Passive Income, Professional Speaking and Online Business I discovered a great new headline / subject line / subheading generator that will actually analyze which headlines and subject lines are best for your market. I negotiated a deal with the developer of this revolutionary and inexpensive software. Oh, and it's good on Mac and PC. Go here: http://jvz1.com/c/41743/183906 The Wordpress Ecourse. Learn how to Make World Class Websites for $20 or less. https://screwthecommute.com/wordpressecourse/ Join our Private Facebook Group! One week trial for only a buck and then $37 a month, or save a ton with one payment of $297 for a year. Click the image to see all the details and sign up or go to https://www.greatinternetmarketing.com/screwthecommute/ After you sign up, check your email for instructions on getting in the group.
Miljan is the Founder and CEO of Primal. Bitcoin is an open protocol for money, nostr is an open protocol for speech, and Primal brings the power of both into an easy-to-use interface for everyone.

Search Primal in your favorite app store!

Miljan on Nostr: https://primal.net/miljan

EPISODE: 180
BLOCK: 917284
PRICE: 842 sats per dollar

(00:00) Ten31 retreat
(01:02) Retreat impressions and collaboration
(01:32) How many bitcoiners
(03:01) Estimating self custody users and real audience sizes
(05:09) Stalled growth and focusing Primal on Bitcoiners
(06:07) Curated "Getting Started" invite packs
(09:14) Solving empty feeds: recommendations vs. decentralization
(12:02) Improving follow discovery
(14:53) Retention over onboarding: first-touch UX matters
(16:11) Empowering users with tools, not surveillance models
(18:08) Aligning incentives: Primal's user-paid model
(19:42) Live streaming on Nostr: open spec meets clean UI
(22:20) Not competing with TikTok: high-signal over dopamine
(25:06) Platform risk for streamers and Nostr's modular freedom
(28:05) Unified chat, zaps, and the magic of interoperable apps
(29:25) Nostr's organic growth
(31:06) Creators' final destination
(34:50) AI in social: open models, features, and roadmap thoughts
(41:05) Deepfakes, authenticity, and signed content value
(46:02) Keys, key-rotation, and building resilient identity tools
(49:28) Practical key management: tradeoffs from phone to multisig
(53:12) Closing thoughts

more info on the show: https://citadeldispatch.com
learn more about me: https://odell.xyz
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
Between the future of search, the biggest threats in cybersecurity, and the jobs and platforms of tomorrow, Nikesh Arora sees one common thread connecting and transforming them all—AI. Sarah Guo and Elad Gil sit down with Nikesh Arora, CEO of cybersecurity giant Palo Alto Networks, to talk about a wide array of topics from agentic AI to leadership. Nikesh dives into the future of search, the disruptive potential of AI agents for existing business models, and how AI has both compressed the timeline for cyberattacks as well as fundamentally shifted defense strategies in cybersecurity. Plus, Nikesh shares his leadership philosophy, and why he's so optimistic about AI. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @nikesharora | @PaloAltoNtwks Chapters: 00:00 – Nikesh Arora Introduction 00:39 – Nikesh on the Future of Search 04:46 – Shifting to an Agentic Model of Search 08:12 – AI-as-a-Service 16:55 – State of Enterprise Adoption 20:15 – Gen AI and Cybersecurity 27:35 – New Problems in Cybersecurity in the AI Age 29:53 – Deepfakes, Spearfishing, and Other Attacks 32:56 – Expanding Products at Palo Alto 35:49 – AI Agents and Human Replaceability 44:28 – Nikesh's Thoughts on Growth at Scale 46:52 – Nikesh's Leadership Tips 51:14 – Nikesh on Ambition 54:18 – Nikesh's Thoughts on AI 58:21 – Conclusion
Sora AI can take other people's likenesses and put them into many different scenarios, including this unlikely one.
Today on FLYTECH Daily, Nick & Michelle break down the top 5 tech stories you need to know — in under 10 minutes: Apple vs. Musk: Apple fires back at Elon Musk's lawsuit over its OpenAI deal, calling the claims “speculation on top of speculation” (Bloomberg). EV Lease Hacks: Ford & GM find a clever workaround to keep the $7,500 EV tax credit alive through leasing programs (Reuters). Porsche's OLED Flow Display: Porsche teases its Cayenne Electric with a futuristic curved OLED dashboard and new AI-powered voice assistant (Electrek, Autogefuehl). OpenAI Sora 2: Deepfake or reality? OpenAI unveils its most powerful video + audio generator yet, making hyperrealistic content mainstream (OpenAI briefing, The Verge). UK vs. Apple Encryption: The UK renews its demand for a backdoor into iCloud, reigniting the global encryption battle (Financial Times). Takeaway: Tech isn't slowing down — from EV loopholes to AI deepfakes, the future is unfolding faster than ever.
The news to know for Wednesday, October 1, 2025! What to know about the government shutdown now in effect as of midnight—and how millions of Americans will be impacted. Also, we'll bring you inside the meeting of America's top generals and admirals, where they heard blunt orders about culture, fitness, and even missions here at home. Plus: Big Oil is downsizing, TrumpRX could give people a new way to buy medicines, and an all-AI social media app is coming from the makers of ChatGPT. Those stories and even more news to know in about 10 minutes! Join us every Mon-Fri for more daily news roundups! See sources: https://www.theNewsWorthy.com/shownotes Become an INSIDER to get AD-FREE episodes here: https://www.theNewsWorthy.com/insider Sign-up for our Friday EMAIL here: https://www.theNewsWorthy.com/email Get The NewsWorthy MERCH here: https://thenewsworthy.dashery.com/ Sponsors: Get 50% off 1 month of Trade at drinktrade.com/newsworthy Save 25% on your first month of a Dose subscription by going to dosedaily.co/NEWSWORTHY or entering NEWSWORTHY at checkout. To advertise on our podcast, please reach out to ad-sales@libsyn.com
What if the next video you watch—of a politician, a celebrity, or even your best friend—isn't real? In this episode of The Healthier Tech Podcast, we dive into the explosive rise of AI-powered deepfakes and ultra-realistic video tools like Veo 3. These platforms can turn a single line of text into a flawless, lifelike video. The potential for creativity is staggering. The potential for harm? Even greater. Here's what you'll discover: The tech leap in 2025: How Veo 3 and other tools now generate studio-quality videos in minutes. The dark side of realism: From CEO fraud scams to synthetic propaganda, deepfakes are already being weaponized. Philosophical curveballs: If seeing is no longer believing, what does authenticity mean in a digital-first world? The human impact: Why false videos can ruin reputations, fuel anxiety, and erode trust. Solutions on the horizon: Content credentials, AI-powered detection, new regulations, and digital literacy as our best defense. We're not just talking about technology—we're talking about the future of truth itself. If you care about digital wellness, online privacy, and protecting your mental clarity in a hyper-connected world, this episode is a must-listen. Subscribe to The Healthier Tech Podcast and join the conversation about how we can build a healthier relationship with technology. This episode is brought to you by Shield Your Body—a global leader in EMF protection and digital wellness. Because real wellness means protecting your body, not just optimizing it. If you found this episode eye-opening, leave a review, share it with someone tech-curious, and don't forget to subscribe to Shield Your Body on YouTube for more insights on living healthier with technology.
In this in-studio episode, Stephen from Biblical Hitmen dives into AI deepfakes, discernment, Christian revival vs. nationalism, forgiveness vs. vengeance after the Charlie Kirk killing, propaganda through history, end-times chatter, and more. We also swap testimonies (Teen Challenge), talk Nephilim News Network bits, and why symbols and "post-reality" make truth-finding harder than ever.
• AI deception & "hyper-reality" • Christian revival vs. cultural backlash • Miami Mall, UAP psyops & Q-style ARGs • Forgiveness vs. vengeance in public tragedies • History rhymes: Augustus' comet, propaganda cycles • Symbols, signaling & discernment
Biblical Hitmen (video on YouTube; audio on Spreaker/Apple/Spotify; socials on X/FB/IG/TikTok)
NEPHILIM DEATH SQUAD
Patreon (early access + Telegram): https://www.patreon.com/NephilimDeathSquad
Website & Merch: https://nephilimdeathsquad.com
Support Joe Gilberti: GiveSendGo
Listen/Watch:
Spreaker: https://www.spreaker.com/podcast/nephilim-death-squad--6389018
YouTube: https://www.youtube.com/@NephilimDeathSquad
Rumble: https://rumble.com/user/NephilimDeathSquad
X: https://twitter.com/NephilimDSquad
Instagram: https://instagram.com/nephilimdeathsquad
Contact: chroniclesnds@gmail.com
X Community – Nephilim Watch: https://twitter.com/i/communities/1725510634966560797
TopLobsta: X: https://twitter.com/TopLobsta | Instagram: https://instagram.com/TopLobsta | Merch: https://TopLobsta.com
Raven: X: https://twitter.com/DavidLCorbo | Instagram: https://instagram.com/ravenofnds
Sponsors: Rife Tech – https://realsrifetechnology.com/ (Code: NEPHILIM for 10% off) | Purge Store – https://purgestore.com/ (Code: NEPHILIM for 10% off)
Credits: Intro Animation: @jslashr on X | Music: End of Days by Vinnie Paz
Become a supporter of this podcast: https://www.spreaker.com/podcast/nephilim-death-squad--6389018/support
☠️ Nephilim Death Squad — New episodes 5x/week. Join our Patreon for early access, bonus shows & the private Telegram hive. Subscribe on YouTube & Rumble, follow @NephilimDSquad on X/Instagram, grab merch at toplobsta.com. Questions/bookings: chroniclesnds@gmail.com — Stay dangerous.
Join CLC in Dayton, Ohio for a timely message from Pastor Jordan Hansen exploring Artificial Intelligence (A.I.) from a Christian perspective. This talk delves into the different types of A.I., from narrow A.I. like ChatGPT and Alexa to the concepts of general and super-intelligent A.I. Drawing wisdom from the Tower of Babel story in Genesis, this sermon addresses the ethical questions, potential dangers, and incredible opportunities A.I. presents for the church. Discover how to navigate technology by remembering a biblical identity, using wisdom, and trusting God with the future. Pastor: Jordan Hansen Series: You Asked For It (2) Title: Artificial Intelligence (A.I.) Date: 2025.09.27+28 YouTube Playlist: https://www.youtube.com/playlist?list=PLVQVNGAqiDEtVZxv79GmqhKpKEg1b-fud LINKS:
We'll talk to a leading cybersecurity specialist about what we should know about AI (artificial intelligence) and its use to create “deep fakes.” Salina Heller will bring us a story from Iowa that is a cautionary tale about how Wisconsin's healthy public employee pension system could be the next target of right wing politicians seeking to undermine public service. Mornings with Pat Kreitlow is powered by UpNorthNews, and it airs on several stations across the Civic Media radio network, Monday through Friday from 6-9 am. Subscribe to the podcast to be sure not to miss out on a single episode! To learn more about the show and all of the programming across the Civic Media network, head over to civicmedia.us/shows to see the entire broadcast line up. Follow the show on Facebook, X, and YouTube. Guests: Salina Heller, Rahul Powar
"We all share responsibility for making good debates possible," says Julia Reuschenbach in this podcast episode. We talk about the foundations of a functioning debate culture as the basis of a living democracy. What have social media changed about our debates, for the worse but also for the better? What might the necessary regulation of social platforms look like? Would a real-name requirement endanger freedom of expression? Debates need a moral compass, which includes acknowledging facts and agreeing on a shared truth. Deepfakes raise the question of authenticity in radical form. What rules are needed here? How can we argue better, so that a dispute not only discharges energy but also produces insight? Is there today a growing "pressure to take a stand," that is, pressure to position oneself immediately and unambiguously, to be either for or against something? Does freedom of expression also include the right to remain silent? The episode also covers cancel culture and the question of why being woke, meaning alert and attentive to injury and discrimination, is increasingly used as a slur. What are the most important steps toward a better debate culture? Julia Reuschenbach offers answers and plenty of food for thought in this podcast episode. Julia Reuschenbach is a political scientist at Freie Universität Berlin who researches parties and political communication. Together with Korbinian Frenzel, a journalist and presenter at Deutschlandfunk, she wrote the book "Defekte Debatten – Warum wir als Gesellschaft besser streiten müssen" ("Broken Debates: Why We as a Society Must Argue Better").
Cybercriminals are using sophisticated AI deepfakes and cloned websites to scam unsuspecting South Africans out of their savings. In this revealing interview, forensic investigator Paul O'Sullivan joins Alec Hogg to unpack a recent scam, explain how these syndicates operate, and share practical tips to protect yourself. If it sounds too good to be true - it probably is.
High technology brings convenience, but does it necessarily make people freer and happier? As relationships between humans and machines gradually replace relationships between people, and algorithms replace independent human thinking, brain rot, AI addiction, and loneliness have become issues of a new era. Can we return to a world with less technology but more genuine interaction, or is there no going back? AI intelligence is evolving far faster than ours; in ten years, robots may look at humans the way we now look at monkeys. Will this cause more industries to decline and turn humans into the robots' servants? AI's heavy consumption of energy and water and its pollution could create a new "resource colonial system"; could even developed countries become its victims? Authoritarian states are using AI to surveil their citizens ever more pervasively; could citizens instead turn AI against the dictators? And how can democracies prevent AI from becoming an autocrat's tool, through democratic AI digital governance? Tune in to CTS's 華視三國演議 for the full conversation! Guests: #宋國誠 #矢板明夫 Host: #汪浩 The views expressed do not represent the position of this station. #AI #人工智慧 #上癮 #異化 TV broadcast schedule
Welcome to What's New With ME — where we break down the internet's wildest viral stories and social media trends with satire, context, and zero bias. This week's episode dives deep into:
Gavin Wilde, Nonresident Fellow at the Carnegie Endowment for International Peace, adjunct lecturer at Johns Hopkins University, and author of the recent paper, "Pyrite or Panic? Deepfakes, Knowledge and the Institutional Backstop," joins Lawfare's Justin Sherman to discuss worries about deepfakes and their impact on information and society, the history of audiovisual media and what we can learn from previous evolutions in audiovisual technologies, and the role that fakery has played over the centuries in said media. They also discuss the social media and political context surrounding deepfake evolutions circa 2015; what happened, or not, with deepfakes in elections around the globe in 2024; and how institutions, policy, and law might pursue a less technology-centric approach to deepfakes and their information impacts. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
The James Comey indictment, long in the works, will create a ripple effect. The Q posts are more relevant today than ever. This is operational. I also discuss UVU's deepfake department and their previous crisis drills; and finally, how school levies will fail and why the left will never learn that they've already lost. Book Websites: https://www.moneytreepublishing.com/shop PROMO CODE: "AEFM" for 10% OFF https://armreg.co.uk PROMO CODE: "americaneducationfm" for 15% off all books and products. (I receive no kickbacks). Q posts book: https://drive.proton.me/urls/JJ78RV1QP8#yCO0wENuJQPH
AI is transforming both sides of the cybersecurity cat-and-mouse game. Attackers are using LLMs to scale impersonation, phishing, and even deepfake fraud—while defenders are racing to automate detection and takedowns at the same speed. In this episode, a16z partner Joel de la Garza talks with Kevin Tian, cofounder & CEO of Doppel Security (and former Uber engineer), about building in this new landscape. They cover:
Why outsider founders sometimes build the most effective security companies
The "3 V's" framework for today's social engineering attacks: volume, velocity, variety
How Doppel uses reasoning models and reinforcement fine-tuning to cut false positives and improve precision
Simulation tools like "vibe phishing" to train employees on real attacker tactics
The shift from manual cyber-intelligence services to AI-driven, software-margin businesses
Why the biggest bottleneck now isn't model cost—but engineering time to deliver the right context
If you're building security products or exploring how AI can automate tough edge cases, this is a ground-level look at what's working—and what comes next. Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Today's show: What is "workslop"? And is it already slowing you down at the office? On a new This Week in Startups, we've got full co-host quorum with JCal, Alex, AND Lon tackling a meaty docket of news at the cross-section of tech, startups, and pop culture. For starters: A new Harvard/Stanford study suggests that AI isn't massively improving workplace efficiency because SOME workers aren't using it properly. Are low-quality, lazily-assembled AI outputs costing US enterprises millions in lost productivity? It's certainly possible based on these results. PLUS, why YouTube invited back all those banned creators… a deep-dive into CA's new social media law that's dividing tech and civil rights advocacy groups… what we can learn from Stripe's mega-share buyback… a look at what Polymarket's sharps think will happen with the US TikTok deal… and much more.
Timestamps:
(0:00) Intro. What will South Park have to say about prediction markets?
(06:06) Alibaba's new AI model will turn you into any celebrity… can you still believe what you see?
(09:49) Vanta - Get $1000 off your SOC 2 at https://www.vanta.com/twist
(11:03) Show continues…
(19:42) AWS Activate - AWS Activate helps startups bring their ideas to life. Apply to AWS Activate today to learn more. Visit https://www.aws.amazon.com/startups/credits
(21:02) Is CA's new proposed law a boon for civil rights, or a weapon against free speech?
(29:47) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
(32:21) Why YouTube invited banned creators back
(41:24) What is workslop? And is it costing companies MILLIONS?
(48:42) Polymarket asks… when will the US TikTok deal go down?
(52:27) Would Jason invest in Tether?
(01:01:29) Why Stripe is buying back so many shares… and what the future may hold.
(01:12:16) Another Reddit Rapid Response: should startups do pilot programs?
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon: X: https://x.com/lons
Follow Alex: X: https://x.com/alex | LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason: X: https://twitter.com/Jason | LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Thank you to our partners:
Vanta - Get $1000 off your SOC 2 at https://www.vanta.com/twist
AWS Activate - AWS Activate helps startups bring their ideas to life. Apply to AWS Activate today to learn more. Visit https://www.aws.amazon.com/startups/credits
Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST: Twitter: https://twitter.com/TWiStartups | YouTube: https://www.youtube.com/thisweekin | Instagram: https://www.instagram.com/thisweekinstartups | TikTok: https://www.tiktok.com/@thisweekinstartups | Substack: https://twistartups.substack.com
Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
What if a video of you said something you never actually said? What if your image were used to commit a crime without you ever being there? In today's episode, Mundo Freak Confidencial dives into the dark world of deepfakes: technologies capable of manipulating images, voices, and videos so perfectly that they become nearly indistinguishable from reality. What began as a digital curiosity has become a real threat to privacy, politics, and even our very notion of truth. Andrei Fernandes talks with Dr. Amanda Muniz and journalist Caio Almendra about the impact of this powerful, and dangerous, tool. They discuss how the manipulation of faces and voices has been used for fraud, disinformation, revenge porn, and narrative warfare, and reflect on the ethical, legal, and social risks of using AI to deceive our senses. In the age of deepfakes, trusting your own eyes may be the first mistake.
Links: MUNDO FREAK COURSES: É o Demônio! A origem do Diabo - Tupá Guerra
Mundo Freak at FUNARJ - Tickets available September 25-26 at this link: https://funarj.eleventickets.com/#!/evento/c21414dc00e33cecc0ff231b410a9c216edbfbf3
Presidentes Brasileiros Cantando
Like many others, you have been scammed. Is all hope lost? Maybe not. On this episode of The Data Chronicles, host Scott Loughlin is joined by Hogan Lovells colleagues Lauren Berkebile and Byron Phillips to break down today's most common online fraud schemes and what to do if your organization gets hit. They trace the shift from the internet's early gift card scams to today's highly targeted wire-fraud campaigns powered by deepfakes, polished phishing, and social engineering. You'll hear how criminals stage fake transactions over Zoom, why C-suite leaders are targeted, and the first moves that can help you recover funds. Whether you have been scammed before and are trying to strengthen your defenses, or you are planning how to respond to a future scam, this episode will be a great resource.
Video deepfakes, tailor-made phishing, voice vishing… cybercriminals will stop at nothing to trap company employees. And with the explosion of artificial intelligence, these attacks are becoming ever more credible, targeted, and devastating. In this episode, Arnaud Loubatière, Southern Europe director at SoSafe, examines the new forms of social engineering.
In partnership with SoSafe
Far from being merely a matter of firewalls or antivirus, cybersecurity today is a human challenge. In this interview, Arnaud Loubatière explains why hackers now target individuals directly, through contextual, multichannel, psychologically well-crafted attacks. He also revisits the weaknesses exploited in several recent cases: the attack on Marks & Spencer via a subcontractor, the massive data theft at telecom operators, and bank fraud carried out by simply changing an IBAN. Faced with this invisible but ever-present threat, the answer lies in awareness, continuous training, and intelligent support for employees. Arnaud Loubatière describes the tools SoSafe deploys to strengthen vigilance and cyber posture in companies, including an AI assistant integrated into email. The stakes are clear: turning people, often seen as the weakest link, into the first line of defense against tomorrow's cyberattacks.
Learn more: Cybercrime Trends 2025
Prepare yourself for a sobering look at the increasingly invasive world of technology monetization. Nick Espinosa, Chief Security Fanatic, joins the Tech Time crew to expose how tech giants are finding alarming new ways to serve us advertisements – from Samsung refrigerators with built-in ads to Microsoft's new full-screen "scoop" ads in Windows 11 that you can't escape. As Nick bluntly puts it, "We're never going to get rid of ads. They are trying to monetize absolutely everything." The conversation takes a disturbing turn when examining the recent deepfake case in Scotland, where a man received only a fine after creating and sharing non-consensual nude images with a woman's face. This landmark case highlights the inadequacy of current legal frameworks to address AI-generated content that causes real psychological harm. Meanwhile, Google's takedown of 224 Android apps involved in a massive ad fraud operation generating 2.3 billion daily requests raises serious questions about mobile device security. OpenAI's forthcoming $4 ChatGPT Go plan signals a potentially revolutionary shift in how we'll access information. This budget-friendly AI service, already available in India and Indonesia, raises fascinating questions about the future of search and whether "better thinking" will become a premium service only available to those willing to pay for it. The team also examines how AI might impact child development, with Mike offering compelling arguments about the dangers of using technology as a substitute for human interaction. His concern that "it's not human development, it's human replacement" resonates deeply as we consider the implications of AI companions for our youngest generation. Subscribe now for more insights on navigating our increasingly complex digital landscape without losing your privacy, security, or sanity in the process.
And don't forget to scan your Android device with Malwarebytes or Bitdefender – you might be surprised what you find lurking there!
Support the show
Clearview AI, the facial recognition company that scraped the internet for images of people's faces for its database, is building a tool to deal with an emerging problem: AI-generated faces. In comments to FedScoop, Hal Lambert, the company's co-CEO, said Clearview AI is dealing with the problem by building a new tool for detecting these manipulated images for its customers, many of whom are federal law enforcement agencies. Lambert was named co-CEO of the company earlier this year, after the company board voted to replace its original top executive. Clearview AI has collected billions of images from the internet, including from social media accounts that are set to public, according to the company. Clearview AI has created a database of those images and made it available to a wide range of customers, including Immigration and Customs Enforcement, the government of Ukraine, and law enforcement officials who seek to identify victims of child pornography. Clearview AI has also sold the tool to police departments. The company touts its facial recognition efficacy scores from the National Institute of Standards and Technology. But deepfakes could make building tools like Clearview AI's more complicated. Right now, deepfakes, or images that are edited or enhanced with artificial intelligence, haven't been a major problem for the company, Lambert told FedScoop. Still, the company is developing a tool that is supposed to tag images that might be AI-generated, with the goal of having it ready for customers by the end of the year. Lambert did not share further details. The Trump administration is signaling to industry and allies that it is considering a broader set of actions related to quantum computing, both to improve the nation's capacity to defend against future quantum-enabled hacks and to ensure the United States promotes and maintains global dominance around a key national security technology.
The discussions include potentially taking significant executive action, such as one or more executive orders, a national plan similar to the AI Action Plan issued earlier this year, and a possible mandate for federal agencies to move up their timelines for migrating to post-quantum protections, multiple sources told CyberScoop. None of the sources CyberScoop spoke with could provide a definitive timeline for an official rollout, but multiple executives in the quantum computing industry and former national security officials said the White House has signaled serious interest in taking bolder action to promote and shape the development of the technology. Some felt official announcements could come as soon as this week, while others cautioned the process could stretch into the coming months. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.
Since the assassination of conservative influencer Charlie Kirk, American security analysts claim to have observed a rise in false reports on social media, behind which they suspect Russian actors. Does this U.S. finding also apply to Europe? How should deepfakes and bots be dealt with?
See omnystudio.com/listener for privacy information.
In this Labor Day Special Episode of AI and the Future of Work, host Dan Turchin sits down with three leaders for an open conversation about how AI is reshaping the workplace. They discuss both opportunities and challenges. The focus is on building organizations where technology amplifies human potential rather than replacing it. From rethinking the employee experience to redefining leadership in the AI era, the panel explores themes of empathy, trust, and the evolving social contract between employers and employees. Together, they share what it means to lead responsibly at a moment when the future of work is being rewritten in real time. Featuring Guests:
Premier of NSW Chris Minns stopped by the show to talk us through the new AI and Deepfake laws. It will now be illegal to create OR share sexually explicit AI/Deepfake images and videos. See omnystudio.com/listener for privacy information.
Can AI save your sanity when therapy isn't enough? In this episode, Rajeev Kapur shares how ChatGPT became a lifeline for a suicidal business exec, repairing his marriage, reconnecting him with his daughter, and even saving his company. We cover AI therapy prompts, data privacy hacks, deepfake nightmares, and how to use AI responsibly without losing your soul. Topics Discussed: AI has been integrated into our lives for a long time. Mental health support can be augmented by AI tools. Therapists should embrace AI to enhance their practice. Prompting techniques can make AI more effective for users. Data privacy is crucial when using AI applications. Deepfakes pose significant risks to individuals and organizations. AI can help improve personal relationships and communication. The future of work may involve universal basic income due to AI advancements. Ethics in AI development is a pressing concern. AI is still in its early stages, with much potential ahead. Resources: Books: AI Made Simple (3rd Edition), Prompting Made Simple by Rajeev Kapur ---- GUEST WEBSITE: https://rajeev.ai/ ---- MORE FROM BROBOTS: Connect with us on Threads, Twitter, Instagram, Facebook, and Tiktok Subscribe to BROBOTS on Youtube Join our community in the BROBOTS Facebook group ---- LINKS TO OUR PARTNERS: Take control of how you'd like to feel with Apollo Neuro Explore the many benefits of cold therapy for your body with Nurecover Muse's Brain Sensing Headbands Improve Your Meditation Practice. Get started as a Certified Professional Life Coach! Get a Free One Year Supply of AG1 Vitamin D3+K2, 5 Travel Packs Revamp your life with Bulletproof Coffee You Need a Budget helps you quickly get out of debt, and save money faster! Start your own podcast!
Deepfakes are videos, pictures, or audio clips made with artificial intelligence to look real. Although sometimes used for fun, they can also be used to defame or discredit people. Anyone from politicians to celebrities to ordinary members of the public can become the subject of deepfake imagery. So how can we protect our image from being used maliciously? In Denmark, the government is proposing a new law which would give people copyright-like protection over their face, voice, and appearance. In this edition of the Inquiry, Tanya Beckett explores how the new law would work and asks how we strike a balance between Big Tech and AI innovation and the need to protect our identity.
Contributors:
Gitte Løvgren Larsen, Lawyer and partner, Sirius Advocator, Denmark
Dr Alina Trapova, Lecturer (Assistant Professor), Intellectual Property Law, University College London
Ignacio Cofone, Professor of Law and Regulation of AI, University of Oxford
Mikkel Flyverbom, Professor of Communication and Digital Transformations, Copenhagen Business School
Presenter: Tanya Beckett
Producer: Emma Forde
Researcher: Matt Toulson
Technical Producer: Richard Hannaford
Production Coordinator: Tammy Snow
Editor: Louise Clarke
(Photo: Digital Human Head, Credit: imaginima/Getty Images)
All links and images can be found on CISO Series. This week's episode is hosted by David Spark, producer of CISO Series, and Andy Ellis, principal of Duha. Joining us is our sponsored guest, Brian Long, CEO, Adaptive Security.
In this episode:
Hiring North Korean operatives on a Tuesday
AI coding and the death of specifications
Deepfake personas beyond video calls
The middleman problem with SMS
Huge thanks to our sponsor, Adaptive Security. AI-powered social engineering threats like deepfake voice calls, GenAI phishing, and vishing attacks are evolving fast. Adaptive helps security leaders get ahead with an AI-native platform that simulates realistic genAI attacks and delivers expert-vetted security awareness training, all in one unified solution. And now, with Adaptive's new AI Content Creator, security teams can instantly transform breaking threat intel or updated policy docs into interactive, multilingual training, with no instructional design needed. That means faster compliance, better engagement, and less risk. Trusted by Fortune 500s and backed by Andreessen Horowitz and the OpenAI Startup Fund, Adaptive is helping security teams prepare for the next generation of cyber threats. Learn more at adaptivesecurity.com.