Artificial intelligence-based human image synthesis technique
Today's show: What is “workslop”? And is it already slowing you down at the office? On a new This Week in Startups, we've got full co-host quorum with JCal, Alex, AND Lon tackling a meaty docket of news at the cross-section of tech, startups, and pop culture. For starters: A new Harvard/Stanford study suggests that AI isn't massively improving workplace efficiency because SOME workers aren't using it properly. Are low-quality, lazily-assembled AI outputs costing US enterprises millions in lost productivity? It's certainly possible based on these results. PLUS, why YouTube invited back all those banned creators… a deep-dive into CA's new social media law that's dividing tech and civil rights advocacy groups… what we can learn from Stripe's mega-share buyback… a look at what Polymarket's sharps think will happen with the US TikTok deal… and much more.
Timestamps:
(0:00) Intro. What will South Park have to say about prediction markets?
(06:06) Alibaba's new AI model will turn you into any celebrity… can you still believe what you see?
(09:49) Vanta - Get $1000 off your SOC 2 at https://www.vanta.com/twist
(11:03) Show Continues…
(19:42) AWS Activate - AWS Activate helps startups bring their ideas to life. Apply to AWS Activate today to learn more. Visit https://www.aws.amazon.com/startups/credits
(21:02) Is CA's new proposed law a boon for civil rights, or a weapon against free speech?
(29:47) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
(32:21) Why YouTube invited banned creators back
(41:24) What is Workslop? And is it costing companies MILLIONS?
(48:42) PolyMarket asks… when will the US TikTok deal go down?
(52:27) Would Jason invest in Tether?
(01:01:29) Why Stripe is buying back so many shares… and what the future may hold.
(01:12:16) Another Reddit Rapid Response: should startups do pilot programs?
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon:
X: https://x.com/lons
Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Thank you to our partners:
Vanta - Get $1000 off your SOC 2 at https://www.vanta.com/twist
AWS Activate - AWS Activate helps startups bring their ideas to life. Apply to AWS Activate today to learn more. Visit https://www.aws.amazon.com/startups/credits
Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com
Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
The future is no longer just a concept; it is a driving force that is already revolutionizing our world today. In this rousing episode 24 of "Einfach mal machen," CDU General Secretary Carsten Linnemann invites AI expert Elisabeth L'Orange and Politico editor-in-chief Gordon Repinski to explore the opportunities and risks of this rapid development. The panel looks at how artificial intelligence is changing our lives from the ground up: from medical breakthroughs to a media landscape in which algorithms not only boost efficiency but also threaten existing business models and carry the risk of "echo chambers." The conversation takes a critical look at Germany's position in the global competition. Despite the strength of German companies, which are world leaders in filing AI patents, the experts warn of losing ground to the USA and China if deregulation and an innovation-friendly culture are not brought into focus. The acute threat posed by deepfakes, not least in election campaigns, is also addressed; they could undermine trust in democracy. The episode is a compelling appeal to ask the most pressing questions about the future: How can we roll up our sleeves so that Germany does not merely react to the AI revolution but actively helps shape it? #cdu
In this Labor Day Special Episode of AI and the Future of Work, host Dan Turchin sits down with three leaders for an open conversation about how AI is reshaping the workplace. They discuss both opportunities and challenges. The focus is on building organizations where technology amplifies human potential rather than replacing it. From rethinking the employee experience to redefining leadership in the AI era, the panel explores themes of empathy, trust, and the evolving social contract between employers and employees. Together, they share what it means to lead responsibly at a moment when the future of work is being rewritten in real time. Featuring Guests:
Premier of NSW Chris Minns stopped by the show to talk us through the new AI and Deepfake laws. It will now be illegal to create OR share sexually explicit AI/Deepfake images and videos. See omnystudio.com/listener for privacy information.
Deepfakes are no longer science fiction. They are emptying bank accounts, impersonating CEOs, and undermining trust in digital identity. On Industry Insight, iProov Chief Technology Officer Dominic Forrest explains how businesses and governments can fight back. From SingPass logins to global border control, he reveals how biometric authentication and liveness detection are being used to prove a person is genuinely present, and why that matters as over 130 deepfake tools spread online. This is Part 2 of our special series on deepfake fraud, focusing on the technology protecting digital identity. (*Part 1 of this deepfake fraud series, 'AI fraud and deepfakes: Why trust is at risk in banking' focuses on the industry-wide risks and why collective defense is critical to safeguarding digital finance)See omnystudio.com/listener for privacy information.
According to the company's survey, rates of fraud using deepfakes grew 822% worldwide between the first quarter of 2024 and the same period of 2025.
Can AI save your sanity when therapy isn't enough? In this episode, Rajeev Kapur shares how ChatGPT became a lifeline for a suicidal business exec, repairing his marriage, reconnecting him with his daughter, and even saving his company. We cover AI therapy prompts, data privacy hacks, deepfake nightmares, and how to use AI responsibly without losing your soul. Topics Discussed: AI has been integrated into our lives for a long time. Mental health support can be augmented by AI tools. Therapists should embrace AI to enhance their practice. Prompting techniques can make AI more effective for users. Data privacy is crucial when using AI applications. Deepfakes pose significant risks to individuals and organizations. AI can help improve personal relationships and communication. The future of work may involve universal basic income due to AI advancements. Ethics in AI development is a pressing concern. AI is still in its early stages, with much potential ahead. Resources: Books: AI Made Simple (3rd Edition), Prompting Made Simple by Rajeev Kapur ---- GUEST WEBSITE: https://rajeev.ai/ ---- MORE FROM BROBOTS: Connect with us on Threads, Twitter, Instagram, Facebook, and Tiktok Subscribe to BROBOTS on Youtube Join our community in the BROBOTS Facebook group ---- LINKS TO OUR PARTNERS: Take control of how you'd like to feel with Apollo Neuro Explore the many benefits of cold therapy for your body with Nurecover Muse's Brain Sensing Headbands Improve Your Meditation Practice. Get started as a Certified Professional Life Coach! Get a Free One Year Supply of AG1 Vitamin D3+K2, 5 Travel Packs Revamp your life with Bulletproof Coffee You Need a Budget helps you quickly get out of debt, and save money faster! Start your own podcast!
JOIN THE 7 DAY RESET - ▶️ www.therebuiltman.com/7dayreset AI porn and deepfakes aren't harmless “curiosities.” They're traps. Synthetic novelty is hijacking men's brains, searing consciences, and destroying integrity at a faster rate than traditional porn ever could. In this episode of The Rebuilt Man Podcast, Coach Frank exposes the dark reality of AI porn and deepfakes. He breaks down the science of how your brain's reward system is wired for novelty, why synthetic content creates infinite dopamine spikes, and how this leads to rapid desensitization, shame, and addiction. But this conversation isn't just a warning, it's a roadmap. Frank introduces his proven Reboot Your Life framework, showing men how to guard their gates, rebuild identity, and walk in true freedom. With practical tools like the Daily 5 habits, radical accountability, discomfort training, and a renewed vision for the future, you'll discover how to break the cycle of counterfeit intimacy and pursue the real connection God designed you for. If you've ever told yourself AI porn is “harmless,” this episode will dismantle that lie and challenge you to step into your calling as a man of integrity, discipline, and faith.
It takes just three seconds for AI to steal your voice and impersonate you in a way no one can detect. How can you protect yourself, your family, and your finances when seeing and hearing is no longer believing? In this episode, deepfake expert Parya Lotfi reveals the shocking reality of AI-driven scams, from fraudulent bank transfers to fake kidnapping calls. We uncover how criminals operate and what you can do to spot the lies before it's too late.
In this episode/video, we cover:
- How criminals use 3-second voice clones for scams
- The shocking story of a North Korean deepfake spy
- Why facial and voice ID are no longer secure
- How to use AI to detect other AI fakes
This video is for anyone who wants to understand the real-world dangers of deepfake technology and learn actionable steps to protect themselves in our new "fake reality."
Connect with Parya: https://www.linkedin.com/in/paryalotfi
Timestamps:
00:00:00 - Intro
00:00:35 - The Scary Reality of AI-Generated Videos
00:02:32 - The Dangerous Side of Facial & Voice Biometrics
00:03:45 - The Disturbing Reality of Voice Cloning Scams
00:06:46 - How to Use AI to Catch AI-Generated Fakes
00:10:11 - Solving AI's "Black Box" Problem with Explainability
00:12:10 - The Different Types of Deepfakes Criminals Use
00:14:15 - How Deepfakes Are Used to Launder Millions From Banks
00:18:18 - Inside the Darknet's "Deepfake-as-a-Service" Business
00:22:32 - Why Banning Deepfake Technology Is Impossible
00:24:58 - How Deepfakes Are Being Weaponized in Global Conflicts
00:27:30 - Red Teaming: How to Think Like a Deepfake Criminal
00:29:09 - The North Korean Spy Who Used a Deepfake to Get a Job
00:31:54 - The Ultimate Goal: A Deepfake Detector for Everyone
00:37:23 - The Future That Scares Me: AGI and Self-Aware Robots
00:44:33 - The Journey of Building a Deepfake Detection Company
00:47:42 - The Surprising Reason Deepfake Detection Is So Hard
00:54:44 - Who Is Responsible When You Get Scammed by a Deepfake?
00:58:25 - The Rise of AI Influencers and Their Tragic Consequences
#Deepfake #Cybersecurity #ArtificialIntelligence
Deepfakes are no longer a niche concern. They are fuelling scams across Asia Pacific, from Singapore to Hong Kong, and eroding public trust in financial institutions. On Industry Insight, Christophe Barel, Managing Director for Asia Pacific at the Financial Services Information Sharing and Analysis Center (FS-ISAC), explains why AI-powered fraud is now a systemic risk to the region’s financial sector. He shares how phishing and synthetic identities are evolving, why fraud cases have surged by over 1,500% in Singapore, and what happens when consumer confidence in banks starts to crack. This is Part 1 of our special series on deepfake fraud focusing on the industry-wide risks and why collective defense is critical to safeguarding digital finance. (*Part 2 of this deepfake fraud series, 'Biometrics & Liveness: Securing digital identity against deepfakes' focuses on the technology protecting digital identity)See omnystudio.com/listener for privacy information.
Deepfakes are videos, pictures or audio clips made with artificial intelligence to look real. Although sometimes used for fun, they can also be used to defame or discredit people. Anyone from politicians to celebrities to normal members of the public can become the subject of deepfake imagery. So how can we protect our image from being used maliciously? In Denmark, the government is proposing a new law which would give people copyright-like protection over their face, voice, and appearance. In this edition of the Inquiry, Tanya Beckett explores how the new law would work and asks how we strike a balance between Big Tech and AI innovation and the need to protect our identity.
Contributors:
Gitte Løvgren Larsen, Lawyer and partner, Sirius Advocator, Denmark
Dr Alina Trapova, lecturer (Assistant Prof), Intellectual property law, University College London
Ignacio Cofone, Professor of Law and Regulation of AI, University of Oxford
Mikkel Flyverbom, Professor of Communication and Digital Transformations, Copenhagen Business School
Presenter: Tanya Beckett. Producer: Emma Forde. Researcher: Matt Toulson. Technical Producer: Richard Hannaford. Production Coordinator: Tammy Snow. Editor: Louise Clarke.
(Photo: Digital Human Head, Credit: imaginima/Getty Images)
All links and images can be found on CISO Series. This week's episode is hosted by David Spark, producer of CISO Series, and Andy Ellis, principal of Duha. Joining us is our sponsored guest, Brian Long, CEO, Adaptive Security.
In this episode:
Hiring North Korean operatives on a Tuesday
AI coding and the death of specifications
Deepfake personas beyond video calls
The middleman problem with SMS
Huge thanks to our sponsor, Adaptive Security. AI-powered social engineering threats like deepfake voice calls, GenAI phishing, and vishing attacks are evolving fast. Adaptive helps security leaders get ahead with an AI-native platform that simulates realistic genAI attacks, and delivers expert-vetted security awareness training — all in one unified solution. And now, with Adaptive's new AI Content Creator, security teams can instantly transform breaking threat intel or updated policy docs into interactive, multilingual training — no instructional design needed. That means faster compliance, better engagement, and less risk. Trusted by Fortune 500s and backed by Andreessen Horowitz and the OpenAI Startup Fund, Adaptive is helping security teams prepare for the next generation of cyber threats. Learn more at adaptivesecurity.com.
First documented case: AI inside the breach. PromptLock marks the first time malware has used AI during execution, not just in preparation. In this Threat Talks deep dive, Rob Maas (Field CTO, ON2IT) sits down with Yuri Wit (SOC Analyst, ON2IT) to break down how it works: a Go loader calling an attacker's LLM in real time, generating fresh payloads that adapt on the fly. This episode strips away sci-fi hype. You'll see the psychology of an adversary that thinks mid-attack—and the Zero Trust defenses that box it in. When AI runs inside the kill chain, malware doesn't just evolve. It crosses into super-malware.
(00:00) Cold open: “What if malware could think?”
(00:18) Welcome: Rob Maas & Yuri Wit
(00:41) First reaction to PromptLock
(01:02) How attackers already use AI (phishing, coding, negotiations)
(03:02) Why PromptLock is different: AI during execution
(03:35) How it works: Go → Ollama → LLM → Lua
(06:36) Proof-of-concept tells (the Satoshi wallet)
(07:55) Defense shift: hashes die, behavior wins
(10:40) Detecting LLM calls: SSL inspection realities
(11:26) Quick wins: block interpreters (Lua/Python/PowerShell)
(12:23) Zero Trust moves: default-deny egress & segmentation
(12:41) What's next: dynamic exploits & on-demand EDR bypass
(16:21) Timelines & hardware: why adoption could accelerate
(18:21) Wrap-up & CTA
Key Topics Covered:
• The first documented case of AI inside the breach — why PromptLock changes the game
• PromptLock's core loop: calling an LLM mid-attack to generate fresh payloads.
• Why hash-based detection breaks against AI-powered malware and its ever-changing scripts.
• Behavioral defense over signatures: EDR/XDR, sandboxing, and SSL inspection.
• Zero Trust in practice: block script interpreters, restrict egress, and shrink blast radius.
Additional Resources:
ON2IT Zero Trust: https://on2it.net/zero-trust/
Threat Talks hub: https://threat-talks.com/
Ollama (referenced in episode): https://ollama.com/
The Rising Threat of Deepfakes: https://youtu.be/gmtZ_aYmQdQ
Guest & Host Links:
Rob Maas, Field CTO, ON2IT: https://www.linkedin.com/in/robmaas83/
Yuri Wit, SOC Specialist, ON2IT: https://www.linkedin.com/in/yuriwit/
Click here to view the episode transcript.
[Patreon Preview] To listen to/watch this full episode, sign up for my Patreon! Signing up for the Patreon will get you access to one bonus episode per month, the ability to listen/watch without ads, and you're helping to support the show. Join today!! We're all familiar with the concept of deepfakes, but now AI is being harnessed in a new way to deceive. AI-generated TikTokers are reciting the actual words of real people, right down to the stumbles, "ums" and "uhs." This new form of AI-driven identity theft is slipping past moderation filters, misleading millions, and leaving creators feeling powerless. Bobby Allyn is a reporter who's been covering this phenomenon and the rise of TikTok deepfakes; he joined me for this bonus episode of Power User to break it all down and explain why this is happening. Buy a subscription to my Tech and Online Culture newsletter, User Magazine, to support my work!!!!
In 2025 identity has become the new (security) perimeter, making identity security attacks a primary threat vector for organisations throughout the region. Threat actors are targeting user credentials and privileged access pathways, moving beyond traditional network-based assaults to exploit identities as the weakest link. This is forcing CISOs to re-evaluate their defence-in-depth strategies, with a pronounced focus on securing privileged access management (PAM) as a critical control point. As organisations adopt zero trust architectures, we are seeing just-in-time and just-enough-access privileges becoming standard practice. But as organisations mature in their understanding and use of AI, what about the use of AI in identity management? In this PodChats for FutureCISO, we are joined by Nigel Tan, Delinea's director of sales engineering in APAC, who shares his views on how AI is altering identity and access management strategies.
1. Before we start, give us the 30-second elevator pitch of who Delinea is.
2. How is AI redefining identity and access management?
3. Please identify emerging AI-driven IAM use cases in Southeast Asia and Hong Kong.
4. How do you see agentic AI potentially changing privileged access management (PAM)?
5. We may have covered this in the earlier questions, but what are the cybersecurity risks of AI-enhanced IAM? Please cite 2025 incidents on the same topic.
6. Efforts are underway to come up with regional regulations around AI use. Can we expect something similar around AI in IAM?
7. Can AI improve IAM for hybrid workforces?
8. You mentioned earlier identity access rights that have become dormant. What role would GenAI/agentic AI play in identity lifecycle management?
9. Deepfake cases are growing in Asia. How should CISOs prepare for AI-enabled identity fraud?
10. What skills will security teams need for AI-driven IAM? How about end users?
11. What is the future of AI in IAM for Southeast Asia and Hong Kong?
AI investment is exploding: the “Magnificent Seven” of Apple, Microsoft, Google, Amazon, Meta, Tesla, and NVIDIA are ploughing almost 7% of US GDP into AI and data centres. That's the same scale as the US housing boom in 2006, and greater than the dot-com bubble at its peak. Today, just seven firms make up 34% of the S&P 500, the highest concentration in history. Earnings per share in these companies grew 37% last year, compared to just 6% in the rest of the index. But history, from RCA in the 1920s to the dot-coms in the 1990s, warns us that transformative technologies can change the world while destroying fortunes. The question now: is AI the next revolution, or the next bubble waiting to burst? Hosted on Acast. See acast.com/privacy for more information.
Take a mini-retirement now - pause, recharge, and refresh your career. Deepfakes spread fast, blurring truth after big news events. Test-drive tips: what to check before buying.
In this edition we track India's progress on trade deals with the European Union and the United States, Apple's price hike over the years, and an 8-year trend of falling dependency on H1B visas for Indian IT companies. Also find a Moneycontrol exclusive with Dabur CEO Mohit Malhotra. Also inside: Cyber audits for MSMEs, Bollywood's fight against deepfakes and India's first all-AI band, Trilok.
Andy Murphy sits down with AI privacy advocate and content creator Dylan Schmidt to discuss the red flags and opportunities in the world of artificial intelligence, digital privacy, and surveillance. They explore how AI is shaping our lives, the risks of data collection, and what the future might hold for families and society. For more from Dylan Schmidt: https://dylanschmidt.com/ Take control of your data with DeleteMe. Because they sponsor the podcast you can get 20% off a privacy plan from DeleteMe with promo code: DAD. OmniWatch is a scam and identity protection service that monitors your data, alerts you to threats, and backs you with up with insurance. Try it for only $1 for 7 days. Learn how to freeze your child's credit to protect their good name. Connect
Artificial intelligence has reached most people, at the latest since ChatGPT. It can make life easier, or it can turn into a nightmare. Deepfakes in particular are used not only to spread fake videos and fake news, but often also to create porn and sexual content of people who never gave their consent and often don't even know about it. That is what happened to presenter and actress Collien Monica Fernandes, whose identity was stolen and, with the help of deepfakes, used to systematically lead men to believe they were in a romantic relationship with her. On G Spot, she talks about the traumatizing experience, how she dealt with the situation, and about the consequences and the systemic problem of deepfakes. [You can find the documentary on the topic here.](https://www.zdf.de/reportagen/diespur-deepfake-pornos-collien-ulmen-fernandes-100) Are you dealing with online hate? HateAid helps you take action against it; you can find more information [here](https://hateaid.org/). Feel free to send us your thoughts on the episode and follow us on social media for more content! [You can find all the links for that here](https://linktr.ee/g.spot.podcast). For the Love of Love. Try Bumble. [This way](https://bumble.onelink.me/3396940749/73dfuqsx). Or send us a message by email at: gspot@studio-bummens.de. Want to learn more about our advertising partners? [Here you'll find all the info and discounts](https://linktr.ee/gspot.podcast)
On this episode of Crazy Wisdom, Stewart Alsop sits down with Abhimanyu Dayal, a longtime Bitcoin advocate and AI practitioner, to explore how money, identity, and power are shifting in a world of deepfakes, surveillance, automation, and geopolitical realignment. The conversation ranges from why self-custody of Bitcoin matters more than ETFs, to the dangers of probabilistic biometrics and face-swap apps, to the coming impact of AGI on labor markets and the role of universal basic income. They also touch on India's refinery economy, its balancing act between Russia, China, and the U.S., and how soft power is eroding in the information age. For more from Abhimanyu, connect with him on LinkedIn.
Check out this GPT we trained on the conversation
Timestamps
00:00 Stewart Alsop opens with Abhimanyu Dayal on crypto, AI, and the risks of probabilistic biometrics like facial recognition and voice spoofing.
05:00 They critique biometric surveillance, face-swap apps, and data exploitation through casual consent.
10:00 The talk shifts to QR code treasure hunts, vibe coding on Replit and Claude, and using quizzes to mint NFTs.
15:00 Abhimanyu shares his finance background, tying it to Bitcoin as people's money, agent-to-agent payments, and post-AGI labor shifts.
20:00 They discuss universal basic income, libertarian ideals, Hayek's view of economics as critique, and how AI prediction changes policy.
25:00 Pressure, unpredictability, AR glasses, quantum computing, and the surveillance state future come into focus.
30:00 Open source vs closed apps, China's DeepSeek models, propaganda through AI, and U.S.–China tensions are explored.
35:00 India's non-alignment, Soviet alliance in 1971, oil refining economy, and U.S.–India friction surface.
40:00 They reflect on colonial history, East India Company, wealth drain, opium wars, and America's rise on Indian capital.
45:00 The conversation closes on Bitcoin's role as reserve asset, stablecoins as U.S. leverage, BRICS disunity, and the geopolitics of freedom.
Key Insights
A central theme of the conversation is the contrast between deterministic and probabilistic systems for identity and security. Abhimanyu Dayal stresses that passwords and private keys—things only you can know—are inherently more secure than facial recognition or voice scans, which can be spoofed through deepfakes, 3D prints, or AI reconstructions. In his view, biometric data should never be stored because it represents a permanent risk once leaked.
The rise of face-swap apps and casual facial data sharing illustrates how surveillance and exploitation have crept into everyday life. Abhimanyu points out that companies already use online images to adjust things like insurance premiums, proving how small pieces of biometric consent can spiral into systemic manipulation. This isn't a hypothetical future—it is already happening in hidden ways.
On the lighter side, they experiment with “vibe coding,” using tools like Replit and Claude to design interactive experiences such as a treasure hunt via QR codes and NFTs. This playful example underscores a broader point: lightweight coding and AI platforms empower individuals to create experiments without relying on centralized or closed systems that might inject malware or capture data.
The discussion expands into automation, multi-agent systems, and the post-AGI economy. Abhimanyu suggests that artificial superintelligence will require machine-to-machine transactions, making Bitcoin an essential tool.
But if machines do the bulk of labor, universal basic income may become unavoidable, even if it drifts toward collectivist structures libertarians dislike.A key shift identified is the transformation of economics itself. Where Hayek once argued economics should critique politicians because of limited data, AI and quantum computing now provide prediction capabilities so granular that human behavior is forecastable at the individual level. This erodes the pseudoscientific nature of past economics and creates a new landscape of policy and control.Geopolitically, the episode explores India's rise, its reliance on refining Russian crude into petroleum exports, and its effort to stay unaligned between the U.S., Russia, and China. The conversation recalls India's Soviet ties during the 1971 war, while noting how today's energy and trade policies underpin domestic improvements for India's poor and middle class.Finally, they critique the co-optation of Bitcoin through ETFs and institutional custody. While investors celebrate, Abhimanyu argues this betrays Satoshi's vision of money controlled by individuals with private keys. He warns that Bitcoin may be absorbed into central bank reserves, while stablecoins extend U.S. monetary dominance by reinforcing dollar power rather than replacing it.
Ezinne and Oji Udezue have over 50 years of combined product leadership experience at Microsoft, Twitter, Atlassian, WP Engine, Typeform, and Calendly. They've witnessed every major shift in product management, and, despite their seniority, they're taking beginner AI courses and learning from engineers half their age, and Oji is coding more now than in the past decade—from Waterfall to Agile to AI. They are also the authors of Building Rocketships, a guide to building great products. In this conversation, the couple shares hard-won lessons they've learned from companies successfully adapting to AI, including their “shipyard” framework and their “sharp problem” methodology.
What you'll learn:
1. The “shipyard” framework: why the best AI teams embrace controlled chaos
2. Why Oji writes more code now than in the past 10 years—despite being a PM for more than 25 years
3. The three skills that matter most for PMs in 2025: curiosity, humility, and agency
4. How to identify “sharp problems”
5. AI at the core vs. AI at the edge: why companies that are building entirely new AI-centric codebases will beat those just “sprinkling AI” on existing products
6. The counterintuitive truth: engineers are moving so fast with AI that PMs are now the bottleneck
7. Their biggest product lesson from 50 combined years
Brought to you by:
Mercury—The art of simplified finances
Vanta—Automate compliance. Simplify security.
Coda—The all-in-one collaborative workspace
Where to find Oji and Ezinne:
• ProductMind on Substack: https://substack.com/@ojiudezue
• ProductMind on LinkedIn: https://www.linkedin.com/company/productmindco
• ProductMind on YouTube: https://www.youtube.com/@ProductMindX/videos
• ProductMind on Spotify: https://open.spotify.com/show/07OVh5pdSv0szHPwWktzQQ
• ProductMind website: https://www.productmind.co/
• Oji on LinkedIn: https://www.linkedin.com/in/ojiudezue/
• Ezinne on LinkedIn: https://www.linkedin.com/in/ezinne/
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Introduction to Oji and Ezinne
(04:14) The evolving role of product managers
(08:01) Challenges and opportunities in product management
(10:34) Sharp problems
(12:37) The shipyard model for product development
(17:02) Hiring PMs in the AI era
(24:55) The importance of staying humble
(27:16) Hands-on learning and personal projects
(39:10) Companies succeeding with AI adoption
(46:25) Lessons from 50 years in product
(49:22) Simplicity in design
(51:24) The role of communication in strategy
(55:17) Career intentions and personal growth
(01:00:00) Ethics and responsibility in product management
(01:03:09) Introducing Building Rocketships
(01:06:42) Lightning round and final thoughts
Referenced:
• How 80,000 companies build with AI: products as organisms, the death of org charts, and why agents will outnumber employees by 2026 | Asha Sharma (CVP of AI Platform at Microsoft): https://www.lennysnewsletter.com/p/how-80000-companies-build-with-ai-asha-sharma
• Picking sharp problems, increasing virality, and unique product frameworks | Oji Udezue (Typeform, Twitter, Calendly, Atlassian): https://www.lennysnewsletter.com/p/picking-sharp-problems-increasing
• Atlassian: https://www.atlassian.com/
• Joff Redfern on LinkedIn: https://www.linkedin.com/in/mejoff/
• Brownian motion: https://en.wikipedia.org/wiki/Brownian_motion
• Calendly: https://calendly.com/
• Women in Product: https://womenpm.org/
• Brian Chesky's secret mentor who died 9 times, started the Burning Man board, and built the world's first midlife wisdom school | Chip Conley (founder of MEA): https://www.lennysnewsletter.com/p/chip-conley
• Home Assistant: https://www.home-assistant.io/
• What people are vibe coding (and actually using): https://www.lennysnewsletter.com/p/what-people-are-vibe-coding-and-actually
• How many layers should I wear today?: https://layers.today/
• Typeform: https://www.typeform.com/
• David Okuniev on X: https://x.com/okuiux
• Clay: https://www.clay.com/
• Martin Eriksson on LinkedIn: https://www.linkedin.com/in/martineriksson/
• Geoffrey Moore on finding your beachhead, crossing the chasm, and dominating a market: https://www.lennysnewsletter.com/p/geoffrey-moore-on-finding-your-beachhead
• Dave Mendlen on LinkedIn: https://www.linkedin.com/in/davemendlen/
• Deepfake: https://en.wikipedia.org/wiki/Deepfake
• How to kickstart and scale a marketplace business: https://www.lennysnewsletter.com/p/how-to-kickstart-and-scale-a-marketplace
• Forever on Netflix: https://www.netflix.com/title/81418639
• Paradise on Hulu: https://www.hulu.com/series/paradise-2b4b8988-50c9-4097-bf93-bc34a99a5b4f
• Sinners: https://www.imdb.com/title/tt31193180/
• Claude: https://claude.ai/
• Nespresso Vertuo: https://www.nespresso.com/us/en/vertuo-coffee-machines
• Gamma: https://gamma.app/
• Framer: https://www.framer.com/
• Lovable: https://lovable.dev/
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Llama: https://www.llama.com/
Recommended books:
• Building Rocketships: Product Management for High-Growth Companies: https://www.amazon.com/Building-Rocketships-Management-High-Growth-Companies/dp/1962339068
• Coda version of Building Rocketships: https://www.productmind.co/brpro
• Build: An Unorthodox Guide to Making Things Worth Making: https://www.amazon.com/Build-Unorthodox-Guide-Making-Things/dp/0063046067
• The Let Them Theory: A Life-Changing Tool That Millions of People Can't Stop Talking About: https://www.amazon.com/Let-Them-Theory-Life-Changing-Millions/dp/1401971369/
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed.
My biggest takeaways from this conversation: To hear more, visit www.lennysnewsletter.com
On this week's edition, Bridget runs through the news with Producer Mike. Tech companies saying, "Our policies prohibit the harm that is rampant on our platform!" is the theme of today's episode. AI deepfake comic: https://www.smbc-comics.com/comic/aaaah Meta creates flirty celebrity chatbots without permission, calls it "parody," trains them to talk to children romantically. https://www.reuters.com/business/meta-created-flirty-chatbots-taylor-swift-other-celebrities-without-permission-2025-08-29/ Age verification on adult sites is putting queer adult industry workers at risk, and pushing everyone to sketchier corners of the Internet. https://19thnews.org/2025/09/age-verification-queer-adult-industry-workers/ A Shein merchant used Luigi Mangione’s AI-Generated Face to Sell a Shirt. https://www.404media.co/shein-luigi-mangione-ai-generated-listing-shirt/ Sad abortion news: Texas bans abortion pills from being mailed to anyone in the state. https://19thnews.org/2025/09/texas-abortion-pill-ban/ Positive abortion news: Illinois mandates access for university students. https://msmagazine.com/2025/09/02/chicago-illinois-abortion-pills-birth-control-contraception-college-university-health-center-students/ . Child sex abuse victim begs Elon and X to take down photos of her abuse. https://www.bbc.com/news/articles/cq587wv4d5go If you’re listening on Spotify, you can leave a comment there to let us know what you thought about these stories (Bridget reads every Spotify comment personally) or email us at hello@tangoti.com ! Follow Bridget and TANGOTI on social media! || instagram.com/@bridgetmarieindc || tiktok.com/@bridgetmarieindc || youtube.com/@ThereAreNoGirlsOnTheInternet See omnystudio.com/listener for privacy information.
Find the book (Un)Intentional by Doug Smith here: https://www.amazon.com/Intentional-Screens-Secretly-Shape-Desires/dp/1625861966
Doug Smith's Website: https://thatdougsmith.com/
Watch more episodes of The Covenant Eyes Podcast: https://www.youtube.com/@covenanteyes
In this second part of a powerful two-part interview, Covenant Eyes' Karen Potter sits down with Doug Smith, author of "Unintentional," to discuss artificial intelligence and its impact on ministry and the family. Doug unpacks the sinister design behind AI chatbots like ChatGPT and Gemini and how they are creating a "race to intimacy". He discusses the heartbreaking reality of AI-generated pornography, deepfakes, and sextortion, and how AI is being used to prey on children. Doug also warns ministry leaders about the dangers of using AI as a tool and the risk of becoming less discerning as a result. He explains the spiritual battle at play and the ways AI can become a modern-day idol. This episode serves as an urgent call to action to protect your family and be intentional in the face of rapidly changing technology.
Timestamps:
00:00: Introduction to Part 2 with Mike Stone
00:33: The downfalls of AI for ministry leaders
01:55: The "race to intimacy" and deceptive nature of AI
04:08: The dark side of AI: deepfakes, AI pornography, and sextortion
08:29: The "not my kid" fallacy and how AI dehumanizes people
09:34: The concept of AI as a modern idol
12:05: How AI shapes worldviews for Christians
13:13: Balancing life in a tech-saturated world
29:13: Learn more from Doug Smith
Send us your feedback! Try Covenant Eyes for FREE today! Use Promo Code: FreePodcast
© 2024 Covenant Eyes, All Rights Reserved
The managed services provider (MSP) market is experiencing a paradoxical trend where revenue is increasing while the number of providers is decreasing. According to Canalys data, global managed services revenue surpassed half a trillion dollars in 2024, reflecting a year-over-year growth of 9.7%. However, the number of channel partners has slightly declined by 0.6%, with large MSPs rapidly acquiring smaller ones. This consolidation trend has led to a significant shift in the market dynamics, where smaller MSPs struggle to compete against larger firms that possess superior resources and pricing power.
To survive in this competitive landscape, smaller MSPs must adopt focused strategies, targeting specific customer segments or industries. By doing so, they can achieve higher profit margins, with specialized MSPs reporting EBITDA percentages between 15% and 30%, compared to just 7% for those lacking focus. The article emphasizes that smaller MSPs have several options: they can sell to larger firms, acquire smaller peers, focus on niche markets, or leverage partnerships to remain competitive. The reality is that the middle tier of MSPs is rapidly disappearing, and those who attempt to serve everyone may find themselves at a disadvantage.
In addition to the MSP market dynamics, the podcast discusses recent legislative developments, including Michigan's new laws addressing deepfakes, which make it illegal to create AI-generated sexual imagery without consent. This reflects a growing trend across the U.S. to combat nonconsensual abuse imagery, with most states now having similar laws. Furthermore, the U.S. Treasury has imposed sanctions on individuals and entities linked to North Korea's illicit IT worker schemes, highlighting the security risks posed by fraudulent practices in the tech industry.
The episode also covers the latest advancements in AI-powered security solutions from various vendors, including Thrive, Addigy, Arctic Wolf, and Acronis. These companies are rolling out new services and products designed to enhance security operations and protect data. The overarching theme is that as technology evolves, the risks associated with it are also increasing, and IT service providers must adapt to these changes by offering value-added services that help clients navigate the complexities of compliance and security in a rapidly changing environment.
Four things to know today:
00:00 MSP Market Expands to $500B as Provider Count Shrinks Amid Rapid Consolidation
04:10 From Abuse Imagery to Supply Chain Threats, Regulation Struggles to Keep Up With Emerging Risks
07:45 AI Everywhere: Thrive, Security Vendors, OpenAI, and Microsoft Redefine Service Provider Playbook
12:39 D&H and Nutanix Growth Signals Services-Led Future as Distributors and Vendors Push Into MSP Territory
This is the Business of Tech. Supported by: https://scalepad.com/dave/ https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights?
Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
We've celebrated the promise of AI: the speed, the scale, the potential. But with every new advance comes a growing unease. In this episode, we explore the cognitive dissonance consumers and healthcare leaders are feeling as AI tools outpace ethics, regulation, and even our own understanding. Deepfakes, patient privacy concerns, and the emotional toll of synthetic content are all adding to the discomfort and raising urgent questions. We also dive into bold brand reactions to Taylor Swift's engagement and a heartfelt farewell to co-host Desirée Duncan.Subscribe to The No Normal Rewind, our newsletter featuring a mashup of the boldest ideas, sharpest takes, and most rewind-worthy moments from our podcast — right here.
Send us a text
Show Notes
Welcome to another episode of The Wireless Way! Host Chris Whitaker sits down with Paul Velusamy, founder and CEO of VerifiNow, to discuss the future of identity verification in a digital world. Paul shares his journey from tech and marketing leadership to building a platform that secures critical digital touchpoints across healthcare, banking, and customer service.
In this episode:
How the pandemic accelerated the need for secure remote identity verification
The rise of AI-driven fraud and the importance of liveness detection
Real-world examples from higher education, banking, and e-commerce
Balancing security with user experience
Mental health awareness and the value of open conversations in tech
Actionable advice for organizations: “Verify first, trust later”
Resources & Mentions:
VerifiNow's Website
More on Paul
Cell Phones for Soldiers: repurpose old mobile devices for veterans
The Wireless Way website: thewirelessway.net
Previous episode with Brooke, mental health nurse practitioner
Connect:
Contact Chris or Paul for more information or support
Recycle old mobile devices to help veterans in need
Always verify!
Support the show
Check out my website https://thewirelessway.net/ and use the contact button to send requests and feedback.
ALSO: Cities face tough budget decisions as property tax changes take hold, and Powerball jackpot jumps to $1.7 billion. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Today on the podcast we have Synthesia Co-founder and COO Steffen Tjerrild. Synthesia is an AI company that allows you to create professional videos with just a few lines of text. It's now used by over 1 million customers, including +80% of Fortune 100 companies. Synthesia is a UK unicorn, recently valued at $2 billion. At the heart of Synthesia's exponential growth is Steffen. As the company's COO, Steffen has seen the company through its inception to the latest funding round. Today we talk about text-to-video, product market fit, the future of deep fakes and how quickly you will be able to create your own Netflix shows... Building a purpose driven company? Read more about Giant Ventures at www.Giant.vc. Music credits: Bubble King written and produced by Cameron McLain and Stevan Cablayan aka Vector_XING. Please note: The content of this podcast is for informational and entertainment purposes only. It should not be considered financial, legal, or investment advice. Always consult a licensed professional before making any investment decisions.
Today's Headlines: Trump finally reappeared yesterday—45 minutes late to his big Oval Office presser—where he confirmed Space Force HQ is moving from Colorado Springs to Huntsville, Alabama (something we already knew thanks to a DoD slip-up). Asked about his mysterious five-day absence and rumors he'd died, Trump brushed it off, then pivoted to announce he's sending National Guard troops into Chicago, prompting Gov. JB Pritzker to accuse the feds of already staging units nearby. At the same time, Sec. of State Marco Rubio announced a U.S. strike on a Venezuela-linked drug boat in the Caribbean that killed 11, with Trump posting an explosion video on Truth Social as his version of “just say no.” Meanwhile, Trump's two-week “deadline” for peace in Ukraine passed without progress—Putin was too busy in China attending Xi Jinping's WWII anniversary parade with North Korea's Kim Jong Un and his daughter/successor Kim Ju Ae. Back in D.C., Congress returned from recess facing another looming shutdown and the Epstein files circus: six survivors testified on Capitol Hill as lawmakers released 30,000 pages of mostly redundant documents, while still pressing DOJ for the unredacted batch. Elsewhere, a federal judge ruled Google abused its search monopoly but stopped short of breaking up Chrome—news that sent its stock soaring. And Denmark just made history by becoming the first European country to grant citizens copyright control over their likeness, including AI-generated versions of themselves, with protections lasting 50 years after death.
Resources/Articles mentioned in this episode:
NBC News: Trump administration live updates: Congress faces shutdown fight, Epstein files vote
ABC Chicago: Chicago braces for potential surge in ICE operations, which could begin Tuesday
AP News: Trump says US strike on vessel in Caribbean targeted Venezuela's Tren de Aragua gang, killed 11
CNBC: Plane carrying EU leader hit with suspected Russian GPS interference
Reuters: Xi hosts ‘old friend' Putin, Kim ahead of military parade in challenge to West
ABC News: Congress returns from recess as government shutdown deadline looms, Epstein files dominate the House
NBC News: Jeffrey Epstein accusers urge Trump to release all the case files and rule out a Ghislaine Maxwell pardon
NYT: Google Must Share Search Data With Rivals, Judge Rules
My Privacy: Denmark Makes History: Your Face and Voice Are Now Your Intellectual Property
Morning Announcements is produced by Sami Sage and edited by Grace Hernandez-Johnson
Learn more about your ad choices. Visit megaphone.fm/adchoices
ICYMI: Hour Three of ‘Later, with Mo'Kelly' Presents – Chris Merrill filling in ‘Later, for Mo'Kelly' with a look at the bevy of professional athletes turning to OnlyFans to make a living in between training AND the sad story of a California woman who lost her home after falling victim to an AI deepfake scam - on KFI AM 640… Live everywhere on the iHeartRadio app & YouTube @MrMoKelly
AI deepfakes of soap opera star Steve Burton help swindlers net $81k from a vulnerable woman, as well as her home! A teen's tragic death is being blamed on ChatGPT's "suicide coaching" in a bombshell lawsuit filed by his family. Plus, a husband is given a heavy sentence for letting his wife die in the tub because "she was too fat" to lift. Jennifer Gould reports. See omnystudio.com/listener for privacy information.
Chris Cuomo responds to a new round of listener calls and viewer comments, including allegations of Donald Trump's ties to Jeffrey Epstein, questions about sanctions on Russia, and concerns from veterans about the state of the VA. He also addresses faith-driven messages, criticism over an AOC deepfake post, and comments on his interviews with Tucker Carlson, Benny Johnson, and Matt Taibbi. Cuomo reflects on charges of bias, the misuse of the word “woke,” and why civil dialogue matters in today's politics. He also revisits his longstanding interest in UFO transparency, weighing in on government secrecy and the push for accountability. Follow and subscribe to The Chris Cuomo Project on Apple Podcasts, Spotify, and YouTube for new episodes every Tuesday and Thursday: https://linktr.ee/cuomoproject Join Chris Ad-Free On Substack: http://thechriscuomoproject.substack.com Support our sponsors: http://www.kalshi.com?utm_source=chriscuomo Go to https://surfshark.com/cuomo and use code cuomo at checkout to get 4 extra months of Surfshark VPN! Learn more about your ad choices. Visit podcastchoices.com/adchoices
German journalist Patrizia Schlosser is not afraid of topics that others prefer to avoid: she has tracked down former members of the far-left terrorist group the Red Army Faction, mapped how the neo-Nazi scene in Germany operates, and investigated the online community of men who film women without their knowledge in order to share the footage on porn platforms.
When she began investigating abuse and so-called revenge porn on xHamster, Patrizia Schlosser came across a global network that not only publishes footage of unsuspecting women but also makes enormous profits from it. To reach the people who systematically film their victims, she decided to upload her own "material."
Patrizia speaks openly about what it means to be a journalist in a patriarchal world and how hard it is to approach victims, but also about what it was like when her own face appeared in deepfake pornography. About shame, anger, and the search for the founders of the biggest porn platforms, whose profits disappear into offshore structures.
A powerful, uncomfortable, but important episode about the abuse of power, and about what to do when you yourself appear in the very material you are professionally investigating. An episode about an investigation that also touches on what was meant to stay private.
Protagonista is a podcast with Pavla Holcová. Czech voices by Petr Gojda and Jiří Slavičinský.
Subscribe to the Protagonista newsletter here (https://investigace.ecomailapp.cz/public/form/135-944c4287a69f4094fc099a7cf7add962) and be among the first to hear about new episodes and the launch of the English version.
The Protagonista podcast series was created in 2025 as a co-production of the Czech newsroom investigace.cz, the international investigative journalism network OCCRP, the Danish company Dark Riviera, and the French film production and distribution company Sciapode. The Protagonista series is part of the War Room Content project supported by the European Union.
August 28, 2025 ~ Todd Flood, managing partner of Flood Law, talks with Chris, Lloyd, and Jamie about a new Michigan law criminalizing the creation and distribution of "deep fakes."
Utah Attorney General Derek Brown is leading the push to get search engines and payment platforms to do more to fight deepfake pornography. Greg and Holly discuss.
This episode features an interview with Gaurav Misra, CEO, Captions, an AI video-generation company that allows you to create and edit talking videos with AI. Gaurav dives into the practical applications and future implications of AI in video, and how these tools can enhance marketing efforts for businesses of all sizes.
Key Takeaways:
Video capabilities are improving rapidly, and are now at the point where spinning up an AI-generated version of you speaking is likely better quality than anything you could deliver to camera.
These capabilities allow marketers to spin up and test content very quickly with far less expense than in the past.
How people will react to content moving forward, when it will become less and less clear what is real, remains to be seen.
Quote: “Spun up a video and it's like me wearing like a suit… I'm delivering this emotional message, but I'm delivering it so fluently with all these words that I would probably never use actually… and I'm looking at this like, shit, I couldn't be like this on camera. This is such a good delivery, such a good presentation.. It just isn't actually physically possible. And I think we are at that point where I can look at that and be like, wow, I just couldn't do this. It's better than what I could do.”
Episode Timestamps:
*(03:13) Challenges and Opportunities in Video Content
*(08:01) The Future of AI Tools in Creative Work
*(24:11) Innovations in Video Generation
*(28:28) Real-World Applications and Feedback
*(35:27) The Future of Deep Fakes and Content Authenticity
Sponsor:
Pipeline Visionaries is brought to you by Qualified.com. Qualified helps you turn your website into a pipeline generation machine with PipelineAI. Engage and convert your most valuable website visitors with live chat, chatbots, meeting scheduling, intent data, and Piper, your AI SDR. Visit Qualified.com to learn more.
Links:
Connect with Ian on LinkedIn
Connect with Gaurav on LinkedIn
Learn more about Captions
Learn more about Caspian Studios
Forget the Eiffel Tower, kids—we're climbing the recruitment rollercoaster instead. Upwork's shopping spree in Holland (Bupty? Buptie? Bupkis?), Denmark's going full Face/Off to keep Nic Cage off Viggo's jawline, and the UK is suddenly allergic to Fridays. Joel's out dropping Cole at college, so Chad is joined by Belgium's royal pain Lieven and Scotland's deep-fried-pizza poet Stephen McGrath. Loud Americans, entitled tourists, and the four-day work week—this one's got more punch than a Glasgow nightclub at 2 a.m.
In this episode of CISO Tradecraft, host G Mark Hardy engages in an insightful conversation with Dave Lewis, Global Advisory CISO from 1Password, about AI governance and its importance in cybersecurity. They discuss AI policy and its implications, the evolving nature of AI and cybersecurity, and the critical need for governance frameworks to manage AI safely and securely. The discussion delves into the visibility challenges, shadow AI, the role of credentials, and the importance of maintaining fundamental security practices amidst rapid technological advancements. They also touch on the potential risks associated with AI, the misconceptions about its impact on jobs, and the need for a balanced approach to leveraging AI in a beneficial manner while safeguarding against its threats. This episode provides valuable guidance for cybersecurity professionals and organizations navigating the complexities of AI governance. Chapters 00:00 Introduction to AI Governance 00:30 Guest Introduction: Dave Lewis 00:49 The Importance of AI Governance 01:42 Challenges in AI Implementation 03:20 AI in the Modern Enterprise 03:49 Shadow AI and Security Concerns 04:49 AI's Impact on Jobs and Industry 05:27 The Gartner Hype Cycle and AI 05:43 AI's Influence on the Stock Market 06:14 Historical Context of AI 06:32 AI and Credential Security 08:29 The Role of Governance in AI 12:47 The Future of AI and Security 18:36 Governance and Policy Recommendations 19:26 AI Governance and Ethical Concerns 20:01 AI Self-Preservation and Human Safety 20:18 Uncontrollable AI Applications 21:17 Vectors of AI Trouble 21:58 AI Hallucinations and Data Security 22:53 AI Vulnerabilities and Exploits 26:29 Deepfakes and AI Misuse 27:33 Historical Cybersecurity Incidents 29:04 Future of AI and Job Security 33:47 Managing AI Identities and Credentials 34:21 Conclusion and Final Thoughts
What happens when your next hire isn't who they claim to be? In this eye-opening episode of The Audit, we dive deep into the alarming world of AI-powered hiring fraud with Justin Marciano and Paul Vann from Validia. From North Korean operatives using deepfakes to infiltrate Fortune 500 companies to proxy interviews becoming the new normal, this conversation exposes the security crisis hiding in plain sight.
Key Topics Covered:
North Korean operatives stealing US salaries to fund nuclear programs
How Figma had to re-verify their entire workforce after infiltration
Live demonstrations of deepfake technology (Pickle AI, DeepLiveCam)
Why 80-90% of engineers believe interview cheating is rampant
Validia's "Truly" tool vs. Cluely's AI interview assistance
The future of identity verification in remote work
Why behavioral biometrics might be our last defense
This isn't just about hiring fraud—it's about the fundamental breakdown of digital trust in an AI-first world. Whether you're a CISO, talent leader, or anyone involved in remote hiring, this episode reveals threats you didn't know existed and solutions you need to implement today. Don't let your next hire be your biggest security breach. Subscribe for more cutting-edge cybersecurity insights that you won't find anywhere else. #deepfakes #cybersecurity #hiring #AI #infosec #northkorea #fraud #identity #remote #validia
SBS Finance Editor Ricardo Gonçalves speaks with George Boubouras from K2 Asset Management about the day's sharemarket action, including why investors are increasingly confident of a September interest rate cut in the US. Plus, Hannah Kwon speaks with Jeannie Paterson from the Melbourne Law School to find out more about the way AI is being used to scam people.
In this week's Let's Talk About This, Father McTeigue dives into the new fad of creating chatbots and videos of the dead, talking to AI versions of our ancestors, and what this means for our spiritual lives. He finishes with Weekend Readiness.
Show Notes
AI Resurrection: Grief and Digital Life After Death
Deepfakes of your dead loved ones are a booming Chinese business | MIT
'I love you robo-dad': Meet a family using AI to preserve loved ones after death
Dying man spends final weeks creating AI version of himself to keep his wife company
World's first robot able to give birth to human baby
Twenty-first Sunday in Ordinary Time | USCCB
These Trees Survived Hiroshima: Group Plants Their Seeds Worldwide to Preserve Their Memory
iCatholic Mobile
The Station of the Cross Merchandise - Use Coupon Code 14STATIONS for 10% off | Catholic to the Max
Read Fr. McTeigue's Written Works!
"Let's Take A Closer Look" with Fr. Robert McTeigue, S.J. | Full Series Playlist
Listen to Fr. McTeigue's Preaching! | Herald of the Gospel Sermons Podcast on Spotify
Visit Fr. McTeigue's Website | Herald of the Gospel
Questions? Comments? Feedback? Ask Father!
This week's cybersecurity updates cover three critical stories: Workday discloses a data breach connected to ongoing Salesforce compromises by the Shiny Hunters group, CEO impersonation scams using deepfake technology surge past $200 million in Q1 losses, and transcription service Otter AI faces a class action lawsuit over alleged mishandling of sensitive meeting data. Drex emphasizes the importance of security awareness training, multi-factor authentication, and establishing "trust but verify" cultures that protect employees who take extra verification steps.
Remember, Stay a Little Paranoid
X: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
Recently, a false story and AI-created image of Eminem and a student he promised to let rap with him went viral. Now, this story wasn't real, but it was shared all over the "local" internet, with other variations including other artists. It really showed how many people, when emotionally motivated, looked right past it. This isn't the first time (and it won't be the last) that AI, deepfakes, and false information have spread like wildfire. This got us thinking: maybe we should talk about it? And how do we navigate this new era? AI-generated stuff is here to stay, but is AI overall a bubble? Nick Mattar, an expert in how people interact with digital media and AI, joins us to discuss. Nick is part-time faculty at the Mike Ilitch School of Business at Wayne State and the founder of Marketing 1080. More: https://marketing1080.io/ On LinkedIn: https://www.linkedin.com/in/nick-mattar/ Feedback as always - dailydetroit -at- gmail -dot- com or leave a voicemail 313-789-3211. Follow Daily Detroit on Apple Podcasts: https://podcasts.apple.com/us/podcast/daily-detroit/id1220563942 Or sign up for our newsletter: https://www.dailydetroit.com/newsletter/
To celebrate International Youth Day (August 12), this special compilation episode of AI and the Future of Work brings together inspiring voices with wisdom for both young people starting out and the leaders, parents, and mentors guiding them.In this episode, we revisit key moments from four remarkable guests who share timeless lessons on navigating change, finding meaning in work, embracing vulnerability, and developing the human-centered skills that will matter most in the future.Featuring Guests:
Indiana State Rep. Craig Haggard, whose wife is allegedly the victim of an AI topless deepfake video, joins Kendall and Casey to discuss the controversy.See omnystudio.com/listener for privacy information.