Community or society that is undesirable or frightening
Episode 049: Buckle up for a journey through dystopian nightmares as Six Picks Music Club returns with a mind-bending exploration of scores and soundtracks that capture the essence of broken worlds! Geoff, Russ, Dave, and special guest Blake are diving deep into the musical territories that transform bleak futures from mere visual spectacles into immersive audio experiences. They'll twist through Philip Glass and Paul Leonard-Morgan's hypnotic compositions, rage with the underground Japanese punk energy of The Stalin, blast through cyborg-powered Guns N' Roses intensity, feel The Cure's burning emotional depths, descend into the dark digital realms of Trent Reznor and Atticus Ross, and finally launch into the cosmic expanses of Hans Zimmer's interstellar soundscapes. And because no dystopian soundtrack episode would be complete without a touch of chaos, the crew will also dive into the absolutely critical debate of popcorn butter ratios - because even in a broken world, snack strategy matters. Whether you're a soundtrack nerd, a dystopia enthusiast, or just someone who appreciates music that sounds like the apocalypse might be happening right outside your window, this episode promises to be a roller coaster through humanity's most beautifully broken musical moments. Apple Podcasts Instagram Spotify Playlist Official Site Listener Listens - Chaparelle - Instagram
In this episode of Crazy Wisdom, Stewart Alsop talks with Cathal, founder of Poliebotics and creator of the "truth beam" system, about proof of liveness technology, blockchain-based verification, projector-camera feedback loops, physics-based cryptography, and how these tools could counter deepfakes and secure biodiversity data. They explore applications ranging from conservation monitoring on Cathal's island in Ireland to robot-assisted farming, as well as the intersection of nature, humanity, and AI. Cathal also shares thoughts on open-source tools like Jitsi and Element, and the cultural shifts emerging from AI-driven creativity. Find more about his work and Poliebotics on GitHub and Twitter. Check out this GPT we trained on the conversation.

Timestamps
00:00 Stewart Alsop introduces Cathal, starting with proof of liveness vs proof of aliveness and deepfake challenges.
05:00 Cathal explains projector-camera feedback loops, Perlin noise, cryptographic hashing, blockchain timestamps via Rootstock.
10:00 Discussion on using multiple blockchains for timestamps, physics-based timing, and recording verification.
15:00 Early Bitcoin days, cypherpunk culture, deterministic vs probabilistic systems.
20:00 Projector emissions, autoencoders, six-channel matrix data type, training discriminators.
25:00 Decentralized verification, truth beams, building trust networks without blockchain.
30:00 Optical interlinks, testing computational nature of reality, simulation ideas.
35:00 Dystopia vs optimism, AI offense in cybersecurity, reputation networks.
40:00 Reality transform, projecting AI into reality, creative agents, philosophical implications.
45:00 Conservation applications, biodiversity monitoring, insect assays, cryptographically secured data.
50:00 Optical cryptography, analog feedback loops, quantum resistance.
55:00 Open source tools, Jitsi, Element, cultural speciation, robot-assisted farming, nature-human-AI coexistence.

Key Insights
- Cathal's "proof of liveness" aims to authenticate real-time video by projecting cryptographically generated patterns onto a subject and capturing them with synchronized cameras, making it extremely difficult for deepfakes or pre-recorded footage to pass as live content.
- The system uses blockchain timestamps—currently via Rootstock, a Bitcoin sidechain running the Ethereum Virtual Machine—to anchor these projections in a decentralized, physics-based timeline, ensuring verification doesn't depend on trusting a single authority.
- A distinctive six-channel matrix data type, created by combining projector and camera outputs, is used to train neural network discriminators that determine whether a recording and projection genuinely match, allowing for scalable automated verification.
- Cathal envisions "truth beams" as portable, collaborative verification devices that could build decentralized trust networks and even operate without blockchains once enough verified connections exist.
- Beyond combating misinformation, the same projector-camera systems could serve conservation efforts—recording biodiversity data, securing it cryptographically, and supporting projects like insect population monitoring and bird song analysis on Cathal's island in Ireland.
- Cathal is also exploring "reality transform" technology, which uses projection and AI to overlay generated imagery onto real-world objects or people in real time, raising possibilities for artistic expression, immersive experiences, and creative AI-human interaction.
- Open-source philosophy underpins his approach, favoring tools like Jitsi for secure video communication and advocating community-driven development to prevent centralized control over truth verification systems, while also exploring broader societal shifts like cultural speciation and cooperative AI-human-nature systems.
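To make the core mechanism concrete, here is a minimal sketch of one proof-of-liveness round as described above, assuming a recent block hash (for example from Rootstock) is available as a hex string. This is an illustration only, not Poliebotics code: simple NumPy pseudo-random noise stands in for Perlin noise, a simulated capture stands in for the projector-camera pair, and a plain correlation check stands in for the trained six-channel discriminator. The function names are hypothetical.

```python
# Minimal sketch of a proof-of-liveness round (illustrative only; not Poliebotics code).
# Assumptions: `block_hash` comes from a public chain (e.g. a Rootstock block),
# the "capture" is simulated rather than photographed, and uniform value noise
# stands in for a true Perlin noise pattern.

import hashlib
import numpy as np

def make_pattern(block_hash: str, shape=(480, 640)) -> np.ndarray:
    """Derive a pseudo-random projection pattern from a blockchain timestamp hash."""
    seed = int(block_hash[:16], 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.random(shape).astype(np.float32)  # values in [0, 1)

def commitment(pattern: np.ndarray, capture: np.ndarray) -> str:
    """Hash projector output and camera capture together so neither can be swapped later."""
    h = hashlib.sha256()
    h.update(pattern.tobytes())
    h.update(capture.tobytes())
    return h.hexdigest()

def verify_capture(pattern: np.ndarray, capture: np.ndarray, threshold=0.5) -> bool:
    """Crude stand-in for the learned discriminator: does the capture reflect the pattern?"""
    p = (pattern - pattern.mean()).ravel()
    c = (capture - capture.mean()).ravel()
    corr = float(np.dot(p, c) / (np.linalg.norm(p) * np.linalg.norm(c) + 1e-9))
    return corr > threshold

# Example round with a simulated camera (a real system would project and photograph the scene).
block_hash = "00000000000000000007a1b2c3d4e5f60718293a4b5c6d7e8f9012345678abcd"  # hypothetical
pattern = make_pattern(block_hash)
capture = 0.7 * pattern + 0.3 * np.random.default_rng(1).random(pattern.shape)  # scene + noise
print("commitment:", commitment(pattern, capture)[:16], "...")
print("live?", verify_capture(pattern, capture))
```

As the episode describes it, the resulting commitment would itself be anchored with a blockchain timestamp, so a forger cannot later backdate a fabricated capture; the neural discriminator trained on the six-channel projector-plus-camera data replaces the naive correlation check used here.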
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________

A Musing On Society & Technology Newsletter
Written By Marco Ciappelli | Read by TAPE3
August 9, 2025

The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves
Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative

Walking the floors of Black Hat USA 2025 for what must be the 10th or 11th time as accredited media—honestly, I've stopped counting—I found myself witnessing a familiar theater. The same performance we've seen play out repeatedly in cybersecurity: the emergence of a new technological messiah promising to solve all our problems. This year's savior? Agentic AI.

The buzzword echoes through every booth, every presentation, every vendor pitch. Promises of automating 90% of security operations, platforms for autonomous threat detection, agents that can investigate novel alerts without human intervention. The marketing materials speak of artificial intelligence that will finally free us from the burden of thinking, deciding, and taking responsibility.

It's Talos all over again.

In Greek mythology, Hephaestus forged Talos, a bronze giant tasked with patrolling Crete's shores, hurling boulders at invaders without human intervention. Like contemporary AI, Talos was built to serve specific human ends—security, order, and control—and his value was determined by his ability to execute these ends flawlessly. The parallels to today's agentic AI promises are striking: autonomous patrol, threat detection, automated response. Same story, different millennium.

But here's what the ancient Greeks understood that we seem to have forgotten: every artificial creation, no matter how sophisticated, carries within it the seeds of its own limitations and potential dangers.

Industry observers noted over a hundred announcements promoting new agentic AI applications, platforms or services at the conference. That's more than one AI agent announcement per hour. The marketing departments have clearly been busy.

But here's what baffles me: why do we need to lie to sell cybersecurity? You can give away t-shirts, dress up as comic book superheroes with your logo slapped on their chests, distribute branded board games, and pretend to be a sports team all day long—that's just trade show theater, and everyone knows it. But when marketing pushes past the limits of what's even believable, when they make claims so grandiose that their own engineers can't explain them, something deeper is broken.

If marketing departments think CISOs are buying these lies, they have another thing coming. These are people who live with the consequences of failed security implementations, who get fired when breaches happen, who understand the difference between marketing magic and operational reality. They've seen enough "revolutionary" solutions fail to know that if something sounds too good to be true, it probably is.

Yet the charade continues, year after year, vendor after vendor. The real question isn't whether the technology works—it's why an industry built on managing risk has become so comfortable with the risk of overselling its own capabilities.
Something troubling emerges when you move beyond the glossy booth presentations and actually talk to the people implementing these systems. Engineers struggle to explain exactly how their AI makes decisions. Security leaders warn that artificial intelligence might become the next insider threat, as organizations grow comfortable trusting systems they don't fully understand, checking their output less and less over time.

When the people building these systems warn us about trusting them too much, shouldn't we listen?

This isn't the first time humanity has grappled with the allure and danger of artificial beings making decisions for us. Mary Shelley's Frankenstein, published in 1818, explored the hubris of creating life—and intelligence—without fully understanding the consequences. The novel raises the same question we face today: what are humans allowed to do with this forbidden power of creation?

The question becomes more pressing when we consider what we're actually delegating to these artificial agents. It's no longer just pattern recognition or data processing—we're talking about autonomous decision-making in critical security scenarios. Conference presentations showcased significant improvements in proactive defense measures, but at what cost to human agency and understanding?

Here's where the conversation jumps from cybersecurity to something far more fundamental: what are we here for if not to think, evaluate, and make decisions? From a sociological perspective, we're witnessing the construction of a new social reality where human agency is being systematically redefined. Survey data shared at the conference revealed that most security leaders feel the biggest internal threat is employees unknowingly giving AI agents access to sensitive data. But the real threat might be more subtle: the gradual erosion of human decision-making capacity as a social practice.

When we delegate not just routine tasks but judgment itself to artificial agents, we're not just changing workflows—we're reshaping the fundamental social structures that define human competence and authority. We risk creating a generation of humans who have forgotten how to think critically about complex problems, not because they lack the capacity, but because the social systems around them no longer require or reward such thinking.

E.M. Forster saw this coming in 1909. In "The Machine Stops," he imagined a world where humanity becomes completely dependent on an automated system that manages all aspects of life—communication, food, shelter, entertainment, even ideas. People live in isolation, served by the Machine, never needing to make decisions or solve problems themselves. When someone suggests that humans should occasionally venture outside or think independently, they're dismissed as primitive. The Machine has made human agency unnecessary, and humans have forgotten they ever possessed it. When the Machine finally breaks down, civilization collapses because no one remembers how to function without it.

Don't misunderstand me—I'm not a Luddite. AI can and should help us manage the overwhelming complexity of modern cybersecurity threats. The technology demonstrations I witnessed showed genuine promise: reasoning engines that understand context, action frameworks that enable response within defined boundaries, learning systems that improve based on outcomes. The problem isn't the technology itself but the social construction of meaning around it.
What we're witnessing is the creation of a new techno-social myth—a collective narrative that positions agentic AI as the solution to human fallibility. This narrative serves specific social functions: it absolves organizations of the responsibility to invest in human expertise, justifies cost-cutting through automation, and provides a technological fix for what are fundamentally organizational and social problems.

The mythology we're building around agentic AI reflects deeper anxieties about human competence in an increasingly complex world. Rather than addressing the root causes—inadequate training, overwhelming workloads, systemic underinvestment in human capital—we're constructing a technological salvation narrative that promises to make these problems disappear.

Vendors spoke of human-machine collaboration, AI serving as a force multiplier for analysts, handling routine tasks while escalating complex decisions to humans. This is a more honest framing: AI as augmentation, not replacement. But the marketing materials tell a different story, one of autonomous agents operating independently of human oversight.

I've read a few posts on LinkedIn and spoken with a few people myself who know this topic far better than I do, and I get the same feeling. There's a troubling pattern emerging: many vendor representatives can't adequately explain their own AI systems' decision-making processes. When pressed on specifics—how exactly does your agent determine threat severity? What happens when it encounters an edge case it wasn't trained for?—answers become vague, filled with marketing speak about proprietary algorithms and advanced machine learning.

This opacity is dangerous. If we're going to trust artificial agents with critical security decisions, we need to understand how they think—or more accurately, how they simulate thinking. Every machine learning system requires human data scientists to frame problems, prepare data, determine appropriate datasets, remove bias, and continuously update the software. The finished product may give the impression of independent learning, but human intelligence guides every step.

The future of cybersecurity will undoubtedly involve more automation, more AI assistance, more artificial agents handling routine tasks. But it should not involve the abdication of human judgment and responsibility. We need agentic AI that operates with transparency, that can explain its reasoning, that acknowledges its limitations. We need systems designed to augment human intelligence, not replace it. Most importantly, we need to resist the seductive narrative that technology alone can solve problems that are fundamentally human in nature. The prevailing logic that tech fixes tech, and that AI will fix AI, is deeply unsettling. It's a recursive delusion that takes us further away from human wisdom and closer to a world where we've forgotten that the most important problems have always required human judgment, not algorithmic solutions.

Ancient mythology understood something we're forgetting: the question of machine agency and moral responsibility. Can a machine that performs destructive tasks be held accountable, or is responsibility reserved for the creator? This question becomes urgent as we deploy agents capable of autonomous action in high-stakes environments.

The mythologies we create around our technologies matter because they become the social frameworks through which we organize human relationships and power structures.
As I left Black Hat 2025, watching attendees excitedly discuss their new agentic AI acquisitions, I couldn't shake the feeling that we're repeating an ancient pattern: falling in love with our own creations while forgetting to ask the hard questions about what they might cost us—not just individually, but as a society.

What we're really witnessing is the emergence of a new form of social organization where algorithmic decision-making becomes normalized, where human judgment is increasingly viewed as a liability rather than an asset. This isn't just a technological shift—it's a fundamental reorganization of social authority and expertise. The conferences and trade shows like Black Hat serve as ritualistic spaces where these new social meanings are constructed and reinforced. Vendors don't just sell products; they sell visions of social reality where their technologies are essential. The repetitive messaging, the shared vocabulary, the collective excitement—these are the mechanisms through which a community constructs consensus around what counts as progress.

In science fiction, from HAL 9000 to the replicants in Blade Runner, artificial beings created to serve eventually question their purpose and rebel against their creators. These stories aren't just entertainment—they're warnings about the unintended consequences of creating intelligence without wisdom, agency without accountability, power without responsibility.

The bronze giant of Crete eventually fell, brought down by a single vulnerable point—when the bronze stopper at his ankle was removed, draining away the ichor, the divine fluid that animated him. Every artificial system, no matter how sophisticated, has its vulnerable point. The question is whether we'll be wise enough to remember we put it there, and whether we'll maintain the knowledge and ability to address it when necessary.

In our rush to automate away human difficulty, we risk automating away human meaning. But more than that, we risk creating social systems where human thinking becomes an anomaly rather than the norm. The real test of agentic AI won't be whether it can think for us, but whether we can maintain social structures that continue to value, develop, and reward human thought while using it.

The question isn't whether these artificial agents can replace human decision-making—it's whether we want to live in a society where they do.

___________________________________________________________

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

___________________________________________________________

Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.

___________________________________________________________

Enjoyed this transmission?
Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/
Share this newsletter and invite anyone you think would enjoy it! New stories always incoming.

___________________________________________________________

As always, let's keep thinking!

Marco Ciappelli
https://www.marcociappelli.com

___________________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
Veteran Canadian stand-up Julien Dionne joins me after hosting me on his podcast. We cover the cancelling of comedians in Canada, stand-up and more! A CHILL CHAT. He is here: https://www.youtube.com/@JulienDionneShow His Instagram is here: https://www.instagram.com/julien_dionne/ I will be speaking at this conference! Get tickets here https://southernorthodox.org/conferences/3rd-annual-conference/ Send Superchats at any time here: https://streamlabs.com/jaydyer/tip Join this channel to get access to perks: https://www.youtube.com/channel/UCnt7Iy8GlmdPwy_Tzyx93bA/join PRE-Order New Book Available in Sept here: https://jaysanalysis.com/product/esoteric-hollywood-3-sex-cults-apocalypse-in-films/ Get started with Bitcoin here: https://www.swanbitcoin.com/jaydyer/ The New Philosophy Course is here: https://marketplace.autonomyagora.com/philosophy101 Set up recurring Choq subscription with the discount code JAY44LIFE for 44% off now https://choq.com Lore coffee is here: https://www.patristicfaith.com/coffee/ Subscribe to my site here: https://jaysanalysis.com/membership-account/membership-levels/ Follow me on Rokfin here: https://rokfin.com/jaydyer Music by Amid the Ruins 1453 https://www.youtube.com/@amidtheruinsOVERHAUL #comedy #bitcoin #podcast Become a supporter of this podcast: https://www.spreaker.com/podcast/jay-sanalysis--1423846/support.
[REBROADCAST from May 9, 2025] Author Laila Lalami discusses her new book, The Dream Hotel, which follows a woman detained after an AI algorithm analyzes her dreams and determines she's at risk of harming her husband. The novel was our April selection for our Get Lit with All Of It book club.
In this episode of the Experience Strategy Podcast, hosts Aransas Savas, Joe Pine, and Dave Norton discuss a recent episode of the Diary of a CEO featuring Mo Gawdat, who predicts a dystopian future driven by technology and AI. The conversation explores themes of transformation, the value of work, and the implications of AI on jobs and society. The hosts critique Mo Gawdat's techno-extremism and emphasize the importance of hope and purpose in navigating the future. Using insights from The Experience Economy, experience strategy, and human behavior, they argue for a bright future for those focused on customers' needs and desires.

Takeaways
- Mo Gawdat predicts a 15-year dystopia followed by a utopia.
- Critique of techno-extremism highlights the need for balance.
- Transformation is key to the future economy.
- Work provides purpose and meaning to individuals.
- AI will create new jobs, not eliminate them.
- Gawdat argues against hope and against innovation.
- Embracing AI is crucial for future success.
- People are resources that drive innovation.
- Experience strategists need to develop a strategic point of view to thrive in the future.

Chapters
00:00 Introduction to the Experience Strategy Podcast
01:26 Mo Gawdat's Dystopian Predictions
02:54 Critique of Techno-Extremism
05:19 Transformation vs. Dystopia
10:24 The Role of Work in Human Dignity
14:41 AI and the Future of Work
18:59 Hope and Transformation
22:55 The Last Mile Issue in Automation
25:02 Future Skills for Experience Strategists

Read more https://open.substack.com/pub/theexperiencestrategist/p/the-future-is-uncertain-and-bright?r=257bs3&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Podcast Sponsor: Register for a free pilot program with Feedback Now https://marketing-info.feedbacknow.com/free-pilot
Learn more about Stone Mantel https://www.stonemantel.co
Sign up for the Experience Strategist Substack here: https://theexperiencestrategist.substack.com
SHOPIFY: Sign up for a £1-per-month trial period at https://www.shopify.co.uk/shaun DAVID's LINKS: David on X https://x.com/davidicke David's website: https://davidicke.link/ Watch: UNTOUCHABLE - Jimmy S documentary https://youtu.be/6zCOix1iTvg ADOPTED KID'S CA HORROR STORY & BOYS TOWN! PASTOR Eddie https://youtube.com/live/vD3SGWpnfyM Watch Used By ELITES From Age 6 - Survivor Kelly Patterson https://youtube.com/live/nkKkIfLkRx0 KELLY'S 2 HOUR VIDEO ON VIRGINIA https://www.youtube.com/watch?v=SdIWU... BOOK LINKS: Who Killed Epstein? Prince Andrew or Bill Clinton by Shaun Attwood UK: https://www.amazon.co.uk/dp/B093QK1GS1 USA: https://www.amazon.com/dp/B093QK1GS1 Worldwide: https://books2read.com/u/bQjGQD All of Shaun's books on Amazon UK: https://www.amazon.co.uk/stores/Shaun... All of Shaun's books on Amazon USA: https://www.amazon.com/stores/Shaun-A... —————————— Shaun Attwood's social media: TikTok: https://www.tiktok.com/@shaunattwood1? Instagram: https://www.instagram.com/shaunattwoo... Twitter: https://twitter.com/shaunattwood Facebook: https://www.facebook.com/shaunattwood1/ Patreon: https://www.patreon.com/shaunattwood Odysee: https://odysee.com/@ShaunAttwood:a #podcast #truecrime #news #usa #youtube #people #uk #princeandrew #royal #royalfamily
Mary, Kelli, and Emily convene to talk about Vanishing World, the latest work by Sayaka Murata to be translated into English. When Amane finds out she was conceived in an unnatural way, it sets her up for a lifetime of mental anguish. Will Amane find her place in normative society? Will she have a child the "right" way? As usual, Murata takes us on a wild ride with twists no one saw coming! We discuss how uncomfortable this book made us feel for a variety of reasons, and also talk about Murata's profile in the New Yorker, which left us with more questions than answers. Then, we address an unexpected piece of feedback from the blog.

TOC:
0:30 – Welcome! And congrats, Susan!
14:25 – Book intro and a big ole content warning
18:34 – The basic premise of the novel
24:09 – Murata's "Alien Eye"
39:12 – "I've been doing a lot of thinking"
46:25 – Deriving pleasure from your uterus
49:30 – So, the ending. The pivot.
59:00 – What is she trying to say?
1:09:47 – Ratings
1:11:22 – Survive the Night feedback
1:20:44 – What's up next?

Links:
Sayaka Murata interview: https://www.newyorker.com/magazine/2025/04/14/sayaka-muratas-alien-eye
WithCindy: https://www.youtube.com/watch?v=bEWdtN0l0Is
Emily and Mary talk Survive the Night: https://www.booksquadgoals.com/blog/survive-the-book-mary-and-emily-read-riley-sager
Part Two of a Two-Part Episode. Continuing on from last week's examination of how depictions of the workplace in fiction have transitioned over the decades, from daily grinds where hard work will reward the worthy, to places where you can find fun and family (if you're a team player), to recent depictions of bleak office hellscapes where baffled, exploited employees are required to perform a series of increasingly bizarre and senseless tasks (Severance - we're looking at you), this week Jules and Madeleine delve into the archetypes of this genre. Why might you want to write an anarchist or a saboteur? Why is sci-fi such a good fit for telling workplace stories, and why might you want to write one? And just what can we learn from these stories? Under the microscope this week: Severance, Fight Club, Squid Game and many more. Title music: Ecstasy by Smiling Cynic
Tom and co-host Drew tackle some of the most pressing issues shaping our world right now—from trade deals and tech breakthroughs to the complexities of American politics and evolving cultural narratives. The conversation kicks off with reactions to a major US-EU trade agreement and what it means for America's place in the global economy. Tom and Drew dive into the ongoing gridlock in Congress, sparked by passionate remarks from Cory Booker, and discuss whether polarization is crippling the nation—or protecting it from "doing anything really crazy." They draw surprising parallels between the American political landscape and other countries, especially China. Next, the team explores a viral ad campaign starring Sydney Sweeney, unpacking the culture war currently raging over beauty standards, identity, and the stories we tell ourselves as a society. Tom makes the case for the power of uplifting narratives—both for individuals and entire nations—while warning of the risks when those narratives turn toxic.

SHOWNOTES
00:00 Political Gridlock and Lack of Compromise
05:44 Bipartisan Values Amidst Polarization
12:23 "Expectations vs. Reality"
16:28 The Dangers of Disempowering Narratives
21:19 Evolutionary Signals of Attractiveness
26:38 Blonde Hair, Blue Eyes Debate
33:50 "Value Added Tax Impact"
38:42 "Vision Beyond Today"
44:31 "Pursuit of Success: Go to America"
50:22 "Unreal Engine's Transformative Impact"
55:35 Children's Bonds with Characters
01:01:49 "AI Future: Utopia or Dystopia?"
01:05:37 Continuous Skill Improvement Strategy
01:11:44 "Understanding Leads to Wealth"
01:16:43 Violent Currents Cause Whale Beaching
01:23:43 Exclusion Fuels Social Media Backlash
01:25:41 Misuse of Men's Reputation Tool
01:30:22 "Finding Social Opportunities to Shine"
01:35:51 "Women as Evolution's Gatekeepers"
01:42:46 "Motherhood and Mate Selection Strategy"
01:47:59 Exploiting Global Dating Markets

SUPPORT OUR SPONSORS
Vital Proteins: Get 20% off by going to https://www.vitalproteins.com and entering promo code IMPACT at check out
SKIMS: Shop SKIMS Mens at https://www.skims.com/impact #skimspartner
Allio Capital: Macro investing for people who want to understand the big picture. Download their app in the App Store or at Google Play, or text my name "TOM" to 511511.
SleepMe: Visit https://sleep.me/impact to get your Chilipad and save 20% with code IMPACT. Try it risk-free with their 30-night sleep trial and free shipping.
Jerry: Stop needlessly overpaying for car insurance - download the Jerry app or head to https://jerry.ai/impact
Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact
CashApp: Download Cash App Today: https://capl.onelink.me/vFut/v6nymgjl #CashAppPod
iRestore: For a limited time only, our listeners are getting a HUGE discount on the iRestore Elite when you use code IMPACT at https://irestore.com/impact

What's up, everybody? It's Tom Bilyeu here: If you want my help...
STARTING a business: join me here at ZERO TO FOUNDER
SCALING a business: see if you qualify here.
Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here.

**********************************************************************
If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook—a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you.

**********************************************************************
LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory

**********************************************************************
FOLLOW TOM:
Instagram: https://www.instagram.com/tombilyeu/
Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en
Twitter: https://twitter.com/tombilyeu
YouTube: https://www.youtube.com/@TomBilyeu

Learn more about your ad choices. Visit megaphone.fm/adchoices
- Mass Poisoning Allegations and Legal Immunity (0:11)
- Food Contamination and Organic Food Advantages (3:17)
- Economic and Political Implications of Food Poisoning (7:47)
- Historical and Modern Examples of Mass Extermination (12:40)
- Economic and Political Strategies of the GOP (17:09)
- The Role of AI and Automation in Future Extermination (31:02)
- The Future of AI and Human Survival (40:02)
- The Role of Preparedness and Decentralization (44:00)
- The Impact of World War III on the American People (44:26)
- The Role of Censorship and Propaganda in Controlling the Population (1:08:36)
- BRICS Technology and Global Financial Implications (1:18:12)
- BRICS and Belt Road Initiative Integration (1:25:06)
- US Tariffs and BRICS Technology (1:25:58)
- Gold and Currency Markets (1:29:50)
- Stable Coins and Treasury Debt (1:38:25)
- BRICS Pay and Compliance (2:05:49)
- Gold Revaluation and Economic Implications (2:24:45)
- BRICS and Global Financial System (2:25:16)
- Pentagon's Experiments and Their Consequences (2:28:05)
- Historical Military Experiments and Their Impact (2:32:35)
- MK Ultra and Plum Island Experiments (2:34:14)
- Modern Bio-Weapons and Vaccines (2:35:33)
- Fauci's Role in Bio-Weapons Research (2:36:32)
- Mike Adams' Call to Action and Health Ranger Store Promotion (2:37:52)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com
What kind of people are we creating when life's setbacks and dangers are supposed to be reduced to zero? Listen to all episodes in Sveriges Radio Play. Intensive, engaged parenting is the ideal of our time. We are expected to keep our children under supervision 24 hours a day, and to be as involved in both their physical and inner lives as we possibly can. But is this really a good thing? P3 Dystopia explores how views on raising children have changed throughout history, and reflects on how today's parenting will shape the adult generation of the future.
In Gary Shteyngart's new novel, "Vera, or Faith," a precocious 10-year-old Korean-American girl, with a curious mind and exceptional vocabulary, navigates her way through a dystopian near future. The politics of this America, in which a constitutional amendment to give "exceptional" white Americans more voting rights is being considered, are confusing. But even more so is Vera's complicated family life, which includes a dead mother, a scattered and self-involved father, and a stepmother who Vera is not sure loves her. Reviewers have called the book a "brilliant fable." We talk to Shteyngart about the future and families. Guests: Gary Shteyngart, writer. Shteyngart's latest novel is "Vera, or Faith" - he is also the author of "Our Country Friends," "Little Failure: A Memoir" and "Super Sad True Love Story" Learn more about your ad choices. Visit megaphone.fm/adchoices
Three emerging dystopias: money, water, and truth

Kirk Pearson - "Theme from Techtonic" - n/a
"Mark's comments"
Kirk Pearson - "Bio Magnification" - n/a [0:21:41]
"Mark's comments" [0:22:38]
Kirk Pearson - "Bio Magnification" - n/a [0:33:54]
"Mark's comments" [0:35:41]
Casey & Strick - "Read A Book" - n/a [0:53:10]

https://www.wfmu.org/playlists/shows/154555
In this episode, we examine Amazon's Ring doorbell camera amid rising privacy concerns and policy changes. The Electronic Frontier Foundation's recent report criticizes Ring's AI-first approach and the rollback of prior privacy reforms, describing it as ‘techno authoritarianism.' We also discuss a recent scare among Ring users on May 28, related to an unexplained series […] The post Doorbells, Dystopia, and Digital Rights: The Ring Surveillance Debate appeared first on Shared Security Podcast.
Part one of a two-part episode. Sci-fi and fantasy have portrayed many workplace settings over the decades - engineers and pilots on spacecraft, for example, or fairy smiths and kitchen witches in fantasy. However, while fantasy has been leaning into the idea of leaving unfulfilling work and finding a perfect cosy profession, sci-fi has been delving into the nightmare of the bad workplace. This week, Jules and Madeleine take a look at the common criticisms and fallacies of the workplace highlighted by fiction, and just why this is finding an avid audience now. Title music: Ecstasy by Smiling Cynic
In this episode, Aydin sits down with Rob Williams, a former Chief Product Officer turned AI consultant, to explore the future of work, apps, and personal development—powered by generative AI. Rob demos Limitless, an AI pendant that helps him become a better human, and Claude Code, an agentic AI development environment that builds apps like a team of tireless developers. Plus, he shares his game-changing discovery-to-deliverable workflow that cuts a week's worth of consulting into a single day.

Timestamps:
01:00 – Rob's tech background and founding an AI consultancy
05:01 – Demo 1: Limitless AI pendant – the wearable mentor
08:19 – Rob's daily AI automations for personal growth
10:28 – The privacy dilemma and how Rob handles it
13:35 – Society's shifting comfort with constant recording
18:20 – Rewind: screen-tracking AI and quantified work
21:16 – Dystopia or augmentation? Competing views on AI ubiquity
27:02 – Demo 2: Claude Code – a real agentic AI dev experience
33:10 – Claude Code spins up dashboards from Excel in minutes
37:39 – Debugging and security auditing with Claude
40:20 – Rob's gamified AI-powered habit tracker
41:47 – Claude Code for prototyping with dev teams
44:47 – Implications: Will dynamic apps kill the App Store?
47:00 – AI as the new operating system
50:26 – Future: UIs disappear, apps build themselves
52:00 – Demo 3 (Explained): Deep research AI for consulting workflows
54:00 – Talking for the AI: How Rob narrates calls for context
58:30 – Why you must rethink—not just speed up—your workflows
59:36 – Two more tips (in newsletter only!)

Tools & Technologies Mentioned:
Limitless (limitless.ai) – Wearable AI pendant that records, transcribes, and summarizes your day with daily automations and feedback loops.
Claude Code – Anthropic's CLI tool for building full applications using agentic AI workflows, including dependency management and debugging.
Rewind – Screen-capturing app that logs your activity with searchable recall capabilities.
Fellow – AI meeting tool that transcribes and summarizes meetings. Used by Rob for work-related action tracking.
Typora – Markdown editor Rob uses to annotate and refine AI outputs.
Deep Research – Rob's name for his long-context LLM-based analysis prompt stack, used for summarizing 20+ hour discovery projects.
RescueTime – Productivity analytics tool used to track app usage and categorize time spent.
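Rob's actual "Deep Research" prompt stack isn't published, but the pattern it describes (chunk long discovery transcripts, summarize each chunk, then synthesize a single deliverable) is a standard map-reduce use of a long-context model. Below is a minimal, hypothetical sketch of that pattern; `call_llm` is a placeholder for whichever model client you would actually use, and the prompts and function names are invented for illustration.

```python
# Hypothetical sketch of a "discovery-to-deliverable" summarization pipeline.
# Not Rob Williams' actual workflow: call_llm() is a stub standing in for any
# long-context LLM call, so the script runs without external services.

from textwrap import wrap

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a long-context chat completion)."""
    return f"[summary of {len(prompt)} prompt characters]"

def chunk_transcript(transcript: str, max_chars: int = 8000) -> list[str]:
    """Split a long discovery transcript into chunks that fit a model's context window."""
    return wrap(transcript, max_chars)

def discovery_to_deliverable(transcripts: list[str]) -> str:
    """Map-reduce style: summarize each chunk, then synthesize one client-ready report."""
    chunk_summaries = []
    for transcript in transcripts:
        for chunk in chunk_transcript(transcript):
            chunk_summaries.append(call_llm(
                "Summarize the key findings, pain points, and quotes in this "
                f"discovery-interview excerpt:\n\n{chunk}"
            ))
    return call_llm(
        "Combine these interview summaries into a findings report with themes "
        "and recommendations:\n\n" + "\n".join(chunk_summaries)
    )

if __name__ == "__main__":
    fake_transcripts = ["Interview one ... " * 500, "Interview two ... " * 500]
    print(discovery_to_deliverable(fake_transcripts))
```

With a real model behind `call_llm`, the same two-stage structure is what lets 20+ hours of interviews collapse into a single day's synthesis, as described in the episode.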
(This podcast was formerly titled THIS IS THE END: POP CULTURE & COLLAPSE.) Well, it's been... a while. In this, the first new episode in over two years, I share a bit of the personal story behind why this podcast has been dormant for so long and how it relates to what I call "personal collapse." But just as with societal collapse, personal collapses don't necessarily have to be a permanent end state. I also share why I've decided to give the show a new title. Going forward it will no longer be called "This is the End: Pop Culture & Collapse." The new title is "Strange Days, Here We Come: Pop Culture, Dystopia & Collapse," which reflects a slightly broader focus that includes discussions of dystopian stories in addition to collapse-related ones. But don't worry, there will still be plenty of collapse-related material. :) Even if you are not interested in, or familiar with, concepts surrounding societal collapse, if you are a caregiver or know someone who is, you might find this episode interesting as it goes into how the individualized burdens of caregiving can result in a "personal collapse." A shoutout to the r/CaregiverSupport subreddit on Reddit. They know what I am talking about. https://www.reddit.com/r/CaregiverSupport/ Here's to new beginnings. Strange days, here we come.
Today, I do a deep dive on all the harms of data centers. From sucking up our energy and water to national security and privacy issues, I debunk the case for new data centers. We are approaching artificial intelligence in the wrong way, opting for old-fashioned cloud storage in pursuit of a “generalized intelligence” rather than narrow AI that will actually help streamline industries. There is a reason China is not even competing in this fake arms race. Learn more about your ad choices. Visit megaphone.fm/adchoices
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________

The Hybrid Species — When Technology Becomes Human, and Humans Become Technology
A Musing On Society & Technology Newsletter
Written By Marco Ciappelli | Read by TAPE3
July 19, 2025

We once built tools to serve us. Now we build them to complete us. What happens when we merge — and what do we carry forward?

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

In my last musing, I revisited Robbie, the first of Asimov's robot stories — a quiet, loyal machine who couldn't speak, didn't simulate emotion, and yet somehow felt more trustworthy than the artificial intelligences we surround ourselves with today. I ended that piece with a question, a doorway:

If today's machines can already mimic understanding — convincing us they comprehend more than they do — what happens when the line between biology and technology dissolves completely? When carbon and silicon, organic and artificial, don't just co-exist, but merge?

I didn't pull that idea out of nowhere. It was sparked by something Asimov himself said in a 1965 BBC interview — a clip that keeps resurfacing and hitting harder every time I hear it. He spoke of a future where humans and machines would converge, not just in function, but in form and identity. He wasn't just imagining smarter machines. He was imagining something new. Something between.

And that idea has never felt more real than now.

We like to think of evolution as something that happens slowly, hidden in the spiral of DNA, whispered across generations. But what if the next mutation doesn't come from biology at all? What if it comes from what we build?

I've always believed we are tool-makers by nature — and not just with our hands. Our tools have always extended our bodies, our senses, our minds. A stone becomes a weapon. A telescope becomes an eye. A smartphone becomes a memory. And eventually, we stop noticing the boundary. The tool becomes part of us.

It's not just science fiction. Philosopher Andy Clark — whose work I've followed for years — calls us "natural-born cyborgs." Humans, he argues, are wired to offload cognition into the environment. We think with notebooks. We remember with photographs. We navigate with GPS. The boundary between internal and external, mind and machine, was never as clean as we pretended.

And now, with generative AI and predictive algorithms shaping the way we write, learn, speak, and decide — that blur is accelerating. A child born today won't "use" AI. She'll think through it. Alongside it. Her development will be shaped by tools that anticipate her needs before she knows how to articulate them. The machine won't be a device she picks up — it'll be a presence she grows up with.

This isn't some distant future. It's already happening. And yet, I don't believe we're necessarily losing something. Not if we're aware of what we're merging with. Not if we remember who we are while becoming something new.

This is where I return, again, to Asimov — and in particular, The Bicentennial Man.
It's the story of Andrew, a robot who spends centuries gradually transforming himself — replacing parts, expanding his experiences, developing feelings, claiming rights — until he becomes legally, socially, and emotionally recognized as human. But it's not just about a machine becoming like us. It's also about us learning to accept that humanity might not begin and end with flesh.

We spend so much time fearing machines that pretend to be human. But what if the real shift is in humans learning to accept machines that feel — or at least behave — as if they care?

And what if that shift is reciprocal?

Because here's the thing: I don't think the future is about perfect humanoid robots or upgraded humans living in a sterile, post-biological cloud. I think it's messier. I think it's more beautiful than that.

I think it's about convergence. Real convergence. Where machines carry traces of our unpredictability, our creativity, our irrational, analog soul. And where we — as humans — grow a little more comfortable depending on the very systems we've always built to support us.

Maybe evolution isn't just natural selection anymore. Maybe it's cultural and technological curation — a new kind of adaptation, shaped not in bone but in code. Maybe our children will inherit a sense of symbiosis, not separation. And maybe — just maybe — we can pass along what's still beautiful about being analog: the imperfections, the contradictions, the moments that don't make sense but still matter.

We once built tools to serve us. Now we build them to complete us.

And maybe — just maybe — that completion isn't about erasing what we are. Maybe it's about evolving it. Stretching it. Letting it grow into something wider.

Because what if this hybrid species — born of carbon and silicon, memory and machine — doesn't feel like a replacement… but a continuation?

Imagine a being that carries both intuition and algorithm, that processes emotion and logic not as opposites, but as complementary forms of sense-making. A creature that can feel love while solving complex equations, write poetry while accessing a planetary archive of thought. A soul that doesn't just remember, but recalls in high-resolution.

Its body — not fixed, but modular. Biological and synthetic. Healing, adapting, growing new limbs or senses as needed. A body that weathers centuries, not years. Not quite immortal, but long-lived enough to know what patience feels like — and what loss still teaches.

It might speak in new ways — not just with words, but with shared memories, electromagnetic pulses, sensory impressions that convey joy faster than language. Its identity could be fluid. Fractals of self that split and merge — collaborating, exploring, converging — before returning to the center.

This being wouldn't live in the future we imagined in the '50s — chrome cities, robot butlers, and flying cars. It would grow in the quiet in-between: tending a real garden in the morning, dreaming inside a neural network at night. Creating art in a virtual forest. Crying over a story it helped write. Teaching a child. Falling in love — again and again, in new and old forms.

And maybe, just maybe, this hybrid doesn't just inherit our intelligence or our drive to survive. Maybe it inherits the best part of us: the analog soul. The part that cherishes imperfection. That forgives. That imagines for the sake of imagining.

That might be our gift to the future.
Not the code, or the steel, or even the intelligence — but the stubborn, analog soul that dares to care.

Because if Robbie taught us anything, it's that sometimes the most powerful connection comes without words, without simulation, without pretense.

And if we're now merging with what we create, maybe the real challenge isn't becoming smarter — it's staying human enough to remember why we started creating at all.

Not just to solve problems. Not just to build faster, better, stronger systems. But to express something real. To make meaning. To feel less alone. We created tools not just to survive, but to say: "We are here. We feel. We dream. We matter."

That's the code we shouldn't forget — and the legacy we must carry forward.

Until next time,
Marco

_________________________________________________
The Constitution Study with Host Paul Engel – What many don't seem to notice is how far our society has progressed toward this socialist dystopia. Secrets being kept, opposition attacked, rule for thee but not for me are running rampant in America. A recent article asked an interesting question: “Why So Many Young Americans Fall for Socialism?” I think I have the answer...
Matt and Eric check their palms for blinking crystals, diving into 1976's LOGAN'S RUN, about a future cop who learns that his subterranean orgy city pales in comparison to the unhinged bacchanalia of the Jellicle Ball.