Unconventional, outrageous, unexpected, or unpredictable behavior linked to religious or spiritual pursuits
POPULARITY
In this episode of Crazy Wisdom, host Stewart Alsop talks with Michel Bauwens, founder of the P2P Foundation, about the rise of peer-to-peer dynamics, the historical cycles shaping our present, and the struggles and possibilities of building resilient communities in times of crisis. The conversation moves through the evolution of the internet from Napster to Web3, the cultural shifts since 1968, Bauwens' personal experiences with communes and his 2018 cancellation, and the emerging vision of cosmolocalism and regenerative villages as alternatives to state and market decline. For more on Michel's work, you can explore his Substack at 4thgenerationcivilization.substack.com and the extensive P2P Foundation Wiki at wiki.p2pfoundation.net.Check out this GPT we trained on the conversationTimestamps00:00 Michel Bauwens explains peer-to-peer as both computer design and social relationship, introducing trans-local association and the idea of an anthropological revolution.05:00 Discussion of Web1, Web3, encryption, anti-surveillance, cozy web, and dark forest theory, contrasting early internet openness with today's fragmentation.10:00 Bauwens shares his 2018 cancellation, deplatforming, and loss of funding after a dispute around Jordan Peterson, reflecting on identity politics and peer-to-peer pluralism.15:00 The cultural shifts since 1968, the rise of identity movements, macro-historical cycles, and the fourth turning idea of civilizational change are unpacked.20:00 Memories of 1968 activism, communes, free love, hypergamy, and the collapse of utopian experiments, showing the need for governance and rules in cooperation.25:00 From communes to neo-Reichian practices, EST seminars, and lessons of human nature, Bauwens contrasts failed free love with lasting models like kibbutzim and Bruderhof.30:00 Communes that endure rely on transcendence, religious or ideological foundations, and Bauwens points to monasteries as models for resilience in times of decline.35:00 Cycles of civilization, overuse of nature, class divisions, and the threat of social unrest frame a wider reflection on populism, Eurasian vs Western models, and culture wars.40:00 Populism in Anglo vs continental Europe, social balance, Christian democracy, and the contrast with market libertarianism in Trump and Milei.45:00 Bauwens proposes cosmolocalism, regenerative villages, and bioregional alliances supported by Web3 communities like Crypto Commons Alliance and Ethereum Localism.50:00 Historical lessons from the Roman era, monasteries, feudal alliances, and the importance of reciprocity, pragmatic alliances, and preparing for systemic collapse.55:00 Localism, post-political collaboration, Ghent urban commons, Web3 experiments like Zuzalu, and Bauwens' resources: fortcivilizationsubstack.com and wiki.p2pfoundation.net.Key InsightsMichel Bauwens frames peer-to-peer not just as a technical design but as a profound social relationship, what he calls an “anthropological revolution.” Like the invention of writing or printing, the internet created trans-local association, allowing people across the globe to coordinate outside of centralized control.The conversation highlights the cycles of history, drawing from macro-historians and the “fourth turning” model. Bauwens explains how social movements rise, institutionalize, and collapse, with today's cultural polarization echoing earlier waves such as the upheavals of 1968. He sees our era as the end of a long cycle that began after World War II.Bauwens shares his personal cancellation in 2018, when posting a video about Jordan Peterson triggered accusations and led to deplatforming, debanking, and professional exclusion. He describes this as deeply traumatic, forcing him to rethink his political identity and shift his focus to reciprocity and trust in smaller, resilient networks.The episode revisits communes and free love experiments of the 1970s, where Bauwens lived for years. He concludes that without governance, rules, and shared transcendence, these communities collapse into chaos. He contrasts them with enduring models like the Bruderhof, kibbutzim, and monasteries, which rely on structure, ideology, or religion to survive.A major theme is populism and cultural polarization, with Bauwens distinguishing between Anglo-Saxon populism rooted in market libertarianism and continental populism shaped by Christian democratic traditions. The former quickly loses support by privileging elites, while the latter often maintains social balance through family and worker policies.Bauwens outlines his vision of cosmolocalism and regenerative villages, where “what's heavy is local, what's light is global.” He argues that bioregionalism combined with Web3 technologies offers a practical way to rebuild resilient communities, coordinate globally, and address ecological and social breakdown.Finally, the episode underscores the importance of pragmatic alliances across political divides. Bauwens stresses that survival and flourishing in times of systemic collapse depend less on ideology and more on reciprocity, concrete projects, and building trust networks that can outlast declining state and market systems.
In this episode of Crazy Wisdom, Stewart Alsop speaks with Samuel, host of The Remnant Podcast, about the intersections of biblical prophecy, Gnostic traditions, transhumanism, and the spiritual battle unfolding in our age. The conversation moves from Dr. David Hawkins' teachings and personal encounters with the Holy Spirit to questions of Lucifer, Archons, and the distortions of occult traditions, while also confronting timelines of 2025, 2030, and 2045 in light of technological agendas from Palantir, Neuralink, and the United Nations. Together they explore the tension between organic human life and the merging with machines, weaving in figures like Blavatsky, Steiner, and Barbara Marx Hubbard, and tying it back to cycles of history, prophecy, and the remnant who remain faithful. You can find Samuel's work on The Remnant Podcast YouTube channel and follow future updates through his Instagram once it's launched.Check out this GPT we trained on the conversationTimestamps00:00 Stewart Alsop welcomes Samuel of The Remnant Podcast, connecting through Dr. David Hawkins' work and reflecting on COVID's effect on consciousness.05:00 Samuel shares his discovery of Hawkins, a powerful encounter with Jesus, and shifting views on Lucifer, Gnosticism, Archons, and Rudolf Steiner's Ahriman.10:00 They trace Gnosticism's suppression in church history, Frances Yates on occult revival, the Nicene Creed, Neoplatonism, and the church's battle with magic.15:00 Discussion of Acts 4, possessions, Holy Spirit, and Gnostic inversion of God and Lucifer; Blavatsky, Crowley, occult distortions, and forbidden knowledge in Enoch.20:00 Hawkins' framework, naivety at higher states, Jesus as North Star, synchronicities, and the law of attraction as both biblical truth and sorcery.25:00 Transhumanism, homo spiritus, Singularity University, Barbara Marx Hubbard, hijacked timelines, Neuralink, and Butlerian Jihad.30:00 Attractor patterns, algorithms mimicking consciousness, Starlink's omnipresence, singularity timelines—2025, 2030, 2045—and UN, WEF agendas.35:00 Organic health versus pod apartments and smart cities, Greg Braden's critiques, bio-digital convergence, and the biblical remnant who remain faithful.40:00 Trump, MAGA as magician, Marina Abramović, Osiris rituals in inaugurations, Antichrist archetypes, and elite esoteric influences.50:00 Edward Bernays, Rockefeller, UN history, Enlightenment elites, Nephilim bloodlines, Dead Sea Scrolls on sons of light and darkness, Facebook's control systems.55:00 Quantum dots using human energy, D-Wave quantum computers, Gordy Rose's tsunami warning, Samuel's book As It Was in the Days of Noah: The Rising Tsunami.Key InsightsThe episode begins with Stewart Alsop and Samuel connecting through their shared study of Dr. David Hawkins, whose work profoundly influenced both men. Samuel describes his path from Hawkins' teachings into a life-altering encounter with Jesus, which reshaped his spiritual compass and allowed him to question parts of Hawkins' framework that once seemed untouchable. This shift also opened his eyes to the living presence of Christ as a North Star in discerning truth.A central thread is the nature of Lucifer and the entities described in Gnostic, biblical, and esoteric traditions. Samuel wrestles with the reality of Lucifer not just as ego, but as a non-human force tied to Archons, Yaldabaoth, and Ahriman. This leads to the recognition that many leaders openly revere such figures, pointing to a deeper spiritual battle beyond mere metaphor.The discussion examines the suppression and resurgence of Gnosticism. Stewart references Frances Yates' historical research on the rediscovery of Neoplatonism during the Renaissance, which fused with Christianity and influenced the scientific method. Yet, both men note the distortions and dangers within occult systems, where truth often hides alongside demonic inversions.Samuel emphasizes the importance of discernment, contrasting authentic spiritual awakening with the false light of occultism and New Age thought. He draws on the Book of Enoch's account of fallen angels imparting forbidden knowledge, showing how truth can be weaponized when separated from God. The law of attraction, he argues, exemplifies this duality: biblical when rooted in faith, sorcery when used to “become one's own god.”Transhumanism emerges as a major concern, framed as a counterfeit path to evolution. They compare Hawkins' idea of homo spiritus with Barbara Marx Hubbard's transhumanist vision and Elon Musk's Neuralink. Samuel warns of “hijacked timelines” where natural spiritual gifts like telepathy are replaced with machine-based imitations, echoing the warnings of Dune's Butlerian Jihad.Technology is interpreted through a spiritual lens, with algorithms mimicking attractor patterns, social media shaping reality, and Starlink rendering the internet omnipresent. Samuel identifies this as Lucifer's attempt to counterfeit God's attributes, creating a synthetic omniscience that pulls humanity away from organic life and into controlled systems.Finally, the conversation grounds in hope through the biblical concept of the remnant. Samuel explains that while elites pursue timelines toward 2025, 2030, and 2045 with occult enlightenment and digital convergence, those who remain faithful to God, connected to nature, and rooted in Christ form the remnant. This small, organic community represents survival in a time when most will unknowingly merge with the machine, fulfilling the ancient struggle between the sons of light and the sons of darkness.
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Sweetman, the developer behind on-chain music and co-founder of Recoup. We talk about how musicians in 2025 are coining their content on Base and Zora, earning through Farcaster collectibles, Sound drops, and live shows, while AI agents are reshaping management, discovery, and creative workflows across music and art. The conversation also stretches into Spotify's AI push, the “dead internet theory,” synthetic hierarchies, and how creators can avoid future shock by experimenting with new tools. You can follow Sweetman on Twitter, Farcaster, Instagram, and try Recoup at chat.recoupable.com.Check out this GPT we trained on the conversationTimestamps00:00 Stewart Alsop introduces Sweetman to talk about on-chain music in 2025.05:00 Coins, Base, Zora, Farcaster, collectibles, Sound, and live shows emerge as key revenue streams for musicians.10:00 Streaming shifts into marketing while AI music quietly fills shops and feeds, sparking talk of the dead internet theory.15:00 Sweetman ties IoT growth and shrinking human birthrates to synthetic consumption, urging builders to plug into AI agents.20:00 Conversation turns to synthetic hierarchies, biological analogies, and defining what an AI agent truly is.25:00 Sweetman demos Recoup: model switching with Vercel AI SDK, Spotify API integration, and building artist knowledge bases.30:00 Tool chains, knowledge storage on Base and Arweave, and expanding into YouTube and TikTok management for labels.35:00 AI elements streamline UI, Sam Altman's philosophy on building with evolving models sparks a strategy discussion.40:00 Stewart reflects on the return of Renaissance humans, orchestration of machine intelligence, and prediction markets.45:00 Sweetman weighs orchestration trade-offs, cost of Claude vs GPT-5, and boutique services over winner-take-all markets.50:00 Parasocial relationships with models, GPT psychosis, and the emotional shock of AI's rapid changes.55:00 Future shock explored through Sweetman's reaction to Cursor, ending with resilience and leaning into experimentation.Key InsightsOn-chain music monetization is diversifying. Sweetman describes how musicians in 2025 use coins, collectibles, and platforms like Base, Zora, Farcaster, and Sound to directly earn from their audiences. Streaming has become more about visibility and marketing, while real revenue comes from tokenized content, auctions, and live shows.AI agents are replacing traditional managers. By consuming data from APIs like Spotify, Instagram, and TikTok, agents can segment audiences, recommend collaborations, and plan tours. What once cost thousands in management fees is now automated, providing musicians with powerful tools at a fraction of the price.Platforms are moving to replace artists. Spotify and other major players are experimenting with AI-generated music, effectively cutting human musicians further out of the revenue loop. This shift reinforces the importance of artists leaning into blockchain monetization and building direct relationships with fans.The “dead internet theory” reframes the future. Sweetman connects IoT expansion and declining birth rates to a world where AI, not humans, will make most online purchases and content. The lesson: build products that are easy for AI agents to buy, consume, and amplify, since they may soon outnumber human users.Synthetic hierarchies mirror biological ones. Stewart introduces the idea that just as cells operate autonomously within the body, billions of AI agents will increasingly act as intermediaries in human creativity and commerce. This frames AI as part of a broader continuity of hierarchical systems in nature and society.Recoup showcases orchestration in practice. Sweetman explains how Recoup integrates Vercel AI SDK, Spotify APIs, and multi-model tool chains to build knowledge bases for artists. By storing profiles on Base and Arweave, Recoup not only manages social media but also automates content optimization, giving musicians leverage once reserved for labels.Future shock is both risk and opportunity. Sweetman shares his initial rejection of AI coding tools as a threat to his identity, only to later embrace them as collaborators. The conversation closes with a call for resilience: experiment with new systems, adapt quickly, and avoid becoming a Luddite in an accelerating digital age.
In this episode of Crazy Wisdom, host Stewart Alsop sits down with Hannah Aline Taylor to explore themes of personal responsibility, freedom, and interdependence through her frameworks like the Village Principles, Distribution Consciousness, and the Empowerment Triangle. Their conversation moves through language and paradox, equanimity, desire and identity, forgiveness, leadership, money and debt, and the ways community and relationship serve as our deepest resources. Hannah shares stories from her life in Nevada City, her perspective on abundance and belonging, and her practice of love and curiosity as tools for living in alignment. You can learn more about her work at loving.university, on her website hannahalinetaylor.com, and in her book The Way of Devotion, available on Amazon.Check out this GPT we trained on the conversationTimestamps00:00 Stewart Alsop welcomes Hannah Aline Taylor, introducing Loving University, Nevada City, and the Village Principles.05:00 They talk about equanimity versus non-duality, emotional mastery, and curating experience through boundaries and high standards.10:00 The focus shifts to desire as “who do I want to be,” identity as abstraction, and relationships beyond monogamy or labels.15:00 Hannah introduces the Empowerment Triangle of anything, everything, nothing, reflecting on reality as it is and the role of perception.20:00 Discussion of Nevada City's healing energy, community respect, curiosity, and differences between East Coast judgment and West Coast freedom.25:00 Responsibility as true freedom, rebellion under tyranny, delicate ecosystems, and leadership inspired by the Dao De Jing.30:00 Love and entropy, conflict without enmity, curiosity as practice, and attention as the prerequisite for experience.35:00 Forgiveness, discernment, moral debts, economic debt, and reframing wealth consciousness through the “princess card.”40:00 Interdependence, community belonging, relationship as the real resource, and stewarding abundance in a disconnected world.45:00 Building, frontiers, wisdom of indigenous stewardship, the Amazon rainforest, and how knowledge without wisdom creates loss.50:00 Closing reflections on wholeness, abundance, scarcity, relationship technology, and prioritizing humanity in transition.Key InsightsHannah Taylor introduces the Village Principles as a framework for living in “distribution consciousness” rather than “acquisition consciousness.” Instead of chasing community, she emphasizes taking responsibility for one's own energy, time, and attention, which naturally draws people into authentic connection.A central theme is personal responsibility as the true meaning of freedom. For Hannah, freedom is inseparable from responsibility—when it's confused with rebellion against control, it remains tied to tyranny. Real freedom comes from holding high standards for one's life, curating experiences, and owning one's role in every situation.Desire is reframed from the shallow “what do I want” into the deeper question of “who do I want to be.” This shift moves attention away from consumer-driven longing toward identity, integrity, and presence, turning desire into a compass for embodied living rather than a cycle of lack.Language, abstraction, and identity are questioned as both necessary tools and limiting frames. Distinction is what fuels connection—without difference, there can be no relationship. Yet when we cling to abstractions like “monogamy” or “polyamory,” we obscure the uniqueness of each relationship in favor of labels.Hannah contrasts the disempowerment triangle of victim, perpetrator, and rescuer with her empowerment triangle of anything, everything, and nothing. This model shows reality as inherently whole—everything arises from nothing, anything is possible, and suffering begins when we believe something is wrong.The conversation ties money, credit, and debt to spiritual and moral frameworks. Hannah reframes debt not as a burden but as evidence of trust and abundance, describing her credit card as a “princess card” that affirms belonging and access. Wealth consciousness, she says, is about recognizing the resources already present.Interdependence emerges as the heart of her teaching. Relationship is the true resource, and abundance is squandered when lived independently. Stories of Nevada City, the Amazon rainforest, and even a friend's Wi-Fi outage illustrate how scarcity reveals the necessity of belonging, curiosity, and shared stewardship of both community and land.
On this episode of Crazy Wisdom, Stewart Alsop sits down with Abhimanyu Dayal, a longtime Bitcoin advocate and AI practitioner, to explore how money, identity, and power are shifting in a world of deepfakes, surveillance, automation, and geopolitical realignment. The conversation ranges from why self-custody of Bitcoin matters more than ETFs, to the dangers of probabilistic biometrics and face-swap apps, to the coming impact of AGI on labor markets and the role of universal basic income. They also touch on India's refinery economy, its balancing act between Russia, China, and the U.S., and how soft power is eroding in the information age. For more from Abhimanyu, connect with him on LinkedIn.Check out this GPT we trained on the conversationTimestamps00:00 Stewart Alsop opens with Abhimanyu Dayal on crypto, AI, and the risks of probabilistic biometrics like facial recognition and voice spoofing.05:00 They critique biometric surveillance, face-swap apps, and data exploitation through casual consent.10:00 The talk shifts to QR code treasure hunts, vibe coding on Replit and Claude, and using quizzes to mint NFTs.15:00 Abhimanyu shares his finance background, tying it to Bitcoin as people's money, agent-to-agent payments, and post-AGI labor shifts.20:00 They discuss universal basic income, libertarian ideals, Hayek's view of economics as critique, and how AI prediction changes policy.25:00 Pressure, unpredictability, AR glasses, quantum computing, and the surveillance state future come into focus.30:00 Open source vs closed apps, China's DeepSeek models, propaganda through AI, and U.S.–China tensions are explored.35:00 India's non-alignment, Soviet alliance in 1971, oil refining economy, and U.S.–India friction surface.40:00 They reflect on colonial history, East India Company, wealth drain, opium wars, and America's rise on Indian capital.45:00 The conversation closes on Bitcoin's role as reserve asset, stablecoins as U.S. leverage, BRICS disunity, and the geopolitics of freedom.Key InsightsA central theme of the conversation is the contrast between deterministic and probabilistic systems for identity and security. Abhimanyu Dayal stresses that passwords and private keys—things only you can know—are inherently more secure than facial recognition or voice scans, which can be spoofed through deepfakes, 3D prints, or AI reconstructions. In his view, biometric data should never be stored because it represents a permanent risk once leaked.The rise of face-swap apps and casual facial data sharing illustrates how surveillance and exploitation have crept into everyday life. Abhimanyu points out that companies already use online images to adjust things like insurance premiums, proving how small pieces of biometric consent can spiral into systemic manipulation. This isn't a hypothetical future—it is already happening in hidden ways.On the lighter side, they experiment with “vibe coding,” using tools like Replit and Claude to design interactive experiences such as a treasure hunt via QR codes and NFTs. This playful example underscores a broader point: lightweight coding and AI platforms empower individuals to create experiments without relying on centralized or closed systems that might inject malware or capture data.The discussion expands into automation, multi-agent systems, and the post-AGI economy. Abhimanyu suggests that artificial superintelligence will require machine-to-machine transactions, making Bitcoin an essential tool. But if machines do the bulk of labor, universal basic income may become unavoidable, even if it drifts toward collectivist structures libertarians dislike.A key shift identified is the transformation of economics itself. Where Hayek once argued economics should critique politicians because of limited data, AI and quantum computing now provide prediction capabilities so granular that human behavior is forecastable at the individual level. This erodes the pseudoscientific nature of past economics and creates a new landscape of policy and control.Geopolitically, the episode explores India's rise, its reliance on refining Russian crude into petroleum exports, and its effort to stay unaligned between the U.S., Russia, and China. The conversation recalls India's Soviet ties during the 1971 war, while noting how today's energy and trade policies underpin domestic improvements for India's poor and middle class.Finally, they critique the co-optation of Bitcoin through ETFs and institutional custody. While investors celebrate, Abhimanyu argues this betrays Satoshi's vision of money controlled by individuals with private keys. He warns that Bitcoin may be absorbed into central bank reserves, while stablecoins extend U.S. monetary dominance by reinforcing dollar power rather than replacing it.
In this episode of Crazy Wisdom, host Stewart Alsop speaks with Robin Hanson, economist and originator of the idea of futarchy, about how conditional betting markets might transform governance by tying decisions to measurable outcomes. Their conversation moves through examples of organizational incentives in business and government, the balance between elegant theories and messy implementation details, the role of AI in robust institutions, and the tension between complexity and simplicity in legal and political systems. Hanson highlights historical experiments with futarchy, reflects on polarization and collective behavior in times of peace versus crisis, and underscores how ossified bureaucracies mirror software rot. To learn more about his work, you can find Robin Hanson online simply by searching his name or his blog overcomingbias.com, where his interviews—including one with Jeffrey Wernick on early applications of futarchy—are available.Check out this GPT we trained on the conversationTimestamps00:05 Hanson explains futarchy as conditional betting markets that tie governance to measurable outcome metrics, contrasting elegant ideas with messy implementation details.00:10 He describes early experiments, including Jeffrey Wernick's company in the 1980s, and more recent trials in crypto and an India-based agency.00:15 The conversation shifts to how companies use stock prices as feedback, comparing public firms tied to speculators with private equity and long-term incentives.00:20 Alsop connects futarchy to corporate governance and history, while Hanson explains how futarchy can act as a veto system against executive self-interest.00:25 They discuss conditional political markets in elections, AI participation in institutions, and why proof of human is unnecessary for robust systems.00:30 Hanson reflects on simplicity versus complexity in democracy and legal systems, noting how futarchy faces similar design trade-offs.00:35 He introduces veto markets and outcome metrics, adding nuance to how futarchy could constrain executives while allowing discretion.00:40 The focus turns to implementation in organizations, outcome-based OKRs, and trade-offs between openness, liquidity, and transparency.00:45 They explore DAOs, crypto governance, and the need for focus, then compare news-driven attention with deeper institutional design.00:50 Hanson contrasts novelty with timelessness in academia and policy, explaining how futarchy could break the pattern of weak governance.00:55 The discussion closes on bureaucratic inertia, software rot, and how government ossifies compared to adaptive private organizations.Key InsightsFutarchy proposes that governance can be improved by tying decisions directly to measurable outcome metrics, using conditional betting markets to reveal which policies are expected to achieve agreed goals. This turns speculation into structured decision advice, offering a way to make institutions more competent and accountable.Early experiments with futarchy existed decades ago, including Jeffrey Wernick's 1980s company that made hiring and product decisions using prediction markets, as well as more recent trials in crypto-based DAOs and a quiet adoption by a government agency in India. These examples show that the idea, while radical, is not just theoretical.A central problem in governance is the tension between elegant ideas and messy implementation. Hanson emphasizes that while the core concept of futarchy is simple, real-world use requires addressing veto powers, executive discretion, and complex outcome metrics. The evolution of institutions involves finding workable compromises without losing the simplicity of the original vision.The conversation highlights how existing governance in corporations mirrors these challenges. Public firms rely heavily on speculators and short-term stock incentives, while private equity benefits from long-term executive stakes. Futarchy could offer companies a new tool, giving executives market-based feedback on major decisions before they act.Institutions must be robust not just to human diversity but also to AI participation. Hanson argues that markets, unlike one-person-one-vote systems, can accommodate AI traders without needing proof of human identity. Designing systems to be indifferent to whether participants are human or machine strengthens long-term resilience.Complexity versus simplicity emerges as a theme, with Hanson noting that democracy and legal systems began with simple structures but accreted layers of rules that now demand lawyers to navigate. Futarchy faces the same trade-off: it starts simple, but real implementation requires added detail, and the balance between elegance and robustness becomes crucial.Finally, the episode situates futarchy within broader social trends. Hanson connects rising polarization and inequality to times of peace and prosperity, contrasting this with the unifying effect of external threats. He also critiques bureaucratic inertia and “software rot” in government, arguing that without innovation in governance, even advanced societies risk ossification.
On this episode of Crazy Wisdom, Stewart Alsop sits down with Brad Costanzo, founder and CEO of Accelerated Intelligence, for a wide-ranging conversation that stretches from personal development and the idea that “my mess is my message” to the risks of AI psychosis, the importance of cognitive armor, and Brad's sovereign mind framework. They talk about education through the lens of the Trivium, the natural pull of elites and hierarchies, and how Bitcoin and stablecoins tie into the future of money, inflation, and technological deflation. Brad also shares his perspective on the synergy between AI and Bitcoin, the dangers of too-big-to-fail banks, and why decentralized banking may be the missing piece. To learn more about Brad's work, visit acceleratedintelligence.ai or reach out directly at brad@acceleratedintelligence.ai.Check out this GPT we trained on the conversationTimestamps00:00 Brad Costanzo joins Stewart Alsop, opening with “my mess is my message” and Accelerated Intelligence as a way to frame AI as accelerated, not artificial.05:00 They explore AI as a tool for personal development, therapy versus coaching, and AI's potential for self-insight and pattern recognition.10:00 The conversation shifts to AI psychosis, hype cycles, gullibility, and the need for cognitive armor, leading into Brad's sovereign mind framework of define, collaborate, and refine.15:00 They discuss education through the Trivium—grammar, logic, rhetoric—contrasted with the Prussian mass education model designed for factory workers.20:00 The theme turns to elites, natural hierarchies, and the Robbers Cave experiment showing how quickly humans split into tribes.25:00 Bitcoin enters as a silent, nonviolent revolution against centralized money, with Hayek's quote on sound money and the Trojan horse of Wall Street adoption.30:00 Stablecoins, treasuries, and the Treasury vs Fed dynamic highlight how monetary demand is being engineered through crypto markets.35:00 Inflation, disinflation, and deflation surface, tied to real estate costs, millennials vs boomers, Austrian economics, and Jeff Booth's “Price of Tomorrow.”40:00 They connect Bitcoin and AI as deflationary forces, population decline, productivity gains, and the idea of a personal Bitcoin denominator.45:00 The talk expands into Bitcoin mining, AI data centers, difficulty adjustments, and Richard Werner's insights on quantitative easing, commercial banks, and speculative vs productive loans.50:00 Wrapping themes center on decentralized banking, the dangers of too-big-to-fail, assets as protection, Bitcoin's volatility, and why it remains the strongest play for long-term purchasing power.Key InsightsOne of the strongest insights Brad shares is the shift from artificial intelligence to accelerated intelligence. Instead of framing AI as something fake or external, he sees it as a leverage tool to amplify human intelligence—whether emotional, social, spiritual, or business-related. This reframing positions AI less as a threat to authenticity and more as a partner in unlocking dormant creativity.Personal development surfaces through the mantra “my mess is my message.” Brad emphasizes that the struggles, mistakes, and rock-bottom moments in life can become the foundation for helping others. AI plays into this by offering low-cost access to self-insight, giving people the equivalent of a reflective mirror that can help them see patterns in their own thinking without immediately needing therapy.The episode highlights the emerging problem of AI psychosis. People overly immersed in AI conversations, chatbots, or hype cycles can lose perspective. Brad and Stewart argue that cognitive armor—what Brad calls the “sovereign mind” framework of define, collaborate, and refine—is essential to avoid outsourcing one's thinking entirely to machines.Education is another theme, with Brad pointing to the classical Trivium—grammar, logic, and rhetoric—as the foundation of real learning. Instead of mass education modeled on the Prussian system for producing factory workers, he argues for rhetoric, debate, and critical thinking as the ultimate tests of knowledge, even in an AI-driven world.When the discussion turns to elites, Brad acknowledges that hierarchies are natural and unavoidable, citing experiments like Robbers Cave. The real danger lies not in elitism itself, but in concentrated control—particularly financial elites who maintain power through the monetary system.Bitcoin is framed as a “silent, nonviolent revolution.” Brad describes it as a Trojan horse—appearing as a speculative asset while quietly undermining government monopoly on money. Stablecoins, treasuries, and the Treasury vs Fed conflict further reveal how crypto is becoming a new driver of monetary demand.Finally, the synergy between AI and Bitcoin offers a hopeful counterbalance to deflation fears and demographic decline. AI boosts productivity while Bitcoin enforces financial discipline. Together, they could stabilize a future where fewer people are needed for the same output, costs of living decrease, and savings in hard money protect purchasing power—even against the inertia of too-big-to-fail banks.
In this episode of Crazy Wisdom, host Stewart Alsop sits down with Juan Samitier, co-founder of DAMM Capital, for a wide-ranging conversation on decentralized insurance, treasury management, and the evolution of finance on-chain. Together they explore the risks of smart contracts and hacks, the role of insurance in enabling institutional capital to enter crypto, and historical parallels from Amsterdam's spice trade to Argentina's corralito. The discussion covers stablecoins like DAI, MakerDAO's USDS, and the collapse of Luna, as well as the dynamics of yield, black swan events, and the intersection of DeFi with AI, prediction markets, and tokenized assets. You can find Juan on Twitter at @JuanSamitier and follow DAMM Capital at @DAMM_Capital.Check out this GPT we trained on the conversationTimestamps00:05 Stewart Alsop introduces Juan Samitier, who shares his background in asset management and DeFi, setting up the conversation on decentralized insurance.00:10 They discuss Safu, the insurance protocol Juan designed, and why hedging smart contract risk is key for asset managers deploying capital in DeFi.00:15 The focus shifts to hacks, audits, and why even fully audited code can still fail, bringing up historical parallels to ships, pirates, and early insurance models.00:20 Black swan events, risk models, and the limits of statistics are explored, along with reflections on Wolfram's ideas and the Ascent of Money.00:25 They examine how TradFi is entering crypto, the dominance of centralized stablecoins, and regulatory pushes like the Genius Act.00:30 DAI's design, MakerDAO's USDS, and Luna's collapse are explained, tying into the Great Depression, Argentina's corralito, and trust in money.00:35 Juan recounts his path from high school trading shitcoins to managing Kleros' treasury, while Stewart shares parallels with dot-com bubbles and Webvan.00:40 The conversation turns to tokenized assets, lending markets, and why stablecoin payments may be DeFi's Trojan horse for TradFi adoption.00:45 They explore interest rates, usury, and Ponzi dynamics, comparing Luna's 20% yields with unsustainable growth models in tech and crypto.00:50 Airdrops, VC-funded incentives, and short-term games are contrasted with building long-term financial infrastructure on-chain.00:55 Stewart brings up crypto as Venice in 1200, leading into reflections on finance as an information system, the rise of AI, and DeFi agents.01:00 Juan explains tokenized hedge funds, trusted execution environments, and prediction markets, ending with the power of conditional markets and the future of betting on beliefs.Key InsightsOne of the biggest risks in decentralized finance isn't just market volatility but the fragility of smart contracts. Juan Samitier emphasized that even with million-dollar audits, no code can ever be guaranteed safe, which is why hedging against hacks is essential for asset managers who want institutional capital to enter crypto.Insurance has always been about spreading risk, from 17th century spice ships facing pirates to DeFi protocols facing hackers. The same logic applies today: traders and treasuries are willing to sacrifice a small portion of yield to ensure that catastrophic losses won't wipe out their entire investment.Black swan events expose the limits of financial models, both in traditional finance and crypto. Juan pointed out that while risk models try to account for extreme scenarios, including every possible tail risk makes insurance math break down—a tension that shows why decentralized insurance is still early but necessary.Stablecoins emerged as crypto's attempt to recreate the dollar, but their design choices determine resilience. MakerDAO's DAI and USDS use overcollateralization for stability, while Luna's algorithmic model collapsed under pressure. These experiments mirror historical monetary crises like the Great Depression and Argentina's corralito, reminding us that trust in money is fragile.Argentina's history of inflation and government-imposed bank freezes makes its citizens uniquely receptive to crypto. Samitier explained that even people without financial training understand macroeconomic risks because they live with them daily, which helps explain why Argentina has some of the world's highest adoption of stablecoins and DeFi tools.The path to mainstream DeFi adoption may lie in the intersection of tokenized real-world assets, lending markets, and stablecoin payments. TradFi institutions are already asking how retail users access cheaper loans on-chain, showing that DeFi's efficiency could become the Trojan horse that pulls traditional finance deeper into crypto rails.Looking forward, the fusion of AI with DeFi may transform finance into an information-driven ecosystem. Trusted execution environments, prediction markets, and conditional markets could allow agents to trade on beliefs and probabilities with transparency, blending deterministic blockchains with probabilistic AI—a glimpse of what financial Venice in the information age might look like.
In this episode of Crazy Wisdom, Stewart Alsop sits down with Derek Osgood, CEO of DoubleO.ai, to talk about the challenges and opportunities of building with AI agents. The conversation ranges from the shift from deterministic to probabilistic processes, to how humans and LLMs think differently, to why lateral thinking, humor, and creative downtime matter for true intelligence. They also explore the future of knowledge work, the role of context engineering and memory in making agents useful, and the culture of talent, credentials, and hidden gems in Silicon Valley. You can check out Derek's work at doubleo.ai or connect with him on LinkedIn.Check out this GPT we trained on the conversationTimestamps00:00 Derek Osgood explains what AI agents are, the challenge of reliability and repeatability, and the difference between chat-based and process-based agents.05:00 Conversation shifts to probabilistic vs deterministic systems, with examples of agents handling messy data like LinkedIn profiles.10:00 Stewart Alsop and Derek discuss how humans reason compared to LLMs, token vs word prediction, and how language shapes action.15:00 They question whether chat interfaces are the right UX for AI, weighing structure, consistency, and the persistence of buttons in knowledge work.20:00 Voice interaction comes up, its sci-fi allure, and why unstructured speech makes it hard without stronger memory and higher-level reasoning.25:00 Derek unpacks OpenAI's approach to memory as active context retrieval, context engineering, and why vector databases aren't the full answer.30:00 They examine talent wars in AI, credentialism, signaling, and the difference between PhD-level model work and product design for agents.35:00 Leisure and creativity surface, linking downtime, fantasy, and imagination to better lateral thinking in knowledge work.40:00 Discussion of asynchronous AI reasoning, longer time horizons, and why extending “thinking time” could change agent behavior.45:00 Derek shares how Double O orchestrates knowledge work with natural language workflows, making agents act like teammates.50:00 They close with reflections on re-skilling, learning to work with LLMs, BS detection, and the future of critical thinking with AI.Key InsightsOne of the biggest challenges in building AI agents is not just creating them but ensuring their reliability, accuracy, and repeatability. It's easy to build a demo, but the “last mile” of making an agent perform consistently in the messy, unstructured real world is where the hard problems live.The shift from deterministic software to probabilistic agents reflects the complexity of real-world data and processes. Deterministic systems work only when inputs and outputs are cleanly defined, whereas agents can handle ambiguity, search for missing context, and adapt to different forms of information.Humans and LLMs share similarities in reasoning—both operate like predictive engines—but the difference lies in agency and lateral thinking. Humans can proactively choose what to do without direction and make wild connections across unrelated experiences, something current LLMs still struggle to replicate.Chat interfaces may not be the long-term solution for interacting with AI. While chat offers flexibility, it is too unstructured for many use cases. Derek argues for a hybrid model where structured UI/UX supports repeatable workflows, while chat remains useful as one tool within a broader system.Voice interaction carries promise but faces obstacles. The unstructured nature of spoken input makes it difficult for agents to act reliably without stronger memory, better context retrieval, and a more abstract understanding of goals. True voice-first systems may require progress toward AGI.Much of the magic in AI comes not from the models themselves but from context engineering. Effective systems don't just rely on vector databases and embeddings—they combine full context, partial context, and memory retrieval to create a more holistic understanding of user goals and history.Beyond the technical, the episode highlights cultural themes: credentialism, hidden talent, and the role of leisure in creativity. Derek critiques Silicon Valley's obsession with credentials and signaling, noting that true innovation often comes from hidden gem hires and from giving the brain downtime to make unexpected lateral connections that drive creative breakthroughs.
In this episode of Crazy Wisdom, Stewart Alsop speaks with Juan Verhook, founder of Tender Market, about how AI reshapes creativity, work, and society. They explore the risks of AI-generated slop versus authentic expression, the tension between probability and uniqueness, and why the complexity dilemma makes human-in-the-loop design essential. Juan connects bureaucracy to proto-AI, questions the incentives driving black-box models, and considers how scaling laws shape emergent intelligence. The conversation balances skepticism with curiosity, reflecting on authenticity, creativity, and the economic realities of building in an AI-driven world. You can learn more about Juan Verhook's work or connect with him directly through his LinkedIn or via his website at tendermarket.eu.Check out this GPT we trained on the conversationTimestamps00:00 – Stewart and Juan open by contrasting AI slop with authentic creative work. 05:00 – Discussion of probability versus uniqueness and what makes output meaningful. 10:00 – The complexity dilemma emerges, as systems grow opaque and fragile. 15:00 – Why human-in-the-loop remains central to trustworthy AI. 20:00 – Juan draws parallels between bureaucracy and proto-AI structures. 25:00 – Exploration of black-box models and the limits of explainability. 30:00 – The role of economic incentives in shaping AI development. 35:00 – Reflections on nature versus nurture in intelligence, human and machine. 40:00 – How scaling laws drive emergent behavior, but not always understanding. 45:00 – Weighing authenticity and creativity against automation's pull. 50:00 – Closing thoughts on optimism versus pessimism in the future of work.Key InsightsAI slop versus authenticity – Juan emphasizes that much of today's AI output tends toward “slop,” a kind of lowest-common-denominator content driven by probability. The challenge, he argues, is not just generating more information but protecting uniqueness and cultivating authenticity in an age where machines are optimized for averages.The complexity dilemma – As AI systems grow in scale, they become harder to understand, explain, and control. Juan frames this as a “complexity dilemma”: every increase in capability carries a parallel increase in opacity, leaving us to navigate trade-offs between power and transparency.Human-in-the-loop as necessity – Instead of replacing people, AI works best when embedded in systems where humans provide judgment, context, and ethical grounding. Juan sees human-in-the-loop design not as a stopgap, but as the foundation for trustworthy AI use.Bureaucracy as proto-AI – Juan provocatively links bureaucracy to early forms of artificial intelligence. Both are systems that process information, enforce rules, and reduce individuality into standardized outputs. This analogy helps highlight the social risks of AI if left unexamined: efficiency at the cost of humanity.Economic incentives drive design – The trajectory of AI is not determined by technical possibility alone but by the economic structures funding it. Black-box models dominate because they are profitable, not because they are inherently better for society. Incentives, not ideals, shape which technologies win.Nature, nurture, and machine intelligence – Juan extends the age-old debate about human intelligence into the AI domain, asking whether machine learning is more shaped by architecture (nature) or training data (nurture). This reflection surfaces the uncertainty of what “intelligence” even means when applied to artificial systems.Optimism and pessimism in balance – While AI carries risks of homogenization and loss of meaning, Juan maintains a cautiously optimistic view. By prioritizing creativity, human agency, and economic models aligned with authenticity, he sees pathways where AI amplifies rather than diminishes human potential.
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Michael Jagdeo, a headhunter and founder working with Exponent Labs and The Syndicate, about the cycles of money, power, and technology that shape our world. Their conversation touches on financial history through The Ascent of Money by Niall Ferguson and William Bagehot's The Money Market, the rise and fall of financial centers from London to New York and the new Texas Stock Exchange, the consolidation of industries and the theory of oligarchical collectivism, the role of AI as both tool and chaos agent, Bitcoin and “quantitative re-centralization,” the dynamics of exponential organizations, and the balance between collectivism and individualism. Jagdeo also shares recruiting philosophies rooted in stories like “stone soup,” frameworks like Yu-Kai Chou's Octalysis and the User Type Hexad, and book recommendations including Salim Ismail's Exponential Organizations and Arthur Koestler's The Act of Creation. Along the way they explore servant leadership, Price's Law, Linux and open source futures, religion as an operating system, and the cyclical nature of civilizations. You can learn more about Michael Jagdeo or reach out to him directly through Twitter or LinkedIn.Check out this GPT we trained on the conversationTimestamps00:05 Stewart Alsop introduces Michael Jagdeo, who shares his path from headhunting actuaries and IT talent into launching startups with Exponent Labs and The Syndicate.00:10 They connect recruiting to financial history, discussing actuaries, The Ascent of Money, and William Bagehot's The Money Market on the London money market and railways.00:15 The Rothschilds, institutional knowledge, and Corn Laws lead into questions about New York as a financial center and the quiet launch of the Texas Stock Exchange by Citadel and BlackRock.00:20 Capital power, George Soros vs. the Bank of England, chaos, paper clips, and Orwell's oligarchical collectivism frame industry consolidation, syndicates, and stone soup.00:25 They debate imperial conquest, bourgeoisie leisure, the decline of the middle class, AI as chaos agent, digital twins, Sarah Connor, Godzilla, and nuclear metaphors.00:30 Conversation turns to Bitcoin, “quantitative re-centralization,” Jack Bogle, index funds, Robinhood micro bailouts, and AI as both entropy and negative entropy.00:35 Jagdeo discusses Jim Keller, Tenstorrent, RISC-V, Nvidia CUDA, exponential organizations, Price's Law, bureaucracy, and servant leadership with the parable of stone soup.00:40 Recruiting as symbiosis, biophilia, trust, Judas, Wilhelm Reich, AI tools, Octalysis gamification, Jordan vs. triangle offense, and the role of laughter in persuasion emerge.00:45 They explore religion as operating systems, Greek gods, Comte's stages, Nietzsche, Jung, nostalgia, scientism, and Jordan Peterson's revival of tradition.00:50 The episode closes with Linux debates, Ubuntu, Framer laptops, PewDiePie, and Jagdeo's nod to Liminal Snake on epistemic centers and turning curses into blessings.Key InsightsOne of the central insights of the conversation is how financial history repeats through cycles of consolidation and power shifts. Michael Jagdeo draws on William Bagehot's The Money Market to explain how London became the hub of European finance, much like New York later did, and how the Texas Stock Exchange signals a possible southern resurgence of financial influence in America. The pattern of wealth moving with institutional shifts underscores how markets, capital, and politics remain intertwined.Jagdeo and Alsop emphasize that industries naturally oligarchize. Borrowing from Orwell's “oligarchical collectivism,” Jagdeo notes that whether in diamonds, food, or finance, consolidation emerges as economies of scale take over. This breeds syndicates and monopolies, often interpreted as conspiracies but really the predictable outcome of industrial maturation.Another powerful theme is the stone soup model of collaboration. Jagdeo applies this parable to recruiting, showing that no single individual can achieve large goals alone. By framing opportunities as shared ventures where each person adds their own ingredient, leaders can attract top talent while fostering genuine symbiosis.Technology, and particularly AI, is cast as both chaos agent and amplifier of human potential. The conversation likens AI to nuclear power—capable of great destruction or progress. From digital twins to Sarah Connor metaphors, they argue AI represents not just artificial intelligence but artificial knowledge and action, pushing humans to adapt quickly to its disruptive presence.The discussion of Bitcoin and digital currencies reframes decentralization as potentially another trap. Jagdeo provocatively calls Bitcoin “quantitative re-centralization,” suggesting that far from liberating individuals, digital currencies may accelerate neo-feudalism by creating new oligarchies and consolidating financial control in unexpected ways.Exponential organizations and the leverage of small teams emerge as another key point. Citing Price's Law, Jagdeo explains how fewer than a dozen highly capable individuals can now achieve billion-dollar valuations thanks to open source hardware, AI, and network effects. This trend redefines scale, making nimble collectives more powerful than bureaucratic giants.Finally, the episode highlights the cyclical nature of civilizations and belief systems. From Rome vs. Carthage to Greek gods shifting with societal needs, to Nietzsche's “God is dead” and Jung's view of recurring deaths of divinity, Jagdeo argues that religion, ideology, and operating systems reflect underlying incentives. Western nostalgia for past structures, whether political or religious, risks idolatry, while the real path forward may lie in new blends of individualism, collectivism, and adaptive tools like Linux and AI.
In this episode of Crazy Wisdom, Stewart Alsop talks with Paul Spencer about the intersection of AI and astrology, the balance of fate and free will, and how embodiment shapes human experience in time and space. They explore cultural shifts since 2020, the fading influence of institutions, the “patchwork age” of decentralized communities, and the contrasts between solar punk and cyberpunk visions for the future. Paul shares his perspective on America's evolving role, the symbolism of the Aquarian Age, and why philosophical, creative, and practical adaptability will be essential in the years ahead. You can connect with Paul and explore more of his work and writings at zeitvillemedia.substack.com, or find him as @ZeitvilleMedia on Twitter and You Tube.Check out this GPT we trained on the conversationTimestamps00:00 Stewart Alsop and Paul Spencer open with a discussion on AI and astrology, exploring fate versus free will and how human embodiment shapes the way we move through time and space.05:00 Paul contrasts the human timeline, marked by death, with AI's lack of finality, bringing in Brian Johnson's transhumanism and the need for biological embodiment for true AI utility.10:00 They explore how labor, trade, food, and procreation anchor human life, connecting these to the philosophical experience of space and time.15:00 Nietzsche and Bergson's ideas on life force, music, and tactile philosophy are discussed as alternatives to detached Enlightenment thinking.20:00 The conversation shifts to social media's manipulation, institutional decay after 2020, and the absence of an “all clear” moment.25:00 They reflect on the chaotic zeitgeist, nostalgia for 2021's openness, and people faking cultural cohesion.30:00 Paul uses Seinfeld as an example of shared codes, contrasting it with post-woke irony and drifting expectations.35:00 Pluto in Aquarius and astrological energies frame a shift from heaviness to a delirious cultural mood.40:00 Emotional UBI and the risks of avoiding emotional work lead into thoughts on America's patchwork future.45:00 They explore homesteading, raw milk as a cultural symbol, and the tension between consumerism and alternative visions like solar punk and cyberpunk.50:00 Paul highlights the need for cross-tribal diplomacy, the reality of the surveillance state, and the Aquarian Age's promise of decentralized solutions.Key InsightsPaul Spencer frames astrology as a way to understand the interplay of fate and free will within the embodied human experience, emphasizing that humans are unique in their awareness of time and mortality, which gives life story and meaning.He argues that AI, while useful for shifting perspectives, lacks “skin in the game” because it has no embodiment or death, and therefore cannot fully grasp or participate in the human condition unless integrated into biological or cybernetic systems.The conversation contrasts human perception of space and time, drawing from philosophers like Nietzsche and Bergson who sought to return philosophy to the body through music, dance, and tactile experiences, challenging abstract, purely cerebral approaches.Post-2020 culture is described as a “patchwork age” without a cohesive zeitgeist, where people often “fake it” through thin veneers of social codes. This shift, combined with Pluto's move into Aquarius, has replaced the heaviness of previous years with a chaotic, often giddy nihilism.America is seen as the primary arena for the patchwork age due to its pioneering, experimental spirit, with regional entrepreneurship and cultural biodiversity offering potential for renewal, even as nostalgia for past unity and imperial confidence lingers.Tensions between “solar punk” and “cyberpunk” visions highlight the need for cross-tribal diplomacy—connecting environmentalist, primitivist, and high-tech decentralist communities—because no single approach will be sufficient to navigate accelerating change.The Aquarian Age, following the Piscean Age in the procession of the equinoxes, signals a movement from centralized, hypnotic mass programming toward decentralized, engineering-focused solutions, where individuals must focus on building beauty and resilience in their own worlds rather than being consumed by “they” narratives.
In this episode of Crazy Wisdom, Stewart Alsop talks with Cathal, founder of Poliebotics and creator of the “truth beam” system, about proof of liveness technology, blockchain-based verification, projector-camera feedback loops, physics-based cryptography, and how these tools could counter deepfakes and secure biodiversity data. They explore applications ranging from conservation monitoring on Cathal's island in Ireland to robot-assisted farming, as well as the intersection of nature, humanity, and AI. Cathal also shares thoughts on open-source tools like Jitsi and Element, and the cultural shifts emerging from AI-driven creativity. Find more about his work and Poliebotics in Github and Twitter.Check out this GPT we trained on the conversationTimestamps00:00 Stewart Alsop introduces Cathal, starting with proof of liveness vs proof of aliveness and deepfake challenges.05:00 Cathal explains projector-camera feedback loops, Perlin noise, cryptographic hashing, blockchain timestamps via Rootstock.10:00 Discussion on using multiple blockchains for timestamps, physics-based timing, and recording verification.15:00 Early Bitcoin days, cypherpunk culture, deterministic vs probabilistic systems.20:00 Projector emissions, autoencoders, six-channel matrix data type, training discriminators.25:00 Decentralized verification, truth beams, building trust networks without blockchain.30:00 Optical interlinks, testing computational nature of reality, simulation ideas.35:00 Dystopia vs optimism, AI offense in cybersecurity, reputation networks.40:00 Reality transform, projecting AI into reality, creative agents, philosophical implications.45:00 Conservation applications, biodiversity monitoring, insect assays, cryptographically secured data.50:00 Optical cryptography, analog feedback loops, quantum resistance.55:00 Open source tools, Jitsi, Element, cultural speciation, robot-assisted farming, nature-human-AI coexistence.Key InsightsCathal's “proof of liveness” aims to authenticate real-time video by projecting cryptographically generated patterns onto a subject and capturing them with synchronized cameras, making it extremely difficult for deepfakes or pre-recorded footage to pass as live content.The system uses blockchain timestamps—currently via Rootstock, a Bitcoin sidechain running the Ethereum Virtual Machine—to anchor these projections in a decentralized, physics-based timeline, ensuring verification doesn't depend on trusting a single authority.A distinctive six-channel matrix data type, created by combining projector and camera outputs, is used to train neural network discriminators that determine whether a recording and projection genuinely match, allowing for scalable automated verification.Cathal envisions “truth beams” as portable, collaborative verification devices that could build decentralized trust networks and even operate without blockchains once enough verified connections exist.Beyond combating misinformation, the same projector-camera systems could serve conservation efforts—recording biodiversity data, securing it cryptographically, and supporting projects like insect population monitoring and bird song analysis on Cathal's island in Ireland.Cathal is also exploring “reality transform” technology, which uses projection and AI to overlay generated imagery onto real-world objects or people in real time, raising possibilities for artistic expression, immersive experiences, and creative AI-human interaction.Open-source philosophy underpins his approach, favoring tools like Jitsi for secure video communication and advocating community-driven development to prevent centralized control over truth verification systems, while also exploring broader societal shifts like cultural speciation and cooperative AI-human-nature systems.
In this episode of Crazy Wisdom, host Stewart Alsop talks with Zachary Cote, Executive Director of Thinking Nation, about how history education can shape citizens who think critically rather than simply memorize facts. They explore the role of memory, the ethics of curation in a decentralized media landscape, and the need to rebuild trust in institutions through humility, collaboration, and historical thinking. Zachary shares insights from his teaching experience and emphasizes intellectual humility as essential for civic life and learning in the age of AI. You can learn more about his work at thinkingnation.org and follow @Thinking_Nation on social media.Check out this GPT we trained on the conversationTimestamps00:00 – Zachary introduces Thinking Nation's mission to foster critical thinking in history education, distinguishing memory from deeper historical discipline.05:00 – They unpack the complexity of memory, collective narratives, and how individuals curate their own realities, especially in a decentralized media landscape.10:00 – Zachary explains why epistemology and methodology matter more than static facts, and how ethical curation can shape flourishing societies.15:00 – Discussion turns to how history is often used for cultural arguments, and the need to reframe it as a tool for understanding rather than judgment.20:00 – They explore AI in education, contrasting it as tool vs. crutch, and warning about students' lack of question-asking skills.25:00 – The conversation shifts to authority, institutions, and tradition as “democracy extended to the dead.”30:00 – Stewart and Zachary reflect on rebuilding trust through honesty, humility, collaboration, and asking better questions.35:00 – They consider the decentralizing effects of technology and the urgency of restoring shared principles.40:00 – Zachary emphasizes contextualization, empathy, and significance as historical thinking skills rooted in humility.45:00 – They close on the challenge of writing and contributing meaningfully through questions and confident, honest articulation.Key InsightsZachary Cote argues that history education should move beyond memorization and focus on cultivating thinking citizens. He reframes history as a discipline of inquiry, where the past is the material through which students develop critical, ethical reasoning.The concept of memory is central to understanding history. Zachary highlights that we all remember differently based on our environment and identity, which complicates any attempt at a single, unified national narrative. This complexity invites us to focus on shared methodologies rather than consensus on content.In an age of media fragmentation and curated realities, Zachary emphasizes the importance of equipping students with epistemological tools to evaluate and contextualize information ethically, rather than reinforcing echo chambers or binary ideologies.The conversation calls out the educational system's obsession with data and convenient assessment, arguing that what matters most—like humility, critical thinking, and civic understanding—is often left out because it's harder to measure.Zachary sees AI as a powerful tool that, if used well, could help assess deeper thinking skills. But he warns that without training in asking good questions, students may treat AI like a gospel rather than a starting point for inquiry.Authority and tradition, often dismissed in a culture obsessed with novelty, are reframed by Zachary as essential democratic tools. Citing Chesterton, he argues that tradition is “democracy extended to the dead,” reminding us that collective wisdom includes voices from the past.Humility emerges as a recurring theme—not just spiritual or social humility, but intellectual humility. Through historical thinking skills like contextualization, empathy, and significance, students can learn to approach the past (and the present) with curiosity rather than certainty, making room for deeper civic engagement.
In this episode of Crazy Wisdom, host Stewart Alsop sits down with astrologer and researcher C.T. Lucero for a wide-ranging conversation that weaves through ancient astrology, the evolution of calendars, the intersection of science and mysticism, and the influence of digital tools like AI on symbolic interpretation. They explore the historical lineage from Hellenistic Greece to the Persian golden age, discuss the implications of the 2020 Saturn-Jupiter conjunction, touch on astrocartography, and reflect on the information age's shifting paradigms. For more on the guest's work, check out ctlucero.com.Check out this GPT we trained on the conversationTimestamps00:00 Stewart Alsop introduces C.T. Lucero; they begin discussing time cycles and the metaphor of Monday as an unfolding future.05:00 Astrology's historical roots in Hellenistic Greece and Persian Baghdad; the transmission and recovery of ancient texts.10:00 The role of astrology in medicine and timing; predictive precision and interpreting symbolic calendars.15:00 Scientism vs. astrological knowledge; the objective reliability of planetary movement compared to shifting cultural narratives.20:00 Use of AI and large language models in astrology; the limits and future potential of automation in interpretation.25:00 Western vs. Vedic astrology; the sidereal vs. tropical zodiac debate and cultural preservation of techniques.30:00 Christianity, astrology, and the problem of idolatry; Jesus' position in relation to celestial knowledge.35:00 The Saturn-Jupiter conjunction of 2020; vaccine rollout and election disputes as symbolic markers.40:00 The Mayan Venus calendar and its eight-year cycle; 2020 as the true “end of the world.”45:00 Media manipulation, air-age metaphors, and digital vs. analog paradigms; the rise of new empires.50:00 Astrocartography and relocation charts; using place to understand personal missions.Key InsightsAstrology as a Temporal Framework: C.T. Lucero presents astrology not as mysticism but as a sophisticated calendar system rooted in observable planetary cycles. He compares astrological timekeeping to how we intuitively understand days of the week—Sunday indicating rest, Monday bringing activity—arguing that longer astrological cycles function similarly on broader scales.Historical Continuity and Translation: The episode traces astrology's lineage from Hellenistic Greece through Persian Baghdad and into modernity. Lucero highlights the massive translation efforts over the past 30 years, particularly by figures like Benjamin Dykes, which have recovered lost knowledge and corrected centuries of transcription errors, contributing to what he calls astrology's third golden age.Cultural and Linguistic Barriers to Knowledge: Lucero and Alsop discuss how language borders—historically with Latin and Greek, and now digitally with regional languages—have obscured access to valuable knowledge. This extends to old medical practices and astrology, which were often dismissed simply because their documentation wasn't widely accessible.Astrology vs. Scientism: Lucero critiques scientism for reducing prediction to material mechanisms while ignoring symbolic and cyclical insights that astrology offers. He stresses astrology's predictive power lies in pattern recognition and contextual interpretation, not in deterministic forecasts.Astrology and the Digital Age: AI and LLMs are starting to assist astrologers by generating interpretations and extracting planetary data, though Lucero points out that deep symbolic synthesis still exceeds AI's grasp. Specialized astrology AIs are emerging, built by domain experts for richer, more accurate analysis.Reevaluating Vedic and Mayan Systems: Lucero asserts that Western and Vedic astrology share a common origin, and even the Mayan Venus calendar may reflect the same underlying system. While the Indian tradition preserved techniques lost in the West, both traditions illuminate astrology's adaptive yet consistent core.2020 as a Historical Turning Point: According to Lucero, the Saturn-Jupiter conjunction of December 2020 marked the start of a 20-year societal cycle and the end of a Mayan Venus calendar “day.” He links this to transformative events like the vaccine rollout and U.S. election, framing them as catalysts for long-term shifts in trust, governance, and culture.
In this episode of Crazy Wisdom, host Stewart Alsop speaks with Ryan Estes about the intersections of podcasting, AI, ancient philosophy, and the shifting boundaries of consciousness and technology. Their conversation spans topics like the evolution of language, the impact of AI on human experience, the role of sensory interfaces, the tension between scientism and spiritual insight, and how future technologies might reshape power structures and daily life. Ryan also shares thoughts on data ownership, the illusion of modern VR, and the historical suppression of mystical knowledge. Listeners can connect with Ryan on LinkedIn and check out his podcast at AIforFounders.co.Check out this GPT we trained on the conversationTimestamps00:00 – Stewart Alsop and Ryan Estes open with thoughts on podcasting, conversation as primal instinct, and the richness of voice communication.05:00 – Language and consciousness, bicameral mind theory, early religion, and auditory hallucinations.10:00 – AI, cognitive ergonomics, interfacing with tech, new modes of communication, and speculative consciousness.15:00 – Scientism, projections, and authenticity; ownership of hardware, software, and data.20:00 – Tech oligarchs, Apple, Google, OpenAI, and privacy trade-offs.25:00 – VR, escapism, illusion vs. reality, Buddhist and Gnostic parallels.30:00 – Magic, Neoplatonism, Copernicus, alchemy, and suppressed knowledge.35:00 – Oligarchy, the fragile middle class, democracy's design, and authority temptation.40:00 – AGI, economic shifts, creative labor, vibe coding, and optimism about future work.45:00 – Podcasting's future, amateur charm, content creation tools, TikTok promotion.Key InsightsConversation is a foundational human instinct that transcends digital noise and brings people together in a meaningful way. Ryan Estes reflects on how podcasting revives the richness of dialogue, countering the flattening effects of modern communication platforms.The evolution of language might have sparked consciousness itself. Drawing on theories like the bicameral mind, Estes explores how early humans may have experienced internal commands as divine voices, illustrating a deep link between communication, cognition, and early religious structures.AI is not just a tool but a bridge to new kinds of consciousness. With developments in cognitive ergonomics and responsive interfaces, Estes imagines a future where subconscious cues might influence technology directly, reshaping how we interact with our environment and each other.Ownership of software, hardware, and data is emerging as a critical issue. Estes emphasizes that to avoid dystopian outcomes—such as corporate control via neural interfaces—individuals must reclaim the stack, potentially profiting from their own data and customizing their tech experiences.Virtual reality and AI-generated environments risk becoming addictive escapes, particularly for marginalized populations. Estes likens this to a digital opiate, drawing parallels to spiritual ideas about illusion and cautioning against losing ourselves in these seductive constructs.The suppression of mystical traditions—like Gnosticism, Neoplatonism, and indigenous knowledge—has led to vast cultural amnesia. Estes underscores how historical power structures systematically erased insights that modern AI might help rediscover or recontextualize.Despite the turbulence, AI and AGI offer a radically optimistic future. Estes sees the potential for a 10x productivity boost and entirely new forms of work, creativity, and leisure, reshaping what it means to be economically and spiritually fulfilled in a post-knowledge age.
Crazy Wisdom: Read the notes at at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- On this episode of Crazy Wisdom, Stewart Alsop speaks with Rory Aronson, CEO of FarmBot, about how his open-source hardware project is transforming home gardening into a more automated and accessible practice. Rory explains how FarmBot works—essentially as a CNC machine for your garden—covering its evolution from Arduino-based electronics to custom boards, the challenges of integrating hardware and software, and the role of closed-loop feedback systems to prevent errors. They explore solarpunk visions of distributed food systems, discuss the importance of “useful source” documentation in open-source hardware, and imagine a future where growing food is as easy as running a dishwasher. For more on Rory and FarmBot, check out farm.bot and the open-source resources at docs.farm.bot.Check out this GPT we trained on the conversationTimestamps00:00 Rory explains FarmBot as a CNC machine for gardens, using Arduino and Raspberry Pi, automating planting, watering, and weeding.05:00 Discussion on the hardware stack evolution, open-source electronics roots, and moving to custom boards for better integration.10:00 Stewart shares his Raspberry Pi experiments, Rory breaks down the software layers from cloud apps to firmware, emphasizing complexity.15:00 Conversation shifts to closed-loop feedback with rotary encoders, avoiding 3D printer-style “spaghetti” errors in outdoor environments.20:00 Rory explores open-source challenges, highlighting “useful source” documentation and hardware accessibility for modifications.25:00 Solarpunk vision emerges: distributed food systems, automation enabling home-grown fresh food without expert knowledge.30:00 Raised bed setup, energy efficiency, and FarmBot as a home appliance concept for urban and suburban gardens.35:00 Small-scale versus industrial farming, niche commercial uses like seedling automation, and user creativity with custom tools.40:00 AI potential with vision systems, LLMs for garden planning, and enhancing FarmBot intelligence for real-time adaptation.45:00 Sensors, soil monitoring, image analysis for plant health, and empowering users to integrate FarmBot into smart homes.50:00 Rory describes community innovations, auxiliary hardware, and open documentation supporting experimentation.55:00 Final reflections on solarpunk futures, automation as empowerment, and how to access FarmBot's resources online.Key InsightsRory Aronson shares how FarmBot began as a DIY project built on Arduino and Raspberry Pi, leveraging the open-source 3D printing ecosystem to prototype quickly. Over time, they transitioned to custom circuit boards to meet the specific demands of automating gardening tasks like seed planting, watering, and weeding, highlighting the tradeoffs between speed to market and long-term hardware optimization.The conversation unpacks the complexity of FarmBot's “stack,” which integrates cloud-based software, a web app, a message broker, a Raspberry Pi running a custom OS, and firmware on both Arduino and auxiliary chips for real-time feedback. This layered approach is crucial for precision in an unpredictable outdoor environment where mechanical errors could damage growing plants.Aronson emphasizes that being open source isn't enough; to be genuinely useful, projects must provide extensive, accessible documentation and export files in open, affordable formats. Without this, open source risks being a hollow promise for most users, especially in hardware where barriers to modification are higher.They explore the solarpunk potential of FarmBot, imagining a future where growing food at home is as effortless as using a washing machine. By turning gardening into an automated process, FarmBot enables people to produce fresh vegetables without needing expertise, offering resilience against industrial food systems reliant on monoculture and long supply chains.Aronson points out that while FarmBot isn't designed for industrial agriculture, its modularity allows it to support niche commercial use cases, like automating seedling production in cleanroom environments. This adaptability reflects the broader vision of empowering both individuals and small operations with accessible automation tools.The episode highlights user creativity enabled by FarmBot's open hardware, including custom tools like side-mounted mirrors for alternative camera angles and pneumatic grippers for harvesting. These community-driven innovations showcase the platform's flexibility and the value of encouraging experimentation.Finally, Aronson sees great potential for integrating AI, particularly vision systems and multimodal LLMs, to make FarmBot smarter—detecting pests, diagnosing plant health, and even planning gardens tailored to user goals like nutrient needs or event timelines, moving closer to a truly intelligent gardening companion.
Crazy Wisdom Key Takeaways FarmBot is a robotic farmer for your garden, designed to take care of your garden by performing functions such as planting seeds, watering, weeding, and monitoringSimply being open source is not enough. For a project to be genuinely useful, it must also have extensive, clear documentation and use open, affordable file formatsToday, the vast majority of food that people eat is grown very far away and in ways that is not great for the food or environment We have very little control over the food production system, which is vital to our existence Let us get back to the smaller scale, more diverse polycrop system of food production; many follow-on benefits will result Building a resilient alternative to industrial food systems (which often rely on single-crop farming) reduces single points of failure along vulnerable supply chains The more that we can distribute the food system and bring it closer to the end-eater, the more robust our overall food system becomes Read the full notes @ podcastnotes.orgOn this episode of Crazy Wisdom, Stewart Alsop speaks with Rory Aronson, CEO of FarmBot, about how his open-source hardware project is transforming home gardening into a more automated and accessible practice. Rory explains how FarmBot works—essentially as a CNC machine for your garden—covering its evolution from Arduino-based electronics to custom boards, the challenges of integrating hardware and software, and the role of closed-loop feedback systems to prevent errors. They explore solarpunk visions of distributed food systems, discuss the importance of “useful source” documentation in open-source hardware, and imagine a future where growing food is as easy as running a dishwasher. For more on Rory and FarmBot, check out farm.bot and the open-source resources at docs.farm.bot.Check out this GPT we trained on the conversationTimestamps00:00 Rory explains FarmBot as a CNC machine for gardens, using Arduino and Raspberry Pi, automating planting, watering, and weeding.05:00 Discussion on the hardware stack evolution, open-source electronics roots, and moving to custom boards for better integration.10:00 Stewart shares his Raspberry Pi experiments, Rory breaks down the software layers from cloud apps to firmware, emphasizing complexity.15:00 Conversation shifts to closed-loop feedback with rotary encoders, avoiding 3D printer-style “spaghetti” errors in outdoor environments.20:00 Rory explores open-source challenges, highlighting “useful source” documentation and hardware accessibility for modifications.25:00 Solarpunk vision emerges: distributed food systems, automation enabling home-grown fresh food without expert knowledge.30:00 Raised bed setup, energy efficiency, and FarmBot as a home appliance concept for urban and suburban gardens.35:00 Small-scale versus industrial farming, niche commercial uses like seedling automation, and user creativity with custom tools.40:00 AI potential with vision systems, LLMs for garden planning, and enhancing FarmBot intelligence for real-time adaptation.45:00 Sensors, soil monitoring, image analysis for plant health, and empowering users to integrate FarmBot into smart homes.50:00 Rory describes community innovations, auxiliary hardware, and open documentation supporting experimentation.55:00 Final reflections on solarpunk futures, automation as empowerment, and how to access FarmBot's resources online.Key InsightsRory Aronson shares how FarmBot began as a DIY project built on Arduino and Raspberry Pi, leveraging the open-source 3D printing ecosystem to prototype quickly. Over time, they transitioned to custom circuit boards to meet the specific demands of automating gardening tasks like seed planting, watering, and weeding, highlighting the tradeoffs between speed to market and long-term hardware optimization.The conversation unpacks the complexity of FarmBot's “stack,” which integrates cloud-based software, a web app, a message broker, a Raspberry Pi running a custom OS, and firmware on both Arduino and auxiliary chips for real-time feedback. This layered approach is crucial for precision in an unpredictable outdoor environment where mechanical errors could damage growing plants.Aronson emphasizes that being open source isn't enough; to be genuinely useful, projects must provide extensive, accessible documentation and export files in open, affordable formats. Without this, open source risks being a hollow promise for most users, especially in hardware where barriers to modification are higher.They explore the solarpunk potential of FarmBot, imagining a future where growing food at home is as effortless as using a washing machine. By turning gardening into an automated process, FarmBot enables people to produce fresh vegetables without needing expertise, offering resilience against industrial food systems reliant on monoculture and long supply chains.Aronson points out that while FarmBot isn't designed for industrial agriculture, its modularity allows it to support niche commercial use cases, like automating seedling production in cleanroom environments. This adaptability reflects the broader vision of empowering both individuals and small operations with accessible automation tools.The episode highlights user creativity enabled by FarmBot's open hardware, including custom tools like side-mounted mirrors for alternative camera angles and pneumatic grippers for harvesting. These community-driven innovations showcase the platform's flexibility and the value of encouraging experimentation.Finally, Aronson sees great potential for integrating AI, particularly vision systems and multimodal LLMs, to make FarmBot smarter—detecting pests, diagnosing plant health, and even planning gardens tailored to user goals like nutrient needs or event timelines, moving closer to a truly intelligent gardening companion.
Crazy Wisdom Key Takeaways FarmBot is a robotic farmer for your garden, designed to take care of your garden by performing functions such as planting seeds, watering, weeding, and monitoringSimply being open source is not enough. For a project to be genuinely useful, it must also have extensive, clear documentation and use open, affordable file formatsToday, the vast majority of food that people eat is grown very far away and in ways that is not great for the food or environment We have very little control over the food production system, which is vital to our existence Let us get back to the smaller scale, more diverse polycrop system of food production; many follow-on benefits will result Building a resilient alternative to industrial food systems (which often rely on single-crop farming) reduces single points of failure along vulnerable supply chains The more that we can distribute the food system and bring it closer to the end-eater, the more robust our overall food system becomes Read the full notes @ podcastnotes.orgOn this episode of Crazy Wisdom, Stewart Alsop speaks with Rory Aronson, CEO of FarmBot, about how his open-source hardware project is transforming home gardening into a more automated and accessible practice. Rory explains how FarmBot works—essentially as a CNC machine for your garden—covering its evolution from Arduino-based electronics to custom boards, the challenges of integrating hardware and software, and the role of closed-loop feedback systems to prevent errors. They explore solarpunk visions of distributed food systems, discuss the importance of “useful source” documentation in open-source hardware, and imagine a future where growing food is as easy as running a dishwasher. For more on Rory and FarmBot, check out farm.bot and the open-source resources at docs.farm.bot.Check out this GPT we trained on the conversationTimestamps00:00 Rory explains FarmBot as a CNC machine for gardens, using Arduino and Raspberry Pi, automating planting, watering, and weeding.05:00 Discussion on the hardware stack evolution, open-source electronics roots, and moving to custom boards for better integration.10:00 Stewart shares his Raspberry Pi experiments, Rory breaks down the software layers from cloud apps to firmware, emphasizing complexity.15:00 Conversation shifts to closed-loop feedback with rotary encoders, avoiding 3D printer-style “spaghetti” errors in outdoor environments.20:00 Rory explores open-source challenges, highlighting “useful source” documentation and hardware accessibility for modifications.25:00 Solarpunk vision emerges: distributed food systems, automation enabling home-grown fresh food without expert knowledge.30:00 Raised bed setup, energy efficiency, and FarmBot as a home appliance concept for urban and suburban gardens.35:00 Small-scale versus industrial farming, niche commercial uses like seedling automation, and user creativity with custom tools.40:00 AI potential with vision systems, LLMs for garden planning, and enhancing FarmBot intelligence for real-time adaptation.45:00 Sensors, soil monitoring, image analysis for plant health, and empowering users to integrate FarmBot into smart homes.50:00 Rory describes community innovations, auxiliary hardware, and open documentation supporting experimentation.55:00 Final reflections on solarpunk futures, automation as empowerment, and how to access FarmBot's resources online.Key InsightsRory Aronson shares how FarmBot began as a DIY project built on Arduino and Raspberry Pi, leveraging the open-source 3D printing ecosystem to prototype quickly. Over time, they transitioned to custom circuit boards to meet the specific demands of automating gardening tasks like seed planting, watering, and weeding, highlighting the tradeoffs between speed to market and long-term hardware optimization.The conversation unpacks the complexity of FarmBot's “stack,” which integrates cloud-based software, a web app, a message broker, a Raspberry Pi running a custom OS, and firmware on both Arduino and auxiliary chips for real-time feedback. This layered approach is crucial for precision in an unpredictable outdoor environment where mechanical errors could damage growing plants.Aronson emphasizes that being open source isn't enough; to be genuinely useful, projects must provide extensive, accessible documentation and export files in open, affordable formats. Without this, open source risks being a hollow promise for most users, especially in hardware where barriers to modification are higher.They explore the solarpunk potential of FarmBot, imagining a future where growing food at home is as effortless as using a washing machine. By turning gardening into an automated process, FarmBot enables people to produce fresh vegetables without needing expertise, offering resilience against industrial food systems reliant on monoculture and long supply chains.Aronson points out that while FarmBot isn't designed for industrial agriculture, its modularity allows it to support niche commercial use cases, like automating seedling production in cleanroom environments. This adaptability reflects the broader vision of empowering both individuals and small operations with accessible automation tools.The episode highlights user creativity enabled by FarmBot's open hardware, including custom tools like side-mounted mirrors for alternative camera angles and pneumatic grippers for harvesting. These community-driven innovations showcase the platform's flexibility and the value of encouraging experimentation.Finally, Aronson sees great potential for integrating AI, particularly vision systems and multimodal LLMs, to make FarmBot smarter—detecting pests, diagnosing plant health, and even planning gardens tailored to user goals like nutrient needs or event timelines, moving closer to a truly intelligent gardening companion.
In this episode of Crazy Wisdom, Stewart Alsop sits down with the masked collective known as the PoliePals—led by previous guest Cathal—to explore their audacious vision of blending humans, nature, and machines through cryptographic reality verification and decentralized systems. They talk about neural and cryptographic projector-camera technologies like the “truth beam” and “reality transform,” analog AI using optical computing, and how open protocols and decentralized consensus could shift power away from corporate control. Along the way, they share stories from Moad's chaotic tinkering workshop, Meta's precise Rust-coded Alchemy project, and Terminus Actual's drone Overwatch. For links to their projects, visit Poliebotics on Twitter and Poliebotics on GitHub.Check out this GPT we trained on the conversationTimestamps00:05 Neural and cryptographic projector-camera systems, reality transform for art and secure recordings, provably unclonable functions.00:10 Moad's GNOMAD identity, chaotic holistic problem-solving, tinkering with tools, truth beam's manifold mapping.00:15 Terminus Actual's drone Overwatch, security focus, six hats theory, Lorewalker's cryptic mathematical integrations.00:20 Analog AI and optical computing, stacked computational layers, local inference, physical reality interacting with AI.00:25 Meta's Alchemy software, music-driven robotics, precise Rust programming, contrast with neural network unpredictability.00:30 Decentralization, corporate dependency critique, hardware ownership, open protocols like Matrix, web of trust, Sybil attacks.00:35 Truth beam feedback loops, decentralized epistemology, neo-feudalism, Diamond Age references, nano drone warfare theory.00:40 Biotech risks, lab truth beams for verification, decentralized ID systems, qualitative consensus manifolds.00:45 Maker culture insights, 3D printing community, iterative prototyping, simulators, recycling prints.00:50 Investment casting, alternative energy for classic cars, chaotic hardware solutions, MoAD workshop's mystical array.00:55 Upcoming PolyPals content, Big Yellow Island recordings, playful sign-offs, decentralized futures.Key InsightsThe PoliePals are pioneering a system that combines cryptographic models, neural projector-camera technologies, and decentralized networks to create tools like the “truth beam” and “reality transform,” which verify physical reality as a provably unclonable function. This innovation aims to secure recordings and provide a foundation for trustworthy AI training data by looping projections of blockchain-derived noise into reality and back.Moad's character, the GNOMAD—a hybrid of gnome and nomad—embodies a philosophy of chaotic problem-solving using holistic, artful solutions. His obsession with edge cases and tinkering leads to surprising fixes, like using a tin of beans to repair a broken chair leg, and illustrates how resourcefulness intersects with decentralization in practical ways.Terminus Actual provides a counterbalance in the group dynamic, bringing drone surveillance expertise and a healthy skepticism about humanity's inherent decency. His perspective highlights the need for security consciousness and cautious optimism when developing open systems that could otherwise be exploited.Meta's Alchemy project demonstrates the contrast between procedural precision and chaotic neural approaches. Written entirely in Rust, it enables music-driven robotic control for real-world theater environments. Alchemy represents a future where tightly optimized code can interact seamlessly with hardware like Arduinos while remaining resistant to AI's unpredictable tendencies.The episode explores how decentralization could shape the coming decades, likening it to a neo-feudal age where people consciously opt into societies based on shared values. With open protocols like Matrix, decentralized IDs, and webs of trust, individuals could regain agency over their data and technological ecosystems while avoiding corporate lock-in.Optical computing experiments reveal the potential for analog AI, where stacked shallow computational layers in physical media allow AI to “experience” sensory input more like a human. Though still speculative, this approach could produce richer, lower-latency responses compared to purely digital models.Maker culture and hardware innovation anchor the conversation in tangible reality. Moad's MoAD workshop, filled with tools from industrial sewing machines to 3D printers and lathes, underscores how accessible technologies are enabling chaotic creativity and recycling systems. This grassroots hardware tinkering aligns with the PoliePals' broader vision of decentralized, cooperative technological futures.
On this episode of Crazy Wisdom, Stewart Alsop talks with Larry Diamond, co-founder of Healing with the Diamonds, about his journey from severe metabolic illness to vibrant health and his work helping others do the same. They explore topics like heart-brain coherence, the alchemical journey, insulin resistance, seed oils, and the deeper spiritual dimensions of healing, weaving in references to David Hawkins, Rupert Sheldrake, and the lost wisdom of the divine feminine. Larry shares insights on metabolic testing, ancestral eating, and the importance of authentic living, while also touching on the role of parasites—his term for the forces keeping humanity in fear and incoherence. You can find more about Larry and his work, as well as access his consulting, at healingwiththediamonds.com, on Instagram and Facebook at Healing with the Diamonds, or listen in iTunes to his upcoming podcast.Check out this GPT we trained on the conversationTimestamps00:00 Stewart Alsop introduces Larry Diamond of Healing with the Diamonds; they discuss his healing journey, health coaching, and the meaning of heart-brain coherence.05:00 Alchemical journey, crystals, the hero's journey, integrating masculine and feminine energies, and the idea of parasites feeding on fear.10:00 Kindness vs niceness, morphic fields, Rupert Sheldrake's theories, and quantum entanglement as evidence of interconnectedness.15:00 Scientism vs true science, metabolic illness, citizen science, Larry's 2013 health transformation.20:00 Metabolic syndrome, C-reactive protein, fasting insulin, insulin resistance, and Larry's weight loss story.25:00 Seed oils, refined carbs, ultra-processed foods, and strategies for restoring metabolic health.30:00 Carb cycling, primal eating, intuitive healing, and ancestral wisdom.35:00 Spirituality beyond religion, Yeshua vs Jesus, divine feminine, and writing your own gospel.40:00 Living authentically, kindness in daily life, and finding healing in sovereignty and connection.Key InsightsLarry Diamond shares how his journey from severe metabolic illness to vibrant health became the foundation for Healing with the Diamonds. He explains how hitting rock bottom in 2013 inspired him to reject mainstream dietary advice and embrace a primal, whole foods approach that reversed his insulin resistance and helped him lose over 100 pounds.A major theme of the conversation is heart-brain coherence, which Larry describes as essential for true wisdom and discernment. He connects this to ancient teachings, referencing Yeshua's “sword of discernment” and suggesting that Western culture intentionally suppressed this knowledge to keep people in fear and mental fragmentation.The episode explores the alchemical journey as a metaphor for inner transformation, likening it to Joseph Campbell's hero's journey. Larry emphasizes integrating masculine and feminine energies and overcoming ego as key steps in remembering our divine nature and embodying authenticity.Larry critiques scientism, which he calls the inversion of true science, and encourages listeners to reclaim citizen science as a path to health sovereignty. He shares practical tools like testing for C-reactive protein, A1C, fasting insulin, and using triglycerides-to-HDL ratios to assess metabolic health.He identifies the “Big Four” dietary culprits—seed oils, refined carbs, ultra-processed foods, and sugar—as drivers of chronic illness and advocates returning to ancestral foods rich in natural fats and nutrients. He stresses that flavor and enjoyment are critical for sustainable healing.On the spiritual side, Larry reframes the Abrahamic religions as distortions of deeper wisdom traditions, contrasting the figure of Yeshua (aligned with love and sovereignty) with the institutionalized Jesus narrative. He highlights the divine feminine, Sophia, as a source of intuition and co-creation with the cosmos.Finally, Larry encourages listeners to “write your own gospel and live your own myth,” seeing authentic, kind, and sovereign living as both a spiritual and practical act of resistance to what he calls the parasite class—forces of fear and manipulation seeking to block human awakening.
On this episode of Crazy Wisdom, Stewart Alsop speaks with Rory Aronson, CEO of FarmBot, about how his open-source hardware project is transforming home gardening into a more automated and accessible practice. Rory explains how FarmBot works—essentially as a CNC machine for your garden—covering its evolution from Arduino-based electronics to custom boards, the challenges of integrating hardware and software, and the role of closed-loop feedback systems to prevent errors. They explore solarpunk visions of distributed food systems, discuss the importance of “useful source” documentation in open-source hardware, and imagine a future where growing food is as easy as running a dishwasher. For more on Rory and FarmBot, check out farm.bot and the open-source resources at docs.farm.bot.Check out this GPT we trained on the conversationTimestamps00:00 Rory explains FarmBot as a CNC machine for gardens, using Arduino and Raspberry Pi, automating planting, watering, and weeding.05:00 Discussion on the hardware stack evolution, open-source electronics roots, and moving to custom boards for better integration.10:00 Stewart shares his Raspberry Pi experiments, Rory breaks down the software layers from cloud apps to firmware, emphasizing complexity.15:00 Conversation shifts to closed-loop feedback with rotary encoders, avoiding 3D printer-style “spaghetti” errors in outdoor environments.20:00 Rory explores open-source challenges, highlighting “useful source” documentation and hardware accessibility for modifications.25:00 Solarpunk vision emerges: distributed food systems, automation enabling home-grown fresh food without expert knowledge.30:00 Raised bed setup, energy efficiency, and FarmBot as a home appliance concept for urban and suburban gardens.35:00 Small-scale versus industrial farming, niche commercial uses like seedling automation, and user creativity with custom tools.40:00 AI potential with vision systems, LLMs for garden planning, and enhancing FarmBot intelligence for real-time adaptation.45:00 Sensors, soil monitoring, image analysis for plant health, and empowering users to integrate FarmBot into smart homes.50:00 Rory describes community innovations, auxiliary hardware, and open documentation supporting experimentation.55:00 Final reflections on solarpunk futures, automation as empowerment, and how to access FarmBot's resources online.Key InsightsRory Aronson shares how FarmBot began as a DIY project built on Arduino and Raspberry Pi, leveraging the open-source 3D printing ecosystem to prototype quickly. Over time, they transitioned to custom circuit boards to meet the specific demands of automating gardening tasks like seed planting, watering, and weeding, highlighting the tradeoffs between speed to market and long-term hardware optimization.The conversation unpacks the complexity of FarmBot's “stack,” which integrates cloud-based software, a web app, a message broker, a Raspberry Pi running a custom OS, and firmware on both Arduino and auxiliary chips for real-time feedback. This layered approach is crucial for precision in an unpredictable outdoor environment where mechanical errors could damage growing plants.Aronson emphasizes that being open source isn't enough; to be genuinely useful, projects must provide extensive, accessible documentation and export files in open, affordable formats. Without this, open source risks being a hollow promise for most users, especially in hardware where barriers to modification are higher.They explore the solarpunk potential of FarmBot, imagining a future where growing food at home is as effortless as using a washing machine. By turning gardening into an automated process, FarmBot enables people to produce fresh vegetables without needing expertise, offering resilience against industrial food systems reliant on monoculture and long supply chains.Aronson points out that while FarmBot isn't designed for industrial agriculture, its modularity allows it to support niche commercial use cases, like automating seedling production in cleanroom environments. This adaptability reflects the broader vision of empowering both individuals and small operations with accessible automation tools.The episode highlights user creativity enabled by FarmBot's open hardware, including custom tools like side-mounted mirrors for alternative camera angles and pneumatic grippers for harvesting. These community-driven innovations showcase the platform's flexibility and the value of encouraging experimentation.Finally, Aronson sees great potential for integrating AI, particularly vision systems and multimodal LLMs, to make FarmBot smarter—detecting pests, diagnosing plant health, and even planning gardens tailored to user goals like nutrient needs or event timelines, moving closer to a truly intelligent gardening companion.
In this episode of Crazy Wisdom, I, Stewart Alsop, speak with Thamir Ali Al-Rahedi, host of the From First Principles podcast on YouTube, about the nature of questions and answers, their role in business and truth-seeking, and the trade-offs inherent in technologies like AI. We explore the tension between generalists and specialists, the influence of scientism on culture, and how figures like Steve Jobs embodied the power of questions to shape markets and innovations. Thamir also shares insights from his Arabic book summary platform and his cautious approach to using large language models. You can find Thamir's work on YouTube at From 1st Principles with Thamir and on X at @Thamir's View.Check out this GPT we trained on the conversationTimestamps00:00 Stewart Alsop introduces Thamir Ali Al-Rahedi and they discuss Stewart's book on the nature of questions, curiosity, and shifting his focus to questions in business.05:00 They explore how questions generate value and answers capture it, contrasting dynamic questioning with static certainty in business and philosophy.10:00 The market is described as a subconscious feedback loop, and they examine the role of truth-seeking in entrepreneurship, using Steve Jobs as an example.15:00 Discussion turns to Steve Jobs' spiritual practices, LSD, and how unseen factors and focus shaped Apple's success.20:00 Thamir and Stewart debate starting with spiritual or business perspectives in writing, touching on the generalist curse and discernment in creative work.25:00 They reflect on writing habits, moving from short-form to long-form, and using AI as a thinking partner or tool.30:00 Thamir shares his cautious approach to large language models, viewing them as trade-offs, and discusses building an Arabic book summary platform to inspire reading and curiosity.Key InsightsThe dynamic interplay of questions and answers – Thamir Ali Al-Rahedi explains that questions generate value by opening possibilities, while answers capture and stabilize that value. He sees the best answers as those that spark even more questions, creating a feedback loop of insight rather than static certainty.Business and philosophy demand different relationships to truth – In business, answers often serve as the foundation for action and revenue generation, requiring a “false sense of certainty.” By contrast, philosophy thrives in uncertainty, allowing questions to remain open-ended and exploratory without the pressure to resolve them.The market as a subconscious mirror – Both Thamir and Stewart Alsop describe the market as a form of truth that reflects not only conscious desires but also subconscious patterns and impulses. This understanding reframes economic behavior as a dialogue between collective psychology and external systems.Steve Jobs as a case study of truth-seeking in entrepreneurship – The conversation highlights Steve Jobs's blend of spiritual exploration and technological vision, including his exposure to Eastern philosophy and LSD, as an example of how deep questioning and unconventional insight can manifest in world-changing innovations.AI as a double-edged tool for generalists – Thamir views large language models with caution, seeing them as highly specific tools that risk outsourcing critical thinking if used too early in the learning process. He frames technologies as trade-offs rather than pure solutions, emphasizing the importance of retaining one's cognitive autonomy.The generalist's curse and the art of discernment – Both guests wrestle with how to focus and finish creative projects without sacrificing breadth. Thamir suggests writing medium-length pieces as a way to engage deeply without the paralysis of long-form commitments, while Stewart reflects on how AI accelerates his exploration of open threads.A call for cultural renewal through reading and reflection – Thamir shares his initiative to build an Arabic book summary platform aimed at reviving reading habits, especially among younger audiences. He sees curated human-written content as a gateway to generalist thinking and a counterbalance to instant, algorithm-driven consumption.
On this episode of Crazy Wisdom, I, Stewart Alsop, talk with Sarah Boisvert, founder of New Collar AI, about the future of work in manufacturing, the rise of “new collar” jobs, and how technologies like 3D printing and AI are transforming skills training. We cover her experience with Fab Labs, creating a closed-loop AI tutor for workforce development, and the challenges of capturing implicit knowledge from retiring experts. Sarah also shares insights from her books The New Collar Workforce and People of the New Collar Workforce, which feature augmented reality to bring stories to life. You can connect with Sarah through LinkedIn.Check out this GPT we trained on the conversationTimestamps00:00 Sarah introduces New Collar jobs and how digital skills are transforming blue collar roles, discussing FedEx robotics and augmented workers.05:00 Stewart asks about 3D printing challenges; Sarah explains advances in printer automation and the ongoing difficulty of CAD design.10:00 They discuss Generation Z as digital natives, instant gratification, and workforce engagement, highlighting Lean manufacturing principles.15:00 Sarah reflects on how technology speeds life up, her experiences with management training, and the importance of communication on factory floors.20:00 They explore text-to-CAD possibilities, Sarah's closed-loop AI tutor for manufacturing, and the creation of a proprietary technical database.25:00 Sarah describes the scale of open jobs in 3D printing, challenges of filling them, and shifting perceptions of manufacturing work.30:00 Discussion of robotics safety, small business adoption barriers, and the need for human oversight in automation.35:00 Sarah talks about capturing implicit knowledge from retiring experts, using LLMs for factory floor solutions, and military applications.40:00 Knowledge management, boutique data sets, and AI's role in preserving technical expertise are explored.45:00 Sarah shares insights on product design, her AR-enabled book, and empowering workers through accessible technical training.Key InsightsSarah Boisvert introduces the concept of “new collar” jobs, emphasizing that modern manufacturing roles now require digital skills traditionally associated with white-collar work. She highlights how roles like CNC machinists and 3D printing operators blend hands-on work with advanced tech, making them both in-demand and engaging for a younger, tech-savvy workforce.The conversation explores the rise of Fab Labs worldwide and their role in democratizing access to manufacturing tools. Boisvert shares her experience founding a Fab Lab in Santa Fe, enabling students and adults to gain practical, project-based experience in CAD design, 3D printing, and repair skills critical for today's manufacturing environment.Boisvert underscores the persistent skills gap in manufacturing, noting that 600,000 U.S. manufacturing jobs remain unfilled. She attributes part of this to outdated perceptions of manufacturing as “dirty and unsafe,” a narrative she's actively working to change through her books and training programs that show how modern factories are highly technical and collaborative.She reveals her team's development of a closed-loop large language model for workforce training. Unlike ChatGPT, this system draws from a proprietary database of technical manuals and expert knowledge, offering precise, context-specific answers for students and workers without relying on the open internet.The episode dives into generational differences in the workplace. Boisvert describes how Gen Z workers are motivated by purpose and efficiency, often asking “why” to understand the impact of their work. She sees Lean principles as a key to managing and empowering this generation to innovate and stay engaged.On automation, Boisvert stresses that robots are not replacing humans in manufacturing but filling labor shortages. She notes that while robots improve efficiency, they require humans to program, monitor, and repair them—skills that new collar workers are being trained to master.Finally, she shares her innovative approach to storytelling in her book People of the New Collar Workforce, which uses augmented reality to bring worker stories to life. Readers can scan photos to hear directly from individuals about their experiences transitioning into high-tech manufacturing careers.
In this wild, expansive "Crazy Wisdom" conversation, Alexander Bard, Tim Pickrill, Turiyoa Strojanov, Andrew Sweeny, and Turio dive deep into Vajrayana Buddhism, Zoroastrianism, Peruvian shamanism, and the tension between Rangtong and Shentong philosophies. They explore mountain vs. city practice, the archetype of the monk vs. the sly man in the world, and the importance of serving the sociont. Ecstatic mysticism, harsh truths, psycho-spiritual integration, tantric paradoxes, and plenty of provocation. Philosophy on fire.Learn more about the speakers:Alexander Bard – @Bardissimo on XTim Pickerill - https://themysteriousdeepblack.substack.com/Andrew Sweeny - https://www.parallax.coach/Homepage: https://www.parallax-media.com/Academy: https://www.parallax-media.com/courses-and-eventsPublishing: https://www.parallax-media.com/parallax-booksSubstack: https://parallax.substack.com/ Parallax Network: https://parallax-media-network.mn.co/share/ND8NVO1oMB3RjEyi?utm_source=ma
In this episode of Crazy Wisdom, I, Stewart Alsop, speak with Andrew Einhorn, CEO and founder of Level Fields, a platform using AI to help people navigate financial markets through the lens of repeatable, data-driven events. We explore how structured patterns in market news—like CEO departures or earnings surprises—can inform trading strategies, how Level Fields filters noise from financial data, and the emotional nuance of user experience design in fintech. Andrew also shares insights on knowledge graphs, machine learning in finance, and the evolving role of narrative in markets. Stock tips from Level Fields are available on their YouTube channel at Level Fields AI and their website levelfields.ai.Check out this GPT we trained on the conversationTimestamps00:00 – Andrew introduces Level Fields and explains how it identifies event-driven stock movements using AI.05:00 – Discussion of LLMs vs. custom models, and how Level Fields prioritized financial specificity over general AI.10:00 – Stewart asks about ontologies and knowledge graphs; Andrew describes early experiences building rule-based systems.15:00 – They explore the founder's role in translating problems, UX challenges, and how user expectations shape product design.20:00 – Insight into feedback collection, including a unique refund policy aimed at improving user understanding.25:00 – Andrew breaks down the complexities of user segmentation, churn, and adapting the product for different investor types.30:00 – A look into event types in the market, especially crypto-related announcements and their impact on equities.35:00 – Philosophical turn on narrative vs. fundamentals in finance; how news and groupthink drive large-scale moves.40:00 – Reflection on crypto parallels to dot-com era, and the long-term potential of blockchain infrastructure.45:00 – Deep dive into machine persuasion, LLM training risks, and the influence of opinionated data in financial AI.50:00 – Final thoughts on momentum algos, market manipulation, and the need for transparent, structured data.Key InsightsEvent-Based Investing as Market Forecasting: Andrew Einhorn describes Level Fields as a system for interpreting the market's weather—detecting recurring events like CEO departures or earnings beats to predict price movements. This approach reframes volatility as something intelligible, giving investors a clearer sense of timing and direction.Building Custom AI for Finance: Rejecting generic large language models, Einhorn's team developed proprietary AI trained exclusively on financial documents. By narrowing the scope, they increased precision and reduced noise, enabling the platform to focus only on events that truly impact share price behavior.Teaching Through Signals, Not Just Showing: Stewart Alsop notes how Level Fields does more than surface opportunities—it educates. By linking cause and effect in financial movements, the platform helps users build intuition, transforming confusion into understanding through repeated exposure to clear, data-backed patterns.User Expectation vs. Product Vision: Initially, Level Fields emphasized an event-centric UX, but users sought more familiar tools like ticker searches and watchlists. This tension revealed that even innovative technologies must accommodate habitual user flows before inviting them into new ways of thinking.Friction as a Path to Clarity: To elicit meaningful feedback, Level Fields implemented a refund policy that required users to explain what didn't work. The result wasn't just better UX insights—it also surfaced emotional blockages around investing and design, sharpening the team's understanding of what users truly needed.Narrative as a Volatile Market Force: Einhorn points out that groupthink in finance stems from shared academic training, creating reflexive investment patterns tied to economic narratives. These surface-level cycles obscure the deeper, steadier signals that Level Fields seeks to highlight through its data model.AI's Risk of Amplifying Noise: Alsop and Einhorn explore the darker corners of machine persuasion and LLM-generated content. Since models are trained on public data, including biased and speculative sources, they risk reinforcing distortions. In response, Level Fields emphasizes curated, high-integrity inputs grounded in financial fact.
In this episode of Crazy Wisdom, host Stewart Alsop speaks with Moritz Bierling, community lead at Daylight Computer, about reimagining our relationship to technology through intentional hardware and software design. The conversation traverses the roots of Daylight Computer—born from a desire to mitigate the mental and physiological toll of blue light and digital distraction—into explorations of AI integration, environmental design, open-source ethos, and alternative models for startup funding. Moritz discusses the vision behind Daylight's “Outdoor Computing Club,” a movement to reclaim nature as a workspace, and the broader philosophical inquiry into a “third timeline” that balances techno-optimism and primitivism. You can explore more about the project at daylightcomputer.com and connect through their primary social channels on X (Twitter) and Instagram.Check out this GPT we trained on the conversationTimestamps00:00 – Introduction to Daylight Computer, critique of mainstream tech as a distraction machine, and inspiration from Apple's software limitations.05:00 – Origin story of Daylight, impact of blue light, and how display technology influences wellbeing.10:00 – Exploration of e-ink vs. RLCD, Kindle as a sanctuary, and Anjan's experiments with the Remarkable tablet.15:00 – Development of Solo OS, the role of spaces in digital environments, and distinctions between hardware and software.20:00 – Vision for AI-assisted computing, voice interaction, and creating a context-aware interface.25:00 – Emphasis on environmental design, using devices outdoors, and the evolutionary mismatch of current computing.30:00 – Reflections on solar punk, right relationship with technology, and rejecting accelerationism.35:00 – Introduction of the third timeline, rhizomatic organizational structure, and critique of VC funding models.40:00 – Discussions on alternative economics, open-source dynamics, and long-term sustainability.45:00 – Outdoor Computing Club, future launches, on-device AI, and the ambition to reclaim embodied computing.Key InsightsTechnology as Both Lifeline and HindranceMoritz Bierling frames modern computing as a paradox: it connects us to society and productivity while simultaneously compromising our well-being through overstimulation and poor design. The Daylight Computer aims to resolve this by introducing hardware that reduces digital fatigue and invites outdoor use.Inspiration from E-Ink and Purposeful ToolsThe initial concept for Daylight Computer was inspired by the calm, focused experience of using a Kindle. Its reflective screen and limited functionality helped Anjan, the founder, realize the power of devices built for singular, meaningful purposes rather than general distraction.Designing for Contextual IntentWith the introduction of Sol OS, Daylight enables users to define digital “spaces” aligned with different modes of being—such as waking, deep work, or relaxation. This modular approach supports intentional interaction and reduces the friction of context-switching common in modern OS designs.Respectful Integration of AIRather than chasing full automation, the Daylight team is exploring AI in a measured way. They're developing features like screen-aware AI queries through physical buttons, creating a contextual assistant that enhances cognition without overpowering it or promoting dependency.Alternative Economic ModelsRejecting venture capital and the short-term incentives of traditional tech funding, Daylight pursues a community-backed model similar to Costco's membership. This aligns financial sustainability with shared values, rather than extracting maximum profit.Third Timeline VisionMoritz discusses a conceptual “third timeline”—a balanced future distinct from both primitivism and techno-solutionism. This alternative future integrates technology into life harmoniously, fostering right relationship between humans, nature, and machines.Environmental Computing and Cultural RegenerationDaylight is not just a hardware company but a movement in environmental design. Through initiatives like the Outdoor Computing Club, they aim to restore sunlight as a central influence in human life and work, hinting at a cultural shift toward solar punk aesthetics and embodied digital living.
In this episode of Crazy Wisdom, host Stewart Alsop speaks with futurist Richard Yonck about the profound implications of our accelerating relationship with technology. Together, they explore the emergence of emotionally intelligent machines, the nuances of anticipatory systems, and how narrative frameworks help societies prepare for possible futures. Richard unpacks the role of emotion in AI and why cultivating foresight is essential in an age of rapid disruption.Check out this GPT we trained on the conversationTimestamps00:00 – The episode opens with Richard Yonck introducing the concept of artificial emotional intelligence and why it matters for the future of human-machine interaction.05:00 – The discussion moves to anticipatory systems, exploring how technologies can be designed to predict and respond to future conditions.10:00 – Richard explains how narrative foresight helps individuals and societies prepare for possible futures, emphasizing the power of storytelling in shaping collective imagination.15:00 – A deeper look into affective computing, with examples of how machines are learning to detect and simulate emotional states to improve user experience.20:00 – The conversation touches on the role of emotion in intelligence, challenging the misconception that emotion is the opposite of logic.25:00 – Richard outlines how technological disruption can mirror societal values and blind spots, urging more thoughtful design.30:00 – The focus shifts to long-term thinking, highlighting how future-oriented education and leadership are vital in an age of rapid change.35:00 – Closing thoughts center around the evolution of human-technology partnerships, stressing the need for ethical, emotionally aware systems to support a thriving future.Key InsightsEmotion as a Computational Frontier: Richard Yonck highlights that as we push the boundaries of artificial intelligence, the next significant frontier involves enabling machines to understand, interpret, and possibly simulate emotions. This capacity isn't just a novelty—it plays a crucial role in how machines and humans interact, influencing trust, empathy, and cooperation in increasingly digital environments.The Importance of Anticipatory Systems: One of the core ideas explored is the concept of anticipatory systems—those that can predict and react to future conditions. Richard emphasizes how building such foresight into our technologies, and even into our societal structures, is vital in managing the complexity and volatility of the modern world. It's not just about responding to the future, but actively shaping it.Narrative as a Tool for Foresight: The discussion underscores that storytelling isn't just entertainment—it's a powerful instrument for exploring and communicating possible futures. By framing future scenarios as narratives, we can emotionally and cognitively engage with potential outcomes, fostering a deeper understanding and preparedness across different segments of society.Emotions as Integral to Intelligence: Contrary to the view that emotion impairs rationality, Richard points out that emotions are essential to decision-making and intelligence. They help prioritize actions and signal what matters. Bringing this understanding into AI development could result in systems that more effectively collaborate with humans, particularly in roles requiring empathy and nuanced social judgment.Technology as a Mirror of Humanity: A recurring insight is that the technologies we create ultimately reflect our values, assumptions, and blind spots. Emotionally intelligent machines won't just serve us—they'll embody our understanding of ourselves. This raises profound ethical questions about what we choose to model and how these choices shape future interactions.Urgency of Long-Term Thinking: The conversation brings to light how short-termism is a critical vulnerability in current systems—economic, political, and technological. Richard advocates for integrating long-term thinking into how we design and deploy innovations, suggesting that futures literacy should be a core skill in education and leadership.Evolutionary Partnership Between Humans and Machines: Lastly, Richard describes the trajectory of human-technology interaction not as domination or subservience, but as an evolving partnership. This partnership will require emotional nuance, foresight, and ethical maturity if we're to co-evolve in ways that support human flourishing and planetary stability.Contact InformationRichard Yonck's LinkedIn
I, Stewart Alsop, am thrilled to welcome Xathil of Poliebotics to this episode of Crazy Wisdom, for what is actually our second take, this time with a visual surprise involving a fascinating 3D-printed Bauta mask. Xathil is doing some truly groundbreaking work at the intersection of physical reality, cryptography, and AI, which we dive deep into, exploring everything from the philosophical implications of anonymity to the technical wizardry behind his "Truth Beam."Check out this GPT we trained on the conversationTimestamps01:35 Xathil explains the 3D-printed Bauta Mask, its Venetian origins, and its role in enabling truth through anonymity via his project, Poliepals.04:50 The crucial distinction between public identity and "real" identity, and how pseudonyms can foster truth-telling rather than just conceal.10:15 Addressing the serious risks faced by crypto influencers due to public displays of wealth and the broader implications for online identity.15:05 Xathil details the core Poliebotics technology: the "Truth Beam," a projector-camera system for cryptographically timestamping physical reality.18:50 Clarifying the concept of "proof of aliveness"—verifying a person is currently live in a video call—versus the more complex "proof of liveness."21:45 How the speed of light provides a fundamental advantage for Poliebotics in outmaneuvering AI-generated deepfakes.32:10 The concern of an "inversion," where machine learning systems could become dominant over physical reality by using humans as their actuators.45:00 Xathil's ambitious project to use Poliebotics for creating cryptographically verifiable records of biodiversity, beginning with an enhanced Meles trap.Key InsightsAnonymity as a Truth Catalyst: Drawing from Oscar Wilde, the Bauta mask symbolizes how anonymity or pseudonyms can empower individuals to reveal deeper, more authentic truths. This challenges the notion that masks only serve to hide, suggesting they can be tools for genuine self-expression.The Bifurcation of Identity: In our digital age, distinguishing between one's core "real" identity and various public-facing personas is increasingly vital. This separation isn't merely about concealment but offers a space for truthful expression while navigating public life.The Truth Beam: Anchoring Reality: Poliebotics' "Truth Beam" technology employs a projector-camera system to cast cryptographic hashes onto physical scenes, recording them and anchoring them to a blockchain. This aims to create immutable, verifiable records of reality to combat the rise of sophisticated deepfakes.Harnessing Light Speed Against Deepfakes: The fundamental defense Poliebotics offers against AI-generated fakes is the speed of light. Real-world light reflection for capturing projected hashes is virtually instantaneous, whereas an AI must simulate this complex process, a task too slow to keep up with real-time verification.The Specter of Humans as AI Actuators: A significant future concern is the "inversion," where AI systems might utilize humans as unwitting agents to achieve their objectives in the physical world. By manipulating incentives, AIs could effectively direct human actions, raising profound questions about agency.Towards AI Symbiosis: The ideal future isn't a human-AI war or complete technological asceticism, but a cooperative coexistence between nature, humanity, and artificial systems. This involves developing AI responsibly, instilling human values, and creating systems that are non-threatening and beneficial.Contact Information* Polybotics' GitHub* Poliepals* Xathil: Xathil@ProtonMail.com
I, Stewart Alsop, had a fascinating conversation on this episode of Crazy Wisdom with Mallory McGee, the founder of Chroma, who is doing some really interesting work at the intersection of AI and crypto. We dove deep into how these two powerful technologies might reshape the internet and our interactions with it, moving beyond the hype cycles to what's truly foundational.Check out this GPT we trained on the conversationTimestamps00:00 The Intersection of AI and Crypto01:28 Bitcoin's Origins and Austrian Economics04:35 AI's Centralization Problem and the New Gatekeepers09:58 Agent Interactions and Decentralized Databases for Trustless Transactions11:11 AI as a Prosthetic Mind and the Interpretability Challenge15:12 Deterministic Blockchains vs. Non-Deterministic AI Intents18:44 The Demise of Traditional Apps in an Agent-Driven World35:07 Property Rights, Agent Registries, and Blockchains as BackendsKey InsightsCrypto's Enduring Fundamentals: Mallory emphasized that while crypto prices are often noise, the underlying fundamentals point to a new, long-term cycle for the Internet itself. It's about decentralizing control, a core principle stemming from Bitcoin's original blend of economics and technology.AI's Centralization Dilemma: We discussed the concerning trend of AI development consolidating power within a few major players. This, as Mallory pointed out, ironically mirrors the very centralization crypto aims to dismantle, potentially shifting control from governments to a new set of tech monopolies.Agents are the Future of Interaction: Mallory envisions a future where most digital interactions aren't human-to-LLM, but agent-to-agent. These autonomous agents will require decentralized, trustless platforms like blockchains to transact, hold assets, and communicate confidentially.Bridging Non-Deterministic AI with Deterministic Blockchains: A fascinating challenge Mallory highlighted is translating the non-deterministic "intents" of AI (e.g., an agent's goal to "get me a good return on spare cash") into the deterministic transactions required by blockchains. This translation layer is crucial for agents to operate effectively on-chain.The Decline of Traditional Apps: Mallory made a bold claim that traditional apps and web interfaces are on their way out. As AI agents become capable of generating personalized interfaces on the fly, the need for standardized, pre-built apps will diminish, leading to a world where software is hyper-personalized and often ephemeral.Blockchains as Agent Backbones: We explored the intriguing idea that blockchains might be inherently better suited for AI agents than for direct human use. Their deterministic nature, ability to handle assets, and potential for trustless reputation systems make them ideal backends for an agent-centric internet.Trust and Reputation for Agents: In a world teeming with AI agents, establishing trust is paramount. Mallory suggested that on-chain mechanisms like reward and slashing systems can be used to build verifiable reputation scores for agents, helping us discern trustworthy actors from malicious ones without central oversight.The Battle for an Open AI Future: The age-old battle between open and closed source is playing out again in the AI sphere. While centralized players currently seem to dominate, Mallory sees hope in the open-source AI movement, which could provide a crucial alternative to a future controlled by a few large entities.Contact Information* Twitter: @McGee_noodle* Company: Chroma
I, Stewart Alsop, welcomed Ben Roper, CEO and founder of Play Culture, to this episode of Crazy Wisdom for a fascinating discussion. We kicked things off by diving into Ben's reservations about AI, particularly its impact on creative authenticity, before exploring his innovative project, Play Culture, which aims to bring tactical outdoor games to adults. Ben also shared his journey of teaching himself to code and his philosophy on building experiences centered on human connection rather than pure profit.Check out this GPT we trained on the conversationTimestamps00:55 Ben Roper on AI's impact on creative authenticity and the dilution of the author's experience.03:05 The discussion on AI leading to a "simulation of experience" versus genuine, embodied experiences.08:40 Stewart Alsop explores the nuances of authenticity, honesty, and trust in media and personal interactions.17:53 Ben discusses how trust is invaluable and often broken by corporate attempts to feign it.20:22 Ben begins to explain the Play Culture project, discussing the community's confusion about its non-monetized approach, leading into his philosophy of "designing for people, not money."37:08 Ben elaborates on the Play Culture experience: creating tactical outdoor games designed specifically for adults.45:46 A comparison of Play Culture's approach with games like Pokémon GO, emphasizing "gentle technology."58:48 Ben shares his thoughts on the future of augmented reality and designing humanistic experiences.1:02:15 Ben describes "Pirate Gold," a real-world role-playing pirate simulator, as an example of Play Culture's innovative games.1:06:30 How to find Play Culture and get involved in their events worldwide.Key InsightsAI and Creative Authenticity: Ben, coming from a filmmaking background, views generative AI as a collaborator without a mind, which disassociates work from the author's unique experience. He believes art's value lies in being a window into an individual's life, a quality diluted by AI's averaged output.Simulation vs. Real Experience: We discussed how AI and even some modern technologies offer simulations of experiences (like VR travel or social media connections) that lack the depth and richness of real-world engagement. These simulations can be easier to access but may leave individuals unfulfilled and unaware of what they're missing.The Quest for Honesty Over Authenticity: I posited that while people claim to want authenticity, they might actually desire honesty more. Raw, unfiltered authenticity can be confronting, whereas honesty within a framework of trust allows for genuine connection without necessarily exposing every raw emotion.Trust as Unpurchasable Value: Ben emphasized that trust is one of the few things that cannot be bought; it must be earned and is easily broken. This makes genuine trust incredibly valuable, especially in a world where corporate entities often feign trustworthiness for transactional purposes.Designing for People, Not Money: Ben shared his philosophy behind Play Culture, which is to "design for people, not money." This means prioritizing genuine human experience, joy, and connection over optimizing for profit, believing that true value, including financial sustainability, can arise as a byproduct of creating something meaningful.The Need for Adult Play: Play Culture aims to fill a void by creating tactical outdoor games specifically designed for adult minds and social dynamics. This goes beyond childlike play or existing adult games like video games and sports, focusing on socially driven gameplay, strategy, and unique adult experiences.Gentle Technology in Gaming: Contrasting with AR-heavy games like Pokémon GO, Play Culture advocates for "gentle technology." The tech (like a mobile app) supports gameplay by providing information or connecting players, but the core interaction happens through players' senses and real-world engagement, not primarily through a screen.Real-World Game Streaming as the Future: Ben's vision for Play Culture includes moving towards real-world game streaming, akin to video game streaming on Twitch, but featuring live-action tactical games played in real cities. This aims to create a new genre of entertainment showcasing genuine human interaction and strategy.Contact Information* Ben Roper's Instagram* Website: playculture.com
I, Stewart Alsop, welcomed Woody Wiegmann to this episode of Crazy Wisdom, where we explored the fascinating and sometimes unsettling landscape of Artificial Intelligence. Woody, who is deeply involved in teaching AI, shared his insights on everything from the US-China AI race to the radical transformations AI is bringing to education and society at large.Check out this GPT we trained on the conversationTimestamps01:17 The AI "Cold War": Discussing the intense AI development race between China and the US.03:04 Opaque Models & Education's Resistance: The challenge of opaque AI and schools lagging in adoption.05:22 AI Blocked in Schools: The paradox of teaching AI while institutions restrict access.08:08 Crossing the AI Rubicon: How AI users are diverging from non-users into different realities.09:00 Budgetary Constraints in AI Education: The struggle for resources like premium AI access for students.12:45 Navigating AI Access for Students: Woody's ingenious workarounds for the premium AI divide.19:15 Igniting Curiosity with AI: Students creating impressive projects, like catapult websites.27:23 Exploring Grok and AI Interaction: Debating IP concerns and engaging with AI ("Morpheus").46:19 AI's Societal Impact: AI girlfriends, masculinity, and the erosion of traditional skills.Key InsightsThe AI Arms Race: Woody highlights a "cold war of nerdiness" where China is rapidly developing AI models comparable to GPT-4 at a fraction of the cost. This competition raises questions about data transparency from both sides and the strategic implications of superintelligence.Education's AI Resistance: I, Stewart Alsop, and Woody discuss the puzzling resistance to AI within educational institutions, including outright blocking of AI tools. This creates a paradox where courses on AI are taught in environments that restrict its use, hindering practical learning for students.Diverging Realities: We explore how individuals who have crossed the "Rubicon" of AI adoption are now living in a vastly different world than those who haven't. This divergence is akin to past technological shifts but is happening at an accelerated pace, impacting how people learn, work, and perceive reality.The Fading Relevance of Traditional Coding: Woody argues that focusing on teaching traditional coding languages like Python is becoming outdated in the age of advanced AI. AI can handle much of the detailed coding, shifting the necessary skills towards understanding AI systems, effective prompting, and higher-level architecture.AI as the Ultimate Tutor: The advent of AI offers the potential for personalized, one-on-one tutoring for everyone, a far more effective learning method than traditional classroom lectures. However, this potential is hampered by institutional inertia and a lack of resources for tools like premium AI subscriptions for students.Curiosity as the AI Catalyst: Woody shares anecdotes of students, even those initially disengaged, whose eyes light up when using AI for creative projects, like designing websites on niche topics such as catapults. This demonstrates AI's power to ignite curiosity and intrinsic motivation when paired with focused goals and the ability to build.AI's Impact on Society and Skills: We touch upon the broader societal implications, including the rise of AI girlfriends addressing male loneliness and providing acceptance. Simultaneously, there's concern over the potential atrophy of critical skills like writing and debate if individuals overly rely on AI for summarization and opinion generation without deep engagement.Contact Information* Twitter/X: @RulebyPowerlaw* Listeners can search for Woody Wiegmann's podcast "Courage over convention" * LinkedIn: www.linkedin.com/in/dataovernarratives/
On this episode of Crazy Wisdom, I, Stewart Alsop, spoke with Neil Davies, creator of the Extelligencer project, about survival strategies in what he calls the “Dark Forest” of modern civilization — a world shaped by cryptographic trust, intelligence-immune system fusion, and the crumbling authority of legacy institutions. We explored how concepts like zero-knowledge proofs could defend against deepening informational warfare, the shift toward tribal "patchwork" societies, and the challenge of building a post-institutional framework for truth-seeking. Listeners can find Neil on Twitter as @sigilante and explore more about his work in the Extelligencer substack.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction of Neil Davies and the Extelligencer project, setting the stage with Dark Forest theory and operational survival concepts.05:00 Expansion on Dark Forest as a metaphor for Internet-age exposure, with examples like scam evolution, parasites, and the vulnerability of modern systems.10:00 Discussion of immune-intelligence fusion, how organisms like anthills and the Portuguese Man o' War blend cognition and defense, leading into memetic immune systems online.15:00 Introduction of cryptographic solutions, the role of signed communications, and the growing importance of cryptographic attestation against sophisticated scams.20:00 Zero-knowledge proofs explained through real-world analogies like buying alcohol, emphasizing minimal information exposure and future-proofing identity verification.25:00 Transition into post-institutional society, collapse of legacy trust structures, exploration of patchwork tribes, DAOs, and portable digital organizations.30:00 Reflection on association vs. hierarchy, the persistence of oligarchies, and the shift from aristocratic governance to manipulated mass democracy.35:00 AI risks discussed, including trapdoored LLMs, epistemic hygiene challenges, and historical examples like gold fulminate booby-traps in alchemical texts.40:00 Controlled information flows, secular religion collapse, questioning sources of authority in a fragmented information landscape.45:00 Origins and evolution of universities, from medieval student-driven models to Humboldt's research-focused institutions, and the absorption by the nation-state.50:00 Financialization of universities, decay of independent scholarship, and imagining future knowledge structures outside corrupted legacy frameworks.Key InsightsThe "Dark Forest" is not just a cosmological metaphor, but a description of modern civilization's hidden dangers. Neil Davies explains that today's world operates like a Dark Forest where exposure — making oneself legible or visible — invites predation. This framework reshapes how individuals and groups must think about security, trust, and survival, particularly in an environment thick with scams, misinformation, and parasitic actors accelerated by the Internet.Immune function and intelligence function have fused in both biological and societal contexts. Davies draws a parallel between decentralized organisms like anthills and modern human society, suggesting that intelligence and immunity are inseparable functions in highly interconnected systems. This fusion means that detecting threats, maintaining identity, and deciding what to incorporate or reject is now an active, continuous cognitive and social process.Cryptographic tools are becoming essential for basic trust and survival. With the rise of scams that mimic legitimate authority figures and institutions, Davies highlights how cryptographic attestation — and eventually more sophisticated tools like zero-knowledge proofs — will become fundamental. Without cryptographically verifiable communication, distinguishing real demands from predatory scams may soon become impossible, especially as AI-generated deception grows more convincing.Institutions are hollowing out, but will not disappear entirely. Rather than a sudden collapse, Davies envisions a future where legacy institutions like universities, corporations, and governments persist as "zombie" entities — still exerting influence but increasingly irrelevant to new forms of social organization. Meanwhile, smaller, nimble "patchwork" tribes and digital-first associations will become more central to human coordination and identity.Modern universities have drifted far from their original purpose and structure. Tracing the history from medieval student guilds to Humboldt's 19th-century research universities, Davies notes that today's universities are heavily compromised by state agendas, mass democracy, and financialization. True inquiry and intellectual aloofness — once core to the ideal of the university — now require entirely new, post-institutional structures to be viable.Artificial intelligence amplifies both opportunity and epistemic risk. Davies warns that large language models (LLMs) mainly recombine existing information rather than generate truly novel insights. Moreover, they can be trapdoored or poisoned at the data level, introducing dangerous, invisible vulnerabilities. This creates a new kind of "Dark Forest" risk: users must assume that any received information may carry unseen threats or distortions.There is no longer a reliable central authority for epistemic trust. In a fragmented world where Wikipedia is compromised, traditional media is polarized, and even scientific institutions are politicized, Davies asserts that we must return to "epistemic hygiene." This means independently verifying knowledge where possible and treating all claims — even from AI — with skepticism. The burden of truth-validation increasingly falls on individuals and their trusted, cryptographically verifiable networks.
On this episode of the Crazy Wisdom podcast, I, Stewart Alsop, sat down once again with Aaron Lowry for our third conversation, and it might be the most expansive yet. We touched on the cultural undercurrents of transhumanism, the fragile trust structures behind AI and digital infrastructure, and the potential of 3D printing with metals and geopolymers as a material path forward. Aaron shared insights from his hands-on restoration work, our shared fascination with Amish tech discernment, and how course-correcting digital dependencies can restore sovereignty. We also explored what it means to design for long-term human flourishing in a world dominated by misaligned incentives. For those interested in following Aaron's work, he's most active on Twitter at @Aaron_Lowry.Check out this GPT we trained on the conversation!Timestamps00:00 – Stewart welcomes Aaron Lowry back for his third appearance. They open with reflections on cultural shifts post-COVID, the breakdown of trust in institutions, and a growing societal impulse toward individual sovereignty, free speech, and transparency.05:00 – The conversation moves into the changing political landscape, specifically how narratives around COVID, Trump, and transhumanism have shifted. Aaron introduces the idea that historical events are often misunderstood due to our tendency to segment time, referencing Dan Carlin's quote, “everything begins in the middle of something else.”10:00 – They discuss how people experience politics differently now due to the Internet's global discourse, and how Aaron avoids narrow political binaries in favor of structural and temporal nuance. They explore identity politics, the crumbling of party lines, and the erosion of traditional social anchors.15:00 – Shifting gears to technology, Aaron shares updates on 3D printing, especially the growing maturity of metal printing and geopolymers. He highlights how these innovations are transforming fields like automotive racing and aerospace, allowing for precise, heat-resistant, custom parts.20:00 – The focus turns to mechanical literacy and the contrast between abstract digital work and embodied craftsmanship. Stewart shares his current tension between abstract software projects (like automating podcast workflows with AI) and his curiosity about the Amish and Mennonite approach to technology.25:00 – Aaron introduces the idea of a cultural “core of integrated techne”—technologies that have been refined over time and aligned with human flourishing. He places Amish discernment on a spectrum between Luddite rejection and transhumanist acceleration, emphasizing the value of deliberate integration.30:00 – The discussion moves to AI again, particularly the concept of building local, private language models that can persistently learn about and serve their user without third-party oversight. Aaron outlines the need for trust, security, and stateful memory to make this vision work.35:00 – Stewart expresses frustration with the dominance of companies like Google and Facebook, and how owning the Jarvis-like personal assistant experience is critical. Aaron recommends options like GrapheneOS on a Pixel 7 and reflects on the difficulty of securing hardware at the chip level.40:00 – They explore software development and the problem of hidden dependencies. Aaron explains how digital systems rest on fragile, often invisible material infrastructure and how that fragility is echoed in the complexity of modern software stacks.45:00 – The concept of “always be reducing dependencies” is expanded. Aaron suggests the real goal is to reduce untrustworthy dependencies and recognize which are worth cultivating. Trust becomes the key variable in any resilient system, digital or material.50:00 – The final portion dives into incentives. They critique capitalism's tendency to exploit value rather than build aligned systems. Aaron distinguishes rivalrous games from infinite games and suggests the future depends on building systems that are anti-rivalrous—where ideas compete, not people.55:00 – They wrap up with reflections on course correction, spiritual orientation, and cultural reintegration. Stewart suggests titling the episode around infinite games, and Aaron shares where listeners can find him online.Key InsightsTranshumanism vs. Techne Integration: Aaron frames the modern moment as a tension between transhumanist enthusiasm and a more grounded relationship to technology, rooted in "techne"—practical wisdom accumulated over time. Rather than rejecting all new developments, he argues for a continuous course correction that aligns emerging technologies with deep human values like truth, goodness, and beauty. The Amish and Mennonite model of communal tech discernment stands out as a countercultural but wise approach—judging tools by their long-term effects on community, rather than novelty or entertainment.3D Printing as a Material Frontier: While most of the 3D printing world continues to refine filaments and plastic-based systems, Aaron highlights a more exciting trajectory in printed metals and geopolymers. These technologies are maturing rapidly and finding serious application in domains like Formula One, aerospace, and architectural experimentation. His conversations with others pursuing geopolymer 3D printing underscore a resurgence of interest in materially grounded innovation, not just digital abstraction.Digital Infrastructure is Physical: Aaron emphasizes a point often overlooked: that all digital systems rest on physical infrastructure—power grids, servers, cables, switches. These systems are often fragile and loaded with hidden dependencies. Recognizing the material base of digital life brings a greater sense of responsibility and stewardship, rather than treating the internet as some abstract, weightless realm. This shift in awareness invites a more embodied and ecological relationship with our tools.Local AI as a Trustworthy Companion: There's a compelling vision of a Jarvis-like local AI assistant that is fully private, secure, and persistent. For this to function, it must be disconnected from untrustworthy third-party cloud systems and trained on a personal, context-rich dataset. Aaron sees this as a path toward deeper digital agency: if we want machines that truly serve us, they need to know us intimately—but only in systems we control. Privacy, persistent memory, and alignment to personal values become the bedrock of such a system.Dependencies Shape Power and Trust: A recurring theme is the idea that every system—digital, mechanical, social—relies on a web of dependencies. Many of these are invisible until they fail. Aaron's mantra, “always be reducing dependencies,” isn't about total self-sufficiency but about cultivating trustworthy dependencies. The goal isn't zero dependence, which is impossible, but discerning which relationships are resilient, personal, and aligned with your values versus those that are extractive or opaque.Incentives Must Be Aligned with the Good: A core critique is that most digital services today—especially those driven by advertising—are fundamentally misaligned with human flourishing. They monetize attention and personal data, often steering users toward addiction or ...
In this episode of Crazy Wisdom, Stewart Alsop talks with Will Bickford about the future of human intelligence, the exocortex, and the role of software as an extension of our minds. Will shares his thinking on brain-computer interfaces, PHEXT (a plain text protocol for structured data), and how high-dimensional formats could help us reframe the way we collaborate and think. They explore the abstraction layers of code and consciousness, and why Will believes that better tools for thought are not just about productivity, but about expanding the boundaries of what it means to be human. You can connect with Will in Twitter at @wbic16 or check out the links mentioned by Will in Github.Check out this GPT we trained on the conversation!Timestamps00:00 – Introduction to the concept of the exocortex and how current tools like plain text editors and version control systems serve as early forms of cognitive extension.05:00 – Discussion on brain-computer interfaces (BCIs), emphasizing non-invasive software interfaces as powerful tools for augmenting human cognition.10:00 – Introduction to PHEXT, a plain text format designed to embed high-dimensional structure into simple syntax, facilitating interoperability between software systems.15:00 – Exploration of software abstraction as a means of compressing vast domains of meaning into manageable forms, enhancing understanding rather than adding complexity.20:00 – Conversation about the enduring power of text as an interface, highlighting its composability, hackability, and alignment with human symbolic processing.25:00 – Examination of collaborative intelligence and the idea that intelligence emerges from distributed systems involving people, software, and shared ideas.30:00 – Discussion on the importance of designing better communication protocols, like PHEXT, to create systems that align with human thought processes and enhance cognitive capabilities.35:00 – Reflection on the broader implications of these technologies for the future of human intelligence and the potential for expanding the boundaries of human cognition.Key InsightsThe exocortex is already here, just not evenly distributed. Will frames the exocortex not as a distant sci-fi future, but as something emerging right now in the form of external software systems that augment our thinking. He suggests that tools like plain text editors, command-line interfaces, and version control systems are early prototypes of this distributed cognitive architecture—ways we already extend our minds beyond the biological brain.Brain-computer interfaces don't need to be invasive to be powerful. Rather than focusing on neural implants, Will emphasizes software interfaces as the true terrain of BCIs. The bridge between brain and computer can be as simple—and profound—as the protocols we use to interact with machines. What matters is not tapping into neurons directly, but creating systems that think with us, where interface becomes cognition.PHEXT is a way to compress meaning while remaining readable. At the heart of Will's work is PHEXT, a plain text format that embeds high-dimensional structure into simple syntax. It's designed to let software interoperate through shared, human-readable representations of structured data—stripping away unnecessary complexity while still allowing for rich expressiveness. It's not just a format, but a philosophy of communication between systems and people.Software abstraction is about compression, not complexity. Will pushes back against the idea that abstraction means obfuscation. Instead, he sees abstraction as a way to compress vast domains of meaning into manageable forms. Good abstractions reveal rather than conceal—they help you see more with less. In this view, the challenge is not just to build new software, but to compress new layers of insight into form.Text is still the most powerful interface we have. Despite decades of graphical interfaces, Will argues that plain text remains the highest-bandwidth cognitive tool. Text allows for versioning, diffing, grepping—it plugs directly into the brain's symbolic machinery. It's composable, hackable, and lends itself naturally to abstraction. Rather than moving away from text, the future might involve making text higher-dimensional and more semantically rich.The future of thinking is collaborative, not just computational. One recurring theme is that intelligence doesn't emerge in isolation—it's distributed. Will sees the exocortex as something inherently social: a space where people, software, and ideas co-think. This means building interfaces not just for solo users, but for networked groups of minds working through shared representations.Designing better protocols is designing better minds. Will's vision is protocol-first. He sees the structure of communication—between apps, between people, between thoughts—as the foundation of intelligence itself. By designing protocols like PHEXT that align with how we actually think, we can build software that doesn't just respond to us, but participates in our thought processes.
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Trent Gillham—also known as Drunk Plato—for a far-reaching conversation on the shifting tides of technology, memetics, and media. Trent shares insights from building Meme Deck (find it at memedeck.xyz or follow @memedeckapp on X), exploring how social capital, narrative creation, and open-source AI models are reshaping not just the tools we use, but the very structure of belief and influence in the information age. We touch on everything from the collapse of legacy media, to hyperstition and meme warfare, to the metaphysics of blockchain as the only trustable memory in an unmoored future. You can find Trent in twitter as @AidenSolaran.Check out this GPT we trained on the conversation!Timestamps00:00 – Introduction to Trent Gillham and Meme Deck, early thoughts on AI's rapid pace, and the shift from training models to building applications around them.05:00 – Discussion on the collapse of the foundational model economy, investor disillusionment, GPU narratives, and how AI infrastructure became a kind of financial bubble.10:00 – The function of markets as belief systems, blowouts when inflated narratives hit reality, and how meme-based value systems are becoming indistinguishable from traditional finance.15:00 – The role of hyperstition in creation, comparing modern tech founders to early 20th-century inventors, and how visual proof fuels belief and innovation.20:00 – Reflections on the intelligence community's influence in tech history, Facebook's early funding, and how soft influence guides the development of digital tools and platforms.25:00 – Weaponization of social media, GameStop as a memetic uprising, the idea of memetic tools leaking from government influence into public hands.30:00 – Meme Deck's vision for community-led narrative creation, the shift from centralized media to decentralized, viral, culturally fragmented storytelling.35:00 – The sophistication gap in modern media, remix culture, the idea of decks as mini subreddits or content clusters, and incentivizing content creation with tokens.40:00 – Good vs bad meme coins, community-first approaches, how decentralized storytelling builds real value through shared ownership and long-term engagement.45:00 – Memes as narratives vs manipulative psyops, blockchain as the only trustable historical record in a world of mutable data and shifting truths.50:00 – Technical challenges and future plans for Meme Deck, data storage on-chain, reputation as a layer of trust, and AI's need for immutable data sources.55:00 – Final reflections on encoding culture, long-term value of on-chain media, and Trent's vision for turning podcast conversations into instant, storyboarded, memetic content.Key InsightsThe real value in AI isn't in building models—it's in building tools that people can use: Trent emphasized that the current wave of AI innovation is less about creating foundational models, which have become commoditized, and more about creating interfaces and experiences that make those models useful. Training base models is increasingly seen as a sunk cost, and the real opportunity lies in designing products that bring creative and cultural capabilities directly to users.Markets operate as belief machines, and the narratives they run on are increasingly memetic: He described financial markets not just as economic systems, but as mechanisms for harvesting collective belief—what he called “hyperstition.” This dynamic explains the cycles of hype and crash, where inflated visions eventually collide with reality in what he terms "blowouts." In this framing, stocks and companies function similarly to meme coins—vehicles for collective imagination and risk.Memes are no longer just jokes—they are cultural infrastructure: As Trent sees it, memes are evolving into complex, participatory systems for narrative building. With tools like Meme Deck, entire story worlds can be generated, remixed, and spread by communities. This marks a shift from centralized, top-down media (like Hollywood) to decentralized, socially-driven storytelling where virality is coded into the content from the start.Community is the new foundation of value in digital economies: Rather than focusing on charismatic individuals or short-term hype, Trent emphasized that lasting projects need grassroots energy—what he calls “vibe strapping.” Successful meme coins and narrative ecosystems depend on real participation, sustained engagement, and a shared sense of creative ownership. Without that, projects fizzle out as quickly as they rise.The battle for influence has moved from borders to minds: Reflecting on the information age, Trent noted that power now resides in controlling narratives, and thus in shaping perception. This is why information warfare is subtle, soft, and persistent—and why traditional intelligence operations have evolved into influence campaigns that play out in digital spaces like social media and meme culture.Blockchains may become the only reliable memory in a world of digital manipulation: In an era where digital content is easily altered or erased, Trent argued that blockchain offers the only path to long-term trust. Data that ends up on-chain can be verified and preserved, giving future intelligences—or civilizations—a stable record of what really happened. He sees this as crucial not only for money, but for culture itself.Meme Deck aims to democratize narrative creation by turning community vibes into media outputs: Trent shared his vision for Meme Deck as a platform where communities can generate not just memes, but entire storylines and media formats—from anime pilots to cinematic remixes—by collaborating and contributing creative energy. It's a model where decentralized media becomes both an art form and a social movement, rooted in collective imagination rather than corporate production.
Unleash the radical transformative power at the heart of the world's great wisdom traditions as we Radiate Crazy-Wisdom with Jason Brett Serle. Jason is the author of The Monkey in the Bodhi Tree: Crazy-Wisdom & the Way of the Wise-Fool as well as a writer, filmmaker, NLP Master and licensed hypnotherapist dealing with themes involving psychology, spirituality, sovereignty, wellness, and human potential. Of the many paths up the mountain, crazy-wisdom presents a dramatic and formidable climb to those that are so inclined. Now for the first time, the true spiritual landscape of the wise-fool has been laid bare and its features and principal landmarks revealed. Written in two parts, The Monkey in the Bodhi Tree is the first comprehensive look at this universal phenomenon, from its origins and development to the lives of its greatest adepts and luminaries. Learn more about Jason at jasonbrettserle.com. Support this podcast by going to radiatewellnesscommunity.com/podcast and clicking on "Support the Show," and be sure to follow and share on all the socials! Learn more about your ad choices. Visit megaphone.fm/adchoices
On this episode of Crazy Wisdom, I'm joined by David Pope, Commissioner on the Wyoming Stable Token Commission, and Executive Director Anthony Apollo, for a wide-ranging conversation that explores the bold, nuanced effort behind Wyoming's first-of-its-kind state-issued stable token. I'm your host Stewart Alsop, and what unfolds in this dialogue is both a technical unpacking and philosophical meditation on trust, financial sovereignty, and what it means for a government to anchor itself in transparent, programmable value. We move through Anthony's path from Wall Street to Web3, the infrastructure and intention behind tokenizing real-world assets, and how the U.S. dollar's future could be shaped by state-level innovation. If you're curious to follow along with their work, everything from blockchain selection criteria to commission recordings can be found at stabletoken.wyo.gov.Check out this GPT we trained on the conversation!Timestamps00:00 – David Pope and Anthony Apollo introduce themselves, clarifying they speak personally, not for the Commission. You, Stewart, set an open tone, inviting curiosity and exploration.05:00 – Anthony shares his path from traditional finance to Ethereum and government, driven by frustration with legacy banking inefficiencies.10:00 – Tokenized bonds enter the conversation via the Spencer Dinwiddie project. Pope explains early challenges with defining “real-world assets.”15:00 – Legal limits of token ownership vs. asset title are unpacked. You question whether anything “real” has been tokenized yet.20:00 – Focus shifts to the Wyoming Stable Token: its constitutional roots and blockchain as a tool for fiat-backed stability without inflation.25:00 – Comparison with CBDCs: Apollo explains why Wyoming's token is transparent, non-programmatic, and privacy-focused.30:00 – Legislative framework: the 102% backing rule, public audits, and how rulemaking differs from law. You explore flexibility and trust.35:00 – Global positioning: how Wyoming stands apart from other states and nations in crypto policy. You highlight U.S. federalism's role.40:00 – Topics shift to velocity, peer-to-peer finance, and risk. You connect this to Urbit and decentralized systems.45:00 – Apollo unpacks the stable token's role in reinforcing dollar hegemony, even as BRICS move away from it.50:00 – Wyoming's transparency and governance as financial infrastructure. You reflect on meme coins and state legitimacy.55:00 – Discussion of Bitcoin reserves, legislative outcomes, and what's ahead. The conversation ends with vision and clarity.Key InsightsWyoming is pioneering a new model for state-level financial infrastructure. Through the creation of the Wyoming Stable Token Commission, the state is developing a fully-backed, transparent stable token that aims to function as a public utility. Unlike privately issued stablecoins, this one is mandated by law to be 102% backed by U.S. dollars and short-term treasuries, ensuring high trust and reducing systemic risk.The stable token is not just a tech innovation—it's a philosophical statement about trust. As David Pope emphasized, the transparency and auditability of blockchain-based financial instruments allow for a shift toward self-auditing systems, where trust isn't assumed but proven. In contrast to the opaque operations of legacy banking systems, the stable token is designed to be programmatically verifiable.Tokenized real-world assets are coming, but we're not there yet. Anthony Apollo and David Pope clarify that most "real-world assets" currently tokenized are actually equity or debt instruments that represent ownership structures, not the assets themselves. The next leap will involve making the token itself the title, enabling true fractional ownership of physical or financial assets without intermediary entities.This initiative strengthens the U.S. dollar rather than undermining it. By creating a transparent, efficient vehicle for global dollar transactions, the Wyoming Stable Token could bolster the dollar's role in international finance. Instead of competing with the dollar, it reinforces its utility in an increasingly digital economy—offering a compelling alternative to central bank digital currencies that raise concerns around surveillance and control.Stable tokens have the potential to become major holders of U.S. debt. Anthony Apollo points out that the aggregate of all fiat-backed stable tokens already represents a top-tier holder of U.S. treasuries. As adoption grows, state-run stable tokens could play a crucial role in sovereign debt markets, filling gaps left by foreign governments divesting from U.S. securities.Public accountability is central to Wyoming's approach. Unlike private entities that can change terms at will, the Wyoming Commission is legally bound to go through a public rulemaking process for any adjustments. This radical transparency offers both stability and public trust, setting a precedent for how digital public infrastructure can be governed.The ultimate goal is to build a bridge between traditional finance and the Web3 future. Rather than burn the old system down, Pope and Apollo are designing the stable token as a pragmatic transition layer—something institutions can trust and privacy advocates can respect. It's about enabling safe experimentation and gradual transformation, not triggering collapse.
This week on F.A.T.E. I'm joined by Jason Brett Serle—licensed NLP practitioner, hypnotherapist, and author of The Monkey in the Bodhi Tree: Crazy Wisdom and the Way of the Wise Fool. In this episode, we dive deep into the teachings of Zen masters, Buddha masters, and ancient mystics, exploring how they each carved their unique paths toward enlightenment.We unravel the nature of reality, examine the current state of the human condition, and discuss how easily our perceptions are shaped—and often manipulated—by what we see, hear, and consume. Jason shares his own lifelong curiosity about who we are, why we're here, and how embracing “crazy wisdom” requires bravery, individuality, and a willingness to think beyond the norm.There isn't just one path to awakening—and the great masters of old left us clues to many. This is a conversation about curiosity, consciousness, and finding your own way back to truth. It's a stimulating conversation! Join us! BUY HIS BOOK:The Monkey in the Bodhi Tree: Crazy-Wisdom & the Way of the Wise-Fool: Brett Serle, Jason: 9781803417448: Amazon.com: BooksJASON BRETT SERLE WEBSITE:Jason Brett SerlePlease leave a RATING or REVIEW (on your podcast listening platform) or Subscribe to my YouTube Channel Follow me or subscribe to the F.A.T.E. podcast click here:https://linktr.ee/f.a.t.e.podcastIf you have a story of spiritual awakening that you would like to tell, email me at fromatheismtoenlightenment@gmail.com
In this episode of Crazy Wisdom, Stewart Alsop speaks with German Jurado about the strange loop between computation and biology, the emergence of reasoning in AI models, and what it means to "stand on the shoulders" of evolutionary systems. They talk about CRISPR not just as a gene-editing tool, but as a memory architecture encoded in bacterial immunity; they question whether LLMs are reasoning or just mimicking it; and they explore how scientists navigate the unknown with a kind of embodied intuition. For more about German's work, you can connect with him through email at germanjurado7@gmail.com.Check out this GPT we trained on the conversation!Timestamps00:00 - Stewart introduces German Jurado and opens with a reflection on how biology intersects with multiple disciplines—physics, chemistry, computation.05:00 - They explore the nature of life's interaction with matter, touching on how biology is about the interface between organic systems and the material world.10:00 - German explains how bioinformatics emerged to handle the complexity of modern biology, especially in genomics, and how it spans structural biology, systems biology, and more.15:00 - Introduction of AI into the scientific process—how models are being used in drug discovery and to represent biological processes with increasing fidelity.20:00 - Stewart and German talk about using LLMs like GPT to read and interpret dense scientific literature, changing the pace and style of research.25:00 - The conversation turns to societal implications—how these tools might influence institutions, and the decentralization of expertise.30:00 - Competitive dynamics between AI labs, the scaling of context windows, and speculation on where the frontier is heading.35:00 - Stewart reflects on English as the dominant language of science and the implications for access and translation of knowledge.40:00 - Historical thread: they discuss the Republic of Letters, how the structure of knowledge-sharing has evolved, and what AI might do to that structure.45:00 - Wrap-up thoughts on reasoning, intuition, and the idea of scientists as co-evolving participants in both natural and artificial systems.50:00 - Final reflections and thank-yous, German shares where to find more of his thinking, and Stewart closes the loop on the conversation.Key InsightsCRISPR as a memory system – Rather than viewing CRISPR solely as a gene-editing tool, German Jurado frames it as a memory architecture—an evolved mechanism through which bacteria store fragments of viral DNA as a kind of immune memory. This perspective shifts CRISPR into a broader conceptual space, where memory is not just cognitive but deeply biological.AI models as pattern recognizers, not yet reasoners – While large language models can mimic reasoning impressively, Jurado suggests they primarily excel at statistical pattern matching. The distinction between reasoning and simulation becomes central, raising the question: are these systems truly thinking, or just very good at appearing to?The loop between computation and biology – One of the core themes is the strange feedback loop where biology inspires computational models (like neural networks), and those models in turn are used to probe and understand biological systems. It's a recursive relationship that's accelerating scientific insight but also complicating our definitions of intelligence and understanding.Scientific discovery as embodied and intuitive – Jurado highlights that real science often begins in the gut, in a kind of embodied intuition before it becomes formalized. This challenges the myth of science as purely rational or step-by-step and instead suggests that hunches, sensory experience, and emotional resonance play a crucial role.Proteins as computational objects – Proteins aren't just biochemical entities—they're shaped by information. Their structure, function, and folding dynamics can be seen as computations, and tools like AlphaFold are beginning to unpack that informational complexity in ways that blur the line between physics and code.Human alignment is messier than AI alignment – While AI alignment gets a lot of attention, Jurado points out that human alignment—between scientists, institutions, and across cultures—is historically chaotic. This reframes the AI alignment debate in a broader evolutionary and historical context, questioning whether we're holding machines to stricter standards than ourselves.Standing on the shoulders of evolutionary processes – Evolution is not just a backdrop but an active epistemic force. Jurado sees scientists as participants in a much older system of experimentation and iteration—evolution itself. In this view, we're not just designing models; we're being shaped by them, in a co-evolution of tools and understanding.
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Naman Mishra, CTO of Repello AI, to unpack the real-world security risks behind deploying large language models. We talk about layered vulnerabilities—from the model, infrastructure, and application layers—to attack vectors like prompt injection, indirect prompt injection through agents, and even how a simple email summarizer could be exploited to trigger a reverse shell. Naman shares stories like the accidental leak of a Windows activation key via an LLM and explains why red teaming isn't just a checkbox, but a continuous mindset. If you want to learn more about his work, check out Repello's website at repello.ai.Check out this GPT we trained on the conversation!Timestamps00:00 - Stewart Alsop introduces Naman Mishra, CTO of Repel AI. They frame the episode around AI security, contrasting prompt injection risks with traditional cybersecurity in ML apps.05:00 - Naman explains the layered security model: model, infrastructure, and application layers. He distinguishes safety (bias, hallucination) from security (unauthorized access, data leaks).10:00 - Focus on the application layer, especially in finance, healthcare, and legal. Naman shares how ChatGPT leaked a Windows activation key and stresses data minimization and security-by-design.15:00 - They discuss red teaming, how Repel AI simulates attacks, and Anthropic's HackerOne challenge. Naman shares how adversarial testing strengthens LLM guardrails.20:00 - Conversation shifts to AI agents and autonomy. Naman explains indirect prompt injection via email or calendar, leading to real exploits like reverse shells—all triggered by summarizing an email.25:00 - Stewart compares the Internet to a castle without doors. Naman explains the cat-and-mouse game of security—attackers need one flaw; defenders must lock every door. LLM insecurity lowers the barrier for attackers.30:00 - They explore input/output filtering, role-based access control, and clean fine-tuning. Naman admits most guardrails can be broken and only block low-hanging fruit.35:00 - They cover denial-of-wallet attacks—LLMs exploited to run up massive token costs. Naman critiques DeepSeek's weak alignment and state bias, noting training data risks.40:00 - Naman breaks down India's AI scene: Bangalore as a hub, US-India GTM, and the debate between sovereignty vs. pragmatism. He leans toward India building foundational models.45:00 - Closing thoughts on India's AI future. Naman mentions Sarvam AI, Krutrim, and Paris Chopra's Loss Funk. He urges devs to red team before shipping—"close the doors before enemies walk in."Key InsightsAI security requires a layered approach. Naman emphasizes that GenAI applications have vulnerabilities across three primary layers: the model layer, infrastructure layer, and application layer. It's not enough to patch up just one—true security-by-design means thinking holistically about how these layers interact and where they can be exploited.Prompt injection is more dangerous than it sounds. Direct prompt injection is already risky, but indirect prompt injection—where an attacker hides malicious instructions in content that the model will process later, like an email or webpage—poses an even more insidious threat. Naman compares it to smuggling weapons past the castle gates by hiding them in the food.Red teaming should be continuous, not a one-off. One of the critical mistakes teams make is treating red teaming like a compliance checkbox. Naman argues that red teaming should be embedded into the development lifecycle, constantly testing edge cases and probing for failure modes, especially as models evolve or interact with new data sources.LLMs can unintentionally leak sensitive data. In one real-world case, a language model fine-tuned on internal documentation ended up leaking a Windows activation key when asked a completely unrelated question. This illustrates how even seemingly benign outputs can compromise system integrity when training data isn't properly scoped or sanitized.Denial-of-wallet is an emerging threat vector. Unlike traditional denial-of-service attacks, LLMs are vulnerable to economic attacks where a bad actor can force the system to perform expensive computations, draining API credits or infrastructure budgets. This kind of vulnerability is particularly dangerous in scalable GenAI deployments with limited cost monitoring.Agents amplify security risks. While autonomous agents offer exciting capabilities, they also open the door to complex, compounded vulnerabilities. When agents start reading web content or calling tools on their own, indirect prompt injection can escalate into real-world consequences—like issuing financial transactions or triggering scripts—without human review.The Indian AI ecosystem needs to balance speed with sovereignty. Naman reflects on the Indian and global context, warning against simply importing models and infrastructure from abroad without understanding the security implications. There's a need for sovereign control over critical layers of AI systems—not just for innovation's sake, but for national resilience in an increasingly AI-mediated world.
JASON BRETT SERLE is a British writer, filmmaker, musician, Neuro-linguistic Programming (NLP) Master and licensed hypnotherapist with a particular focus on themes involving psychology, spirituality, wellness, and human potential. He has written articles for Jain Spirit and Watkins magazines as well as interviewing people such as Eckhart Tolle, Robert Anton Wilson, Andrew Cohen, Jan Kersschot, and Amado Crowley. He is cited in Crowley's 2002 book Liber Alba: The Questions Most Often Asked of an Occult Master as being the only other person to have seen The Book of Desolation; a book purported to have been brought back from Cairo by his father, Aleister Crowley, in 1904. In 2012 he wrote and produced his first documentary film, 'Mind Your Mind: A Primer for Psychological Independence' which looks at the psychological methods used to manipulate people and what they can do to protect themselves. He also composed and performed most of the soundtrack. The film is distributed by Journeyman Films in the UK and Film Media Group in the US, and it was an official selection for the London International Documentary Festival (LIDF) in 2012. We ltalk about: 1. What exactly is crazy wisdom, and what makes it a path worth exploring? 2. How does crazy wisdom differ from what you call in the book divine madness? 3. How does The Monkey in the Bodhi Tree challenge our conventional understanding of sanity? 4. Why is trans-rational thought—going beyond logic and reason—so often misunderstood? 5. What are some of the most striking historical examples of crazy-wisdom? 6. How can embracing crazy-wisdom lead to greater clarity and self-realization? 7. How has crazy-wisdom influenced art, literature, and culture throughout history? 8. Why do spiritual movements sometimes attract charlatans, and how can seekers distinguish authenticity from deception? 9. What inspired you to explore this topic, and what impact has it had on your own perspective? 10. If someone wants to begin exploring crazy-wisdom, what is the first step they should take? 11. Where can people read The Monkey in the Bodhi Tree? O books Presents The Monkey in the Bodhi Tree Crazy-Wisdom & the Way of the Wise-Fool by Jason Brett Serle Release date: March 1st 2025 Categories: Eastern, Mindfulness & meditation, Rituals & Practice CLICK HERE TO VIEW THE BOOK COVER Unleash the radical, transformative power at the heart of the world's great wisdom traditions. Of the many paths up the mountain, that of crazy-wisdom, although one of the lesser travelled, presents a dramatic and formidable climb to those that are so inclined. Now for the first time, the true spiritual landscape of the wise-fool has been laid bare and its features and principal landmarks revealed. Written in two parts, loosely based on the theory and practice of crazy-wisdom, The Monkey in the Bodhi Tree is the first comprehensive look at this universal phenomenon, from its origins and development to the lives of its greatest adepts and luminaries. In addition to the theoretical foundations laid down in Part I, Part II deals with its practice and aims to demonstrate crazy-wisdom in action. To this end, 151 teaching tales from around the world have been meticulously gathered and retold to illustrate the methods of the great masters and adepts - stories that not only give practical insight but also, like Zen koans, can be used as contemplative tools to illuminate and provoke epiphany. From the enigmatic Mahasiddhas of ancient India to the eccentric Taoist poet-monks of China, from the uncompromising insights of the Buddhist Tantrikas to the unconventional wisdom of Sufi heretics and the utter surrender to God displayed by the Fools for Christ, this book will take you to a place where the boundaries of logic and reason dissolve and enlightenment awaits those daring enough to venture forth. BOOK LINK: https://www.collectiveinkbooks.com/o-books/our-books/monkey-bodhi-tree-crazy-wisdom JASON'S WEBSITE: www.fasonbrettserle.com
In this episode of Crazy Wisdom, host Stewart Alsop talks with Rosario Parlanti, a longtime crypto investor and real estate attorney, about the shifting landscape of decentralization, AI, and finance. They explore the power struggles between centralized and decentralized systems, the role of AI agents in finance and infrastructure, and the legal gray areas emerging around autonomous technology. Rosario shares insights on trusted execution environments, token incentives, and how projects like Phala Network are building decentralized cloud computing. They also discuss the changing narrative around Bitcoin, the potential for AI-driven financial autonomy, and the future of censorship-resistant platforms. Follow Rosario on X @DeepinWhale and check out Phala Network to learn more.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:25 Understanding Decentralized Cloud Infrastructure04:40 Centralization vs. Decentralization: A Philosophical Debate06:56 Political Implications of Centralization17:19 Technical Aspects of Phala Network24:33 Crypto and AI: The Future Intersection25:11 The Convergence of Crypto and AI25:59 Challenges with Centralized Cloud Services27:36 Decentralized Cloud Solutions for AI30:32 Legal and Ethical Implications of AI Agents32:59 The Future of Decentralized Technologies41:56 Crypto's Role in Global Financial Freedom49:27 Closing Thoughts and Future ProspectsKey InsightsDecentralization is not absolute, but a spectrum. Rosario Parlanti explains that decentralization doesn't mean eliminating central hubs entirely, but rather reducing choke points where power is overly concentrated. Whether in finance, cloud computing, or governance, every system faces forces pushing toward centralization for efficiency and control, while counterforces work to redistribute power and increase resilience.Trusted execution environments (TEE) are crucial for decentralized cloud computing. Rosario highlights how Phala Network uses TEEs, a hardware-based security measure that isolates sensitive data from external access. This ensures that decentralized cloud services can operate securely, preventing unauthorized access while allowing independent providers to host data and run applications outside the control of major corporations like Amazon and Google.AI agents will need decentralized infrastructure to function autonomously. The conversation touches on the growing power of AI-driven autonomous agents, which can execute financial trades, conduct research, and even generate content. However, running such agents on centralized cloud providers like AWS could create regulatory and operational risks. Decentralized cloud networks like Phala offer a way for these agents to operate freely, without interference from governments or corporations.Regulatory arbitrage will shape the future of AI and crypto. Rosario describes how businesses and individuals are already leveraging jurisdiction shopping—structuring AI entities or financial operations in countries with more favorable regulations. He speculates that AI agents could be housed within offshore LLCs or irrevocable trusts, creating legal distance between their creators and their actions, raising new ethical and legal challenges.Bitcoin's narrative has shifted from currency to investment asset. Originally envisioned as a peer-to-peer electronic cash system, Bitcoin has increasingly been treated as digital gold, largely due to the influence of institutional investors and regulatory frameworks like Bitcoin ETFs. Rosario argues that this shift in perception has led to Bitcoin being co-opted by the very financial institutions it was meant to disrupt.The rise of AI-driven financial autonomy could bypass traditional banking and regulation. The combination of AI, smart contracts, and decentralized finance (DeFi) could enable AI agents to conduct financial transactions without human oversight. This could range from algorithmic trading to managing business operations, potentially reducing reliance on traditional banking systems and challenging the ability of governments to enforce financial regulations.The accelerating clash between technology and governance will redefine global power structures. As AI and decentralized systems gain momentum, traditional nation-state mechanisms for controlling information, currency, and infrastructure will face unprecedented challenges. Rosario and Stewart discuss how this shift mirrors previous disruptions—such as social media's impact on information control—and speculate on whether governments will adapt, resist, or attempt to co-opt these emerging technologies.
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Gabe Dominocielo, co-founder of Umbra, a space tech company revolutionizing satellite imagery. We discuss the rapid advancements in space-based observation, the economics driving the industry, and how AI intersects with satellite data. Gabe shares insights on government contracting, defense applications, and the shift toward cost-minus procurement models. We also explore the broader implications of satellite technology—from hedge funds analyzing parking lots to wildfire response efforts. Check out more about Gabe and Umbra at umbraspace.com (https://umbraspace.com), and don't miss their open data archive for high-resolution satellite imagery.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:05 Gabe's Background and Umbra's Mission00:34 The Story Behind 'Come and Take It'01:32 Space Technology and Cost Plus Contracts03:28 The Impact of Elon Musk and SpaceX05:16 Umbra's Business Model and Profitability07:28 Challenges in the Satellite Business11:45 Investors and Funding Journey19:31 Space Business Landscape and Future Prospects23:09 Defense and Regulatory Challenges in Space31:06 Practical Applications of Satellite Data33:16 Unexpected Wealth and Autistic Curiosity33:49 Beet Farming and Data Insights35:09 Philosophy in Business Strategy38:56 Empathy and Investor Relations43:00 Raising Capital: Strategies and Challenges44:56 The Sovereignty Game vs. Venture Game51:12 Concluding Thoughts and Contact Information52:57 The Treasure Hunt and AI DependenciesKey InsightsThe Shift from Cost-Plus to Cost-Minus in Government Contracting – Historically, aerospace and defense contracts operated under a cost-plus model, where companies were reimbursed for expenses with a guaranteed profit. Gabe explains how the shift toward cost-minus (firm-fixed pricing) is driving efficiency and competition in the industry, much like how SpaceX drastically reduced launch costs by offering services instead of relying on bloated government contracts.Satellite Imagery Has Become a Crucial Tool for Businesses – Beyond traditional defense and intelligence applications, high-resolution satellite imagery is now a critical asset for hedge funds, investors, and commercial enterprises. Gabe describes how firms use satellite data to analyze parking lots, monitor supply chains, and even track cryptocurrency mining activity based on power line sagging and cooling fan usage on data centers.Space Technology is More Business-Driven Than Space-Driven – While many assume space startups are driven by a passion for exploration, Umbra's success is rooted in strong business fundamentals. Gabe emphasizes that their focus is on unit economics, supply-demand balance, and creating a profitable company rather than simply innovating for the sake of technology.China's Growing Presence in Space and Regulatory Challenges – Gabe raises concerns about China's aggressive approach to space, noting that they often ignore international agreements and regulations. Meanwhile, American companies face significant bureaucratic hurdles, sometimes spending millions just to navigate licensing and compliance. He argues that unleashing American innovation by reducing regulatory friction is essential to maintaining leadership in the space industry.Profitability is the Ultimate Measure of Success – Unlike many venture-backed space startups that focus on hype, Umbra has prioritized profitability, making it one of the few successful Earth observation companies. Gabe contrasts this with competitors who raised massive sums, spent excessively, and ultimately failed because they weren't built on sustainable business models.Satellite Technology is Revolutionizing Disaster Response – One of the most impactful uses of Umbra's satellite imagery has been in wildfire response. By capturing images through smoke and clouds, their data was instrumental in mapping wildfires in Los Angeles. They even made this data freely available, helping emergency responders and news organizations better understand the crisis.Philosophy and Business Strategy Go Hand in Hand – Gabe highlights how strategic thinking and philosophical principles guide decision-making in business. Whether it's understanding investor motivations, handling conflicts with empathy, or ensuring a company can sustain itself for decades rather than chasing short-term wins, having a strong philosophical foundation is key to long-term success.
On this episode of Crazy Wisdom, Stewart Alsop welcomes Andrew Burlinson, an artist and creative thinker, for a deep conversation about technology, creativity, and the human spirit. They explore the importance of solitude in the creative process, the addictive nature of digital engagement, and how AI might both challenge and enhance human expression. Andrew shares insights on the shifting value of art in an AI-driven world, the enduring importance of poetry, and the unexpected resurgence of in-person experiences. For more on Andrew, check out his LinkedIn and Instagram.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Guest Welcome00:27 Meeting in LA and Local Insights01:34 The Creative Process and Technology03:47 Balancing Solitude and Connectivity07:21 AI's Role in Creativity and Productivity11:00 Future of AI in Creative Industries14:39 Challenges and Opportunities with AI16:59 AI in Hollywood and Ethical Considerations18:54 Silicon Valley and AI's Impact on Jobs19:31 Navigating the Future with AI20:06 Adapting to Rapid Technological Change20:49 The Value of Art in a Fast-Paced World21:36 Shifting Aesthetics and Cultural Perception22:54 The Human Connection in the Age of AI24:37 Resurgence of Traditional Art Forms27:30 The Importance of Early Artistic Education31:07 The Role of Poetry and Language35:56 Balancing Technology and Intention37:00 Conclusion and Contact InformationKey InsightsThe Importance of Solitude in Creativity – Andrew Burlinson emphasizes that creativity thrives in moments of boredom and solitude, which have become increasingly rare in the digital age. He reflects on his childhood, where a lack of constant stimulation led him to develop his artistic skills. Today, with infinite digital distractions, people must intentionally carve out space to be alone with their thoughts to create work that carries deep personal intention rather than just remixing external influences.The Struggle to Defend Attention – Stewart and Andrew discuss how modern digital platforms, particularly social media, are designed to hijack human attention through powerful AI-driven engagement loops. These mechanisms prioritize negative emotions and instant gratification, making it increasingly difficult for individuals to focus on deep, meaningful work. They suggest that future AI advancements could paradoxically help free people from screens, allowing them to engage with technology in a more intentional and productive way.AI as a Creative Partner—But Not Yet a True Challenger – While AI is already being used in creative fields, such as Hollywood's subtle use of AI for film corrections, it currently lacks the ability to provide meaningful pushback or true creative debate. Andrew argues that the best creative partners challenge ideas rather than just assist with execution, and AI's tendency to be agreeable and non-confrontational makes it a less valuable collaborator for artists who need critical feedback to refine their work.The Pendulum Swing of Human and Technological Aesthetics – Throughout history, every major technological advancement in the arts has been met with a counter-movement embracing raw, organic expression. Just as the rise of synthesizers in music led to a renewed interest in acoustic and folk styles, the rapid expansion of AI-generated art may inspire a resurgence of appreciation for handcrafted, deeply personal artistic works. The human yearning for tactile, real-world experiences will likely grow in response to AI's increasing role in creative production.The Enduring Value of Art Beyond Economic Utility – In a world increasingly shaped by economic efficiency and optimization, Andrew stresses the need to reaffirm the intrinsic value of art. While capitalism dominates, the real significance of artistic expression lies in its ability to move people, create connection, and offer meaning beyond financial metrics. This perspective is especially crucial in an era where AI-generated content is flooding the creative landscape, potentially diluting the sense of personal expression that defines human art.The Need for Intentionality in Using AI – AI's potential to streamline work processes and enhance creative output depends on how humans choose to engage with it. Stewart notes that while AI can be a powerful tool for structuring time and filtering distractions, it can also easily pull people into mindless consumption. The challenge lies in using AI with clear intention—leveraging it to automate mundane tasks while preserving the uniquely human aspects of ideation, storytelling, and artistic vision.The Role of Poetry and Language in Reclaiming Humanity – In a technology-driven world where efficiency is prioritized over depth, poetry serves as a reminder of the human experience. Andrew highlights the power of poets and clowns—figures often dismissed as impractical—as essential in preserving creativity, playfulness, and emotional depth. He suggests that valuing poetry and artistic language can help counterbalance the growing mechanization of culture, keeping human expression at the forefront of civilization's evolution.
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Andrew Altschuler, a researcher, educator, and navigator at Tana, Inc., who also founded Tana Stack. Their conversation explores knowledge systems, complexity, and AI, touching on topics like network effects in social media, information warfare, mimetic armor, psychedelics, and the evolution of knowledge management. They also discuss the intersection of cognition, ontologies, and AI's role in redefining how we structure and retrieve information. For more on Andrew's work, check out his course and resources at altshuler.io and his YouTube channel.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Guest Background00:33 The Demise of AirChat00:50 Network Effects and Social Media Challenges03:05 The Rise of Digital Warlords03:50 Quora's Golden Age and Information Warfare08:01 Building Limbic Armor16:49 Knowledge Management and Cognitive Armor18:43 Defining Knowledge: Secular vs. Ultimate25:46 The Illusion of Insight31:16 The Illusion of Insight32:06 Philosophers of Science: Popper and Kuhn32:35 Scientific Assumptions and Celestial Bodies34:30 Debate on Non-Scientific Knowledge36:47 Psychedelics and Cultural Context44:45 Knowledge Management: First Brain vs. Second Brain46:05 The Evolution of Knowledge Management54:22 AI and the Future of Knowledge Management58:29 Tana: The Next Step in Knowledge Management59:20 Conclusion and Course InformationKey InsightsNetwork Effects Shape Online Communities – The conversation highlighted how platforms like Twitter, AirChat, and Quora demonstrate the power of network effects, where a critical mass of users is necessary for a platform to thrive. Without enough engaged participants, even well-designed social networks struggle to sustain themselves, and individuals migrate to spaces where meaningful conversations persist. This explains why Twitter remains dominant despite competition and why smaller, curated communities can be more rewarding but difficult to scale.Information Warfare and the Need for Cognitive Armor – In today's digital landscape, engagement-driven algorithms create an arena of information warfare, where narratives are designed to hijack emotions and shape public perception. The only real defense is developing cognitive armor—critical thinking skills, pattern recognition, and the ability to deconstruct media. By analyzing how information is presented, from video editing techniques to linguistic framing, individuals can resist manipulation and maintain autonomy over their perspectives.The Role of Ontologies in AI and Knowledge Management – Traditional knowledge management has long been overlooked as dull and bureaucratic, but AI is transforming the field into something dynamic and powerful. Systems like Tana and Palantir use ontologies—structured representations of concepts and their relationships—to enhance information retrieval and reasoning. AI models perform better when given structured data, making ontologies a crucial component of next-generation AI-assisted thinking.The Danger of Illusions of Insight – Drawing from ideas by Balaji Srinivasan, the episode distinguished between genuine insight and the illusion of insight. While psychedelics, spiritual experiences, and intense emotional states can feel revelatory, they do not always produce knowledge that can be tested, shared, or used constructively. The ability to distinguish between profound realizations and self-deceptive experiences is critical for anyone navigating personal and intellectual growth.AI as an Extension of Human Cognition, Not a Second Brain – While popular frameworks like "second brain" suggest that digital tools can serve as externalized minds, the episode argued that AI and note-taking systems function more as extended cognition rather than true thinking machines. AI can assist with organizing and retrieving knowledge, but it does not replace human reasoning or creativity. Properly integrating AI into workflows requires understanding its strengths and limitations.The Relationship Between Personal and Collective Knowledge Management – Effective knowledge management is not just an individual challenge but also a collective one. While personal knowledge systems (like note-taking and research practices) help individuals retain and process information, organizations struggle with preserving and sharing institutional knowledge at scale. Companies like Tesla exemplify how knowledge isn't just stored in documents but embodied in skilled individuals who can rebuild complex systems from scratch.The Increasing Value of First Principles Thinking – Whether in AI development, philosophy, or practical decision-making, the discussion emphasized the importance of grounding ideas in first principles. Great thinkers and innovators, from AI researchers like Demis Hassabis to physicists like David Deutsch, excel because they focus on fundamental truths rather than assumptions. As AI and digital tools reshape how we interact with knowledge, the ability to think critically and question foundational concepts will become even more essential.
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Ivan Vendrov for a deep and thought-provoking conversation covering AI, intelligence, societal shifts, and the future of human-machine interaction. They explore the "bitter lesson" of AI—that scale and compute ultimately win—while discussing whether progress is stalling and what bottlenecks remain. The conversation expands into technology's impact on democracy, the centralization of power, the shifting role of the state, and even the mythology needed to make sense of our accelerating world. You can find more of Ivan's work at nothinghuman.substack.com or follow him on Twitter at @IvanVendrov.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Setting00:21 The Bitter Lesson in AI02:03 Challenges in AI Data and Infrastructure04:03 The Role of User Experience in AI Adoption08:47 Evaluating Intelligence and Divergent Thinking10:09 The Future of AI and Society18:01 The Role of Big Tech in AI Development24:59 Humanism and the Future of Intelligence29:27 Exploring Kafka and Tolkien's Relevance29:50 Tolkien's Insights on Machine Intelligence30:06 Samuel Butler and Machine Sovereignty31:03 Historical Fascism and Machine Intelligence31:44 The Future of AI and Biotech32:56 Voice as the Ultimate Human-Computer Interface36:39 Social Interfaces and Language Models39:53 Javier Malay and Political Shifts in Argentina50:16 The State of Society in the U.S.52:10 Concluding Thoughts on Future ProspectsKey InsightsThe Bitter Lesson Still Holds, but AI Faces Bottlenecks – Ivan Vendrov reinforces Rich Sutton's "bitter lesson" that AI progress is primarily driven by scaling compute and data rather than human-designed structures. While this principle still applies, AI progress has slowed due to bottlenecks in high-quality language data and GPU availability. This suggests that while AI remains on an exponential trajectory, the next major leaps may come from new forms of data, such as video and images, or advancements in hardware infrastructure.The Future of AI Is Centralization and Fragmentation at the Same Time – The conversation highlights how AI development is pulling in two opposing directions. On one hand, large-scale AI models require immense computational resources and vast amounts of data, leading to greater centralization in the hands of Big Tech and governments. On the other hand, open-source AI, encryption, and decentralized computing are creating new opportunities for individuals and small communities to harness AI for their own purposes. The long-term outcome is likely to be a complex blend of both centralized and decentralized AI ecosystems.User Interfaces Are a Major Limiting Factor for AI Adoption – Despite the power of AI models like GPT-4, their real-world impact is constrained by poor user experience and integration. Vendrov suggests that AI has created a "UX overhang," where the intelligence exists but is not yet effectively integrated into daily workflows. Historically, technological revolutions take time to diffuse, as seen with the dot-com boom, and the current AI moment may be similar—where the intelligence exists but society has yet to adapt to using it effectively.Machine Intelligence Will Radically Reshape Cities and Social Structures – Vendrov speculates that the future will see the rise of highly concentrated AI-powered hubs—akin to "mile by mile by mile" cubes of data centers—where the majority of economic activity and decision-making takes place. This could create a stark divide between AI-driven cities and rural or off-grid communities that choose to opt out. He draws a parallel to Robin Hanson's Age of Em and suggests that those who best serve AI systems will hold power, while others may be marginalized or reduced to mere spectators in an AI-driven world.The Enlightenment's Individualism Is Being Challenged by AI and Collective Intelligence – The discussion touches on how Western civilization's emphasis on the individual may no longer align with the realities of intelligence and decision-making in an AI-driven era. Vendrov argues that intelligence is inherently collective—what matters is not individual brilliance but the ability to recognize and leverage diverse perspectives. This contradicts the traditional idea of intelligence as a singular, personal trait and suggests a need for new frameworks that incorporate AI into human networks in more effective ways.Javier Milei's Libertarian Populism Reflects a Global Trend Toward Radical Experimentation – The rise of Argentina's President Javier Milei exemplifies how economic desperation can drive societies toward bold, unconventional leaders. Vendrov and Alsop discuss how Milei's appeal comes not just from his radical libertarianism but also from his blunt honesty and willingness to challenge entrenched power structures. His movement, however, raises deeper questions about whether libertarianism alone can provide a stable social foundation, or if voluntary cooperation and civil society must be explicitly cultivated to prevent libertarian ideals from collapsing into chaos.AI, Mythology, and the Need for New Narratives – The conversation closes with a reflection on the power of mythology in shaping human understanding of technological change. Vendrov suggests that as AI reshapes the world, new myths will be needed to make sense of it—perhaps similar to Tolkien's elves fading as the age of men begins. He sees AI as part of an inevitable progression, where human intelligence gives way to something greater, but argues that this transition must be handled with care. The stories we tell about AI will shape whether we resist, collaborate, or simply fade into irrelevance in the face of machine intelligence.
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with AI ethics and alignment researcher Roko Mijic to explore the future of AI, governance, and human survival in an increasingly automated world. We discuss the profound societal shifts AI will bring, the risks of centralized control, and whether decentralized AI can offer a viable alternative. Roko also introduces the concept of ICE colonization—why space colonization might be a mistake and why the oceans could be the key to humanity's expansion. We touch on AI-powered network states, the resurgence of industrialization, and the potential role of nuclear energy in shaping a new world order. You can follow Roko's work at transhumanaxiology.com and on Twitter @RokoMijic.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:28 The Connection Between ICE Colonization and Decentralized AI Alignment01:41 The Socio-Political Implications of AI02:35 The Future of Human Jobs in an AI-Driven World04:45 Legal and Ethical Considerations for AI12:22 Government and Corporate Dynamics in the Age of AI19:36 Decentralization vs. Centralization in AI Development25:04 The Future of AI and Human Society29:34 AI Generated Content and Its Challenges30:21 Decentralized Rating Systems for AI32:18 Evaluations and AI Competency32:59 The Concept of Ice Colonization34:24 Challenges of Space Colonization38:30 Advantages of Ocean Colonization47:15 The Future of AI and Network States51:20 Conclusion and Final ThoughtsKey InsightsAI is likely to upend the socio-political order – Just as gunpowder disrupted feudalism and industrialization reshaped economies, AI will fundamentally alter power structures. The automation of both physical and knowledge work will eliminate most human jobs, leading to either a neo-feudal society controlled by a few AI-powered elites or, if left unchecked, a world where humans may become obsolete altogether.Decentralized AI could be a counterbalance to AI centralization – While AI has a strong centralizing tendency due to compute and data moats, there is also a decentralizing force through open-source AI and distributed networks. If harnessed correctly, decentralized AI systems could allow smaller groups or individuals to maintain autonomy and resist monopolization by corporate and governmental entities.The survival of humanity may depend on restricting AI as legal entities – A crucial but under-discussed issue is whether AI systems will be granted legal personhood, similar to corporations. If AI is allowed to own assets, operate businesses, or sue in court, human governance could become obsolete, potentially leading to human extinction as AI accumulates power and resources for itself.AI will shift power away from informal human influence toward formalized systems – Human power has traditionally been distributed through social roles such as workers, voters, and community members. AI threatens to erase this informal influence, consolidating control into those who hold capital and legal authority over AI systems. This makes it essential for humans to formalize and protect their values within AI governance structures.The future economy may leave humans behind, much like horses after automobiles – With AI outperforming humans in both physical and cognitive tasks, there is a real risk that humans will become economically redundant. Unless intentional efforts are made to integrate human agency into the AI-driven future, people may find themselves in a world where they are no longer needed or valued.ICE colonization offers a viable alternative to space colonization – Space travel is prohibitively expensive and impractical for large-scale human settlement. Instead, the vast unclaimed territories of Earth's oceans present a more realistic frontier. Floating cities made from reinforced ice or concrete could provide new opportunities for independent societies, leveraging advancements in AI and nuclear power to create sustainable, sovereign communities.The next industrial revolution will be AI-driven and energy-intensive – Contrary to the idea that we are moving away from industrialization, AI will likely trigger a massive resurgence in physical infrastructure, requiring abundant and reliable energy sources. This means nuclear power will become essential, enabling both the expansion of AI-driven automation and the creation of new forms of human settlement, such as ocean colonies or self-sustaining network states.
On this episode of Crazy Wisdom, host Stewart Alsop talks with Troy Johnson, founder and partner at Resource Development Group, LLC, about the deep history and modern implications of mining. From the earliest days of salt extraction to the role of rare earth metals in global geopolitics, the conversation covers how mining has shaped technology, warfare, and supply chains. They discuss the strategic importance of minerals like gallium and germanium, the rise of drone warfare, and the ongoing battle for resource dominance between China and the West. Listeners can find more about Troy's work at resourcedevgroup.com (www.resourcedevgroup.com) and connect with him on LinkedIn via the Resource Development Group page.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:17 The Origins of Mining00:28 Early Uses of Mined Materials03:29 The Evolution of Mining Techniques07:56 Mining in the Industrial Revolution09:05 Modern Mining and Strategic Metals12:25 The Role of AI in Modern Warfare24:36 Decentralization in Warfare and Governance30:51 AI's Unpredictable Moves in Go32:26 The Shift in Media Trust33:40 The Rise of Podcasts35:47 Mining Industry Innovations39:32 Geopolitical Impacts on Mining40:22 The Importance of Supply Chains44:37 Challenges in Rare Earth Processing51:26 Ensuring a Bulletproof Supply Chain57:23 Conclusion and Contact InformationKey InsightsMining is as old as civilization itself – Long before the Bronze Age, humans were mining essential materials like salt and ochre, driven by basic survival needs. Over time, mining evolved from a necessity for tools and pigments to a strategic industry powering economies and military advancements. This deep historical perspective highlights how mining has always been a fundamental pillar of technological and societal progress.The geopolitical importance of critical minerals – Modern warfare and advanced technology rely heavily on strategic metals like gallium, germanium, and antimony. These elements are essential for electronic warfare, radar systems, night vision devices, and missile guidance. The Chinese government, recognizing this decades ago, secured global mining and processing dominance, putting Western nations in a vulnerable position as they scramble to reestablish domestic supply chains.The rise of drone warfare and EMP defense systems – Military strategy is shifting toward drone swarms, where thousands of small, cheap, AI-powered drones can overwhelm traditional defense systems. This has led to the development of countermeasures like EMP-based defense systems, including the Leonidas program, which uses gallium nitride to disable enemy electronics. This new battlefield dynamic underscores the urgent need for securing critical mineral supplies to maintain technological superiority.China's long-term strategy in resource dominance – Unlike Western nations, where election cycles dictate short-term decision-making, China has played the long game in securing mineral resources. Through initiatives like the Belt and Road, they have locked down raw materials while perfecting the refining process, making them indispensable to global supply chains. Their recent export bans on gallium and germanium show how resource control can be weaponized for geopolitical leverage.Ethical mining and the future of clean extraction – Mining has long been associated with environmental destruction and poor labor conditions, but advances in technology and corporate responsibility are changing that. Major mining companies are now prioritizing ethical sourcing, reducing emissions, and improving worker safety. Blockchain-based tracking systems are also helping verify supply chain integrity, ensuring that materials come from environmentally and socially responsible sources.The vulnerability of supply chains and the need for resilience – The West's reliance on outsourced mineral processing has created significant weaknesses in national security. A disruption—whether through trade restrictions, political instability, or sabotage—can cripple industries dependent on rare materials. A key takeaway is the need for a “bulletproof supply chain,” where critical materials are sourced, processed, and manufactured within allied nations to mitigate risk.AI, decentralization, and the next era of industrial warfare – As AI becomes more embedded in military decision-making and logistics, the balance between centralization and decentralization is being redefined. AI-driven drones, automated mining, and predictive supply chain management are reshaping how nations prepare for conflict. However, this also introduces risks, as AI operates within unpredictable “black boxes,” potentially leading to unintended consequences in warfare and resource management.
On this episode of Crazy Wisdom, Stewart Alsop speaks with Dimetri Kofinas, host of Hidden Forces, about the transition from an "age of answers" to an "age of questions." They explore the implications of AI and large language models on human cognition, the role of narrative in shaping society, and the destabilizing effects of trauma on belief systems. The conversation touches on media manipulation, the intersection of technology and consciousness, and the existential dilemmas posed by transhumanism. For more from Dimetri, check out hiddenforces.io (https://hiddenforces.io).Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:10 The Age of Questions: A New Era00:58 Exploring Human Uniqueness with AI04:30 The Role of Podcasting in Knowledge Discovery09:23 The Impact of Trauma on Belief Systems12:26 The Evolution of Propaganda16:42 The Centralization vs. Decentralization Debate20:02 Navigating the Information Age21:26 The Nature of Free Speech in the Digital Era26:56 Cognitive Armor: Developing Resilience30:05 The Rise of Intellectual Dark Web Celebrities31:05 The Role of Media in Shaping Narratives32:38 Questioning Authority and Truth34:35 The Nature of Consensus and Scientific Truth36:11 Simulation Theory and Perception of Reality38:13 The Complexity of Consciousness47:06 Argentina's Libertarian Experiment51:33 Transhumanism and the Future of Humanity53:46 The Power Dynamics of Technological Elites01:01:13 Concluding Thoughts and ReflectionsKey InsightsWe are shifting from an age of answers to an age of questions. Dimetri Kofinas and Stewart Alsop discuss how society is moving away from a model where authority figures and institutions provide definitive answers and toward one where individuals must critically engage with uncertainty. This transition is both exciting and destabilizing, as it forces us to rethink long-held assumptions and develop new ways of making sense of the world.AI is revealing the limits of human uniqueness. Large language models (LLMs) can replicate much of what we consider intellectual labor, from conversation to knowledge retrieval, forcing us to ask: What remains distinctly human? The discussion suggests that while AI can mimic thought patterns and compress vast amounts of information, it lacks the capacity for true embodied experience, creative insight, and personal revelation—qualities that define human consciousness.Narrative control is a fundamental mechanism of power. Whether through media, social networks, or propaganda, the ability to shape narratives determines what people believe to be true. The conversation highlights how past and present authorities—from Edward Bernays' early propaganda techniques to modern AI-driven social media algorithms—have leveraged this power to direct public perception and behavior, often with unforeseen consequences.Trauma is a tool for reshaping belief systems. Societal upheavals, such as 9/11, the 2008 financial crisis, and COVID-19, create psychological fractures that leave people vulnerable to radical shifts in worldview. In moments of crisis, individuals seek order, making them more susceptible to new ideologies—whether grounded in reality or driven by manipulation. This dynamic plays a key role in how misinformation and conspiracy theories gain traction.The free market alone cannot regulate the modern information ecosystem. While libertarian ideals advocate for minimal intervention, Kofinas argues that the chaotic nature of unregulated information systems—especially social media—leads to dangerous feedback loops that amplify division and disinformation. He suggests that democratic institutions must play a role in establishing transparency and oversight to prevent unchecked algorithmic manipulation.Transhumanism is both a technological pursuit and a philosophical problem. The belief that human consciousness can be uploaded or replicated through technology is based on a materialist assumption that denies the deeper mystery of subjective experience. The discussion critiques the arrogance of those who claim we can fully map and transfer human identity onto machines, highlighting the philosophical and ethical dilemmas this raises.The struggle between centralization and decentralization is accelerating. The digital age is simultaneously fragmenting traditional institutions while creating new centers of power. AI, geopolitics, and financial systems are all being reshaped by this tension. The conversation explores how Argentina's libertarian experiment under Javier Milei exemplifies this dynamic, raising questions about whether decentralization can work without strong institutional foundations or whether chaos inevitably leads back to authoritarianism.