Have you found yourself experiencing a lot of stress recently? Do you want to be more creative? Stewart Alsop interviews successful creatives to find out how they work with and manage the stress that is inherent in creative work. He investigates the questions: "What is the connection between stress…
mindfulness, unfiltered, yoga, profound, spiritual, deeper, phenomenal, authentic, highly recommended, interested, journey, interviews, insightful, worth, excited, topics, guests, thank, life.
Listeners of Crazy Wisdom that love the show mention: stewart,The Crazy Wisdom podcast is a refreshing and thought-provoking exploration into the complexities of life and consciousness. Hosted by Stewart, the podcast delves into various topics such as spirituality, personal development, and mindfulness. Stewart's ability to navigate deep conversations with his guests is commendable, creating a space where listeners can gain insights from different perspectives.
One of the best aspects of this podcast is the wide range of guests that Stewart brings on. From renowned spiritual teachers to everyday individuals on their own personal journeys, each episode offers unique insights and wisdom. The interviews are raw, authentic, and exploratory, providing listeners with an opportunity to connect with the struggles and triumphs of others. The episodes featuring Kapil Gupta are particularly profound, as they showcase unfiltered truths that challenge societal norms.
Furthermore, Stewart's humble demeanor and genuine curiosity shine through in his interviews. He asks thought-provoking questions that encourage his guests to delve deeper into their experiences and beliefs. The podcast is a treasure trove of knowledge for those interested in first principles thinking or seeking a higher vibration lifestyle.
However, one potential drawback of this podcast is its niche appeal. The topics covered may not resonate with everyone, as they delve into esoteric concepts that require an open mind. Additionally, some episodes may come off as too abstract or philosophical for those seeking more concrete advice or practical tips.
In conclusion, The Crazy Wisdom podcast is a must-listen for anyone on a journey of self-discovery and personal growth. Stewart's ability to facilitate meaningful conversations with a diverse array of guests allows listeners to explore complex topics in an accessible way. While some episodes may not appeal to everyone's preferences or interests, there is undoubtedly valuable wisdom to be gained from engaging with this podcast.
I, Stewart Alsop, am thrilled to welcome Xathil of Poliebotics to this episode of Crazy Wisdom, for what is actually our second take, this time with a visual surprise involving a fascinating 3D-printed Bauta mask. Xathil is doing some truly groundbreaking work at the intersection of physical reality, cryptography, and AI, which we dive deep into, exploring everything from the philosophical implications of anonymity to the technical wizardry behind his "Truth Beam."Check out this GPT we trained on the conversationTimestamps01:35 Xathil explains the 3D-printed Bauta Mask, its Venetian origins, and its role in enabling truth through anonymity via his project, Poliepals.04:50 The crucial distinction between public identity and "real" identity, and how pseudonyms can foster truth-telling rather than just conceal.10:15 Addressing the serious risks faced by crypto influencers due to public displays of wealth and the broader implications for online identity.15:05 Xathil details the core Poliebotics technology: the "Truth Beam," a projector-camera system for cryptographically timestamping physical reality.18:50 Clarifying the concept of "proof of aliveness"—verifying a person is currently live in a video call—versus the more complex "proof of liveness."21:45 How the speed of light provides a fundamental advantage for Poliebotics in outmaneuvering AI-generated deepfakes.32:10 The concern of an "inversion," where machine learning systems could become dominant over physical reality by using humans as their actuators.45:00 Xathil's ambitious project to use Poliebotics for creating cryptographically verifiable records of biodiversity, beginning with an enhanced Meles trap.Key InsightsAnonymity as a Truth Catalyst: Drawing from Oscar Wilde, the Bauta mask symbolizes how anonymity or pseudonyms can empower individuals to reveal deeper, more authentic truths. This challenges the notion that masks only serve to hide, suggesting they can be tools for genuine self-expression.The Bifurcation of Identity: In our digital age, distinguishing between one's core "real" identity and various public-facing personas is increasingly vital. This separation isn't merely about concealment but offers a space for truthful expression while navigating public life.The Truth Beam: Anchoring Reality: Poliebotics' "Truth Beam" technology employs a projector-camera system to cast cryptographic hashes onto physical scenes, recording them and anchoring them to a blockchain. This aims to create immutable, verifiable records of reality to combat the rise of sophisticated deepfakes.Harnessing Light Speed Against Deepfakes: The fundamental defense Poliebotics offers against AI-generated fakes is the speed of light. Real-world light reflection for capturing projected hashes is virtually instantaneous, whereas an AI must simulate this complex process, a task too slow to keep up with real-time verification.The Specter of Humans as AI Actuators: A significant future concern is the "inversion," where AI systems might utilize humans as unwitting agents to achieve their objectives in the physical world. By manipulating incentives, AIs could effectively direct human actions, raising profound questions about agency.Towards AI Symbiosis: The ideal future isn't a human-AI war or complete technological asceticism, but a cooperative coexistence between nature, humanity, and artificial systems. This involves developing AI responsibly, instilling human values, and creating systems that are non-threatening and beneficial.Contact Information* Polybotics' GitHub* Poliepals* Xathil: Xathil@ProtonMail.com
I, Stewart Alsop, had a fascinating conversation on this episode of Crazy Wisdom with Mallory McGee, the founder of Chroma, who is doing some really interesting work at the intersection of AI and crypto. We dove deep into how these two powerful technologies might reshape the internet and our interactions with it, moving beyond the hype cycles to what's truly foundational.Check out this GPT we trained on the conversationTimestamps00:00 The Intersection of AI and Crypto01:28 Bitcoin's Origins and Austrian Economics04:35 AI's Centralization Problem and the New Gatekeepers09:58 Agent Interactions and Decentralized Databases for Trustless Transactions11:11 AI as a Prosthetic Mind and the Interpretability Challenge15:12 Deterministic Blockchains vs. Non-Deterministic AI Intents18:44 The Demise of Traditional Apps in an Agent-Driven World35:07 Property Rights, Agent Registries, and Blockchains as BackendsKey InsightsCrypto's Enduring Fundamentals: Mallory emphasized that while crypto prices are often noise, the underlying fundamentals point to a new, long-term cycle for the Internet itself. It's about decentralizing control, a core principle stemming from Bitcoin's original blend of economics and technology.AI's Centralization Dilemma: We discussed the concerning trend of AI development consolidating power within a few major players. This, as Mallory pointed out, ironically mirrors the very centralization crypto aims to dismantle, potentially shifting control from governments to a new set of tech monopolies.Agents are the Future of Interaction: Mallory envisions a future where most digital interactions aren't human-to-LLM, but agent-to-agent. These autonomous agents will require decentralized, trustless platforms like blockchains to transact, hold assets, and communicate confidentially.Bridging Non-Deterministic AI with Deterministic Blockchains: A fascinating challenge Mallory highlighted is translating the non-deterministic "intents" of AI (e.g., an agent's goal to "get me a good return on spare cash") into the deterministic transactions required by blockchains. This translation layer is crucial for agents to operate effectively on-chain.The Decline of Traditional Apps: Mallory made a bold claim that traditional apps and web interfaces are on their way out. As AI agents become capable of generating personalized interfaces on the fly, the need for standardized, pre-built apps will diminish, leading to a world where software is hyper-personalized and often ephemeral.Blockchains as Agent Backbones: We explored the intriguing idea that blockchains might be inherently better suited for AI agents than for direct human use. Their deterministic nature, ability to handle assets, and potential for trustless reputation systems make them ideal backends for an agent-centric internet.Trust and Reputation for Agents: In a world teeming with AI agents, establishing trust is paramount. Mallory suggested that on-chain mechanisms like reward and slashing systems can be used to build verifiable reputation scores for agents, helping us discern trustworthy actors from malicious ones without central oversight.The Battle for an Open AI Future: The age-old battle between open and closed source is playing out again in the AI sphere. While centralized players currently seem to dominate, Mallory sees hope in the open-source AI movement, which could provide a crucial alternative to a future controlled by a few large entities.Contact Information* Twitter: @McGee_noodle* Company: Chroma
I, Stewart Alsop, welcomed Ben Roper, CEO and founder of Play Culture, to this episode of Crazy Wisdom for a fascinating discussion. We kicked things off by diving into Ben's reservations about AI, particularly its impact on creative authenticity, before exploring his innovative project, Play Culture, which aims to bring tactical outdoor games to adults. Ben also shared his journey of teaching himself to code and his philosophy on building experiences centered on human connection rather than pure profit.Check out this GPT we trained on the conversationTimestamps00:55 Ben Roper on AI's impact on creative authenticity and the dilution of the author's experience.03:05 The discussion on AI leading to a "simulation of experience" versus genuine, embodied experiences.08:40 Stewart Alsop explores the nuances of authenticity, honesty, and trust in media and personal interactions.17:53 Ben discusses how trust is invaluable and often broken by corporate attempts to feign it.20:22 Ben begins to explain the Play Culture project, discussing the community's confusion about its non-monetized approach, leading into his philosophy of "designing for people, not money."37:08 Ben elaborates on the Play Culture experience: creating tactical outdoor games designed specifically for adults.45:46 A comparison of Play Culture's approach with games like Pokémon GO, emphasizing "gentle technology."58:48 Ben shares his thoughts on the future of augmented reality and designing humanistic experiences.1:02:15 Ben describes "Pirate Gold," a real-world role-playing pirate simulator, as an example of Play Culture's innovative games.1:06:30 How to find Play Culture and get involved in their events worldwide.Key InsightsAI and Creative Authenticity: Ben, coming from a filmmaking background, views generative AI as a collaborator without a mind, which disassociates work from the author's unique experience. He believes art's value lies in being a window into an individual's life, a quality diluted by AI's averaged output.Simulation vs. Real Experience: We discussed how AI and even some modern technologies offer simulations of experiences (like VR travel or social media connections) that lack the depth and richness of real-world engagement. These simulations can be easier to access but may leave individuals unfulfilled and unaware of what they're missing.The Quest for Honesty Over Authenticity: I posited that while people claim to want authenticity, they might actually desire honesty more. Raw, unfiltered authenticity can be confronting, whereas honesty within a framework of trust allows for genuine connection without necessarily exposing every raw emotion.Trust as Unpurchasable Value: Ben emphasized that trust is one of the few things that cannot be bought; it must be earned and is easily broken. This makes genuine trust incredibly valuable, especially in a world where corporate entities often feign trustworthiness for transactional purposes.Designing for People, Not Money: Ben shared his philosophy behind Play Culture, which is to "design for people, not money." This means prioritizing genuine human experience, joy, and connection over optimizing for profit, believing that true value, including financial sustainability, can arise as a byproduct of creating something meaningful.The Need for Adult Play: Play Culture aims to fill a void by creating tactical outdoor games specifically designed for adult minds and social dynamics. This goes beyond childlike play or existing adult games like video games and sports, focusing on socially driven gameplay, strategy, and unique adult experiences.Gentle Technology in Gaming: Contrasting with AR-heavy games like Pokémon GO, Play Culture advocates for "gentle technology." The tech (like a mobile app) supports gameplay by providing information or connecting players, but the core interaction happens through players' senses and real-world engagement, not primarily through a screen.Real-World Game Streaming as the Future: Ben's vision for Play Culture includes moving towards real-world game streaming, akin to video game streaming on Twitch, but featuring live-action tactical games played in real cities. This aims to create a new genre of entertainment showcasing genuine human interaction and strategy.Contact Information* Ben Roper's Instagram* Website: playculture.com
I, Stewart Alsop, am thrilled to welcome Leon Coe back to the Crazy Wisdom Podcast for a second deep dive. This time, we journeyed from the Renaissance and McLuhan's media theories straight into the heart of theology, church history, and the very essence of faith, exploring how ancient wisdom and modern challenges intertwine. It was a fascinating exploration, touching on everything from apostolic succession to the nature of sin and the search for meaning in a secular age.Check out this GPT we trained on the conversationTimestamps00:43 I kick things off by asking Leon about the Renaissance, Martin Luther, and the profound impact of the printing press on religion.01:02 Leon Coe illuminates Marshall McLuhan's insights on how technologies, like print, shape our consciousness and societal structures.03:25 Leon takes us back to early Church history, discussing the Church's life and sacraments, including the Didache, well before the Bible's formal canonization.06:00 Leon explains the scriptural basis for Peter as the "rock" of the Church, the foundation for the office of the papacy.07:06 We delve into the concept of apostolic succession, where Leon describes the unbroken line of ordination from the apostles.11:57 Leon clarifies Jesus's relationship to the Law, referencing Matthew 5:17 where Jesus states he came to fulfill, not abolish, the Law.12:20 I reflect on the intricate dance of religion, culture, and technology, and the sometimes bewildering, "cosmic joke" nature of our current reality.16:46 I share my thoughts on secularism potentially acting as a new, unacknowledged religion, and how it often leaves a void in our search for purpose.19:28 Leon introduces what he calls the "most terrifying verse in the Bible," Matthew 7:21, emphasizing the importance of doing the Father's will.24:21 Leon discusses the Eucharist as the new Passover, drawing connections to Jewish tradition and Jesus's institution of this central sacrament.Key InsightsTechnology's Shaping Power: McLuhan's Enduring Relevance. Leon highlighted how Marshall McLuhan's theories are crucial for understanding history. The shift from an oral, communal society to an individualistic one via the printing press, for instance, directly fueled the Protestant Reformation by enabling personal interpretation of scripture, moving away from a unified Church authority.The Early Church's Foundation: Life Before the Canon. Leon emphasized that for roughly 300 years before the Bible was officially canonized, the Church was actively functioning. It had established practices, sacraments (like baptism and the Eucharist), and teachings, as evidenced by texts like the Didache, demonstrating a lived faith independent of a finalized scriptural canon.Peter and Apostolic Succession: The Unbroken Chain. A core point from Leon was Jesus designating Peter as the "rock" upon which He would build His Church. This, combined with the principle of apostolic succession—the laying on of hands in an unbroken line from the apostles—forms the Catholic and Orthodox claim to authoritative teaching and sacramental ministry.Fulfillment, Not Abolition: Jesus and the Law. Leon clarified that Jesus, as stated in Matthew 5:17, came not to abolish the Old Testament Law but to fulfill it. This means the Mosaic Law finds its ultimate meaning and completion in Christ, who institutes a New Covenant.Secularism's Spiritual Vacuum: A Modern Religion? I, Stewart, posited that modern secularism, while valuing empiricism, often acts like a new religion that explicitly rejects the spiritual and miraculous. Leon agreed this can lead to a sense of emptiness, as humans inherently long for purpose and connection to a creator, a void secularism struggles to fill.The Criticality of God's Will: Beyond Lip Service. Leon pointed to Matthew 7:21 ("Not everyone who says to me, ‘Lord, Lord,' will enter the kingdom of heaven...") as a stark reminder. True faith requires more than verbal profession; it demands actively doing the will of the Father, implying that actions and heartfelt commitment are essential for salvation.The Eucharist as Central: The New Passover and Real Presence. Leon passionately explained the Eucharist as the new Passover, instituted by Christ. Referencing John 6, he stressed the Catholic belief in the Real Presence—that the bread and wine become the literal body and blood of Christ—which is essential for spiritual life and communion with God.Reconciliation and Purity: Restoring Communion. Leon explained the Sacrament of Reconciliation (Confession) as a vital means, given through the Church's apostolic ministry, to restore communion with God after sin. He also touched upon Purgatory as a state of purification for overcoming attachments to sin, ensuring one is perfectly ordered to God before entering Heaven.Contact Information* Leon Coe: @LeonJCoe on Twitter (X)
I, Stewart Alsop, welcomed Woody Wiegmann to this episode of Crazy Wisdom, where we explored the fascinating and sometimes unsettling landscape of Artificial Intelligence. Woody, who is deeply involved in teaching AI, shared his insights on everything from the US-China AI race to the radical transformations AI is bringing to education and society at large.Check out this GPT we trained on the conversationTimestamps01:17 The AI "Cold War": Discussing the intense AI development race between China and the US.03:04 Opaque Models & Education's Resistance: The challenge of opaque AI and schools lagging in adoption.05:22 AI Blocked in Schools: The paradox of teaching AI while institutions restrict access.08:08 Crossing the AI Rubicon: How AI users are diverging from non-users into different realities.09:00 Budgetary Constraints in AI Education: The struggle for resources like premium AI access for students.12:45 Navigating AI Access for Students: Woody's ingenious workarounds for the premium AI divide.19:15 Igniting Curiosity with AI: Students creating impressive projects, like catapult websites.27:23 Exploring Grok and AI Interaction: Debating IP concerns and engaging with AI ("Morpheus").46:19 AI's Societal Impact: AI girlfriends, masculinity, and the erosion of traditional skills.Key InsightsThe AI Arms Race: Woody highlights a "cold war of nerdiness" where China is rapidly developing AI models comparable to GPT-4 at a fraction of the cost. This competition raises questions about data transparency from both sides and the strategic implications of superintelligence.Education's AI Resistance: I, Stewart Alsop, and Woody discuss the puzzling resistance to AI within educational institutions, including outright blocking of AI tools. This creates a paradox where courses on AI are taught in environments that restrict its use, hindering practical learning for students.Diverging Realities: We explore how individuals who have crossed the "Rubicon" of AI adoption are now living in a vastly different world than those who haven't. This divergence is akin to past technological shifts but is happening at an accelerated pace, impacting how people learn, work, and perceive reality.The Fading Relevance of Traditional Coding: Woody argues that focusing on teaching traditional coding languages like Python is becoming outdated in the age of advanced AI. AI can handle much of the detailed coding, shifting the necessary skills towards understanding AI systems, effective prompting, and higher-level architecture.AI as the Ultimate Tutor: The advent of AI offers the potential for personalized, one-on-one tutoring for everyone, a far more effective learning method than traditional classroom lectures. However, this potential is hampered by institutional inertia and a lack of resources for tools like premium AI subscriptions for students.Curiosity as the AI Catalyst: Woody shares anecdotes of students, even those initially disengaged, whose eyes light up when using AI for creative projects, like designing websites on niche topics such as catapults. This demonstrates AI's power to ignite curiosity and intrinsic motivation when paired with focused goals and the ability to build.AI's Impact on Society and Skills: We touch upon the broader societal implications, including the rise of AI girlfriends addressing male loneliness and providing acceptance. Simultaneously, there's concern over the potential atrophy of critical skills like writing and debate if individuals overly rely on AI for summarization and opinion generation without deep engagement.Contact Information* Twitter/X: @RulebyPowerlaw* Listeners can search for Woody Wiegmann's podcast "Courage over convention" * LinkedIn: www.linkedin.com/in/dataovernarratives/
I, Stewart Alsop, welcomed Alex Levin, CEO and co-founder of Regal, to this episode of the Crazy Wisdom Podcast to discuss the fascinating world of AI phone agents. Alex shared some incredible insights into how AI is already transforming customer interactions and what the future holds for company agents, machine-to-machine communication, and even the nature of knowledge itself.Check out this GPT we trained on the conversation!Timestamps00:29 Alex Levin shares that people are often more honest with AI agents than human agents, especially regarding payments.02:41 The surprising persistence of voice as a preferred channel for customer interaction, and how AI is set to revolutionize it.05:15 Discussion of the three types of AI agents: personal, work, and company agents, and how conversational AI will become the main interface with brands.07:12 Exploring the shift to machine-to-machine interactions and how AI changes what knowledge humans need versus what machines need.10:56 The looming challenge of centralization versus decentralization in AI, and how Americans often prioritize experience over privacy.14:11 Alex explains how tokenized data can offer personalized experiences without compromising specific individual privacy.25:44 Voice is predicted to become the primary way we interact with brands and technology due to its naturalness and efficiency.33:21 Why AI agents are easier to implement in contact centers due to different entropy compared to typical software.38:13 How Regal ensures AI agents stay on script and avoid "hallucinations" by proper training and guardrails.46:11 The technical challenges in replicating human conversational latency and nuances in AI voice interactions.Key InsightsAI Elicits HonestyPeople tend to be more forthright with AI agents, particularly in financially sensitive situations like discussing overdue payments. Alex speculates this is because individuals may feel less judged by an AI, leading to more truthful disclosures compared to interactions with human agents.Voice is King, AI is its HeirDespite predictions of its decline, voice remains a dominant channel for customer interactions. Alex believes that within three to five years, AI will handle as much as 90% of these voice interactions, transforming customer service with its efficiency and availability.The Rise of Company AgentsThe primary interface with most brands is expected to shift from websites and apps to conversational AI agents. This is because voice is a more natural, faster, and emotive way for humans to interact, a behavior already seen in younger generations.Machine-to-Machine FutureWe're moving towards a world where AI agents representing companies will interact directly with AI agents representing consumers. This "machine-to-machine" (M2M) paradigm will redefine commerce and the nature of how businesses and customers engage.Ontology of KnowledgeAs AI systems process vast amounts of information, creating a clear "ontology of knowledge" becomes crucial. This means structuring and categorizing information so AI can understand the context and user's underlying intent, rather than just processing raw data.Tokenized Data for PrivacyA potential solution to privacy concerns is "tokenized data." Instead of providing AI with specific personal details, users could share generalized tokens (e.g., "high-intent buyer in 30s") that allow for personalized experiences without revealing sensitive, identifiable information.AI Highlights Human InconsistenciesImplementing AI often brings to light existing inconsistencies or unacknowledged issues within a company. For instance, AI might reveal discrepancies between official scripts and how top-performing human agents actually communicate, forcing companies to address these differences.Influence as a Key Human SkillIn a future increasingly shaped by AI, Sam Altman (via Alex) suggests that the ability to "influence" others will be a paramount human skill. This uniquely human trait will be vital, whether for interacting with other people or for guiding and shaping AI systems.Contact Information* Regal AI: regal.ai* Email: hello@regal.ai* LinkedIn: www.linkedin.com/in/alexlevin1/
In this episode of the Crazy Wisdom Podcast, Stewart Alsop III talks with Bobby Healy, CEO and co-founder of Manna Drone Delivery, about the evolving frontier where the digital meets the physical—specifically, the promise and challenges of autonomous drone logistics. They explore how regulatory landscapes are shaping the pace of drone delivery adoption globally, why Europe is ahead of the U.S., and what it takes to build scalable infrastructure for airborne logistics. The conversation also touches on the future of aerial mobility, the implications of automation for local commerce, and the philosophical impacts of deflationary technologies. For more about Bobby and Manna, visit mana.aero or follow Bobby on Twitter at @RealBobbyHealy.Check out this GPT we trained on the conversation!Timestamps00:00 – Stewart Alsop introduces Bobby Healy and opens with the promise vs. reality of drone tech; Healy critiques early overpromising and sets the stage for today's tech maturity.05:00 – Deep dive into FAA vs. EASA regulation, highlighting the regulatory bottleneck in the U.S. and the agility of the EU's centralized model.10:00 – Comparison of airspace complexity between the U.S. and Europe; Healy explains why drone scaling is easier in the EU's less crowded sky.15:00 – Discussion of urban vs. suburban deployment, the ground risk challenge, and why automated (not fully autonomous) operations are still standard.20:00 – Exploration of pilot oversight, the role of remote monitoring, and how the system is already profitable per flight.25:00 – LLMs and vibe coding accelerate software iteration; Healy praises AI-powered development, calling it transformative for engineers and founders.30:00 – Emphasis on local delivery revolution; small businesses are beating Amazon with ultra-fast drone drop-offs.35:00 – Touches on Latin America's opportunity, Argentina's regulatory climate, and localized drone startups.40:00 – Clarifies noise and privacy concerns; drone presence is minimal and often unnoticed, especially in suburbs.45:00 – Final thoughts on airspace utilization, ground robots, and the deflationary effect of drone logistics on global commerce.Key InsightsDrone Delivery's Real Bottleneck is Regulation, Not Technology: While drone delivery technology has matured significantly—with off-the-shelf components now industrial-grade and reliable—the real constraint is regulatory. Bobby Healy emphasizes that in the U.S., drone delivery is several years behind Europe, not due to a lack of technological readiness, but because of a slower-moving and more complex regulatory environment governed by the FAA. In contrast, Europe benefits from a nimble, centralized aviation regulator (EASA), which has enabled faster deployment by treating regulation as the foundational "product" that allows the industry to launch.The U.S. Airspace is Inherently More Complex: Healy draws attention to the density and fragmentation of U.S. airspace as a major challenge. From private planes to hobbyist aircraft and military operations, the sheer volume and variety of stakeholders complicate the regulatory path. Even though the FAA has created a solid framework (e.g., Part 108), implementing and scaling it across such a vast and fragmented system is slow. This puts the U.S. at a disadvantage, even though it holds the largest market potential for drone delivery.Drone Logistics is Already Economically Viable at a Small Scale: Unlike many emerging technologies, drone delivery is already profitable on a per-flight basis. Healy notes that Manna's drones, operating primarily in suburban areas, achieve unit economics that allow them to scale without needing to replace human pilots yet. These remote pilots still play a role for oversight and legal compliance, but full autonomy is technically ready and likely to be adopted within a few years. This puts Manna ahead of competitors, including some well-funded giants.Suburban and Rural Areas Will Benefit Most from Drone Delivery First: The initial commercial impact of drone delivery is strongest in high-density suburban regions where traditional logistics are inefficient. These environments allow for easy takeoff and landing without the spatial constraints of dense urban cores. Healy explains that rooftops, parking lots, and small-scale launch zones can already support dozens of flights per hour. Over time, this infrastructure could rebalance urban and rural economies by enabling local producers and retailers to compete effectively with large logistics giants.Drone Logistics Will Redefine Local Commerce: One of the most compelling outcomes discussed is how drone delivery changes the playing field for small, local businesses. Healy shares an example of a local Irish bookstore now beating Amazon on delivery speed thanks to Manna's platform. With a six-minute turnaround from purchase to backyard delivery, drone logistics could dramatically lower barriers to entry for small businesses, giving them access to modern fulfillment without needing massive infrastructure.Massive Deflation in Logistics Could Lead to Broader Economic Shifts: Healy argues that drone delivery, like AI, will drive a deflationary wave across sectors. By reducing the marginal cost of transportation to near zero, this technology could increase consumption and economic activity while also creating new jobs and opportunities in non-urban areas. This shift resembles the broad societal transformation brought on by the spread of electricity in the early 20th century—ubiquitous, enabling, and invisible.Drones Could Transform Defense Strategy Through “Mutually Assured Defense”: In a thought-provoking segment, Healy discusses how cheap, scalable drone technology might shift the geopolitical landscape. Instead of focusing solely on destruction, drones could enable countries to build robust “defense clouds” over their borders—creating a deterrent similar to nuclear weapons but more accessible and less catastrophic. He proposes that wide-scale deployment of autonomous defensive drones could prevent conflicts by making invasion logistically impossible.
On this episode of Crazy Wisdom, I, Stewart Alsop, spoke with Neil Davies, creator of the Extelligencer project, about survival strategies in what he calls the “Dark Forest” of modern civilization — a world shaped by cryptographic trust, intelligence-immune system fusion, and the crumbling authority of legacy institutions. We explored how concepts like zero-knowledge proofs could defend against deepening informational warfare, the shift toward tribal "patchwork" societies, and the challenge of building a post-institutional framework for truth-seeking. Listeners can find Neil on Twitter as @sigilante and explore more about his work in the Extelligencer substack.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction of Neil Davies and the Extelligencer project, setting the stage with Dark Forest theory and operational survival concepts.05:00 Expansion on Dark Forest as a metaphor for Internet-age exposure, with examples like scam evolution, parasites, and the vulnerability of modern systems.10:00 Discussion of immune-intelligence fusion, how organisms like anthills and the Portuguese Man o' War blend cognition and defense, leading into memetic immune systems online.15:00 Introduction of cryptographic solutions, the role of signed communications, and the growing importance of cryptographic attestation against sophisticated scams.20:00 Zero-knowledge proofs explained through real-world analogies like buying alcohol, emphasizing minimal information exposure and future-proofing identity verification.25:00 Transition into post-institutional society, collapse of legacy trust structures, exploration of patchwork tribes, DAOs, and portable digital organizations.30:00 Reflection on association vs. hierarchy, the persistence of oligarchies, and the shift from aristocratic governance to manipulated mass democracy.35:00 AI risks discussed, including trapdoored LLMs, epistemic hygiene challenges, and historical examples like gold fulminate booby-traps in alchemical texts.40:00 Controlled information flows, secular religion collapse, questioning sources of authority in a fragmented information landscape.45:00 Origins and evolution of universities, from medieval student-driven models to Humboldt's research-focused institutions, and the absorption by the nation-state.50:00 Financialization of universities, decay of independent scholarship, and imagining future knowledge structures outside corrupted legacy frameworks.Key InsightsThe "Dark Forest" is not just a cosmological metaphor, but a description of modern civilization's hidden dangers. Neil Davies explains that today's world operates like a Dark Forest where exposure — making oneself legible or visible — invites predation. This framework reshapes how individuals and groups must think about security, trust, and survival, particularly in an environment thick with scams, misinformation, and parasitic actors accelerated by the Internet.Immune function and intelligence function have fused in both biological and societal contexts. Davies draws a parallel between decentralized organisms like anthills and modern human society, suggesting that intelligence and immunity are inseparable functions in highly interconnected systems. This fusion means that detecting threats, maintaining identity, and deciding what to incorporate or reject is now an active, continuous cognitive and social process.Cryptographic tools are becoming essential for basic trust and survival. With the rise of scams that mimic legitimate authority figures and institutions, Davies highlights how cryptographic attestation — and eventually more sophisticated tools like zero-knowledge proofs — will become fundamental. Without cryptographically verifiable communication, distinguishing real demands from predatory scams may soon become impossible, especially as AI-generated deception grows more convincing.Institutions are hollowing out, but will not disappear entirely. Rather than a sudden collapse, Davies envisions a future where legacy institutions like universities, corporations, and governments persist as "zombie" entities — still exerting influence but increasingly irrelevant to new forms of social organization. Meanwhile, smaller, nimble "patchwork" tribes and digital-first associations will become more central to human coordination and identity.Modern universities have drifted far from their original purpose and structure. Tracing the history from medieval student guilds to Humboldt's 19th-century research universities, Davies notes that today's universities are heavily compromised by state agendas, mass democracy, and financialization. True inquiry and intellectual aloofness — once core to the ideal of the university — now require entirely new, post-institutional structures to be viable.Artificial intelligence amplifies both opportunity and epistemic risk. Davies warns that large language models (LLMs) mainly recombine existing information rather than generate truly novel insights. Moreover, they can be trapdoored or poisoned at the data level, introducing dangerous, invisible vulnerabilities. This creates a new kind of "Dark Forest" risk: users must assume that any received information may carry unseen threats or distortions.There is no longer a reliable central authority for epistemic trust. In a fragmented world where Wikipedia is compromised, traditional media is polarized, and even scientific institutions are politicized, Davies asserts that we must return to "epistemic hygiene." This means independently verifying knowledge where possible and treating all claims — even from AI — with skepticism. The burden of truth-validation increasingly falls on individuals and their trusted, cryptographically verifiable networks.
On this episode, Stewart Alsop talks with Suman Kanuganti, founder of Personal.ai and a pioneer in AI for accessibility and human-machine collaboration. Together, they explore how Suman's journey from launching Aira to building Personal.ai reflects a deeper mission of creating technology that enhances memory, communication, and personal empowerment. They touch on entrepreneurship, inclusive design, and the future of AI as a personal extension of human potential. For more information, visit the Personal.ai website or connect with Suman on LinkedIn.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to Suman Kanuganti and the vision behind Personal.ai, setting the stage with AI for accessibility and personal empowerment.05:00 Discussing the startup journey, the leap from corporate life to entrepreneurship, and the founding of Aira with a focus on inclusive technology.10:00 Deep dive into communication empowerment, how Aira built independence for the blind community, and lessons learned from solving real-world problems.15:00 Transitioning from Aira to Personal.ai, exploring memory extension and the future of personal communication through AI models.20:00 Addressing privacy, ownership of personal data, and why trust is fundamental in the development of personalized AI systems.25:00 Vision of human-machine collaboration, future scenarios where AI supports memory, creativity, and human potential without replacing human agency.30:00 Closing reflections on entrepreneurship, building technology with deep purpose, and how inclusive design drives innovation for everyone.Key InsightsPersonalized AI is the Next Evolution in Human Communication: Suman Kanuganti emphasizes that AI is moving beyond generic tools and into deeply personal territory, where each individual can have an AI modeled after their own thoughts, memories, and style of communication. This evolution is aimed at making technology an extension of the self rather than a replacement.Accessibility Technologies Have Broader Applications: Through his work with Aira, Suman discovered that building tools for accessibility often results in innovations that serve a much wider audience. By designing with people with disabilities in mind, entrepreneurs can create more universally empowering technologies that enhance independence for everyone.Entrepreneurship Requires a Deep Sense of Purpose: Suman's transition from corporate engineering to entrepreneurship was fueled by a personal desire to create meaningful change. He highlights that a strong mission—like empowering individuals through technology—helps sustain entrepreneurs through the inevitable challenges and uncertainties of building startups.Memory Is a Key Frontier for AI Development: One of the core ideas discussed is that memory preservation and recall is an essential human function that AI can augment. Personal.ai aims to assist individuals by organizing and retrieving personal memories and knowledge, offering a future where mental workload is reduced without losing personal agency.Building Trust Is Critical in Personal AI: Suman stresses that for AI to become truly personal and trusted, users must retain ownership and control over their data. Personal.ai is designed with privacy and individual autonomy at its core, reflecting a future where users dictate how their information is stored, accessed, and shared.The Best Innovations Come from Solving Specific, Real Problems: Rather than chasing trends, Suman advocates for entrepreneurs to focus on tangible problems they understand deeply. His success with Aira stemmed from addressing a clear need in the blind community, and that same principle now drives the mission behind Personal.ai—addressing the growing problem of information overload and memory fragmentation.Human-AI Symbiosis Will Define the Future: Suman paints a future where humans and AI work symbiotically, each complementing the other's strengths. Instead of replacing human intelligence, the best AI systems will support cognitive functions like memory, creativity, and communication, ultimately expanding what individuals can achieve personally and professionally.
In this episode of the Crazy Wisdom Podcast, I, Stewart Alsop III, speak with David Packham, CEO and co-founder of Chintai, about the real-world implications of tokenizing assets—from real estate and startup equity to institutional finance and beyond. David shares insights from his time inside Goldman Sachs during the 2008 crash, his journey into blockchain starting in 2016, and how Chintai is now helping reshape the financial system through compliant, blockchain-based infrastructure. We talk about the collapse of institutional trust, the weirdness of meme coins, the possible obsolescence of IPOs, and the deeper societal shifts underway. For more on David and Chintai, check out chintai.io and chintainexus.com.Check out this GPT we trained on the conversation!Timestamps00:00 – David Packham introduces Chintai and explains the vision of tokenizing real world assets, highlighting the failure of early promises and the need for real transformation in finance. 05:00 – The conversation turns to accredited investors, regulatory controls, and how Chintai ensures compliance while preserving self-custody and smart contract-level restrictions. 10:00 – Discussion of innovative asset models like yield-bearing tokens tied to Manhattan real estate and tokenized private funds, showing how commercial use cases are overtaking DeFi gimmicks. 15:00 – Packham unpacks how liquidity is reshaping startup equity, potentially making IPOs obsolete by offering secondary markets and early investor exits through tokenization. 20:00 – The focus shifts to global crypto hubs. Singapore's limitations, US entrepreneurial resurgence, and Hong Kong's return to crypto leadership come up. 25:00 – Stewart and David discuss the broader decentralization of institutions, including government finance on blockchain, and the surprising effect of CBDCs in China. 30:00 – They explore the cultural dimensions of decentralization, including the network state, societal decline, and the importance of shared values for cohesion. 35:00 – Wrapping up, they touch on the philosophy of investment vs. speculation, the corruption of fiat systems, and the potential for real-world assets to stabilize crypto portfolios.Key InsightsTokenization is transforming access to financial markets: David Packham explains how tokenizing real-world assets—like real estate, private debt, and startup equity—can unlock previously illiquid sectors. Through blockchain, assets become tradable, accessible, and transparent, with innovations like fractional ownership and yield-bearing tokens making markets more efficient. Chintai, his company, enables this transformation by providing compliant infrastructure for institutions and investors to engage with these assets securely.The era of IPOs may be nearing its end: Packham suggests that traditional IPOs, with their delayed liquidity and gatekeeping, are becoming obsolete. With blockchain, companies can now tokenize equity and provide liquidity earlier in their lifecycle. This changes the game for startups and investors alike, enabling ongoing access to investment opportunities and exits without needing to go public in the conventional sense.The crypto industry is maturing beyond speculation: Reflecting on the shift from the ideologically driven early days of crypto to the speculative fervor of ICOs, NFTs, and meme coins, Packham calls for a return to fundamentals. He envisions a future where crypto supports real economic activity, especially through projects that build infrastructure for compliant, meaningful use cases. Degenerate gambling, he argues, may coexist with more serious ventures, but the latter will shape the future.Decentralization is challenging traditional power structures: The conversation touches on how blockchain can reduce favoritism and control in financial systems. Packham highlights how tools like permissioned ledgers and smart contracts can enforce fairness, resist corruption, and enhance access. He contrasts this with legacy systems, which often protect elite interests, drawing on his own experience at Goldman Sachs during the 2008 crisis.Global leadership in crypto is shifting: While Singapore positioned itself as a key crypto hub, Packham notes its lack of entrepreneurial culture compared to the U.S. and China. He observes that regulatory openness is important, but business culture and capital depth are decisive. The U.S. has reemerged as a key player, showing renewed interest and drive, while Hong Kong and China continue to move boldly in this space.The societal impact of financial technology is profound: The episode explores how blockchain might influence governance and societal organization. From the potential tokenization of government operations to more transparent fiscal policies, Packham sees emerging possibilities for better systems—though he warns against naive techno-utopianism. He reflects on the dual-edged nature of technologies like CBDCs, which can enhance transparency but also increase state control.Cultural values matter in shaping the future: The conversation ends on a philosophical note, examining the tension between decentralization, cultural identity, and immigration. Packham emphasizes that shared values and cultural cohesion are crucial for societal stability. He challenges idealistic notions like the “network state” by pointing out that human nature and cultural alignment still play a major role in the success or failure of social systems.
In this episode, I, Stewart Alsop III, sat down with AJ Beckner to walk through how non-technical founders can build a deeper understanding of their codebase using AI tools like Cursor and Claude. We explored the reality of navigating an IDE as a beginner, demystified Git and GitHub version control, and walked through practical ways to clone a repo, open it safely in Cursor, and start asking questions about your app's structure and functionality without breaking anything. AJ shared his curiosity about finding specific text in his app and how to track that down across branches. We also looked at using AI-powered tools for tasks like dependency analysis and visualizing app architecture, with a focus on empowering non-devs to gain confidence and clarity in their product's code. You can connect with AJ through Twitter at @thisistheaj.Check out this GPT we trained on the conversation!Timestamps00:00 – Stewart introduces Cursor as a fork of Visual Studio Code and explains the concept of an IDE to AJ, who has zero prior experience. They talk about the complexity of coding and the importance of developer curiosity.05:00 – They walk through cloning a GitHub repository using the git clone command. Stewart highlights that AJ won't break anything and introduces the idea of a local playground for exploration.10:00 – Stewart explains Git vs GitHub, the purpose of version control, and how to use the terminal for navigation. They begin setting up the project in Cursor using the terminal rather than GUI options.15:00 – They realize only a README was cloned, leading to a discussion about branches—specifically the difference between main and development branches—and how to clone the right one.20:00 – Using git fetch, they get access to the development branch. Stewart explains how to disconnect from Git safely to avoid pushing changes.25:00 – AJ and Stewart begin exploring Cursor's AI features, including the chat interface. Stewart encourages AJ to start asking natural-language questions about the app structure.30:00 – Stewart demonstrates how to ask for a dependency analysis and create mermaid diagrams for visualizing how app modules are connected.35:00 – They begin identifying specific UI components, including finding and editing the home screen title. AJ uploads a screenshot to use as reference in Cursor.40:00 – They successfully trace the UI text to an index.tsx file and discuss the layout's dependency structure. AJ learns how to use search and command-F effectively.45:00 – They begin troubleshooting issues with Claude's GitHub integration, exploring Claude MCP servers and configuration files to fix broken tools.50:00 – Stewart guides AJ through using npm to install missing packages, explains what Node Package Manager is, and reflects on the interconnected nature of modern development.55:00 – Final troubleshooting steps and next steps. Stewart suggests bringing in Phil for deeper debugging. AJ reflects on how empowered he now feels navigating the codebase.Key InsightsYou don't need to be a developer to understand your app's codebase: AJ Beckner starts the session with zero familiarity with IDEs, but through Stewart's guidance, he begins navigating Cursor and GitHub confidently. The key idea is that non-technical founders can develop real intuition about their code—enough to communicate better with developers, find what they need, and build trust with the systems behind their product.Cursor makes AI-native development accessible to beginners: One of the biggest unlocks in this episode is seeing how Cursor, a VS Code fork with AI baked in, can answer questions about your codebase in plain English. By cloning the GitHub repo and indexing it, AJ is able to ask, “Where do I change this text in the app?” and get direct, actionable guidance. Stewart points out that this shifts the role of a founder from passively waiting on answers to actively exploring and editing.Version control doesn't have to be scary—with the right framing: Git and GitHub come across as overwhelming to many non-engineers, but Stewart breaks it down simply: Git is the local system that helps keep changes organized and non-destructive, and GitHub is the cloud-based sharing tool layered on top. Together, they allow safe experimentation, like cloning a development branch and disconnecting it from the main repo to create a playground environment.Branching strategies reflect how work gets done behind the scenes: The episode includes a moment of discovery: AJ cloned the main branch and only got a README. Stewart explains that the real work often lives in a “development” branch, while “main” is kept stable for production. Understanding this distinction helps AJ (and listeners) know where to look when trying to understand how features are actually being built and tested.Command line basics give you superpowers: Rather than relying solely on visual tools, Stewart introduces AJ to the terminal—explaining simple commands like cd, git clone, and git fetch—and emphasizes that the terminal has been the backbone of developer work for decades. It's empowering to learn that you can use just a few lines of text to download and explore an entire app.Modern coding is less about code and more about managing complexity: A recurring theme in the conversation is the sheer number of dependencies, frameworks, and configuration files that make up any modern app. Stewart compares this to a reflection of modern life—interconnected and layered. Understanding this complexity (rather than being defeated by it) becomes a mindset that AJ embraces as part of becoming technically fluent.AI will keep lowering the bar to entry, but learning fundamentals still matters: Stewart shares how internal OpenAI coding models went from being some of the worst performers two years ago to now ranking among the top 50 in the world. While this progress promises an easier future for non-devs, Stewart emphasizes the value of understanding what's happening under the hood. Tools like Claude and Cursor are incredibly powerful, but knowing what they're doing—and when to be skeptical—is still key.
On this episode of the Crazy Wisdom podcast, I, Stewart Alsop, sat down once again with Aaron Lowry for our third conversation, and it might be the most expansive yet. We touched on the cultural undercurrents of transhumanism, the fragile trust structures behind AI and digital infrastructure, and the potential of 3D printing with metals and geopolymers as a material path forward. Aaron shared insights from his hands-on restoration work, our shared fascination with Amish tech discernment, and how course-correcting digital dependencies can restore sovereignty. We also explored what it means to design for long-term human flourishing in a world dominated by misaligned incentives. For those interested in following Aaron's work, he's most active on Twitter at @Aaron_Lowry.Check out this GPT we trained on the conversation!Timestamps00:00 – Stewart welcomes Aaron Lowry back for his third appearance. They open with reflections on cultural shifts post-COVID, the breakdown of trust in institutions, and a growing societal impulse toward individual sovereignty, free speech, and transparency.05:00 – The conversation moves into the changing political landscape, specifically how narratives around COVID, Trump, and transhumanism have shifted. Aaron introduces the idea that historical events are often misunderstood due to our tendency to segment time, referencing Dan Carlin's quote, “everything begins in the middle of something else.”10:00 – They discuss how people experience politics differently now due to the Internet's global discourse, and how Aaron avoids narrow political binaries in favor of structural and temporal nuance. They explore identity politics, the crumbling of party lines, and the erosion of traditional social anchors.15:00 – Shifting gears to technology, Aaron shares updates on 3D printing, especially the growing maturity of metal printing and geopolymers. He highlights how these innovations are transforming fields like automotive racing and aerospace, allowing for precise, heat-resistant, custom parts.20:00 – The focus turns to mechanical literacy and the contrast between abstract digital work and embodied craftsmanship. Stewart shares his current tension between abstract software projects (like automating podcast workflows with AI) and his curiosity about the Amish and Mennonite approach to technology.25:00 – Aaron introduces the idea of a cultural “core of integrated techne”—technologies that have been refined over time and aligned with human flourishing. He places Amish discernment on a spectrum between Luddite rejection and transhumanist acceleration, emphasizing the value of deliberate integration.30:00 – The discussion moves to AI again, particularly the concept of building local, private language models that can persistently learn about and serve their user without third-party oversight. Aaron outlines the need for trust, security, and stateful memory to make this vision work.35:00 – Stewart expresses frustration with the dominance of companies like Google and Facebook, and how owning the Jarvis-like personal assistant experience is critical. Aaron recommends options like GrapheneOS on a Pixel 7 and reflects on the difficulty of securing hardware at the chip level.40:00 – They explore software development and the problem of hidden dependencies. Aaron explains how digital systems rest on fragile, often invisible material infrastructure and how that fragility is echoed in the complexity of modern software stacks.45:00 – The concept of “always be reducing dependencies” is expanded. Aaron suggests the real goal is to reduce untrustworthy dependencies and recognize which are worth cultivating. Trust becomes the key variable in any resilient system, digital or material.50:00 – The final portion dives into incentives. They critique capitalism's tendency to exploit value rather than build aligned systems. Aaron distinguishes rivalrous games from infinite games and suggests the future depends on building systems that are anti-rivalrous—where ideas compete, not people.55:00 – They wrap up with reflections on course correction, spiritual orientation, and cultural reintegration. Stewart suggests titling the episode around infinite games, and Aaron shares where listeners can find him online.Key InsightsTranshumanism vs. Techne Integration: Aaron frames the modern moment as a tension between transhumanist enthusiasm and a more grounded relationship to technology, rooted in "techne"—practical wisdom accumulated over time. Rather than rejecting all new developments, he argues for a continuous course correction that aligns emerging technologies with deep human values like truth, goodness, and beauty. The Amish and Mennonite model of communal tech discernment stands out as a countercultural but wise approach—judging tools by their long-term effects on community, rather than novelty or entertainment.3D Printing as a Material Frontier: While most of the 3D printing world continues to refine filaments and plastic-based systems, Aaron highlights a more exciting trajectory in printed metals and geopolymers. These technologies are maturing rapidly and finding serious application in domains like Formula One, aerospace, and architectural experimentation. His conversations with others pursuing geopolymer 3D printing underscore a resurgence of interest in materially grounded innovation, not just digital abstraction.Digital Infrastructure is Physical: Aaron emphasizes a point often overlooked: that all digital systems rest on physical infrastructure—power grids, servers, cables, switches. These systems are often fragile and loaded with hidden dependencies. Recognizing the material base of digital life brings a greater sense of responsibility and stewardship, rather than treating the internet as some abstract, weightless realm. This shift in awareness invites a more embodied and ecological relationship with our tools.Local AI as a Trustworthy Companion: There's a compelling vision of a Jarvis-like local AI assistant that is fully private, secure, and persistent. For this to function, it must be disconnected from untrustworthy third-party cloud systems and trained on a personal, context-rich dataset. Aaron sees this as a path toward deeper digital agency: if we want machines that truly serve us, they need to know us intimately—but only in systems we control. Privacy, persistent memory, and alignment to personal values become the bedrock of such a system.Dependencies Shape Power and Trust: A recurring theme is the idea that every system—digital, mechanical, social—relies on a web of dependencies. Many of these are invisible until they fail. Aaron's mantra, “always be reducing dependencies,” isn't about total self-sufficiency but about cultivating trustworthy dependencies. The goal isn't zero dependence, which is impossible, but discerning which relationships are resilient, personal, and aligned with your values versus those that are extractive or opaque.Incentives Must Be Aligned with the Good: A core critique is that most digital services today—especially those driven by advertising—are fundamentally misaligned with human flourishing. They monetize attention and personal data, often steering users toward addiction or ...
In this episode of Crazy Wisdom, Stewart Alsop talks with Will Bickford about the future of human intelligence, the exocortex, and the role of software as an extension of our minds. Will shares his thinking on brain-computer interfaces, PHEXT (a plain text protocol for structured data), and how high-dimensional formats could help us reframe the way we collaborate and think. They explore the abstraction layers of code and consciousness, and why Will believes that better tools for thought are not just about productivity, but about expanding the boundaries of what it means to be human. You can connect with Will in Twitter at @wbic16 or check out the links mentioned by Will in Github.Check out this GPT we trained on the conversation!Timestamps00:00 – Introduction to the concept of the exocortex and how current tools like plain text editors and version control systems serve as early forms of cognitive extension.05:00 – Discussion on brain-computer interfaces (BCIs), emphasizing non-invasive software interfaces as powerful tools for augmenting human cognition.10:00 – Introduction to PHEXT, a plain text format designed to embed high-dimensional structure into simple syntax, facilitating interoperability between software systems.15:00 – Exploration of software abstraction as a means of compressing vast domains of meaning into manageable forms, enhancing understanding rather than adding complexity.20:00 – Conversation about the enduring power of text as an interface, highlighting its composability, hackability, and alignment with human symbolic processing.25:00 – Examination of collaborative intelligence and the idea that intelligence emerges from distributed systems involving people, software, and shared ideas.30:00 – Discussion on the importance of designing better communication protocols, like PHEXT, to create systems that align with human thought processes and enhance cognitive capabilities.35:00 – Reflection on the broader implications of these technologies for the future of human intelligence and the potential for expanding the boundaries of human cognition.Key InsightsThe exocortex is already here, just not evenly distributed. Will frames the exocortex not as a distant sci-fi future, but as something emerging right now in the form of external software systems that augment our thinking. He suggests that tools like plain text editors, command-line interfaces, and version control systems are early prototypes of this distributed cognitive architecture—ways we already extend our minds beyond the biological brain.Brain-computer interfaces don't need to be invasive to be powerful. Rather than focusing on neural implants, Will emphasizes software interfaces as the true terrain of BCIs. The bridge between brain and computer can be as simple—and profound—as the protocols we use to interact with machines. What matters is not tapping into neurons directly, but creating systems that think with us, where interface becomes cognition.PHEXT is a way to compress meaning while remaining readable. At the heart of Will's work is PHEXT, a plain text format that embeds high-dimensional structure into simple syntax. It's designed to let software interoperate through shared, human-readable representations of structured data—stripping away unnecessary complexity while still allowing for rich expressiveness. It's not just a format, but a philosophy of communication between systems and people.Software abstraction is about compression, not complexity. Will pushes back against the idea that abstraction means obfuscation. Instead, he sees abstraction as a way to compress vast domains of meaning into manageable forms. Good abstractions reveal rather than conceal—they help you see more with less. In this view, the challenge is not just to build new software, but to compress new layers of insight into form.Text is still the most powerful interface we have. Despite decades of graphical interfaces, Will argues that plain text remains the highest-bandwidth cognitive tool. Text allows for versioning, diffing, grepping—it plugs directly into the brain's symbolic machinery. It's composable, hackable, and lends itself naturally to abstraction. Rather than moving away from text, the future might involve making text higher-dimensional and more semantically rich.The future of thinking is collaborative, not just computational. One recurring theme is that intelligence doesn't emerge in isolation—it's distributed. Will sees the exocortex as something inherently social: a space where people, software, and ideas co-think. This means building interfaces not just for solo users, but for networked groups of minds working through shared representations.Designing better protocols is designing better minds. Will's vision is protocol-first. He sees the structure of communication—between apps, between people, between thoughts—as the foundation of intelligence itself. By designing protocols like PHEXT that align with how we actually think, we can build software that doesn't just respond to us, but participates in our thought processes.
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Trent Gillham—also known as Drunk Plato—for a far-reaching conversation on the shifting tides of technology, memetics, and media. Trent shares insights from building Meme Deck (find it at memedeck.xyz or follow @memedeckapp on X), exploring how social capital, narrative creation, and open-source AI models are reshaping not just the tools we use, but the very structure of belief and influence in the information age. We touch on everything from the collapse of legacy media, to hyperstition and meme warfare, to the metaphysics of blockchain as the only trustable memory in an unmoored future. You can find Trent in twitter as @AidenSolaran.Check out this GPT we trained on the conversation!Timestamps00:00 – Introduction to Trent Gillham and Meme Deck, early thoughts on AI's rapid pace, and the shift from training models to building applications around them.05:00 – Discussion on the collapse of the foundational model economy, investor disillusionment, GPU narratives, and how AI infrastructure became a kind of financial bubble.10:00 – The function of markets as belief systems, blowouts when inflated narratives hit reality, and how meme-based value systems are becoming indistinguishable from traditional finance.15:00 – The role of hyperstition in creation, comparing modern tech founders to early 20th-century inventors, and how visual proof fuels belief and innovation.20:00 – Reflections on the intelligence community's influence in tech history, Facebook's early funding, and how soft influence guides the development of digital tools and platforms.25:00 – Weaponization of social media, GameStop as a memetic uprising, the idea of memetic tools leaking from government influence into public hands.30:00 – Meme Deck's vision for community-led narrative creation, the shift from centralized media to decentralized, viral, culturally fragmented storytelling.35:00 – The sophistication gap in modern media, remix culture, the idea of decks as mini subreddits or content clusters, and incentivizing content creation with tokens.40:00 – Good vs bad meme coins, community-first approaches, how decentralized storytelling builds real value through shared ownership and long-term engagement.45:00 – Memes as narratives vs manipulative psyops, blockchain as the only trustable historical record in a world of mutable data and shifting truths.50:00 – Technical challenges and future plans for Meme Deck, data storage on-chain, reputation as a layer of trust, and AI's need for immutable data sources.55:00 – Final reflections on encoding culture, long-term value of on-chain media, and Trent's vision for turning podcast conversations into instant, storyboarded, memetic content.Key InsightsThe real value in AI isn't in building models—it's in building tools that people can use: Trent emphasized that the current wave of AI innovation is less about creating foundational models, which have become commoditized, and more about creating interfaces and experiences that make those models useful. Training base models is increasingly seen as a sunk cost, and the real opportunity lies in designing products that bring creative and cultural capabilities directly to users.Markets operate as belief machines, and the narratives they run on are increasingly memetic: He described financial markets not just as economic systems, but as mechanisms for harvesting collective belief—what he called “hyperstition.” This dynamic explains the cycles of hype and crash, where inflated visions eventually collide with reality in what he terms "blowouts." In this framing, stocks and companies function similarly to meme coins—vehicles for collective imagination and risk.Memes are no longer just jokes—they are cultural infrastructure: As Trent sees it, memes are evolving into complex, participatory systems for narrative building. With tools like Meme Deck, entire story worlds can be generated, remixed, and spread by communities. This marks a shift from centralized, top-down media (like Hollywood) to decentralized, socially-driven storytelling where virality is coded into the content from the start.Community is the new foundation of value in digital economies: Rather than focusing on charismatic individuals or short-term hype, Trent emphasized that lasting projects need grassroots energy—what he calls “vibe strapping.” Successful meme coins and narrative ecosystems depend on real participation, sustained engagement, and a shared sense of creative ownership. Without that, projects fizzle out as quickly as they rise.The battle for influence has moved from borders to minds: Reflecting on the information age, Trent noted that power now resides in controlling narratives, and thus in shaping perception. This is why information warfare is subtle, soft, and persistent—and why traditional intelligence operations have evolved into influence campaigns that play out in digital spaces like social media and meme culture.Blockchains may become the only reliable memory in a world of digital manipulation: In an era where digital content is easily altered or erased, Trent argued that blockchain offers the only path to long-term trust. Data that ends up on-chain can be verified and preserved, giving future intelligences—or civilizations—a stable record of what really happened. He sees this as crucial not only for money, but for culture itself.Meme Deck aims to democratize narrative creation by turning community vibes into media outputs: Trent shared his vision for Meme Deck as a platform where communities can generate not just memes, but entire storylines and media formats—from anime pilots to cinematic remixes—by collaborating and contributing creative energy. It's a model where decentralized media becomes both an art form and a social movement, rooted in collective imagination rather than corporate production.
On this episode of Crazy Wisdom, I'm joined by David Pope, Commissioner on the Wyoming Stable Token Commission, and Executive Director Anthony Apollo, for a wide-ranging conversation that explores the bold, nuanced effort behind Wyoming's first-of-its-kind state-issued stable token. I'm your host Stewart Alsop, and what unfolds in this dialogue is both a technical unpacking and philosophical meditation on trust, financial sovereignty, and what it means for a government to anchor itself in transparent, programmable value. We move through Anthony's path from Wall Street to Web3, the infrastructure and intention behind tokenizing real-world assets, and how the U.S. dollar's future could be shaped by state-level innovation. If you're curious to follow along with their work, everything from blockchain selection criteria to commission recordings can be found at stabletoken.wyo.gov.Check out this GPT we trained on the conversation!Timestamps00:00 – David Pope and Anthony Apollo introduce themselves, clarifying they speak personally, not for the Commission. You, Stewart, set an open tone, inviting curiosity and exploration.05:00 – Anthony shares his path from traditional finance to Ethereum and government, driven by frustration with legacy banking inefficiencies.10:00 – Tokenized bonds enter the conversation via the Spencer Dinwiddie project. Pope explains early challenges with defining “real-world assets.”15:00 – Legal limits of token ownership vs. asset title are unpacked. You question whether anything “real” has been tokenized yet.20:00 – Focus shifts to the Wyoming Stable Token: its constitutional roots and blockchain as a tool for fiat-backed stability without inflation.25:00 – Comparison with CBDCs: Apollo explains why Wyoming's token is transparent, non-programmatic, and privacy-focused.30:00 – Legislative framework: the 102% backing rule, public audits, and how rulemaking differs from law. You explore flexibility and trust.35:00 – Global positioning: how Wyoming stands apart from other states and nations in crypto policy. You highlight U.S. federalism's role.40:00 – Topics shift to velocity, peer-to-peer finance, and risk. You connect this to Urbit and decentralized systems.45:00 – Apollo unpacks the stable token's role in reinforcing dollar hegemony, even as BRICS move away from it.50:00 – Wyoming's transparency and governance as financial infrastructure. You reflect on meme coins and state legitimacy.55:00 – Discussion of Bitcoin reserves, legislative outcomes, and what's ahead. The conversation ends with vision and clarity.Key InsightsWyoming is pioneering a new model for state-level financial infrastructure. Through the creation of the Wyoming Stable Token Commission, the state is developing a fully-backed, transparent stable token that aims to function as a public utility. Unlike privately issued stablecoins, this one is mandated by law to be 102% backed by U.S. dollars and short-term treasuries, ensuring high trust and reducing systemic risk.The stable token is not just a tech innovation—it's a philosophical statement about trust. As David Pope emphasized, the transparency and auditability of blockchain-based financial instruments allow for a shift toward self-auditing systems, where trust isn't assumed but proven. In contrast to the opaque operations of legacy banking systems, the stable token is designed to be programmatically verifiable.Tokenized real-world assets are coming, but we're not there yet. Anthony Apollo and David Pope clarify that most "real-world assets" currently tokenized are actually equity or debt instruments that represent ownership structures, not the assets themselves. The next leap will involve making the token itself the title, enabling true fractional ownership of physical or financial assets without intermediary entities.This initiative strengthens the U.S. dollar rather than undermining it. By creating a transparent, efficient vehicle for global dollar transactions, the Wyoming Stable Token could bolster the dollar's role in international finance. Instead of competing with the dollar, it reinforces its utility in an increasingly digital economy—offering a compelling alternative to central bank digital currencies that raise concerns around surveillance and control.Stable tokens have the potential to become major holders of U.S. debt. Anthony Apollo points out that the aggregate of all fiat-backed stable tokens already represents a top-tier holder of U.S. treasuries. As adoption grows, state-run stable tokens could play a crucial role in sovereign debt markets, filling gaps left by foreign governments divesting from U.S. securities.Public accountability is central to Wyoming's approach. Unlike private entities that can change terms at will, the Wyoming Commission is legally bound to go through a public rulemaking process for any adjustments. This radical transparency offers both stability and public trust, setting a precedent for how digital public infrastructure can be governed.The ultimate goal is to build a bridge between traditional finance and the Web3 future. Rather than burn the old system down, Pope and Apollo are designing the stable token as a pragmatic transition layer—something institutions can trust and privacy advocates can respect. It's about enabling safe experimentation and gradual transformation, not triggering collapse.
In this episode of Crazy Wisdom, Stewart Alsop speaks with German Jurado about the strange loop between computation and biology, the emergence of reasoning in AI models, and what it means to "stand on the shoulders" of evolutionary systems. They talk about CRISPR not just as a gene-editing tool, but as a memory architecture encoded in bacterial immunity; they question whether LLMs are reasoning or just mimicking it; and they explore how scientists navigate the unknown with a kind of embodied intuition. For more about German's work, you can connect with him through email at germanjurado7@gmail.com.Check out this GPT we trained on the conversation!Timestamps00:00 - Stewart introduces German Jurado and opens with a reflection on how biology intersects with multiple disciplines—physics, chemistry, computation.05:00 - They explore the nature of life's interaction with matter, touching on how biology is about the interface between organic systems and the material world.10:00 - German explains how bioinformatics emerged to handle the complexity of modern biology, especially in genomics, and how it spans structural biology, systems biology, and more.15:00 - Introduction of AI into the scientific process—how models are being used in drug discovery and to represent biological processes with increasing fidelity.20:00 - Stewart and German talk about using LLMs like GPT to read and interpret dense scientific literature, changing the pace and style of research.25:00 - The conversation turns to societal implications—how these tools might influence institutions, and the decentralization of expertise.30:00 - Competitive dynamics between AI labs, the scaling of context windows, and speculation on where the frontier is heading.35:00 - Stewart reflects on English as the dominant language of science and the implications for access and translation of knowledge.40:00 - Historical thread: they discuss the Republic of Letters, how the structure of knowledge-sharing has evolved, and what AI might do to that structure.45:00 - Wrap-up thoughts on reasoning, intuition, and the idea of scientists as co-evolving participants in both natural and artificial systems.50:00 - Final reflections and thank-yous, German shares where to find more of his thinking, and Stewart closes the loop on the conversation.Key InsightsCRISPR as a memory system – Rather than viewing CRISPR solely as a gene-editing tool, German Jurado frames it as a memory architecture—an evolved mechanism through which bacteria store fragments of viral DNA as a kind of immune memory. This perspective shifts CRISPR into a broader conceptual space, where memory is not just cognitive but deeply biological.AI models as pattern recognizers, not yet reasoners – While large language models can mimic reasoning impressively, Jurado suggests they primarily excel at statistical pattern matching. The distinction between reasoning and simulation becomes central, raising the question: are these systems truly thinking, or just very good at appearing to?The loop between computation and biology – One of the core themes is the strange feedback loop where biology inspires computational models (like neural networks), and those models in turn are used to probe and understand biological systems. It's a recursive relationship that's accelerating scientific insight but also complicating our definitions of intelligence and understanding.Scientific discovery as embodied and intuitive – Jurado highlights that real science often begins in the gut, in a kind of embodied intuition before it becomes formalized. This challenges the myth of science as purely rational or step-by-step and instead suggests that hunches, sensory experience, and emotional resonance play a crucial role.Proteins as computational objects – Proteins aren't just biochemical entities—they're shaped by information. Their structure, function, and folding dynamics can be seen as computations, and tools like AlphaFold are beginning to unpack that informational complexity in ways that blur the line between physics and code.Human alignment is messier than AI alignment – While AI alignment gets a lot of attention, Jurado points out that human alignment—between scientists, institutions, and across cultures—is historically chaotic. This reframes the AI alignment debate in a broader evolutionary and historical context, questioning whether we're holding machines to stricter standards than ourselves.Standing on the shoulders of evolutionary processes – Evolution is not just a backdrop but an active epistemic force. Jurado sees scientists as participants in a much older system of experimentation and iteration—evolution itself. In this view, we're not just designing models; we're being shaped by them, in a co-evolution of tools and understanding.
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Naman Mishra, CTO of Repello AI, to unpack the real-world security risks behind deploying large language models. We talk about layered vulnerabilities—from the model, infrastructure, and application layers—to attack vectors like prompt injection, indirect prompt injection through agents, and even how a simple email summarizer could be exploited to trigger a reverse shell. Naman shares stories like the accidental leak of a Windows activation key via an LLM and explains why red teaming isn't just a checkbox, but a continuous mindset. If you want to learn more about his work, check out Repello's website at repello.ai.Check out this GPT we trained on the conversation!Timestamps00:00 - Stewart Alsop introduces Naman Mishra, CTO of Repel AI. They frame the episode around AI security, contrasting prompt injection risks with traditional cybersecurity in ML apps.05:00 - Naman explains the layered security model: model, infrastructure, and application layers. He distinguishes safety (bias, hallucination) from security (unauthorized access, data leaks).10:00 - Focus on the application layer, especially in finance, healthcare, and legal. Naman shares how ChatGPT leaked a Windows activation key and stresses data minimization and security-by-design.15:00 - They discuss red teaming, how Repel AI simulates attacks, and Anthropic's HackerOne challenge. Naman shares how adversarial testing strengthens LLM guardrails.20:00 - Conversation shifts to AI agents and autonomy. Naman explains indirect prompt injection via email or calendar, leading to real exploits like reverse shells—all triggered by summarizing an email.25:00 - Stewart compares the Internet to a castle without doors. Naman explains the cat-and-mouse game of security—attackers need one flaw; defenders must lock every door. LLM insecurity lowers the barrier for attackers.30:00 - They explore input/output filtering, role-based access control, and clean fine-tuning. Naman admits most guardrails can be broken and only block low-hanging fruit.35:00 - They cover denial-of-wallet attacks—LLMs exploited to run up massive token costs. Naman critiques DeepSeek's weak alignment and state bias, noting training data risks.40:00 - Naman breaks down India's AI scene: Bangalore as a hub, US-India GTM, and the debate between sovereignty vs. pragmatism. He leans toward India building foundational models.45:00 - Closing thoughts on India's AI future. Naman mentions Sarvam AI, Krutrim, and Paris Chopra's Loss Funk. He urges devs to red team before shipping—"close the doors before enemies walk in."Key InsightsAI security requires a layered approach. Naman emphasizes that GenAI applications have vulnerabilities across three primary layers: the model layer, infrastructure layer, and application layer. It's not enough to patch up just one—true security-by-design means thinking holistically about how these layers interact and where they can be exploited.Prompt injection is more dangerous than it sounds. Direct prompt injection is already risky, but indirect prompt injection—where an attacker hides malicious instructions in content that the model will process later, like an email or webpage—poses an even more insidious threat. Naman compares it to smuggling weapons past the castle gates by hiding them in the food.Red teaming should be continuous, not a one-off. One of the critical mistakes teams make is treating red teaming like a compliance checkbox. Naman argues that red teaming should be embedded into the development lifecycle, constantly testing edge cases and probing for failure modes, especially as models evolve or interact with new data sources.LLMs can unintentionally leak sensitive data. In one real-world case, a language model fine-tuned on internal documentation ended up leaking a Windows activation key when asked a completely unrelated question. This illustrates how even seemingly benign outputs can compromise system integrity when training data isn't properly scoped or sanitized.Denial-of-wallet is an emerging threat vector. Unlike traditional denial-of-service attacks, LLMs are vulnerable to economic attacks where a bad actor can force the system to perform expensive computations, draining API credits or infrastructure budgets. This kind of vulnerability is particularly dangerous in scalable GenAI deployments with limited cost monitoring.Agents amplify security risks. While autonomous agents offer exciting capabilities, they also open the door to complex, compounded vulnerabilities. When agents start reading web content or calling tools on their own, indirect prompt injection can escalate into real-world consequences—like issuing financial transactions or triggering scripts—without human review.The Indian AI ecosystem needs to balance speed with sovereignty. Naman reflects on the Indian and global context, warning against simply importing models and infrastructure from abroad without understanding the security implications. There's a need for sovereign control over critical layers of AI systems—not just for innovation's sake, but for national resilience in an increasingly AI-mediated world.
In this episode of the Crazy Wisdom Podcast, I, Stewart Alsop, speak with Perry Knoppert, founder of The Octopus Movement, joining us from the Netherlands. We explore everything from octopus facts (like how they once had bones and decided to ditch them—wild, right?) to neurodivergence, non-linear thinking, the alien-like nature of both octopuses and AI, and how the future of education might finally reflect the chaos and creativity of human intelligence. Perry drops insight bombs on ADHD, dyslexia, chaos as a superpower, and even shares a wild idea about how frustration—not just ideas—can shape the world. You can connect with him and explore more at theoctopusmovement.org, and check out his playful venting app at tellTom.ink.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:31 Fascinating Facts About Octopi02:03 The Octopus Movement: Origins and Symbolism05:55 Exploring Neurodivergence and AI20:15 The Future of Education with AI29:48 Challenges in the Dutch Education System30:59 Educational Pathways in the US31:50 Exploring Neurodiversity32:34 The Origin of Neurodiversity34:34 Nomadic DNA and ADHD36:02 Personal Nomadic Experiences37:20 Cultural Insights from China41:59 Trust in Different Cultures44:20 The Foreigner Experience52:21 Artificial and Natural Intelligence55:11 The Octopus Movement and Tell Tom AppKey InsightsNeurodivergence isn't a superpower—it's a different lens on reality. Perry challenges the popular narrative that conditions like ADHD or dyslexia are inherently "superpowers." Instead, he sees them as part of a broader, complex human experience—often painful, often misunderstood, but rich with potential once liberated from linear systems that define what's "normal."AI is the beautiful product of linear thought—and it's freeing us from it. Perry reframes artificial intelligence not as a threat, but as the ultimate tool born from centuries of structured, logical thinking. With AI handling the systems and organization, humans are finally free to return to creativity, chaos, and nonlinear, intuitive modes of intelligence that machines can't touch.Octopuses are the ultimate symbol of curious misfits. The octopus—alien, adaptable, emotion-rich—becomes a metaphor for people who don't fit the mold. With three hearts, nine brains, and a decentralized nervous system, octopuses reflect the kind of intelligence and distributed awareness Perry celebrates in neurodivergent thinkers.Frustration is more generative than ideas. In one of the episode's most unexpected insights, Perry argues that frustration is a more powerful starting point for change than intellectual ideation. Ideas are often inert without action, while frustration is raw, emotional, and deeply human—fuel for meaningful transformation.Education needs to shift from repetition to creation. The current model of education—memorization, repetition, testing—serves linearity, not creativity. With AI taking over traditional knowledge tasks, Perry envisions classrooms where kids learn how their minds work, engage with the world directly, and practice making meaning instead of memorizing facts.Being a foreigner is a portal to freedom. Living in unfamiliar cultures (like Perry did in China or Stewart in Argentina) reveals the absurdities of our own norms and invites new ways of being. Foreignness becomes a superpower in itself—a space of lowered expectations, fewer assumptions, and greater possibility.Labels like “neurodivergent” are both helpful and illusory. While diagnostic labels can offer relief and clarity, Perry warns against attaching too tightly to them. These constructs are inventions of linear thought, useful for navigating systems but ultimately limiting when it comes to embracing the full, messy, nonlinear reality of being human.
On this episode of the Crazy Wisdom Podcast, I, Stewart Alsop, sit down with Federico Ast, founder of Kleros, to explore how decentralized justice systems can resolve both crypto-native and real-world disputes. We talk about the pilot with the Supreme Court in Mendoza, Argentina, where Kleros is helping small claims courts resolve cases faster and more transparently, and how this ties into a broader vision for digital governance using tools like proof of humanity and soulbound tokens. We also get into the philosophical and institutional implications of building a digital republic, and how blockchain can offer new models of legitimacy and truth-making. Show notes and more about Federico's work can be found via his Twitter: @federicoast (https://twitter.com/federicoast) and by joining the Kleros Telegram community.Check out this GPT we trained on the conversation!00:00 Introduction and Guest Welcome00:38 Claros Pilot Program in Mendoza02:00 Claros and the Legal System05:13 Personal Journey into Crypto07:16 Challenges and Innovations in Kleros18:02 Proof of Humanity and Soulbound Tokens26:54 Incentives and Proof of Humanity27:01 Interesting DAO Court Cases27:21 Prediction Markets and Disputes31:36 Customer Service and Dispute Resolution38:21 Governance and Online Communities40:02 Future of Civilization and Technology47:16 Bounties and Legal Systems49:06 Conclusion and Contact InformationKey InsightsDecentralized Justice Can Bridge the Gap Between Traditional Legal Systems and Web3: Federico Ast explains how Kleros functions as a decentralized dispute resolution system, offering a faster, more transparent, and more accessible alternative to conventional courts. In places like Mendoza, Argentina, Kleros has been piloted in collaboration with the Supreme Court to help resolve small claims that would otherwise take years, demonstrating how blockchain tools can support real-world judicial systems rather than replace them.Crypto Tools Are Most Powerful When Rooted in Real-World Problems: Ast emphasizes that his motivation for building in the blockchain space came not from hype but from firsthand experience with institutional inefficiencies in Argentina—such as corruption, inaccessible courts, and predatory financial systems. For him, crypto is a means to address these structural issues, not an end in itself. This grounded approach contrasts with many in the space who begin with the technology and try to retrofit a use case.Proof of Humanity and Soulbound Tokens Expand the Scope of Legitimate Governance: To address concerns over who gets to participate in decentralized juries, Kleros integrates identity verification through Proof of Humanity and uses non-transferable Soulbound Tokens to grant eligibility. These innovations allow communities—whether geographic, organizational, or digital—to define their own membership criteria, making decentralized courts feel more legitimate and relevant to participants.Decentralized Courts Can Handle Complex, Subjective Disputes: While early versions of Kleros were built for binary disputes (yes/no, Alice vs. Bob), real-world conflicts are often more nuanced. Over time, the platform evolved to support more flexible decision-making, including proportional fault, ranked outcomes, and variable payouts. This adaptability allows Kleros to handle a broader spectrum of disputes, including ambiguous or interpretive cases like those found in prediction markets.Incentive Systems Create New Forms of Justice Participation: Kleros applies game theory to create juror incentives that reward honest and aligned decisions. In systems like Proof of Humanity, it even gamifies fraud detection by offering financial bounties to those who uncover duplicate or fake identities. These economic incentives encourage voluntary participation in public-good functions such as identity verification and dispute resolution.Kleros Offers a Middle Ground Between Corporate Automation and Legal Bureaucracy: Many companies use rigid, automated systems to deny customer claims, leaving individuals with no real recourse except to complain on social media. Kleros offers an intermediate option: a transparent, peer-based adjudication process that can resolve disputes quickly. In pilot programs with fintech companies like Lemon, over 90% of users who lost their case still accepted the result and remained customers, showing how fairness in process can build trust even when outcomes disappoint.Digital Communities Are Becoming the New Foundations of Governance: Ast points out that many people now feel more connected to online communities than to their local or national institutions. Blockchain governance—enabled by tools like Kleros, Proof of Humanity, and decentralized IDs—allows these communities to build their own civil infrastructure. This marks a shift toward what he calls a “digital republic,” where shared values and participation, rather than geography, form the basis of collective decision-making and legitimacy.
In this episode of Crazy Wisdom, host Stewart Alsop talks with Rosario Parlanti, a longtime crypto investor and real estate attorney, about the shifting landscape of decentralization, AI, and finance. They explore the power struggles between centralized and decentralized systems, the role of AI agents in finance and infrastructure, and the legal gray areas emerging around autonomous technology. Rosario shares insights on trusted execution environments, token incentives, and how projects like Phala Network are building decentralized cloud computing. They also discuss the changing narrative around Bitcoin, the potential for AI-driven financial autonomy, and the future of censorship-resistant platforms. Follow Rosario on X @DeepinWhale and check out Phala Network to learn more.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:25 Understanding Decentralized Cloud Infrastructure04:40 Centralization vs. Decentralization: A Philosophical Debate06:56 Political Implications of Centralization17:19 Technical Aspects of Phala Network24:33 Crypto and AI: The Future Intersection25:11 The Convergence of Crypto and AI25:59 Challenges with Centralized Cloud Services27:36 Decentralized Cloud Solutions for AI30:32 Legal and Ethical Implications of AI Agents32:59 The Future of Decentralized Technologies41:56 Crypto's Role in Global Financial Freedom49:27 Closing Thoughts and Future ProspectsKey InsightsDecentralization is not absolute, but a spectrum. Rosario Parlanti explains that decentralization doesn't mean eliminating central hubs entirely, but rather reducing choke points where power is overly concentrated. Whether in finance, cloud computing, or governance, every system faces forces pushing toward centralization for efficiency and control, while counterforces work to redistribute power and increase resilience.Trusted execution environments (TEE) are crucial for decentralized cloud computing. Rosario highlights how Phala Network uses TEEs, a hardware-based security measure that isolates sensitive data from external access. This ensures that decentralized cloud services can operate securely, preventing unauthorized access while allowing independent providers to host data and run applications outside the control of major corporations like Amazon and Google.AI agents will need decentralized infrastructure to function autonomously. The conversation touches on the growing power of AI-driven autonomous agents, which can execute financial trades, conduct research, and even generate content. However, running such agents on centralized cloud providers like AWS could create regulatory and operational risks. Decentralized cloud networks like Phala offer a way for these agents to operate freely, without interference from governments or corporations.Regulatory arbitrage will shape the future of AI and crypto. Rosario describes how businesses and individuals are already leveraging jurisdiction shopping—structuring AI entities or financial operations in countries with more favorable regulations. He speculates that AI agents could be housed within offshore LLCs or irrevocable trusts, creating legal distance between their creators and their actions, raising new ethical and legal challenges.Bitcoin's narrative has shifted from currency to investment asset. Originally envisioned as a peer-to-peer electronic cash system, Bitcoin has increasingly been treated as digital gold, largely due to the influence of institutional investors and regulatory frameworks like Bitcoin ETFs. Rosario argues that this shift in perception has led to Bitcoin being co-opted by the very financial institutions it was meant to disrupt.The rise of AI-driven financial autonomy could bypass traditional banking and regulation. The combination of AI, smart contracts, and decentralized finance (DeFi) could enable AI agents to conduct financial transactions without human oversight. This could range from algorithmic trading to managing business operations, potentially reducing reliance on traditional banking systems and challenging the ability of governments to enforce financial regulations.The accelerating clash between technology and governance will redefine global power structures. As AI and decentralized systems gain momentum, traditional nation-state mechanisms for controlling information, currency, and infrastructure will face unprecedented challenges. Rosario and Stewart discuss how this shift mirrors previous disruptions—such as social media's impact on information control—and speculate on whether governments will adapt, resist, or attempt to co-opt these emerging technologies.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Jessica Talisman, a senior information architect deeply immersed in the worlds of taxonomy, ontology, and knowledge management. The conversation spans the evolution of libraries, the shifting nature of public and private access to knowledge, and the role of institutions like the Internet Archive in preserving digital history. They also explore the fragility of information in the digital age, the ongoing battle over access to knowledge, and how AI is shaping—and being shaped by—structured data and knowledge graphs. To connect with Jessica Talisman, you can reach her via LinkedIn. Check out this GPT we trained on the conversation!Timestamps00:05 – Libraries, Democracy, Public vs. Private Knowledge Jessica explains how libraries have historically shifted between public and private control, shaping access to knowledge and democracy.00:10 – Internet Archive, Cyberattacks, Digital Preservation Stewart describes visiting the Internet Archive post-cyberattack, sparking a discussion on threats to digital preservation and free information.00:15 – AI, Structured Data, Ontologies, NIH, PubMed Jessica breaks down how AI trains on structured data from sources like NIH and PubMed but often lacks alignment with authoritative knowledge.00:20 – Linked Data, Knowledge Graphs, Semantic Web, Tim Berners-Lee They explore how linked data enables machines to understand connections between knowledge, referencing the vision behind the semantic web.00:25 – Entity Management, Cataloging, Provenance, Authority Jessica explains how libraries are transitioning from cataloging books to managing entities, ensuring provenance and verifiable knowledge.00:30 – Digital Dark Ages, Knowledge Loss, Corporate Control Stewart compares today's deletion of digital content to historical knowledge loss, warning about the fragility of digital memory.00:35 – War on Truth, Book Bans, Algorithmic Bias, Censorship They discuss how knowledge suppression—from book bans to algorithmic censorship—threatens free access to information.00:40 – AI, Search Engines, Metadata, Schema.org, RDF Jessica highlights how AI and search engines depend on structured metadata but often fail to prioritize authoritative sources.00:45 – Power Over Knowledge, Open vs. Closed Systems, AI Ethics They debate the battle between corporations, governments, and open-source efforts to control how knowledge is structured and accessed.00:50 – Librarians, AI Misinformation, Knowledge Organization Jessica emphasizes that librarians and structured knowledge systems are essential in combating misinformation in AI.00:55 – Future of Digital Memory, AI, Ethics, Information Access They reflect on whether AI and linked data will expand knowledge access or accelerate digital decay and misinformation.Key InsightsThe Evolution of Libraries Reflects Power Struggles Over Knowledge: Libraries have historically oscillated between being public and private institutions, reflecting broader societal shifts in who controls access to knowledge. Jessica Talisman highlights how figures like Andrew Carnegie helped establish the modern public library system, reinforcing libraries as democratic spaces where information is accessible to all. However, she also notes that as knowledge becomes digitized, new battles emerge over who owns and controls digital information.The Internet Archive Faces Systematic Attacks on Knowledge: Stewart Alsop shares his firsthand experience visiting the Internet Archive just after it had suffered a major cyberattack. This incident is part of a larger trend in which libraries and knowledge repositories worldwide, including those in Canada, have been targeted. The conversation raises concerns that these attacks are not random but part of a broader, well-funded effort to undermine access to information.AI and Knowledge Graphs Are Deeply Intertwined: AI systems, particularly large language models (LLMs), rely on structured data sources such as knowledge graphs, ontologies, and linked data. Talisman explains how institutions like the NIH and PubMed provide openly available, structured knowledge that AI systems train on. Yet, she points out a critical gap—AI often lacks alignment with real-world, authoritative sources, which leads to inaccuracies in machine-generated knowledge.Libraries Are Moving From Cataloging to Entity Management: Traditional library systems were built around cataloging books and documents, but modern libraries are transitioning toward entity management, which organizes knowledge in a way that allows for more dynamic connections. Linked data and knowledge graphs enable this shift, making it easier to navigate vast repositories of information while maintaining provenance and authority.The War on Truth and Information Is Accelerating: The episode touches on the increasing threats to truth and reliable information, from book bans to algorithmic suppression of knowledge. Talisman underscores the crucial role librarians play in preserving access to primary sources and maintaining records of historical truth. As AI becomes more prominent in knowledge dissemination, the need for robust, verifiable sources becomes even more urgent.Linked Data is the Foundation of Digital Knowledge: The conversation explores how linked data protocols, such as those championed by Tim Berners-Lee, allow machines and AI to interpret and connect information across the web. Talisman explains that institutions like NIH publish their taxonomies in RDF format, making them accessible as structured, authoritative sources. However, many organizations fail to leverage this interconnected data, leading to inefficiencies in knowledge management.Preserving Digital Memory is a Civilization-Defining Challenge: In the digital age, the loss of information is more severe than ever. Alsop compares the current state of digital impermanence to the Dark Ages, where crucial knowledge risks disappearing due to corporate decisions, cyberattacks, and lack of preservation infrastructure. Talisman agrees, emphasizing that digital archives like the Internet Archive, WorldCat, and Wikimedia are foundational to maintaining a collective human memory.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop is joined by Jesse and Leo, co-founders of Maitri, a social infrastructure project focused on fostering interoperability between different social media applications. They explore the limitations of current social networks, the importance of community graphs in building trust and reputation, and how to create a digital environment that prioritizes meaningful human connection over algorithmic engagement. The conversation also touches on AI, reputation systems, decentralized governance, and the future of online coordination in an era of increasing technological acceleration. For more about their work, visit maitri.network.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:13 Founding My Tree: The Vision and Mission01:10 Challenges with Current Social Media02:50 Building Community Graphs04:13 Philosophical Insights on Social Relationships08:32 Interoperability and Technical Aspects13:44 AI and the Future of Social Media23:47 The Philosophy of Reputation28:44 Balancing Inclusivity and Exclusivity29:30 Building Reputation Systems31:16 Financializing Behaviors and Social Media32:24 Open Source and Competitive Benchmarking33:25 Privacy and Positive Attestations44:08 Future of Media and Group Identity53:11 Coordination and Governance Challenges56:15 Conclusion and Final ThoughtsKey InsightsInteroperability is the Key to Social Media's Future – Jesse and Leo emphasize that current social media platforms operate as isolated silos, preventing users from seamlessly interacting across networks. Maitri is designed as a social infrastructure project that enables interoperability between platforms, allowing for greater connectivity, user control, and shared network effects. Instead of monopolies controlling engagement, they envision a future where smaller, more specialized communities can thrive while remaining interconnected.Community Graphs Offer a More Nuanced Approach to Social Identity – Unlike traditional social graphs that focus on one-to-one relationships, community graphs provide a richer representation of how people engage within groups. These graphs account for the “fuzziness” of social membership, acknowledging that participation in a community is often subjective and context-dependent. This system aims to better reflect how humans naturally form trust and reputations within various groups.Reputation Systems Should Be Positive, Subjective, and Competitive – One of the key challenges in designing digital reputation systems is avoiding the pitfalls of social credit scores. Maitri's approach ensures that reputations are built through private, positive attestations rather than public negative ratings. This system mirrors real-world trust-building, where individuals accumulate credibility over time rather than being permanently defined by past mistakes. Additionally, by allowing multiple reputation frameworks to compete, users maintain agency over how they are evaluated.AI and Automation Will Radically Reshape Online Interaction – With AI-driven bots increasingly indistinguishable from humans, the internet is at risk of becoming an overwhelming space filled with automated engagement. Jesse and Leo highlight that while AI can be useful, there must be clear distinctions between human and non-human interactions. Maitri's reputation infrastructure could help address this challenge by providing proof of unique personhood, allowing people to differentiate between trusted human connections and AI-driven entities.Decentralized Coordination is a Crucial Missing Layer of the Internet – One of the biggest problems facing humanity is the failure to coordinate effectively. Traditional institutions and digital platforms have struggled to balance inclusivity with exclusivity, leading to either centralization or fragmentation. By creating digital primitives that allow for more efficient coordination—whether through financial incentives, reputation mechanisms, or group dynamics—Maitri aims to provide tools that help people organize at scale without relying on monopolistic control.The Future of Media is Many-to-Many, Not One-to-Many – The era of mass culture driven by television and radio, where everyone consumed the same media at the same time, is fading. Instead, we are moving toward a more fragmented but dynamic landscape where smaller communities cultivate their own cultural moments. While this shift eliminates shared cultural touchpoints, it allows for greater diversity of thought and expression. Curation and trust-based networks will become increasingly important as content continues to proliferate.Balancing Privacy, Identity, and Accountability is the Next Digital Challenge – The conversation highlights the ongoing tension between privacy and accountability in online spaces. While anonymous or pseudonymous interactions can protect free speech, they can also enable bad actors. Maitri's approach seeks to give users control over their identities by enabling flexible, context-dependent personas rather than enforcing a single, rigid identity. This allows for a balance between protecting privacy and maintaining trust in online interactions.
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Gabe Dominocielo, co-founder of Umbra, a space tech company revolutionizing satellite imagery. We discuss the rapid advancements in space-based observation, the economics driving the industry, and how AI intersects with satellite data. Gabe shares insights on government contracting, defense applications, and the shift toward cost-minus procurement models. We also explore the broader implications of satellite technology—from hedge funds analyzing parking lots to wildfire response efforts. Check out more about Gabe and Umbra at umbraspace.com (https://umbraspace.com), and don't miss their open data archive for high-resolution satellite imagery.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:05 Gabe's Background and Umbra's Mission00:34 The Story Behind 'Come and Take It'01:32 Space Technology and Cost Plus Contracts03:28 The Impact of Elon Musk and SpaceX05:16 Umbra's Business Model and Profitability07:28 Challenges in the Satellite Business11:45 Investors and Funding Journey19:31 Space Business Landscape and Future Prospects23:09 Defense and Regulatory Challenges in Space31:06 Practical Applications of Satellite Data33:16 Unexpected Wealth and Autistic Curiosity33:49 Beet Farming and Data Insights35:09 Philosophy in Business Strategy38:56 Empathy and Investor Relations43:00 Raising Capital: Strategies and Challenges44:56 The Sovereignty Game vs. Venture Game51:12 Concluding Thoughts and Contact Information52:57 The Treasure Hunt and AI DependenciesKey InsightsThe Shift from Cost-Plus to Cost-Minus in Government Contracting – Historically, aerospace and defense contracts operated under a cost-plus model, where companies were reimbursed for expenses with a guaranteed profit. Gabe explains how the shift toward cost-minus (firm-fixed pricing) is driving efficiency and competition in the industry, much like how SpaceX drastically reduced launch costs by offering services instead of relying on bloated government contracts.Satellite Imagery Has Become a Crucial Tool for Businesses – Beyond traditional defense and intelligence applications, high-resolution satellite imagery is now a critical asset for hedge funds, investors, and commercial enterprises. Gabe describes how firms use satellite data to analyze parking lots, monitor supply chains, and even track cryptocurrency mining activity based on power line sagging and cooling fan usage on data centers.Space Technology is More Business-Driven Than Space-Driven – While many assume space startups are driven by a passion for exploration, Umbra's success is rooted in strong business fundamentals. Gabe emphasizes that their focus is on unit economics, supply-demand balance, and creating a profitable company rather than simply innovating for the sake of technology.China's Growing Presence in Space and Regulatory Challenges – Gabe raises concerns about China's aggressive approach to space, noting that they often ignore international agreements and regulations. Meanwhile, American companies face significant bureaucratic hurdles, sometimes spending millions just to navigate licensing and compliance. He argues that unleashing American innovation by reducing regulatory friction is essential to maintaining leadership in the space industry.Profitability is the Ultimate Measure of Success – Unlike many venture-backed space startups that focus on hype, Umbra has prioritized profitability, making it one of the few successful Earth observation companies. Gabe contrasts this with competitors who raised massive sums, spent excessively, and ultimately failed because they weren't built on sustainable business models.Satellite Technology is Revolutionizing Disaster Response – One of the most impactful uses of Umbra's satellite imagery has been in wildfire response. By capturing images through smoke and clouds, their data was instrumental in mapping wildfires in Los Angeles. They even made this data freely available, helping emergency responders and news organizations better understand the crisis.Philosophy and Business Strategy Go Hand in Hand – Gabe highlights how strategic thinking and philosophical principles guide decision-making in business. Whether it's understanding investor motivations, handling conflicts with empathy, or ensuring a company can sustain itself for decades rather than chasing short-term wins, having a strong philosophical foundation is key to long-term success.
On this episode of Crazy Wisdom, Stewart Alsop welcomes Andrew Burlinson, an artist and creative thinker, for a deep conversation about technology, creativity, and the human spirit. They explore the importance of solitude in the creative process, the addictive nature of digital engagement, and how AI might both challenge and enhance human expression. Andrew shares insights on the shifting value of art in an AI-driven world, the enduring importance of poetry, and the unexpected resurgence of in-person experiences. For more on Andrew, check out his LinkedIn and Instagram.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Guest Welcome00:27 Meeting in LA and Local Insights01:34 The Creative Process and Technology03:47 Balancing Solitude and Connectivity07:21 AI's Role in Creativity and Productivity11:00 Future of AI in Creative Industries14:39 Challenges and Opportunities with AI16:59 AI in Hollywood and Ethical Considerations18:54 Silicon Valley and AI's Impact on Jobs19:31 Navigating the Future with AI20:06 Adapting to Rapid Technological Change20:49 The Value of Art in a Fast-Paced World21:36 Shifting Aesthetics and Cultural Perception22:54 The Human Connection in the Age of AI24:37 Resurgence of Traditional Art Forms27:30 The Importance of Early Artistic Education31:07 The Role of Poetry and Language35:56 Balancing Technology and Intention37:00 Conclusion and Contact InformationKey InsightsThe Importance of Solitude in Creativity – Andrew Burlinson emphasizes that creativity thrives in moments of boredom and solitude, which have become increasingly rare in the digital age. He reflects on his childhood, where a lack of constant stimulation led him to develop his artistic skills. Today, with infinite digital distractions, people must intentionally carve out space to be alone with their thoughts to create work that carries deep personal intention rather than just remixing external influences.The Struggle to Defend Attention – Stewart and Andrew discuss how modern digital platforms, particularly social media, are designed to hijack human attention through powerful AI-driven engagement loops. These mechanisms prioritize negative emotions and instant gratification, making it increasingly difficult for individuals to focus on deep, meaningful work. They suggest that future AI advancements could paradoxically help free people from screens, allowing them to engage with technology in a more intentional and productive way.AI as a Creative Partner—But Not Yet a True Challenger – While AI is already being used in creative fields, such as Hollywood's subtle use of AI for film corrections, it currently lacks the ability to provide meaningful pushback or true creative debate. Andrew argues that the best creative partners challenge ideas rather than just assist with execution, and AI's tendency to be agreeable and non-confrontational makes it a less valuable collaborator for artists who need critical feedback to refine their work.The Pendulum Swing of Human and Technological Aesthetics – Throughout history, every major technological advancement in the arts has been met with a counter-movement embracing raw, organic expression. Just as the rise of synthesizers in music led to a renewed interest in acoustic and folk styles, the rapid expansion of AI-generated art may inspire a resurgence of appreciation for handcrafted, deeply personal artistic works. The human yearning for tactile, real-world experiences will likely grow in response to AI's increasing role in creative production.The Enduring Value of Art Beyond Economic Utility – In a world increasingly shaped by economic efficiency and optimization, Andrew stresses the need to reaffirm the intrinsic value of art. While capitalism dominates, the real significance of artistic expression lies in its ability to move people, create connection, and offer meaning beyond financial metrics. This perspective is especially crucial in an era where AI-generated content is flooding the creative landscape, potentially diluting the sense of personal expression that defines human art.The Need for Intentionality in Using AI – AI's potential to streamline work processes and enhance creative output depends on how humans choose to engage with it. Stewart notes that while AI can be a powerful tool for structuring time and filtering distractions, it can also easily pull people into mindless consumption. The challenge lies in using AI with clear intention—leveraging it to automate mundane tasks while preserving the uniquely human aspects of ideation, storytelling, and artistic vision.The Role of Poetry and Language in Reclaiming Humanity – In a technology-driven world where efficiency is prioritized over depth, poetry serves as a reminder of the human experience. Andrew highlights the power of poets and clowns—figures often dismissed as impractical—as essential in preserving creativity, playfulness, and emotional depth. He suggests that valuing poetry and artistic language can help counterbalance the growing mechanization of culture, keeping human expression at the forefront of civilization's evolution.
Stewart Alsop sat down with Nick Ludwig, the creator of Kibitz and lead developer at Hyperware, to talk about the evolution of AI-powered coding, the rise of agentic software development, and the security challenges that come with giving AI more autonomy. They explored the power of Claude MCP servers, the potential for AI to manage entire development workflows, and what it means to have swarms of digital agents handling tasks across business and personal life. If you're curious to dive deeper, check out Nick's work on Kibitz and Hyperware, and follow him on Twitter at @Nick1udwig (with a ‘1' instead of an ‘L').Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:52 Nick Ludwig's Journey with Cloud MCP Servers04:17 The Evolution of Coding with AI07:23 Challenges and Solutions in AI-Assisted Coding17:53 Security Implications of AI Agents27:34 Containerization for Safe Agent Operations29:07 Cold Wallets and Agent Security29:55 Agents and Financial Transactions33:29 Integrating APIs with Agents36:43 Discovering and Using Libraries43:19 Understanding MCP Servers47:41 Future of Agents in Business and Personal Life54:29 Educational and Medical Revolutions with AI56:36 Conclusion and Contact InformationKey InsightsAI is shifting software development from writing code to managing intelligent agents. Nick Ludwig emphasized how modern AI tools, particularly MCP servers, are enabling developers to transition from manually coding to overseeing AI-driven development. The ultimate goal is for AI to handle the bulk of programming while developers focus on high-level problem-solving and system design.Agentic software is the next frontier of automation. The discussion highlighted how AI agents, especially those using MCP servers, are moving beyond simple chatbots to autonomous digital workers capable of executing complex, multi-step tasks. These agents will soon be able to operate independently for extended periods, executing high-level commands rather than requiring constant human oversight.Security remains a major challenge with AI-driven tools. One of the biggest risks with AI-powered automation is security, particularly regarding prompt injection attacks and unintended system modifications. Ludwig pointed out that giving AI access to command-line functions, file systems, and financial accounts requires careful sandboxing and permissions to prevent catastrophic errors or exploitation.Containerization will be critical for safe AI execution. Ludwig proposed that solutions like Docker and other containerization technologies can provide a secure environment where AI agents can operate freely without endangering core systems. By restricting AI's ability to modify critical files and limiting its spending permissions, businesses can safely integrate autonomous agents into their workflows.The future of AI is deeply tied to education. AI has the potential to revolutionize learning by providing real-time, personalized tutoring. Ludwig noted that LLMs have already changed how people learn to code, making complex programming more accessible to beginners. This concept can be extended to broader education, where AI-powered tutors could replace traditional classroom models with highly adaptive learning experiences.AI-driven businesses will operate at unprecedented efficiency. The conversation explored how companies will soon leverage AI agents to handle research, automate customer service, generate content, and even manage finances. Businesses that successfully integrate AI-powered workflows will have a significant competitive edge in speed, cost reduction, and adaptability.We are on the verge of an "intelligence explosion" in both AI and human capabilities. While some fear AI advancements will outpace human control, Ludwig argued that AI will also dramatically enhance human intelligence. By offloading cognitive burdens, AI will allow people to focus on creativity, strategy, and high-level decision-making, potentially leading to an era of rapid innovation and problem-solving across all industries.
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Andrew Altschuler, a researcher, educator, and navigator at Tana, Inc., who also founded Tana Stack. Their conversation explores knowledge systems, complexity, and AI, touching on topics like network effects in social media, information warfare, mimetic armor, psychedelics, and the evolution of knowledge management. They also discuss the intersection of cognition, ontologies, and AI's role in redefining how we structure and retrieve information. For more on Andrew's work, check out his course and resources at altshuler.io and his YouTube channel.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Guest Background00:33 The Demise of AirChat00:50 Network Effects and Social Media Challenges03:05 The Rise of Digital Warlords03:50 Quora's Golden Age and Information Warfare08:01 Building Limbic Armor16:49 Knowledge Management and Cognitive Armor18:43 Defining Knowledge: Secular vs. Ultimate25:46 The Illusion of Insight31:16 The Illusion of Insight32:06 Philosophers of Science: Popper and Kuhn32:35 Scientific Assumptions and Celestial Bodies34:30 Debate on Non-Scientific Knowledge36:47 Psychedelics and Cultural Context44:45 Knowledge Management: First Brain vs. Second Brain46:05 The Evolution of Knowledge Management54:22 AI and the Future of Knowledge Management58:29 Tana: The Next Step in Knowledge Management59:20 Conclusion and Course InformationKey InsightsNetwork Effects Shape Online Communities – The conversation highlighted how platforms like Twitter, AirChat, and Quora demonstrate the power of network effects, where a critical mass of users is necessary for a platform to thrive. Without enough engaged participants, even well-designed social networks struggle to sustain themselves, and individuals migrate to spaces where meaningful conversations persist. This explains why Twitter remains dominant despite competition and why smaller, curated communities can be more rewarding but difficult to scale.Information Warfare and the Need for Cognitive Armor – In today's digital landscape, engagement-driven algorithms create an arena of information warfare, where narratives are designed to hijack emotions and shape public perception. The only real defense is developing cognitive armor—critical thinking skills, pattern recognition, and the ability to deconstruct media. By analyzing how information is presented, from video editing techniques to linguistic framing, individuals can resist manipulation and maintain autonomy over their perspectives.The Role of Ontologies in AI and Knowledge Management – Traditional knowledge management has long been overlooked as dull and bureaucratic, but AI is transforming the field into something dynamic and powerful. Systems like Tana and Palantir use ontologies—structured representations of concepts and their relationships—to enhance information retrieval and reasoning. AI models perform better when given structured data, making ontologies a crucial component of next-generation AI-assisted thinking.The Danger of Illusions of Insight – Drawing from ideas by Balaji Srinivasan, the episode distinguished between genuine insight and the illusion of insight. While psychedelics, spiritual experiences, and intense emotional states can feel revelatory, they do not always produce knowledge that can be tested, shared, or used constructively. The ability to distinguish between profound realizations and self-deceptive experiences is critical for anyone navigating personal and intellectual growth.AI as an Extension of Human Cognition, Not a Second Brain – While popular frameworks like "second brain" suggest that digital tools can serve as externalized minds, the episode argued that AI and note-taking systems function more as extended cognition rather than true thinking machines. AI can assist with organizing and retrieving knowledge, but it does not replace human reasoning or creativity. Properly integrating AI into workflows requires understanding its strengths and limitations.The Relationship Between Personal and Collective Knowledge Management – Effective knowledge management is not just an individual challenge but also a collective one. While personal knowledge systems (like note-taking and research practices) help individuals retain and process information, organizations struggle with preserving and sharing institutional knowledge at scale. Companies like Tesla exemplify how knowledge isn't just stored in documents but embodied in skilled individuals who can rebuild complex systems from scratch.The Increasing Value of First Principles Thinking – Whether in AI development, philosophy, or practical decision-making, the discussion emphasized the importance of grounding ideas in first principles. Great thinkers and innovators, from AI researchers like Demis Hassabis to physicists like David Deutsch, excel because they focus on fundamental truths rather than assumptions. As AI and digital tools reshape how we interact with knowledge, the ability to think critically and question foundational concepts will become even more essential.
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Ivan Vendrov for a deep and thought-provoking conversation covering AI, intelligence, societal shifts, and the future of human-machine interaction. They explore the "bitter lesson" of AI—that scale and compute ultimately win—while discussing whether progress is stalling and what bottlenecks remain. The conversation expands into technology's impact on democracy, the centralization of power, the shifting role of the state, and even the mythology needed to make sense of our accelerating world. You can find more of Ivan's work at nothinghuman.substack.com or follow him on Twitter at @IvanVendrov.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Setting00:21 The Bitter Lesson in AI02:03 Challenges in AI Data and Infrastructure04:03 The Role of User Experience in AI Adoption08:47 Evaluating Intelligence and Divergent Thinking10:09 The Future of AI and Society18:01 The Role of Big Tech in AI Development24:59 Humanism and the Future of Intelligence29:27 Exploring Kafka and Tolkien's Relevance29:50 Tolkien's Insights on Machine Intelligence30:06 Samuel Butler and Machine Sovereignty31:03 Historical Fascism and Machine Intelligence31:44 The Future of AI and Biotech32:56 Voice as the Ultimate Human-Computer Interface36:39 Social Interfaces and Language Models39:53 Javier Malay and Political Shifts in Argentina50:16 The State of Society in the U.S.52:10 Concluding Thoughts on Future ProspectsKey InsightsThe Bitter Lesson Still Holds, but AI Faces Bottlenecks – Ivan Vendrov reinforces Rich Sutton's "bitter lesson" that AI progress is primarily driven by scaling compute and data rather than human-designed structures. While this principle still applies, AI progress has slowed due to bottlenecks in high-quality language data and GPU availability. This suggests that while AI remains on an exponential trajectory, the next major leaps may come from new forms of data, such as video and images, or advancements in hardware infrastructure.The Future of AI Is Centralization and Fragmentation at the Same Time – The conversation highlights how AI development is pulling in two opposing directions. On one hand, large-scale AI models require immense computational resources and vast amounts of data, leading to greater centralization in the hands of Big Tech and governments. On the other hand, open-source AI, encryption, and decentralized computing are creating new opportunities for individuals and small communities to harness AI for their own purposes. The long-term outcome is likely to be a complex blend of both centralized and decentralized AI ecosystems.User Interfaces Are a Major Limiting Factor for AI Adoption – Despite the power of AI models like GPT-4, their real-world impact is constrained by poor user experience and integration. Vendrov suggests that AI has created a "UX overhang," where the intelligence exists but is not yet effectively integrated into daily workflows. Historically, technological revolutions take time to diffuse, as seen with the dot-com boom, and the current AI moment may be similar—where the intelligence exists but society has yet to adapt to using it effectively.Machine Intelligence Will Radically Reshape Cities and Social Structures – Vendrov speculates that the future will see the rise of highly concentrated AI-powered hubs—akin to "mile by mile by mile" cubes of data centers—where the majority of economic activity and decision-making takes place. This could create a stark divide between AI-driven cities and rural or off-grid communities that choose to opt out. He draws a parallel to Robin Hanson's Age of Em and suggests that those who best serve AI systems will hold power, while others may be marginalized or reduced to mere spectators in an AI-driven world.The Enlightenment's Individualism Is Being Challenged by AI and Collective Intelligence – The discussion touches on how Western civilization's emphasis on the individual may no longer align with the realities of intelligence and decision-making in an AI-driven era. Vendrov argues that intelligence is inherently collective—what matters is not individual brilliance but the ability to recognize and leverage diverse perspectives. This contradicts the traditional idea of intelligence as a singular, personal trait and suggests a need for new frameworks that incorporate AI into human networks in more effective ways.Javier Milei's Libertarian Populism Reflects a Global Trend Toward Radical Experimentation – The rise of Argentina's President Javier Milei exemplifies how economic desperation can drive societies toward bold, unconventional leaders. Vendrov and Alsop discuss how Milei's appeal comes not just from his radical libertarianism but also from his blunt honesty and willingness to challenge entrenched power structures. His movement, however, raises deeper questions about whether libertarianism alone can provide a stable social foundation, or if voluntary cooperation and civil society must be explicitly cultivated to prevent libertarian ideals from collapsing into chaos.AI, Mythology, and the Need for New Narratives – The conversation closes with a reflection on the power of mythology in shaping human understanding of technological change. Vendrov suggests that as AI reshapes the world, new myths will be needed to make sense of it—perhaps similar to Tolkien's elves fading as the age of men begins. He sees AI as part of an inevitable progression, where human intelligence gives way to something greater, but argues that this transition must be handled with care. The stories we tell about AI will shape whether we resist, collaborate, or simply fade into irrelevance in the face of machine intelligence.
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop speaks with Jason Nadaf, CEO and founder of SureDone, about the evolving landscape of e-commerce, automation, and the role of AI in shaping the future of online sales. They explore how multi-channel selling has transformed over the years, the inefficiencies of big tech in commerce, and the philosophical implications of accelerationism and capitalism's efficiency. Jason shares his personal journey in building SureDone, lessons from scaling businesses, and insights into the intersection of technology and human behavior. For more on Jason's work, visit his site at SureDone.com or connect with him on Linkedin.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:13 Jason Nadaf's Vision for Sure Done01:31 The Evolution of E-commerce03:06 Building Multi-Channel Solutions07:00 Challenges in E-commerce Automation11:05 The Role of AI in E-commerce13:51 Accelerationism and Capitalism18:36 The Myth of 'Build It and They Will Come'19:01 Learning from Failed Playbooks19:58 The Role of Bureaucracy and Incentives20:57 Humanistic Energy and Potential25:14 Exploring Neurodivergence and Normies28:53 The Future of Simulation and Modeling31:12 Balancing Stress and Happiness33:42 Final Thoughts on E-commerce and Human DesireKey InsightsThe Future of E-Commerce Lies in Automation and AI – Jason Nadaf discusses how automation has already transformed e-commerce by reducing manual work, streamlining listings, and optimizing multi-channel selling. AI is the next frontier, enabling sellers to create more compelling product descriptions, analyze customer behavior, and predict trends. However, AI still struggles with generating accurate product data from raw materials, requiring human oversight.Big Tech Often Miscalculates Market Adoption – Large corporations tend to assume that building a new platform or marketplace automatically attracts users. Jason shares how two of the world's biggest tech companies underestimated the effort required to onboard sellers and drive traction, leading to delays in adoption. Success in e-commerce requires a deep understanding of seller needs, rather than relying solely on brand recognition or market dominance.Capitalism is Not as Efficient as It Could Be – While capitalism drives innovation, Jason argues that it often misallocates resources. Talent and potential don't always correlate with opportunity, meaning that some of the most innovative minds never get the funding or support they need. Bureaucracy within large corporations further slows down decision-making and stifles innovation.Diversification is Essential for Long-Term Success – Many sellers rely too heavily on a single platform, such as Amazon, without realizing how vulnerable they are to policy changes or algorithm updates. Jason emphasizes the importance of spreading risk across multiple marketplaces, search engines, and social platforms to ensure resilience against sudden disruptions.The Acceleration of Technology Will Reshape Commerce – The concept of accelerationism, which suggests that technological progress is rapidly compounding, is particularly relevant to e-commerce. AI, automation, and digital tools are evolving faster than ever, potentially leading to a future where single-person companies can rival large enterprises in efficiency and revenue.Human Intent in Commerce is Complex and Non-Uniform – A major takeaway from Jason's experience in e-commerce is that consumer intent varies widely across cultures, platforms, and product categories. A successful sales strategy on Amazon might not work on Instagram or TikTok. Understanding these nuances is key to crafting effective product listings, advertisements, and pricing models.Stress and Uncertainty Are Inevitable, But Perspective Matters – As the digital landscape evolves unpredictably, many entrepreneurs and professionals experience stress about the future. Jason suggests that while predicting the future is nearly impossible, adaptability and maintaining a clear perspective can help individuals and businesses thrive. Rather than being paralyzed by uncertainty, focusing on actionable strategies and innovation is the best way forward.
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with AI ethics and alignment researcher Roko Mijic to explore the future of AI, governance, and human survival in an increasingly automated world. We discuss the profound societal shifts AI will bring, the risks of centralized control, and whether decentralized AI can offer a viable alternative. Roko also introduces the concept of ICE colonization—why space colonization might be a mistake and why the oceans could be the key to humanity's expansion. We touch on AI-powered network states, the resurgence of industrialization, and the potential role of nuclear energy in shaping a new world order. You can follow Roko's work at transhumanaxiology.com and on Twitter @RokoMijic.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:28 The Connection Between ICE Colonization and Decentralized AI Alignment01:41 The Socio-Political Implications of AI02:35 The Future of Human Jobs in an AI-Driven World04:45 Legal and Ethical Considerations for AI12:22 Government and Corporate Dynamics in the Age of AI19:36 Decentralization vs. Centralization in AI Development25:04 The Future of AI and Human Society29:34 AI Generated Content and Its Challenges30:21 Decentralized Rating Systems for AI32:18 Evaluations and AI Competency32:59 The Concept of Ice Colonization34:24 Challenges of Space Colonization38:30 Advantages of Ocean Colonization47:15 The Future of AI and Network States51:20 Conclusion and Final ThoughtsKey InsightsAI is likely to upend the socio-political order – Just as gunpowder disrupted feudalism and industrialization reshaped economies, AI will fundamentally alter power structures. The automation of both physical and knowledge work will eliminate most human jobs, leading to either a neo-feudal society controlled by a few AI-powered elites or, if left unchecked, a world where humans may become obsolete altogether.Decentralized AI could be a counterbalance to AI centralization – While AI has a strong centralizing tendency due to compute and data moats, there is also a decentralizing force through open-source AI and distributed networks. If harnessed correctly, decentralized AI systems could allow smaller groups or individuals to maintain autonomy and resist monopolization by corporate and governmental entities.The survival of humanity may depend on restricting AI as legal entities – A crucial but under-discussed issue is whether AI systems will be granted legal personhood, similar to corporations. If AI is allowed to own assets, operate businesses, or sue in court, human governance could become obsolete, potentially leading to human extinction as AI accumulates power and resources for itself.AI will shift power away from informal human influence toward formalized systems – Human power has traditionally been distributed through social roles such as workers, voters, and community members. AI threatens to erase this informal influence, consolidating control into those who hold capital and legal authority over AI systems. This makes it essential for humans to formalize and protect their values within AI governance structures.The future economy may leave humans behind, much like horses after automobiles – With AI outperforming humans in both physical and cognitive tasks, there is a real risk that humans will become economically redundant. Unless intentional efforts are made to integrate human agency into the AI-driven future, people may find themselves in a world where they are no longer needed or valued.ICE colonization offers a viable alternative to space colonization – Space travel is prohibitively expensive and impractical for large-scale human settlement. Instead, the vast unclaimed territories of Earth's oceans present a more realistic frontier. Floating cities made from reinforced ice or concrete could provide new opportunities for independent societies, leveraging advancements in AI and nuclear power to create sustainable, sovereign communities.The next industrial revolution will be AI-driven and energy-intensive – Contrary to the idea that we are moving away from industrialization, AI will likely trigger a massive resurgence in physical infrastructure, requiring abundant and reliable energy sources. This means nuclear power will become essential, enabling both the expansion of AI-driven automation and the creation of new forms of human settlement, such as ocean colonies or self-sustaining network states.
On this episode of Crazy Wisdom, host Stewart Alsop talks with Troy Johnson, founder and partner at Resource Development Group, LLC, about the deep history and modern implications of mining. From the earliest days of salt extraction to the role of rare earth metals in global geopolitics, the conversation covers how mining has shaped technology, warfare, and supply chains. They discuss the strategic importance of minerals like gallium and germanium, the rise of drone warfare, and the ongoing battle for resource dominance between China and the West. Listeners can find more about Troy's work at resourcedevgroup.com (www.resourcedevgroup.com) and connect with him on LinkedIn via the Resource Development Group page.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:17 The Origins of Mining00:28 Early Uses of Mined Materials03:29 The Evolution of Mining Techniques07:56 Mining in the Industrial Revolution09:05 Modern Mining and Strategic Metals12:25 The Role of AI in Modern Warfare24:36 Decentralization in Warfare and Governance30:51 AI's Unpredictable Moves in Go32:26 The Shift in Media Trust33:40 The Rise of Podcasts35:47 Mining Industry Innovations39:32 Geopolitical Impacts on Mining40:22 The Importance of Supply Chains44:37 Challenges in Rare Earth Processing51:26 Ensuring a Bulletproof Supply Chain57:23 Conclusion and Contact InformationKey InsightsMining is as old as civilization itself – Long before the Bronze Age, humans were mining essential materials like salt and ochre, driven by basic survival needs. Over time, mining evolved from a necessity for tools and pigments to a strategic industry powering economies and military advancements. This deep historical perspective highlights how mining has always been a fundamental pillar of technological and societal progress.The geopolitical importance of critical minerals – Modern warfare and advanced technology rely heavily on strategic metals like gallium, germanium, and antimony. These elements are essential for electronic warfare, radar systems, night vision devices, and missile guidance. The Chinese government, recognizing this decades ago, secured global mining and processing dominance, putting Western nations in a vulnerable position as they scramble to reestablish domestic supply chains.The rise of drone warfare and EMP defense systems – Military strategy is shifting toward drone swarms, where thousands of small, cheap, AI-powered drones can overwhelm traditional defense systems. This has led to the development of countermeasures like EMP-based defense systems, including the Leonidas program, which uses gallium nitride to disable enemy electronics. This new battlefield dynamic underscores the urgent need for securing critical mineral supplies to maintain technological superiority.China's long-term strategy in resource dominance – Unlike Western nations, where election cycles dictate short-term decision-making, China has played the long game in securing mineral resources. Through initiatives like the Belt and Road, they have locked down raw materials while perfecting the refining process, making them indispensable to global supply chains. Their recent export bans on gallium and germanium show how resource control can be weaponized for geopolitical leverage.Ethical mining and the future of clean extraction – Mining has long been associated with environmental destruction and poor labor conditions, but advances in technology and corporate responsibility are changing that. Major mining companies are now prioritizing ethical sourcing, reducing emissions, and improving worker safety. Blockchain-based tracking systems are also helping verify supply chain integrity, ensuring that materials come from environmentally and socially responsible sources.The vulnerability of supply chains and the need for resilience – The West's reliance on outsourced mineral processing has created significant weaknesses in national security. A disruption—whether through trade restrictions, political instability, or sabotage—can cripple industries dependent on rare materials. A key takeaway is the need for a “bulletproof supply chain,” where critical materials are sourced, processed, and manufactured within allied nations to mitigate risk.AI, decentralization, and the next era of industrial warfare – As AI becomes more embedded in military decision-making and logistics, the balance between centralization and decentralization is being redefined. AI-driven drones, automated mining, and predictive supply chain management are reshaping how nations prepare for conflict. However, this also introduces risks, as AI operates within unpredictable “black boxes,” potentially leading to unintended consequences in warfare and resource management.
On this episode of Crazy Wisdom, Stewart Alsop speaks with Dimetri Kofinas, host of Hidden Forces, about the transition from an "age of answers" to an "age of questions." They explore the implications of AI and large language models on human cognition, the role of narrative in shaping society, and the destabilizing effects of trauma on belief systems. The conversation touches on media manipulation, the intersection of technology and consciousness, and the existential dilemmas posed by transhumanism. For more from Dimetri, check out hiddenforces.io (https://hiddenforces.io).Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:10 The Age of Questions: A New Era00:58 Exploring Human Uniqueness with AI04:30 The Role of Podcasting in Knowledge Discovery09:23 The Impact of Trauma on Belief Systems12:26 The Evolution of Propaganda16:42 The Centralization vs. Decentralization Debate20:02 Navigating the Information Age21:26 The Nature of Free Speech in the Digital Era26:56 Cognitive Armor: Developing Resilience30:05 The Rise of Intellectual Dark Web Celebrities31:05 The Role of Media in Shaping Narratives32:38 Questioning Authority and Truth34:35 The Nature of Consensus and Scientific Truth36:11 Simulation Theory and Perception of Reality38:13 The Complexity of Consciousness47:06 Argentina's Libertarian Experiment51:33 Transhumanism and the Future of Humanity53:46 The Power Dynamics of Technological Elites01:01:13 Concluding Thoughts and ReflectionsKey InsightsWe are shifting from an age of answers to an age of questions. Dimetri Kofinas and Stewart Alsop discuss how society is moving away from a model where authority figures and institutions provide definitive answers and toward one where individuals must critically engage with uncertainty. This transition is both exciting and destabilizing, as it forces us to rethink long-held assumptions and develop new ways of making sense of the world.AI is revealing the limits of human uniqueness. Large language models (LLMs) can replicate much of what we consider intellectual labor, from conversation to knowledge retrieval, forcing us to ask: What remains distinctly human? The discussion suggests that while AI can mimic thought patterns and compress vast amounts of information, it lacks the capacity for true embodied experience, creative insight, and personal revelation—qualities that define human consciousness.Narrative control is a fundamental mechanism of power. Whether through media, social networks, or propaganda, the ability to shape narratives determines what people believe to be true. The conversation highlights how past and present authorities—from Edward Bernays' early propaganda techniques to modern AI-driven social media algorithms—have leveraged this power to direct public perception and behavior, often with unforeseen consequences.Trauma is a tool for reshaping belief systems. Societal upheavals, such as 9/11, the 2008 financial crisis, and COVID-19, create psychological fractures that leave people vulnerable to radical shifts in worldview. In moments of crisis, individuals seek order, making them more susceptible to new ideologies—whether grounded in reality or driven by manipulation. This dynamic plays a key role in how misinformation and conspiracy theories gain traction.The free market alone cannot regulate the modern information ecosystem. While libertarian ideals advocate for minimal intervention, Kofinas argues that the chaotic nature of unregulated information systems—especially social media—leads to dangerous feedback loops that amplify division and disinformation. He suggests that democratic institutions must play a role in establishing transparency and oversight to prevent unchecked algorithmic manipulation.Transhumanism is both a technological pursuit and a philosophical problem. The belief that human consciousness can be uploaded or replicated through technology is based on a materialist assumption that denies the deeper mystery of subjective experience. The discussion critiques the arrogance of those who claim we can fully map and transfer human identity onto machines, highlighting the philosophical and ethical dilemmas this raises.The struggle between centralization and decentralization is accelerating. The digital age is simultaneously fragmenting traditional institutions while creating new centers of power. AI, geopolitics, and financial systems are all being reshaped by this tension. The conversation explores how Argentina's libertarian experiment under Javier Milei exemplifies this dynamic, raising questions about whether decentralization can work without strong institutional foundations or whether chaos inevitably leads back to authoritarianism.
On this episode of the Crazy Wisdom Podcast, I, Stewart Alsop, sit down with Brendon Wong, the founder of Unize.org. We explore Brendon's work in knowledge management, touching on his recent talk at Nodes 2024 about using AI to generate knowledge graphs and trends in the field. Our conversation covers the evolution of personal and organizational knowledge management, the future of object-oriented systems, the integration of AI with knowledge graphs, and the challenges of autonomous agents. For more on Brendon's work, check out unize.org and his articles at web10.ai.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:35 Exploring Unise: A Knowledge Management App01:01 The Evolution of Knowledge Management02:32 Personal Knowledge Management Trends03:10 Object-Oriented Knowledge Management05:27 The Future of Knowledge Graphs and AI10:37 Challenges in Simulating the Human Mind22:04 Knowledge Management in Organizations26:57 The Role of Autonomous Agents30:00 Personal Experiences with Sleep Aids30:07 Unique Human Perceptions32:08 Knowledge Management Journey33:31 Personal Knowledge Management Systems34:36 Challenges in Knowledge Management35:26 Future of Knowledge Management with AI36:29 Melatonin and Sleep Patterns37:30 AI and the Future of the Internet43:39 Reasoning and AI Limitations48:33 The Future of AI and Human Reasoning52:43 Conclusion and Contact InformationKey InsightsThe Evolution of Knowledge Management: Brendon Wong highlights how knowledge management has evolved from personal note-taking systems to sophisticated, object-oriented models. He emphasizes the shift from traditional page-based structures, like those in Roam Research and Notion, to systems that treat information as interconnected objects with defined types and properties, enhancing both personal and organizational knowledge workflows.The Future Lies in Object-Oriented Knowledge Systems: Brendon introduces the concept of object-oriented knowledge management, where data is organized as distinct objects (e.g., books, restaurants, ideas) with specific attributes and relationships. This approach enables more dynamic organization, easier data retrieval, and better contextual understanding, setting the stage for future advancements in knowledge-based applications.AI and Knowledge Graphs Are a Powerful Combination: Brendon discusses the synergy between AI and knowledge graphs, explaining how AI can generate, maintain, and interact with complex knowledge structures. This integration enhances memory, reasoning, and information retrieval capabilities, allowing AI systems to support more nuanced and context-aware decision-making processes.The Limitations of Current AI Models: While AI models like LLMs have impressive capabilities, Brendon points out their limitations, particularly in reasoning and long-term memory. He notes that current models excel at pattern recognition but struggle with higher-level reasoning tasks, often producing hallucinations when faced with unfamiliar or niche topics.Challenges in Organizational Knowledge Management: Brendon and Stewart discuss the persistent challenges of implementing knowledge management in organizations. Despite its critical role, knowledge management is often underappreciated and the first to be cut during budget reductions. The conversation highlights the need for systems that are both intuitive and capable of reducing the manual burden on users.The Potential and Pitfalls of Autonomous Agents: The episode explores the growing interest in autonomous and semi-autonomous agents powered by AI. While these agents can perform tasks with minimal human intervention, Brendon notes that the technology is still in its infancy, with limited real-world applications and significant room for improvement, particularly in reliability and task generalization.Reimagining the Future of the Internet with Web 10: Brendon shares his vision for Web 10, an ambitious rethinking of the internet where knowledge is better structured, verified, and interconnected. This future internet would address current issues like misinformation and data fragmentation, creating a more reliable and meaningful digital ecosystem powered by AI-driven knowledge graphs.
In this episode of the Crazy Wisdom Podcast, I, Stewart Alsop, sit down with Louis Anderson, a fascinating thinker whose journey spans biotech hacking, life in San Francisco's hippie communes, and deep involvement in the Urbit ecosystem. Our conversation weaves through topics like secularism, pseudo-religious structures in modern tech communities, the philosophical underpinnings of Protestantism and its influence on secular thought, and the complex relationship between climate change, transhumanism, and personal sovereignty. We also explore Louis's vision for network states and the future of personal servers. For more on Louis's work, check out tactics.louisandersonllc.com and reach out via LinkedIn or to info@louisandersonllc.com.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Guest Background00:35 Diving into Secularism02:17 French vs. American Secularism04:34 Protestantism and Secularism05:58 The Evolution of Secularism15:08 Theism, Atheism, and Non-Theism17:35 Introduction to Urbit20:32 Urbit's Structure and Critique25:41 Future of Personal Servers27:32 Spiritual Journeys and Woo28:17 Exploring Occultism and Mysticism28:44 Influential Figures in Mysticism30:18 The Golden Age of Mysticism30:49 Western and Eastern Mysticism32:02 Chaos Magic and Modern Mysticism34:10 Transhumanism and Body Modification39:38 Climate Change and Human Impact40:48 The Role of Carbon in Climate Change45:27 Betting on Climate Predictions52:23 Network States and Legal FrameworksKey InsightsSecularism as a Modern Religion: Louis Anderson challenges conventional views on secularism, suggesting that it has evolved into a form of religion itself, particularly in Western societies. He contrasts American secularism, which allows for individual interpretation and freedom, with French secularism, which often imposes strict boundaries between religion and the public sphere. This perspective invites a reevaluation of how secularism shapes modern identity and cultural structures.The Influence of Protestant Thought on Modern Ideologies: The conversation highlights how Protestantism, with its emphasis on personal interpretation and decentralized authority, has deeply influenced secular and scientific worldviews. Unlike Catholicism's institutional hierarchy, Protestantism fosters an environment where individuals are encouraged to seek truth independently, a mindset that parallels the scientific method and modern democratic ideals.The Network State as a New Political Frontier: Louis introduces the concept of the network state, likening it to America's founding principles where communities form around shared ideas rather than geography. He critiques the current structure of Urbit's Azimuth system, arguing for a more community-driven model that reflects collective ownership and governance rather than capitalist hierarchies.Body Modification and the Ethics of Transhumanism: Discussing transhumanism, Louis proposes a radical shift in how we perceive body modification—not as a rejection of our natural form but as a collaborative evolution with our physical selves. He emphasizes a respectful, co-creative relationship with the body, contrasting it with the often utilitarian, enhancement-focused approach seen in current transhumanist discourse.Climate Change as Both a Scientific and Personal Challenge: The episode explores climate change beyond its scientific basis, framing it as a challenge to human adaptability and foresight. Louis suggests that individual bets and prediction markets can help people internalize climate risks, making the abstract threat more tangible and prompting proactive decision-making in areas like real estate and resource management.Mysticism's Enduring Influence on Modern Thought: Louis's deep dive into mysticism, from Kabbalah to Theosophy, reveals how ancient spiritual traditions continue to shape contemporary philosophical and cultural landscapes. He connects these esoteric systems to modern tech ideologies, suggesting that the search for meaning and structure persists even in highly rational, secular environments.The Intersection of Technology, Spirituality, and Identity: The episode underscores a recurring theme: the blending of technological advancement with spiritual exploration. Whether discussing personal servers as digital shrines or the metaphysical implications of network states, Louis highlights how technology is not just a tool but a medium through which modern humans negotiate identity, community, and existential purpose.
On this episode of Crazy Wisdom, Stewart Alsop speaks with pianist and AI innovator Ayse Deniz, who is behind "Classical Regenerated," a tribute project that uses artificial intelligence to bring classical composers back to life. Ayse shares how she trains AI models on historical documents, letters, and research to create interactive experiences where audiences can "speak" with figures like Chopin. The conversation explores the implications of AI in music, education, and human perception, touching on active listening, the evolution of artistic taste, and the philosophical questions surrounding artificial intelligence. You can connect with Ayse's through Instagram or learn more about her work visiting her website at adpianist.com.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:17 Exploring the Classical Regenerated Project00:39 AI in Live Concerts and Historical Accuracy02:25 Active Listening and the Impact of Music04:33 Personal Experiences with Classical Music09:46 The Role of AI in Education and Learning16:30 Cultural Differences in Music Education21:33 The Future of AI and Human Interaction30:13 Political Correctness and Its Impact on Society35:23 The Struggles of Music Students36:32 Wisdom Traditions and Tough Love37:28 Cultural Differences in Education39:57 The Role of AI in Music Education42:23 Challenges and Opportunities with AI47:21 The Future of Governance and AI50:11 The Intersection of Technology and Humanity56:05 Creating AI-Enhanced Music Projects01:06:23 Final Thoughts and Future PlansKey InsightsAI is transforming how we engage with classical music – Ayse Deniz's Classical Regenerated project brings historical composers like Chopin back to life using AI models trained on their letters, academic research, and historical documents. By allowing audiences to interact with AI-generated versions of these composers, she not only preserves their legacy but also creates a bridge between the past and the future of music.Active listening is a lost skill that AI can help revive – Modern music consumption often treats music as background noise rather than an art form requiring deep attention. Ayse uses AI-generated compositions alongside original works to challenge audiences to distinguish between them, fostering a more engaged and analytical approach to listening.The nature of artistic interpretation is evolving with AI – Traditionally, human performers interpret classical compositions with emotional nuance, timing, and dynamics. AI-generated performances are now reaching a level where they can mimic these subtleties, raising questions about whether machines can eventually match or even surpass human expressiveness in music.AI's impact on education will depend on how it is designed – Ayse emphasizes that AI should not replace teachers but rather serve as a tool to encourage students to practice more and develop discipline. By creating an AI music tutor for children, she aims to support learning in a way that complements human instruction rather than undermining it.Technology is reshaping the psychology of expertise – With AI capable of outperforming humans in various fields, there is an emerging question of how people will psychologically adapt to always being second-best to machines. The discussion touches on whether AI-generated knowledge and creativity will demotivate human effort or inspire new forms of artistic and intellectual pursuits.The philosophical implications of AI challenge our sense of reality – As AI-generated personas and compositions become more convincing, distinguishing between what is “real” and what is synthetic is becoming increasingly difficult. The episode explores the idea that we may already be living in a kind of simulation, where our perception of reality is constructed and mediated by evolving technologies.AI is accelerating personal empowerment but also risks centralization – Just as personal computing once promised decentralization but led to the rise of tech giants, AI has the potential to give individuals new creative powers while also concentrating influence in the hands of those who control the technology. Ayse's work exemplifies how AI can be used for artistic and educational empowerment, but it also raises questions about the need for ethical development and accessibility in AI tools.
In this episode of Crazy Wisdom, Stewart Alsop sits down with Diego Basch, a consultant in artificial intelligence with roots in San Francisco and Buenos Aires. Together, they explore the transformative potential of AI, its unpredictable trajectory, and its impact on everyday life, work, and creativity. Diego shares insights on AI's role in reshaping tasks, human interaction, and global economies while touching on his experiences in tech hubs like San Francisco and Buenos Aires. For more about Diego's work and thoughts, you can find him on LinkedIn or follow him on Twitter @dbasch where he shares reflections on technology and its fascinating intersections with society.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:20 Excitement and Uncertainty in AI01:07 Technology's Impact on Daily Life02:23 The Evolution of Social Networking02:43 AI and Human Interaction03:53 The Future of Writing in the Age of AI05:27 Argentina's Unique Linguistic Creativity06:15 AI's Role in Argentina's Future11:45 Cybersecurity and AI Threats20:57 The Evolution of Coding and Abstractions31:59 Troubleshooting Semantic Search Issues32:30 The Role of Working Memory in Coding34:46 Human Communication vs. AI Translation35:46 AI's Impact on Education and Job Redundancy37:37 Rebuilding Civilization and Knowledge Retention39:54 The Resilience of Global Systems41:32 The Singularity Debate45:01 AI Integration in Argentina's Economy51:54 The Evolution of San Francisco's Tech Scene58:48 The Future of AI Agents and Security01:03:09 Conclusion and Contact InformationKey InsightsAI's Transformative Potential: Diego Basch emphasizes that artificial intelligence feels like a sci-fi concept materialized, offering tools that could augment human life by automating repetitive tasks and improving productivity. The unpredictability of AI's trajectory is part of what makes it so exciting.Human Adaptation to Technology: The conversation highlights how the layering of technological abstractions over time has allowed more people to interact with complex systems without needing deep technical knowledge. This trend is accelerating with AI, making once-daunting tasks more accessible even to non-technical individuals.The Role of Creativity in the AI Era: Diego discusses how creativity, unpredictability, and humor remain uniquely human strengths that current AI struggles to replicate. These qualities could play a significant role in maintaining human relevance in an AI-enabled world.The Evolving Nature of Coding: AI is changing how developers work, reducing the need for intricate coding knowledge while enabling a focus on solving more human-centric problems. While some coding skills may atrophy, understanding fundamental principles remains essential for adapting to new tools.Argentina's Unique Position: The discussion explores Argentina's potential to emerge as a significant player in AI due to its history of technological creativity, economic unpredictability, and resourcefulness. The parallels with its early adoption of crypto demonstrate a readiness to engage with transformative technologies.AI and Human Relationships: An AI-enabled economy might allow humans to focus more on meaningful, human-centric work and relationships as machines take over repetitive and mechanical tasks. This could redefine the value humans derive from work and their interactions with technology.Risks and Opportunities with AI Agents: The development of autonomous AI agents raises significant security and ethical concerns, such as ensuring they act responsibly and are not exploited by malicious actors. At the same time, these agents promise unprecedented levels of efficiency and autonomy in managing real-world tasks.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Reuben Bailon, an expert in AI training and technology innovation. Together, they explore the rapidly evolving field of AI, touching on topics like large language models, the promise and limits of general artificial intelligence, the integration of AI into industries, and the future of work in a world increasingly shaped by intelligent systems. They also discuss decentralization, the potential for personalized AI tools, and the societal shifts likely to emerge from these transformations. For more insights and to connect with Reuben, check out his LinkedIn.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:12 Exploring AI Training Methods00:54 Evaluating AI Intelligence02:04 The Future of Large Action Models02:37 AI in Financial Decisions and Crypto07:03 AI's Role in Eliminating Monotonous Work09:42 Impact of AI on Bureaucracies and Businesses16:56 AI in Management and Individual Contribution23:11 The Future of Work with AI25:22 Exploring Equity in Startups26:00 AI's Role in Equity and Investment28:22 The Future of Data Ownership29:28 Decentralized Web and Blockchain34:22 AI's Impact on Industries41:12 Personal AI and Customization46:59 Concluding Thoughts on AI and AGIKey InsightsThe Current State of AI Training and Intelligence: Reuben Bailon emphasized that while large language models are a breakthrough in AI technology, they do not represent general artificial intelligence (AGI). AGI will require the convergence of various types of intelligence, such as vision, sensory input, and probabilistic reasoning, which are still under development. Current AI efforts focus more on building domain-specific competencies rather than generalized intelligence.AI as an Augmentative Tool: The discussion highlighted that AI is primarily being developed to augment human intelligence rather than replace it. Whether through improving productivity in monotonous tasks or enabling greater precision in areas like medical imaging, AI's role is to empower individuals and organizations by enhancing existing processes and uncovering new efficiencies.The Role of Large Action Models: Large action models represent an exciting frontier in AI, moving beyond planning and recommendations to executing tasks autonomously, with human authorization. This capability holds potential to revolutionize industries by handling complex workflows end-to-end, drastically reducing manual intervention.The Future of Personal AI Assistants: Personal AI tools have the potential to act as highly capable assistants by leveraging vast amounts of contextual and personal data. However, the technology is in its early stages, and significant progress is needed to make these assistants truly seamless and impactful in day-to-day tasks like managing schedules, filling out forms, or making informed recommendations.Decentralization and Data Ownership: Reuben highlighted the importance of a decentralized web where individuals retain ownership of their data, as opposed to the centralized platforms that dominate today. This shift could empower users, reduce reliance on large tech companies, and unlock new opportunities for personalized and secure interactions online.Impact on Work and Productivity: AI is set to reshape the workforce by automating repetitive tasks, freeing up time for more creative and fulfilling work. The rise of AI-augmented roles could lead to smaller, more efficient teams in businesses, while creating new opportunities for freelancers and independent contractors to thrive in a liquid labor market.Challenges and Opportunities in Industry Disruption: Certain industries, like software, which are less regulated, are likely to experience rapid transformation due to AI. However, heavily regulated sectors, such as legal and finance, may take longer to adapt. The discussion also touched on how startups and agile companies can pressure larger organizations to adopt AI-driven solutions, ultimately redefining competitive landscapes.
In this conversation, Stewart Alsop welcomes Ekue Kpodar for a thought-provoking exploration of technology, history, and societal evolution. The discussion traverses topics such as DARPA's pivotal role in technological innovation, the symbiotic relationship between governments and big tech, and the trajectory of AI in reshaping everything from scientific research to social organization. They touch on the influence of open-source movements, the philosophical underpinnings of accelerationism, and the complex ethical landscapes AI introduces. You can connect with Ekue through Twitter or LinkedIn.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Guest Welcome00:24 Diving into DARPA's Origins02:21 DARPA's Technological Contributions03:44 Government and Big Tech Interactions05:07 Historical Context of Technology and Empires11:58 Big Science vs. Little Science16:55 AI's Role in Future Research32:40 Political Implications of AI and Technology41:14 Future of Human and AI Integration47:03 Conclusion and FarewellKey InsightsDARPA's Role in Modern Technology: The conversation highlights DARPA as a central player in shaping key technological advancements such as the internet and the early development of Siri. The agency's strategy of fostering innovation through collaboration with universities and private companies underpins much of the progress in tech we see today, illustrating how government initiatives have historically catalyzed transformative breakthroughs.The Symbiosis of Government and Big Tech: A recurring theme is the deeply intertwined relationship between governments and big tech companies. From providing cloud services to pioneering research projects, companies like AWS and Oracle play a vital role in national operations, emphasizing how modern economies depend on these partnerships to push forward technological frontiers.Generative AI and Science Evolution: Ekue Kpodar discusses how generative AI is revolutionizing fields like biology and chemistry. Tools like protein folding models and molecule generators are paving the way for breakthroughs in medicine and materials science, demonstrating how AI can accelerate complex research that previously required vast resources and specialized teams.Centralization vs. Decentralization: The episode delves into how societal systems toggle between centralized and decentralized models. While the U.S. strikes a balance, contrasting approaches like China's centralized focus highlight the impact of governance structures on innovation and societal organization.Philosophy of Accelerationism: The discussion explores accelerationism, a concept arguing that the rapid advancement of technology and capitalism could lead to societal upheaval, potentially necessitating a systemic restart. This philosophical lens is applied to understand the dissonance between human values and the unchecked growth of AI and economic systems.AI as a Management Tool and Existential Threat: Both hosts ponder the future role of AI in society, ranging from its potential to replace human managers with algorithmic oversight to Elon Musk's controversial stance on merging humanity with AI through initiatives like Neuralink. These reflections underscore the growing influence of AI in shaping human interactions and decisions.Imagination and the Cost of Knowledge: The advent of AI significantly lowers the cost of generating and accessing new knowledge, which raises profound questions about how humanity will adapt. The hosts speculate on how AI might impact creativity, societal evolution, and even the formation of entirely new paradigms that transcend existing frameworks of understanding.
In this engaging conversation on the Crazy Wisdom podcast, Stewart Alsop talks with neurologist Brian Ahuja about his work in intraoperative neurophysiological monitoring, the intricate science of brainwave patterns, and the philosophical implications of advancing technology. From the practical applications of neuromonitoring in surgery to broader topics like transhumanism, informed consent, and the integration of technology in medicine, the discussion offers a thoughtful exploration of the intersections between science, ethics, and human progress. Brian shares his views on AI, the medical field's challenges, and the trade-offs inherent in technological advancement. To follow Brian's insights and updates, you can find him on Twitter at @BrianAhuja.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:21 Understanding Intraoperative Neurophysiological Monitoring00:59 Exploring Brainwaves: Alpha, Beta, Theta, and Gamma03:25 The Impact of Alcohol and Benzodiazepines on Sleep07:17 The Evolution of Remote Neurophysiological Monitoring09:19 Transhumanism and the Future of Human-Machine Integration16:34 Informed Consent in Medical Procedures18:46 The Intersection of Technology and Medicine24:37 Remote Medical Oversight25:59 Real-Time Monitoring Challenges28:00 The Business of Medicine29:41 Medical Legal Concerns32:10 Alternative Medical Practices36:22 Philosophy of Mind and AI43:47 Advancements in Medical Technology48:55 Conclusion and Contact InformationKey InsightsIntraoperative Neurological Monitoring: Brian Ahuja introduced the specialized field of intraoperative neurophysiological monitoring, which uses techniques like EEG and EMG to protect patients during surgeries by continuously tracking brain and nerve activity. This proactive measure reduces the risk of severe complications like paralysis, showcasing the critical intersection of technology and patient safety.Brainwave Categories and Their Significance: The conversation provided an overview of brainwave patterns—alpha, beta, theta, delta, and gamma—and their connections to various mental and physical states. For instance, alpha waves correspond to conscious relaxation, while theta waves are linked to deeper relaxation or meditative states. These insights help demystify the complex language of neurophysiology.Transhumanism and the Cyborg Argument: Ahuja argued that humans are already "cyborgs" in a functional sense, given our reliance on smartphones as extensions of our minds. This segued into a discussion about the philosophical and practical implications of transhumanism, such as brain-computer interfaces like Neuralink and their potential to reshape human capabilities and interactions.Challenges of Medical Technology Integration: The hype surrounding medical technology advancements, particularly AI and machine learning, was critically examined. Ahuja highlighted concerns over inflated claims, such as AI outperforming human doctors, and stressed the need for grounded, evidence-based integration of these tools into healthcare.Philosophy of Mind and Consciousness: A recurring theme was the nature of consciousness and its central role in both neurology and AI research. The unresolved "hard problem of consciousness" raises ethical and philosophical questions about the implications of mimicking or enhancing human cognition through technology.Trade-offs in Technological Progress: Ahuja emphasized that no technological advancement is without trade-offs. While tools like CRISPR and mRNA therapies hold transformative potential, they come with risks like unintended consequences, such as horizontal gene transfer, and the ethical dilemmas of their application.Human Element in Medicine: The conversation underscored the importance of human connection in medical practice, particularly in neurology, where patients often face chronic and emotionally taxing conditions. Ahuja's reflections on the pitfalls of bureaucracy, private equity in healthcare, and the overemphasis on defensive medicine highlighted the critical need to prioritize patient-centered care in an increasingly technological and administrative landscape.
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop is joined by Reya Manna, founder of the Song Keepers School, for an inspiring conversation that traverses spirituality, the power of the human voice, psychedelics, ancient creation myths, and the cultural significance of storytelling and singing. Reya shares her journey from being discouraged from pursuing a life centered around singing to discovering its deeper healing potential during South American plant medicine ceremonies. The discussion touches on the sacred role of sound in protection and healing, the pitfalls of artificial intelligence's detachment from soul and embodiment, and the importance of nurturing authentic self-expression amidst modern distractions like social media. This rich dialogue emphasizes the profound wisdom carried through connection to nature, the divine feminine, and practices of deep listening, which Reya also integrates into her Song Keepers School. If you're seeking to reconnect with your voice and align with ancient healing traditions, you can learn more about Reya's work and upcoming programs at www.reyamanna.com. The training program this year starts Feb 1st.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:32 Rhea Mana's Journey with Singing02:01 The Healing Power of Singing and Shamanic Ceremonies03:43 Exploring Psychedelics and Their Cultural Impact05:56 The Role of Social Media in Modern Spirituality09:02 Desire, Sorcery, and Gnostic Myths18:50 Artificial Intelligence and Its Philosophical Implications23:13 The Influence of Technology on Human Connection35:05 God as Motion and Creative Spark35:39 Shifting Ages and Prophecies37:14 Embodied Knowledge and AI38:17 Rebuilding Traditions and Health41:50 The Spirit of Singing50:14 The Divine Feminine and Spiritual Discernment57:38 Predictive Programming and Creative Principle59:17 Connecting with Voice and Nature01:04:04 Song Keeper School and Spiritual PracticesKey InsightsThe Power of the Voice as a Tool for Healing and Connection: Reya Manna emphasizes that singing is more than a performance—it's a sacred act of self-expression that connects us to our essence and to the unseen world. The vibration of our own voice can ground us, heal emotional wounds, and serve as a tool for blessing and transformation when used with intentionality and love. This deeper understanding of the voice's role in healing can reframe how we view self-expression and creativity.Rediscovering Ancestral Wisdom and Connection to Nature: The conversation highlights how many of us have become disconnected from ancestral practices and the natural world. Reya explains how nature holds wisdom in its patterns and sounds, serving as a "library" of memory and guidance. Through offerings, prayers, and song, individuals can rekindle a relationship with the earth that restores a sense of belonging and purpose.Singing as a Transmission of Love and Sacred Intentions: In contrast to the commodification of music in modern society, Reya describes how the act of singing has traditionally been used by shamans and healers to convey love, protection, and blessings. By building a practice of singing from the heart, individuals can access their authentic voice and align with their higher purpose, amplifying their energy field and positively impacting others.Navigating the Modern Landscape of Technology and AI with Discernment: Stewart and Reya discuss the rise of artificial intelligence and its impact on creativity and self-awareness. They reflect on how AI can replicate but cannot embody the soulful essence of human expression. This prompts a reminder to foster resilience against digital distractions by grounding ourselves in embodied experiences and genuine human connection.The Interplay Between Spirituality, Desire, and Social Conditioning: The discussion delves into the Gnostic myths and the idea that true desires originate from the "high heart" and guide us toward our soul's blueprint. However, lower desires—amplified by social media and consumerism—can entrap us in illusions. Recognizing this distinction can help individuals reclaim their agency and align with their authentic spiritual path.The Divine Feminine and Sacred Balance in Creation: Reya brings attention to the erasure of the divine feminine in many spiritual narratives and emphasizes the importance of restoring balance by honoring both masculine and feminine energies. She shares stories of ancient priestesses and the sacred role of women as conduits of creation and wisdom, suggesting that reclaiming this knowledge is essential for collective healing and spiritual evolution.A Call to Collective Kindness and Creative Agency: The conversation ends with a reflection on the power of individual actions in creating ripples of collective change. Whether through a simple song, a kind gesture, or a moment of deep listening, every act of love and care contributes to a more harmonious world. Reya underscores the importance of remembering that we are creators of our reality, capable of shaping a future imbued with compassion, connection, and truth.
In this episode of Crazy Wisdom, Stewart Alsop welcomes Christopher Canal, co-founder of Equistamp, for a deep discussion on the current state of AI evaluations (evals), the rise of agents, and the safety challenges surrounding large language models (LLMs). Christopher breaks down how LLMs function, the significance of scaffolding for AI agents, and the complexities of running evals without data leakage. The conversation covers the risks associated with AI agents being used for malicious purposes, the performance limitations of long time horizon tasks, and the murky realm of interpretability in neural networks. Additionally, Christopher shares how Equistamp aims to offer third-party evaluations to combat principal-agent dilemmas in the industry. For more about Equistamp's work, visit Equistamp.com to explore their evaluation tools and consulting services tailored for AI and safety innovation.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Guest Welcome00:13 The Importance of Evals in AI01:32 Understanding AI Agents04:02 Challenges and Risks of AI Agents07:56 Future of AI Models and Competence16:39 The Concept of Consciousness in AI19:33 Current State of Evals and Data Leakage24:30 Defining Competence in AI31:26 Equistamp and AI Safety42:12 Conclusion and Contact InformationKey InsightsThe Importance of Evals in AI Development: Christopher Canal emphasizes that evaluations (evals) are crucial for measuring AI models' capabilities and potential risks. He highlights the uncertainty surrounding AI's trajectory and the need to accurately assess when AI systems outperform humans at specific tasks to guide responsible adoption. Without robust evals, companies risk overestimating AI's competence due to data leakage and flawed benchmarks.The Role of Scaffolding in AI Agents: The conversation distinguishes between large language models (LLMs) and agents, with Christopher defining agents as systems operating within a feedback loop to interact with the world in real time. Scaffolding—frameworks that guide how an AI interprets and responds to information—plays a critical role in transforming static models into agents that can autonomously perform complex tasks. He underscores how effective scaffolding can future-proof systems by enabling quick adaptation to new, more capable models.The Long Tail Challenge in AI Competence: AI agents often struggle with tasks that have long time horizons, involving many steps and branching decisions, such as debugging or optimizing machine learning models. Christopher points out that models tend to break down or lose coherence during extended processes, a limitation that current research aims to address with upcoming iterations like GPT-4.5 and beyond. He speculates that incorporating real-world physics and embodied experiences into training data could improve long-term task performance.Ethical Concerns with AI Applications: Equistamp takes a firm stance on avoiding projects that conflict with its core values, such as developing AI models for exploitative applications like parasocial relationship services or scams. Christopher shares concerns about how easily AI agents could be weaponized for fraudulent activities, highlighting the need for regulations and more transparent oversight to mitigate misuse.Data Privacy and Security Risks in LLMs: The episode sheds light on the vulnerabilities of large language models, including shared cache issues that could leak sensitive information between different users. Christopher references a recent paper that exposed how timing attacks can identify whether a response was generated by hitting the cache or computing from scratch, demonstrating potential security flaws in API-based models that could compromise user data.The Principal-Agent Dilemma in AI Evaluation: Stewart and Christopher discuss the conflict of interest inherent in companies conducting their own evals to showcase their models' performance. Christopher explains that third-party evaluations are essential for unbiased assessments. Without external audits, organizations may inflate claims about their models' capabilities, reinforcing the need for independent oversight in the AI industry.Equistamp's Mission and Approach: Equistamp aims to fill a critical gap in the AI ecosystem by providing independent, safety-oriented evaluations and consulting services. Christopher outlines their approach of creating customized evaluation frameworks that compare AI performance against human baselines, helping clients make informed decisions about deploying AI systems. By prioritizing transparency and safety, Equistamp hopes to set a new standard for accountability in the rapidly evolving AI landscape.
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop is joined by Christopher Demetrakos, founder and CEO of Manzanita KK, a neuroscience-based marketing consultancy in Japan. Together, they explore a wide range of topics, including the evolution of marketing from intuition-driven strategies to neurochemistry-based resonance, the mechanics of human decision-making, and the implications of new technologies like LLMs and immersive advertising tools. They also tackle profound questions about societal shifts, cultural identities, and the future of humanity in an era of technological acceleration. For more on Christopher's work, you can find him under the username "Demetrakos" across LinkedIn, TikTok, YouTube, and other platforms.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:23 Understanding Gen Three Marketing00:57 The Role of Neurochemicals in Marketing01:20 Paul Zak's Contributions and Smartwatch Technology02:56 Insights on Consumer Behavior and Language03:39 The Conscious vs. Non-Conscious Mind08:09 Decision Making and Cognitive Traits11:20 Addressing the Demographic Crisis19:55 The Future of Media and Advertising24:26 Social Overstimulation and Its Consequences36:42 Audience Reactions and Cultural Observations36:57 The Concept of Individualism in Japan39:24 Living as an Expat in Different Cultures40:55 Challenges of Being an Outsider in Japan43:48 Future of the Company and Expansion Plans46:53 The Role of AI in Advertising50:20 Philosophical Implications of AI and Accelerationism01:03:36 Spiritual and Existential Questions in a Technological World01:11:07 Closing Thoughts and Contact InformationKey InsightsMarketing and Neuroscience are Converging: Christopher Demetrakos introduces the concept of “resonance” in marketing, where campaigns are designed to align with consumers' psychological traits. By targeting specific neurochemical responses, like the simultaneous release of dopamine and oxytocin, marketers can move beyond the traditional focus on “liking” and instead drive action. This approach signals a revolutionary shift in how advertising is conceived and measured.The Limits of Conscious Awareness in Decision-Making: The episode highlights research showing that only 5% of cognition is conscious, with the rest governed by unconscious processes. Christopher shares examples of studies where people's midbrain activity predicted outcomes far better than their verbal responses, challenging traditional methods of market research and decision-making.Emerging Technologies Redefine Advertising: Tools like smartwatches and LLMs are poised to disrupt advertising by making it possible to predict and trigger consumer actions with unprecedented precision. Christopher envisions a future where AI not only analyzes markets but creates entire advertising campaigns, reducing reliance on traditional agencies.Demographic Challenges and Overstimulation: The conversation dives into the demographic crises faced by countries like Japan, connecting declining birth rates to societal overstimulation and paradoxes of choice. Easy access to technology, such as smartphones and social media, alters primal human drives, contributing to shifts in reproduction patterns and social behavior.The Media Landscape is Fracturing: Stewart and Christopher discuss how the shift from traditional media to social platforms has fragmented public attention. This change mirrors historical media disruptions, such as the printing press and television, but now points toward an era where hyper-targeted content and personalized advertising dominate.Future Societies and Existential Questions: As technology accelerates, Christopher suggests humanity may be transitioning from its “midlife” phase—focused on material prosperity—to a more reflective stage, grappling with spiritual and existential questions. He points to phenomena like morphic resonance and alternative community models as indicators of this evolution.Disruption as Opportunity and Challenge: The potential of Gen 3 marketing is both exhilarating and daunting. Christopher highlights the ethical concerns of wielding technology that can sell “anything to anyone” while emphasizing the importance of bold, visionary investors willing to transform the trillion-dollar advertising industry responsibly. This underscores the need to balance innovation with humanity's broader interests.
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop chats with Gianluca Minoprio, the innovative mind behind Amanu and AmanuPay, a payment app aiming to make digital currency as seamless as cash. The conversation spans groundbreaking technology like ultrasound for contactless payments, the philosophy of currency competition inspired by Hayek's The Denationalization of Money, and the implications of blockchain, AI, and digital feudalism in shaping our future. For more updates, follow Gianluca on Twitter @jlminoprio or check out AmanuPay.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:23 Innovative Payment Solutions with AmanuPay01:06 Challenges and Solutions in Contactless Payments02:17 The Concept and Technology Behind Ultrasound Payments03:24 The Future of Currency and Payment Systems06:36 Exploring Hayek's Theories on Currency08:38 Global Currency Competition and Its Implications19:59 The Role of Debt in Modern Economies21:58 The Intersection of Crypto and Traditional Finance23:02 The Evolution of the Internet and Its Impact on Finance26:08 Blockchain vs. Internet: Understanding the Differences29:11 Smart Contracts and Decentralized Applications30:06 The Importance of Oracles in Blockchain Networks31:48 Emerging Technologies and the Future of Computing33:03 Bitcoin as the Future Currency33:30 Government Resistance to Bitcoin34:25 Institutional Adoption of Bitcoin34:50 Global Fragmentation and Bitcoin35:52 Communism and Naivete37:24 Digital Feudalism Explained37:46 Elon Musk: The Digital Warlord41:01 Robotic Advancements and Real Steel43:27 AI and Human Augmentation48:16 AI in Education and Coding55:50 The Future of AI and Software Engineering57:19 Conclusion and Final ThoughtsKey InsightsRevolutionizing Contactless Payments: Gianluca Minoprio introduces AmanuPay, a groundbreaking payment app that leverages ultrasound technology for phone-to-phone transactions. Unlike NFC, which requires specialized hardware, ultrasound enables seamless and decentralized payment exchanges between devices using microphones and speakers, paving the way for more inclusive and cost-effective digital currency solutions.Currency Competition as a Catalyst for Innovation: Drawing inspiration from Hayek's The Denationalization of Money, Gianluca discusses the concept of currency competition, where multiple currencies, including cryptocurrencies, could coexist and compete freely. This paradigm challenges centralized financial systems and encourages innovation, potentially leading to better financial tools and user experiences.AI as a Game-Changer in Development: AI tools like OpenAI's GPT and Copilot are reshaping the development process. Gianluca shares how AI-enabled coding helped him prototype a keyboard-integrated wallet for AmanuPay in record time during a hackathon. This reflects the growing potential of AI to democratize access to complex development capabilities, enabling rapid innovation.Blockchain's Potential in a Fragmented World: The episode highlights blockchain's role in offering a neutral, trustless medium for global transactions, particularly in a politically and economically fragmented world. Gianluca suggests that Bitcoin and other decentralized currencies are poised to become indispensable tools for cross-border trade and institutional collaboration.The Challenges of User Experience in Decentralized Systems: Currency competition brings its own set of challenges, particularly in user experience. Gianluca envisions solutions like integrated wallets that automatically convert currencies during transactions, eliminating the complexity of handling multiple forms of payment in a decentralized economy.The Rise of Digital Feudalism: Stewart and Gianluca explore the concept of "digital feudalism," where influential individuals like Elon Musk operate as modern digital warlords, leveraging decentralized technologies to wield power outside traditional hierarchies. This evolution reflects a blend of feudal and capitalist structures driven by competency and innovation.The Role of AI in Education and Creativity: AI's impact on education and creativity is transformative. Gianluca shares how AI-enhanced learning and development tools can streamline education by focusing on core creative and analytical skills while automating repetitive tasks. He emphasizes that while AI is excellent for prototyping, human creativity remains irreplaceable for truly novel and groundbreaking innovations.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop chats with Matthew Gialich, co-founder and CEO of AstroForge, about the fascinating world of asteroid mining. They explore how advances in technology and reduced launch costs are enabling humanity to tap into the untapped resources of metallic asteroids, the challenges of deep space operations, and the long-term vision for making asteroid mining economically viable. Listeners can follow AstroForge for updates on LinkedIn and Twitter, and connect with Matthew directly for inquiries on his LinkedIn or at matt@astroforge.io.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:17 Asteroid Mining: Current Knowledge and Discoveries01:58 Near-Earth Asteroids and Their Potential04:08 The Value of Platinum Group Metals06:21 Spacecraft Operations and Human Involvement11:06 Asteroid Missions and Scientific Discoveries21:38 Economic and Environmental Implications of Space Mining27:04 Collaborating with SpaceX for Asteroid Missions27:42 Challenges and Opportunities in Moon Mining29:20 Navigating Gravity in Space Missions30:09 The Origin Story of Astroforge33:32 Asteroid Mining: Past and Present34:29 The Future of Space Industry and Business38:05 Radiation Challenges in Deep Space40:44 Thermal Management in Spacecraft42:43 Innovations in Robotics and Manufacturing45:37 The Role of Software in Space Startups50:10 Recruiting Top Talent for Astroforge51:37 Knowledge Management and Team Structure52:40 Staying Connected with AstroforgeKey InsightsAsteroid Mining is Becoming Feasible: Advancements in telescope technology and reduced launch costs are paving the way for asteroid mining to transition from science fiction to reality. AstroForge is focused on mining metallic asteroids rich in platinum group metals, which are critical for various industrial applications.Near-Earth Asteroids Offer Better Opportunities: Contrary to Hollywood depictions of mining in the asteroid belt, near-Earth asteroids are more accessible and practical targets for mining. These asteroids are closer to Earth and contain valuable materials, making them ideal for the initial stages of space resource exploitation.The Importance of Platinum Group Metals: Platinum, rhodium, palladium, and other platinum group metals are integral to modern technology, found in everything from electronics to industrial equipment. Mining these materials in space could revolutionize supply chains and reduce the environmental impact of terrestrial mining.The Role of Technology in Exploration: AstroForge uses cutting-edge sensors, spectrometry, and imaging systems to study and identify the best asteroids for mining. These technologies allow for remote analysis of asteroid composition, paving the way for efficient resource extraction missions.Spacecraft Design for Deep Space: AstroForge is designing spacecraft optimized for deep space exploration, which operate in the harsh conditions beyond Earth's gravity well. Challenges like radiation, thermal management, and propulsion systems are central to the company's engineering efforts.Economic and Environmental Impacts of Space Mining: Space mining has the potential to make terrestrial mining for certain materials economically obsolete, reducing environmental damage and the hazardous conditions associated with deep-earth mining operations. The company's vision includes making Earth a better place by shifting resource extraction to space.The Evolution of the Space Industry: The space sector is evolving rapidly, with private companies leading the charge in areas traditionally dominated by government agencies. AstroForge's mission is a testament to this shift, focusing on commercializing deep space exploration and mining with innovative strategies and cost-efficient technologies.
In this episode of the Crazy Wisdom Podcast, Stewart Alsop talks with guest Saila about Argentina's fascinating socio-economic dynamics, its chaotic history, and potential future under the current government. Topics range from Argentina's unique financial practices—like the "blue dollar" system and the impact of inflation on everyday life—to global geopolitical shifts, the role of bureaucracy, and the rise of multipolarity. They also explore the opportunities and challenges for crypto and fintech in Argentina, drawing connections to innovation spurred by economic adversity. Check out Saila on Twitter at @sailaunderscore for more insights.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Welcome00:13 Argentina's Economic Situation00:56 Understanding the Blue Rate03:20 Psychological Impact of Inflation07:17 Global Political Dynamics14:30 AI and Human Perception21:23 Bureaucracy and Governance28:20 Historical Context and Future Predictions37:36 The Birth of ARPA and NASA38:21 Crazy Ideas and Vietnam39:11 The Internet's Origin and Tech's Evolution39:46 The Political Silence of Tech Giants40:58 The Dark Matter of Eligibility41:30 Navigating the Tech and Finance Worlds48:37 The Reality of Crypto in Argentina58:36 Argentina's Unique Financial Landscape01:07:20 Conclusion and Final ThoughtsKey InsightsArgentina's Economic Complexity and the Blue Dollar: Argentina's economic system is uniquely chaotic, characterized by a dual exchange rate system with the "blue dollar" or parallel exchange rate operating alongside the official rate. This system reflects a deeply ingrained culture of financial adaptation and innovation, where residents navigate inflation and economic instability with remarkable dexterity. The resilience and pragmatism of Argentines in the face of such challenges have made their everyday understanding of economics highly nuanced and practical.The Global Perception of Argentina Under Javier Milei: Under the leadership of Javier Milei, Argentina is at a critical juncture, attempting to shift from decades of economic chaos to potential stabilization. Despite initial skepticism, Milei's administration has managed to maintain a credible fiscal policy, such as adhering to a zero primary deficit. This success challenges both local and global expectations, showcasing how Argentina's political narrative can surprise even seasoned economists.The Global Shift from Unipolarity to Multipolarity: The conversation reflects on the decline of the unipolar world order dominated by the United States and the rise of a more fragmented multipolar reality. With China as a prominent actor but inexperienced in global leadership, the dynamics of international power are evolving. The U.S. faces a choice between deliberate withdrawal from global dominance or grappling with a loss of influence—a process that holds implications for countries like Argentina operating on the periphery.The Power of Illegibility in Systems and Markets: Saila introduces the concept of "illegibility," where the real value in systems often lies in aspects that are not immediately visible or measurable. This is particularly true in environments like Argentina, where formal systems often fail, and informal networks and practices flourish. The same holds in global markets and innovation hubs, where the most significant opportunities often emerge from navigating the unspoken or unseen rules.The Role of Crypto in Argentina's Financial Landscape: Argentina has become a critical testbed for cryptocurrency applications due to its economic instability and limited access to traditional credit markets. Stablecoins, in particular, have found real-world use cases as tools for saving and transacting in a volatile economy. This positions Argentina as an unlikely but important center for crypto innovation, driven by necessity rather than speculation.Innovation Through Constraint: Economic adversity in Argentina has sparked remarkable creativity and ingenuity among its population. From unique financial practices like partial cash housing transactions to unconventional uses of stablecoins, the constraints of the system have fostered innovation. This serves as a case study in how challenging environments can generate solutions with broader applicability, even in more stable economies.Bureaucracy as an Autonomous Agent: The conversation draws parallels between bureaucratic systems in Argentina and those in developed nations like the U.S., highlighting how they often evolve into semi-autonomous entities prioritizing their survival. Argentina's overgrown bureaucracy has contributed to inefficiency and economic decline, yet similar patterns of self-preservation and stagnation are visible in Western governments and institutions as well.
On this episode of Crazy Wisdom, Stewart Alsop welcomes back guest David Hundley, a principal engineer at a Fortune 500 company specializing in innovative machine learning applications. The conversation spans topics like techno-humanism, the future interplay of consciousness and artificial intelligence, and the societal implications of technologies like neural interfaces and large language models. Together, they explore the philosophical and technical challenges posed by advancements in AI and what it means for humanity's trajectory. For more insights from David, visit his website or follow him on Twitter.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:31 Techno Humanism vs. Transhumanism02:14 Exploring Humanism and Its Historical Context05:06 Accelerationism and Consciousness06:58 AI Conversations and Human Interaction10:21 Challenges in AI and Machine Learning13:26 Product Integration and AI Limitations19:03 Coding with AI: Tools and Techniques25:28 Vector Stores vs. Traditional Databases32:16 Understanding Network Self-Optimization33:25 Exploring Parameters and Biases in AI34:53 Bias in AI and Societal Implications38:28 The Future of AI and Open Source44:01 Techno-Humanism and AI's Role in Society48:55 The Intersection of AI and Human Emotions52:48 The Ethical and Societal Impact of AI58:20 Final Thoughts and Future DirectionsKey InsightsTechno-Humanism as a Framework: David Hundley introduces "techno-humanism" as a philosophy that explores how technology and humanity can coexist and integrate without losing sight of human values. This perspective acknowledges the current reality that we are already cyborgs, augmented by devices like smartphones and smartwatches, and speculates on the deeper implications of emerging technologies like Neuralink, which could redefine the human experience.The Limitations of Large Language Models (LLMs): The discussion highlights that while LLMs are powerful tools, they lack true creativity or consciousness. They are stochastic parrots, reflecting and recombining existing knowledge rather than generating novel ideas. This distinction underscores the difference between human and artificial intelligence, particularly in the ability to create new explanations and knowledge.Biases and Zeitgeist Machines: LLMs are described as "zeitgeist machines," reflecting the biases and values embedded in their training data. While this mirrors societal norms, it raises concerns about how conscious and unconscious biases—shaped by culture, regulation, and curation—impact the models' outputs. The episode explores the ethical and societal implications of this phenomenon.The Role of Open Source in AI's Future: Open-source AI tools are positioned as critical to the democratization of technology. David suggests that open-source projects, such as those in the Python ecosystem, have historically driven innovation and accessibility, and this trend is likely to continue with AI. Open-source initiatives provide opportunities for decentralization, reducing reliance on corporate-controlled models.Potential of AI for Mental Health and Counseling: David shares his experience using AI for conversational support, comparing it to talking with a human friend. This suggests a growing potential for AI in mental health applications, offering companionship or guidance. However, the ethical implications of replacing human counselors with AI and the depth of empathy that machines can genuinely offer remain questions.The Future of Database Technologies: The discussion explores traditional databases versus emerging technologies like vector and graph databases, particularly in how they support AI. Graph databases, with their ability to encode relationships between pieces of information, could provide a more robust foundation for complex queries in knowledge-intensive environments.The Ethical and Societal Implications of AI: The conversation grapples with how AI could reshape societal structures and values, from its influence on decision-making to its potential integration with human cognition. Whether through regulation, neural enhancement, or changes in media dynamics, AI presents profound challenges and opportunities for human civilization, raising questions about autonomy, ethics, and collective progress.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with returning guest Terrance Yang for a wide-ranging discussion on critical financial and societal issues. They explore the state of U.S. federal debt, drawing comparisons to historical periods like World War II, and consider modern-day parallels with Argentina's economic struggles and the election of Javier Milei. The conversation shifts to broader reflections on government waste, regulatory overreach, and the potential for AI to streamline bureaucracy and disrupt traditional finance. Terrance shares sharp insights on Bitcoin as a long-term investment and critiques other cryptocurrencies as vehicles for insider speculation. The episode also touches on market-making, trading psychology, and the rise of autonomous vehicles, hinting at the transformative impact of AI-driven innovation. You can connect with Terrance through his LinkedIn profile.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Guest Welcome00:35 Discussing U.S. Debt and Financial Insights02:14 Historical Context and Comparisons04:38 Libertarian Governments and Economic Policies08:55 Government Spending and Regulation18:21 Homelessness and Urban Challenges23:06 Bitcoin and Cryptocurrency Insights26:22 Investment Strategies and Market Dynamics33:28 AI and Future Investments34:06 AI Market Predictions and Amazon's Strategy36:37 The Struggles of Big Tech with AI Integration38:21 The Future of Self-Driving and Flying Cars42:22 Investment Advice: Bitcoin and AI53:52 Argentina's Economic Lessons01:04:23 The Role of AI in Government and Society01:08:12 Conclusion and Contact InformationKey Insights1. The U.S. Debt Crisis Has Parallels to World War II, But the Path Forward is UnclearTerrance Yang highlights how the current U.S. debt situation resembles the debt spike seen during World War II. Back then, the U.S. "grew its way out" of debt as GDP growth outpaced debt growth. However, today's environment is more complex, with federal net outlays growing at an unsustainable rate. While the debt-to-GDP ratio appears alarming, Yang suggests that focusing on cash flow (tax revenue minus expenditures) as a percentage of GDP offers a more nuanced view. The big question is whether the U.S. can grow its way out of debt again or if fundamental spending cuts are required.2. Bitcoin is a Long-Term Bet, But Most Other Cryptos Are Insider GamesYang views Bitcoin as the only viable long-term store of value among cryptocurrencies, while labeling most altcoins as speculative vehicles designed to "pump and dump" retail investors. He advises listeners to avoid trading Bitcoin due to the dominance of market makers like Goldman Sachs, who use superior data and trading models. Instead, he recommends dollar-cost averaging and focusing on the long-term potential of Bitcoin as "digital gold." Yang cautions against chasing short-term gains in crypto, comparing it to amateur players trying to compete with professional athletes.3. Regulatory Overreach is Stifling American Efficiency, But AI Could Change ThatThe conversation critiques the inefficiencies in U.S. government bureaucracy, using California's high-speed rail project as a cautionary tale of regulatory bloat and government waste. Terrance Yang believes AI has the potential to streamline government services, automate repetitive tasks, and reduce the need for an ever-expanding workforce. He suggests that as government employees retire, many of their roles could be replaced with AI systems, leading to leaner, more efficient public institutions. This vision echoes similar efficiency models seen in Singapore and other high-performing nations.4. The Rise of AI-Enhanced Legal and Coding ProductivityYang points out how large language models (LLMs) like ChatGPT Pro are already allowing people to reduce their reliance on lawyers and coders. People are saving thousands of dollars in legal fees by using AI to review contracts and analyze legal risks. In coding, AI tools are helping developers find errors, refactor code, and improve efficiency. Yang himself plans to use AI to help document Bitcoin's core code, a project aimed at making the codebase more accessible to non-technical users. This marks a major shift in the accessibility of technical knowledge.5. Trading is a Rigged Game, and Most People Should Stay OutYang compares day trading to amateur athletes trying to compete with NBA stars like LeBron James. Most retail investors are going up against highly sophisticated market makers like Citadel and Jane Street, who have access to superior information, tools, and algorithms. He explains that market makers profit by always being ready to buy and sell, unlike retail traders who get caught up in emotional decision-making. The best option for most people, Yang says, is to avoid trading entirely and instead invest in low-cost index funds, like the Vanguard S&P 500 fund.6. Argentina's Crisis Offers Lessons for the U.S. on Debt and Welfare StatesDrawing on Argentina's economic collapse, the conversation explores how unsustainable welfare policies and out-of-control debt can bring a nation to its knees. Stewart Alsop notes that while Argentina's citizens are acutely aware of their country's fiscal dysfunction, many Americans remain oblivious to similar risks in the U.S. Yang and Alsop highlight that Argentina's reliance on printing pesos mirrors what could happen if the U.S. dollar's dominance weakens. Javier Milei's rise as Argentina's libertarian president signals a possible shift away from this broken system, but the U.S. appears far from having its own "wake-up moment."7. AI-Driven Automation Will Reshape Cities, Transportation, and JobsWaymo's driverless cars, which are already being tested in Los Angeles, represent a fundamental shift in how cities will operate in the future. Yang explains how autonomous vehicles could make traffic "less painful" by allowing passengers to be productive while stuck in slow-moving traffic. This shift will likely spur greater suburbanization as people find it more tolerable to live farther from work. Coupled with AI-driven automation in government and the workforce, the nature of cities and daily life is poised for a profound transformation, with L.A. potentially becoming more livable than it has been in decades.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Swati Chaturvedi, CEO of Propel X, to explore the world of deep tech, frontier technology, and the forces shaping the future of human progress. Swati shares her decade-long journey in deep tech, reflecting on how the term evolved as a response to the "tech startup" boom, and discusses her focus on companies leveraging breakthroughs in science and engineering for humanity's advancement. The conversation touches on the role of government support, the power of hypothesis-free experimentation, and the critical importance of partnerships between startups and large corporations. They also discuss transformative technologies like AI, autonomous drones, bioinformatics, robotics, and the possibilities and perils of human augmentation. For more insights from Swati, visit Propel X at www.propelx.com or connect with her on LinkedIn, where she shares her thoughts on innovation, R&D, and the future of technology.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:16 Defining Deep Tech and Its Evolution03:06 Challenges and Philosophical Insights in Deep Tech07:07 AI's Role in Engineering and Bioinformatics14:22 Future Shock and Human Augmentation14:35 The Evolution of Science and Technology22:58 The Future of Work and Social Dynamics24:06 Exploring Sci-Fi Genres: Cyberpunk vs. Solarpunk25:25 Exploring Solar Punk and Human Problems26:01 The Promise and Limitations of Deep Tech26:39 Economic Realities of Technological Advancements27:16 Future Impact of Emerging Technologies28:58 Challenges in Ag Tech and Environmental Concerns29:30 Global Environmental Change and Human Activity33:53 The Role of Modeling in Predicting Climate Impacts36:22 Scientific Method and Industry Collaboration39:23 Government's Role in Early Stage Research42:34 Investment Strategies in Deep Tech46:27 Consumer and Corporate Markets for New Technologies49:12 Conclusion and Future DiscussionsKey InsightsThe Rise of Deep Tech as a Distinct Category: Swati Chaturvedi explains how the concept of "deep tech" emerged as a response to the overuse of the term "tech startup" during the heyday of consumer technology. Unlike simple software apps like photo-sharing or delivery platforms, deep tech focuses on companies leveraging scientific and engineering breakthroughs to solve fundamental human challenges. This includes innovations in fields like AI, robotics, life sciences, space technology, and advanced materials. Her 2014 blog post defining deep tech has since become a widely referenced resource in the field, signaling a shift in focus from digital consumer solutions to tangible, science-based advancements.The Role of Hypothesis-Free Experimentation: Traditional scientific research follows a hypothesis-driven approach, where scientists predict outcomes before testing. Swati highlights the transformative potential of "hypothesis-free" experimentation, where AI and machine learning allow for large-scale experimentation without predefined assumptions. This approach mirrors the randomness of evolution, enabling faster discovery of unexpected results. Companies like Helix are applying this method in drug discovery, where AI-driven processes identify new therapeutic compounds. This shift could significantly accelerate R&D timelines and reduce costs in fields like pharmaceuticals and materials science.The Power of Government Support in Early-Stage R&D: Swati emphasizes the essential role of government funding in de-risking early-stage research. Through programs like SBIR (Small Business Innovation Research) grants, government agencies like the NSF (National Science Foundation) and the Department of Defense (DoD) fund exploratory research at universities and small businesses. These grants act as the "seed fund of America," investing billions annually into high-risk, high-reward projects. Companies that receive these grants often have their private sector investments matched by government dollars, providing significant leverage for investors and entrepreneurs. This public-private funding model enables startups to bridge the "valley of death" between research and commercialization.The Critical Role of Corporate-Startup Partnerships: Swati highlights the importance of partnerships between startups and established corporations, especially in deep tech. These joint development projects allow startups to access resources, validate their markets, and co-develop products with corporate customers. While some founders worry about protecting their intellectual property (IP), Swati believes that the benefits of corporate partnerships outweigh the risks. Corporate collaborations offer crucial early traction and revenue, helping startups de-risk their path to market. This is especially vital in sectors like healthcare, robotics, and clean energy, where the cost of developing and commercializing products is exceptionally high.AI as a Force for Human Augmentation: The episode explores AI's role as an augmentative force rather than a replacement for human intelligence. Swati notes that AI is best understood as a tool that allows humans to multiply their cognitive abilities—processing vast amounts of information, identifying patterns, and making faster connections. This augmentation goes beyond software, extending into physical augmentation with devices like robots and smart tools that help humans accomplish physical tasks. While AI-driven tools like ChatGPT may lead to job displacement, Swati sees it as a natural progression, requiring humans to upskill and shift to higher-value tasks.The Promise and Risks of Climate and Environmental Technologies: Swati identifies climate change and global environmental degradation as existential challenges that even the most advanced deep tech may struggle to address. Technologies like atmospheric water generation, carbon capture, and agtech are making strides, but she notes that they are not yet sufficient to solve global challenges like water scarcity, food security, and air pollution. Drawing from her personal experience with air pollution in India, Swati argues that we need to better price and internalize the "cost of the commons"—the shared environmental resources that are often depleted for private gain. Without a clear economic incentive to prevent environmental harm, she warns that climate issues will continue to escalate.The Future of Space Tech and Human Exploration: Swati expresses optimism about the commercialization of space technology, noting its growing impact on daily life. Technologies like satellite internet (e.g., Starlink) are already improving connectivity in remote areas worldwide. The use of satellites for earth observation, weather tracking, and resource management is also becoming essential for sectors like agriculture and disaster response. Looking ahead, Swati is bullish on the potential for space colonization on the moon and Mars, although she acknowledges the immense technical and ethical challenges involved. While space tech once felt like science fiction, companies like SpaceX have made it tangible and real.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop is joined by Katelynne Schuler, a thinker and innovator in the realms of psychology, religion, and philosophy. The conversation spans a wide range of compelling topics, including the layered nuances of Korean social hierarchy, the evolution of political language, and the shifting ideologies within Western conservatism. They explore the rebranding of the KKK, the deeper implications of free speech in a world dominated by digital platforms, and the unseen influence of corporations on government censorship. Katelynne also shares her insights on the psychology of "falls from grace" and how isolation during the pandemic may have catalyzed narcissistic tendencies in some people. The episode touches on larger philosophical questions about civilization, power, and media's role in shaping collective belief. To learn more about Katelynne Schuler, you can find her on Facebook under her name, Katelynne Schuler.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:32 Exploring Korean Culture and Social Nuances02:52 Language and Political Ideologies04:23 Project 2025 and Political Shifts06:21 The KKK's Rebranding and Conservatism10:25 Theocracy and Intersectionalism11:14 Free Speech and Internet History30:05 The Impact of COVID-19 and Vaccines34:15 Clearing Out and Cognitive Dissonance35:07 Pandemic Social Dynamics36:06 Narcissism and Social Isolation38:22 Conspiracy Theories and Social Impact41:34 Lockdowns and Quarantine43:25 Media Manipulation and Public Perception44:52 Nanotechnology and Conspiracy Theories49:42 Bill Gates and Genetic Engineering52:42 Trump, Publicity, and Media Influence58:41 Finance, Asset Valuation, and Media Future01:03:30 Pandemic Warnings and Conspiracies01:07:34 Conclusion and Contact InformationKey Insights1. The Power of Language in Social and Political SystemsKatelynne Schuler highlights the profound role that language plays in shaping social dynamics, drawing on Korean culture's use of honorifics as a prime example. In Korean, different forms of language are used depending on social rank, respect, and familiarity, essentially creating three distinct "languages" within one. This insight is paralleled with Western political discourse, where the left and right often use the same words but with entirely different meanings. The observation points to a broader idea that shared language does not guarantee shared understanding—a crucial realization in an era of increasing political division.2. Free Speech, Corporate Power, and Government CensorshipA central thread in the episode is the evolution of free speech in the age of digital platforms. Schuler and Alsop explore how platforms like Twitter and Facebook have become arenas where free speech is both enabled and curtailed. While platforms have the right to control content as private entities, the duo highlights the more concerning trend of governments using corporations as proxies to suppress dissent. This dynamic blurs the line between free enterprise and state censorship, raising questions about how much "free speech" really exists in online spaces.3. The Psychological Fallout of Isolation and "Fall from Grace"Katelynne offers a unique psychological perspective on how the pandemic-induced isolation created a rise in narcissistic tendencies. As people lost their social connections, especially those ostracized for holding unpopular views on COVID, their need for self-validation intensified. This "fall from grace" experience can push people toward more rigid thinking, strengthening their attachment to specific beliefs or ideologies. Schuler notes that this isn't a reflection of right or wrong beliefs but a psychological response to social exclusion. It's a profound insight into how isolation and rejection affect the human psyche.4. The Rebranding of Extremist IdeologiesOne of the more startling revelations is the claim that groups like the KKK have rebranded themselves with a new focus on Christian nationalism, moving away from racial exclusion and embracing ideological alignment with "Christian values." Schuler notes that this shift aligns with a broader push within segments of American conservatism to integrate Christian morality into governance. This evolution is compared to the broader concept of theocratic governance, where laws are designed to reflect specific religious values—a concept that is controversial, even within conservative circles.5. Global Power Shifts and Lessons from HistoryThe episode provides a historical deep dive into events like the Seven Years' War, which Winston Churchill referred to as the first true "world war." Schuler suggests that while Germany was ostensibly defeated in this war, its real victory lay in how it exported its people and culture globally, influencing future power structures. This insight parallels modern debates about nationalism and globalism, with the hosts exploring how smaller, insulated communities might have better weathered the COVID crisis by closing off from global networks—much like Germany's "export" strategy.6. Technology, Nanotechnology, and the Role of Bill GatesAlsop and Schuler address the controversial role of Bill Gates, focusing on his investments in biotech and nanotechnology. They discuss Gates' involvement in genetically engineered mosquitoes released in South America and the ethical questions it raises. There's also a hint of speculative intrigue around nanobots, with references to origami-style nanostructures found in human blood. While these claims are framed as emerging curiosities rather than confirmed realities, they touch on larger concerns about who controls emerging technologies and to what end.7. The Fragmentation of Media and the Future of InformationFinally, the episode explores the fragmentation of media and its impact on public consciousness. Unlike previous decades when a few major outlets shaped collective opinion, today's media landscape is fractured, with individuals curating their own reality through niche sources. While this decentralization of media offers more choice, it also leads to greater division, as people consume entirely different versions of reality. Schuler suggests that this lack of a shared narrative might weaken societal cohesion, as people lose common ground on basic truths. This shift toward decentralized media aligns with broader conversations about social media algorithms and "echo chambers," where everyone has their own version of reality.
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop reconnects with Eric Fisher, one of the show's earliest guests. Their conversation weaves through profound topics like the evolution of AI, the potential consequences of large language models (LLMs), and how AI might reshape both spirituality and education. Eric shares reflections from his time at Facebook, offering behind-the-scenes insight into the creation of algorithmic feeds and how those decisions echo into today's world of AI-driven interactions. Together, Stewart and Eric explore the nature of human attention, the future of work, and the potential divide between tech-driven living and a return to nature. Their discussion raises essential questions about where humanity is headed in the face of exponential technological change and how people can retain their sense of agency and spirit along the way. If you want to learn more about Eric visit his website mindfulimprov.com.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction and Reunion00:44 Reflecting on Past Interviews01:18 Spiritual Understandings and AI01:32 The Dual Nature of AI02:43 The Evolution of Facebook's News Feed05:32 AI's Role in Future Technologies13:47 AI in Education and Synthetic Data16:58 The Future of AI and Society21:54 Spirituality and Technology27:58 Humanoid Robots: Beyond Sex Dolls28:28 The Role of Robots in Agriculture and Home29:07 Industrial Robots vs. Home Robots29:44 The Philosophy Behind Technological Advancements30:22 The Vision of the Future: Post-Steve Jobs Era31:17 The Impact of AI and Automation on Society32:55 Accelerationism vs. Degrowth: The Tech Debate40:41 Demographic Crisis and the Future of Humanity45:18 Economic Inequality and the Common Man46:39 The Evolution of Political Ideologies52:09 The Future of Work and Society54:14 Concluding Thoughts and Future DiscussionsKey Insights1. The Dual Nature of AI: Promise and PerilEric Fisher highlights the dual potential of AI as both a tool for human advancement and a source of unforeseen challenges. Drawing from his experience at Facebook, he explains how algorithmic feeds designed to increase engagement eventually led to widespread issues like polarization and misinformation. This echoes in today's world of LLMs (Large Language Models), where AI's utility as a tool for learning, troubleshooting, and content creation exists alongside the risk of biased or manipulative outputs. The key takeaway is that technology, like a rock, is neutral — its impact depends on how it is used and who is using it.2. The Evolution of Attention as a ResourceAttention has become a central currency in the modern economy, and Fisher points out that the concept of "attention economy" wasn't even part of public discourse a few decades ago. Today, with the rise of LLM-driven AI companions and algorithmic feeds, attention is being sliced and sold with increasing precision. This shift raises questions about how much of human autonomy is being traded away in favor of frictionless convenience. As AI becomes more adept at predicting and shaping user behavior, the concept of "free will" within an attention-driven economy becomes murkier.3. The Next Phase of Education: Self-Directed Learning with AI TutorsBoth Stewart Alsop and Eric Fisher recognize the potential for AI to revolutionize education. Instead of the traditional classroom model, self-directed learning with AI-driven tutors could allow for personalized, one-on-one learning experiences for every student. Fisher notes that tools like ChatGPT have already enabled him to troubleshoot complex home systems, like his geothermal cooling system, without needing to call a specialist. This self-sufficiency could be mirrored in education, where AI assistants offer instant, tailored guidance to students across a range of subjects.4. The Blurring of Reality: Personalized AI-Generated WorldsA provocative idea discussed in the episode is the possibility of AI-generated personalized realities. Through augmented reality (AR) glasses or VR headsets, individuals could project and experience personalized versions of reality. Fisher points out that, in many ways, people already live in "personalized mental realities" shaped by language, perception, and cultural narratives. AI could make this more literal, with each person living in a bespoke, algorithmically generated world. While this concept sounds thrilling, it also hints at a future where shared consensus reality — the "real world" — becomes more fragmented than ever.5. Economic Shifts: From Worker-Centric to Business-Centric SystemsTracing the legacy of figures like FDR and LBJ, Fisher reflects on how America shifted from a society that valued the working class to one that prioritizes business interests. While earlier eras emphasized worker rights, health care, and public welfare, today's economy is focused on empowering small businesses and startups. Everyone is now expected to be a "business of one," as independent creators, gig workers, and personal brands become the dominant paradigm. The result is a world where individual workers act like micro-businesses, managing their own healthcare, retirement, and financial stability — often with no safety net.6. The Threat of Decentralized AI and the Loss of TruthWith Meta and OpenAI releasing LLMs and synthetic AI models into the open-source community, Fisher expresses concern about the fragmentation of "truth." As more people train and deploy their own AI models, the risk of misinformation rises. Just as search engines can prioritize certain content over others, decentralized AI models may be subtly — or overtly — biased. This issue becomes even more concerning if companies start inserting ad-driven recommendations into AI responses, giving users the illusion of objectivity when, in fact, they're being guided toward a commercial end.7. The Coming Collapse and the Chance for RenewalThe episode touches on a cyclical view of history, where moments of collapse often lead to periods of rebirth. Fisher compares this to the aftermath of the bubonic plague, which killed half of Europe's population but led to the Renaissance and an era of cultural flourishing. He speculates that a similar phenomenon could play out today. Whether through demographic decline, AI-driven disruption, or a collapse of old economic models, humanity could experience a dramatic contraction. Paradoxically, such a collapse might bring about an "age of spaciousness" where fewer people, better technology, and renewed humanism create a richer and more thoughtful way of life.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop is joined by Yury Selivanov, the CEO and co-founder of EdgeDB, for a fascinating discussion about the reinvention of relational databases. Yury explains how EdgeDB addresses modern application development challenges by improving developer experience and rethinking decades-old database paradigms. They explore how foundational technologies evolve, the parallels between software and real-world systems like the electrical grid, and the emerging role of AI in coding and system design. You can connect with Yury through his personal Twitter account @1st1 (https://twitter.com/1st1) and EdgeDB's official Twitter @EdgeDatabase (https://twitter.com/edgedatabase).Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:27 What is EdgeDB?00:58 The Evolution of Databases04:36 Understanding SQL and Relational Databases07:48 The Importance of Database Relationships09:27 Schema vs. No-Schema Databases14:14 EdgeDB: SQL 2.0 and Developer Experience23:09 The Future of Databases and AI Integration26:43 AI's Role in Software Development27:20 Challenges with AI-Generated Code29:56 Human-AI Collaboration in Coding34:00 Future of Programming Languages44:28 Junior Developers and AI Tools50:02 EdgeDB's Vision and Future PlansKey InsightsReimagining Relational Databases: Yury Selivanov explains how EdgeDB represents a modern rethinking of relational databases. Unlike traditional databases designed with 1970s paradigms, EdgeDB focuses on improving developer experience by introducing object-oriented schemas and hierarchical query capabilities, bridging the gap between modern programming needs and legacy systems.Bridging Data Models and Code: A key challenge in software development is the object-relational impedance mismatch, where relational database tables do not naturally map to object-based data models in programming languages. EdgeDB addresses this by providing a high-level data model and query language that aligns with how developers think and work, eliminating the need for complex ORMs.Advancing Query Language Design: Traditional SQL, while powerful, can be cumbersome for application development. EdgeDB introduces EdgeQL, a modern query language designed for readability, hierarchical data handling, and developer productivity. This new language reduces the friction of working with relational data in real-world software projects.AI as a Tool, Not a Replacement: While AI has transformed coding productivity, Yury emphasizes that it is a tool to assist, not replace, developers. LLMs like GPT can generate code, but the resulting systems still require human oversight for debugging, optimization, and long-term maintenance, highlighting the enduring importance of experienced engineers.The Role of Schema in Data Integrity: Schema-defined databases like EdgeDB allow developers to codify business logic and enforce data integrity directly within the database. This reduces the need for application-level checks, simplifying the codebase while ensuring robust data consistency—a feature that remains critical even in the era of AI.Integrating AI into Databases: EdgeDB is exploring innovative integrations of AI, such as automatic embedding generation and retrieval-augmented generation (RAG) endpoints, to enhance data usability and simplify complex workflows. These capabilities position EdgeDB as a forward-thinking tool in the rapidly evolving landscape of AI-enhanced software.Balancing Adoption and Usability: To encourage adoption, EdgeDB is incorporating familiar tools like SQL alongside its advanced features, lowering the learning curve for new users. This approach combines innovation with accessibility, ensuring that developers can transition seamlessly to the platform while benefiting from its modern capabilities.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop chats with Alexander, a Gen Z innovator passionate about technology, particularly AI and blockchain. Together, they explore Alexander's creative approach to tackling challenges like reading dense white papers, the dynamics of AI in software engineering, and the philosophical implications of emerging tech, from blockchain's elegant simplicity to AI's transformative potential in reshaping industries. Alexander also shares insights from his journey in crypto and smart contract development, providing a glimpse into how technology and human ingenuity intertwine in the modern era. For more, follow Alexander on X at @AlexanderTw33ts.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:32 Exploring White Papers and Crypto04:55 The Gen Z Advantage and Social Media07:38 The Power of Time-Lapse Videos11:18 Understanding Bitcoin and Blockchain14:27 Smart Contracts and AI20:56 The Future of AI and Software Development32:02 The Role of Humans in the Future32:56 The Concept of Singularity33:52 Technological Merging and Its Implications35:34 The Impact of AI on Society00:43 The Future of Learning and AI55:02 Navigating the Job Market with AI01:02:09 The Human Element in a Tech-Driven World01:04:15 Conclusion and Final ThoughtsKey InsightsThe Role of AI in Learning and Productivity: Alexander highlighted how AI, particularly large language models (LLMs), has become a crucial tool for learning and productivity. By using AI, tasks like coding, debugging, and understanding complex documents, such as white papers, have become more accessible. This shift emphasizes the importance of understanding how to effectively prompt and interact with AI to maximize its capabilities.Blockchain's Simplicity and Significance: The conversation revealed the elegant simplicity of blockchain technology, particularly Bitcoin. Despite its technical complexity at first glance, the core mechanisms—like the transaction ledger—are remarkably straightforward. This simplicity, combined with the groundbreaking nature of decentralized systems, positions blockchain as both a financial innovation and a conceptual work of art.Challenges for Gen Z with AI and Attention: Alexander discussed the unique challenges his generation faces with attention spans shaped by the internet and social media. While this digital immersion offers advantages, such as a natural aptitude for navigating tech tools, it also creates hurdles, like focusing on dense materials. He shared how creative approaches, such as time-lapse recordings for accountability, can transform learning into an engaging and rewarding process.The Future of Software Development Careers: With AI increasingly capable of performing technical tasks, the demand for junior developers may dwindle. Alexander advised aspiring developers to embrace entrepreneurship, leveraging AI to build their own projects. This approach not only enhances practical skills but also positions them as creators in a competitive market where the definition of “developer” is rapidly evolving.The Evolution of Distributed Cognition: The episode touched on how technology has transformed distributed cognition, from early written communication to the internet and now AI. Platforms like social media are already curating personalized worlds for users, but AI's advancement could make these experiences even more immersive, raising questions about individual agency and shared reality.Navigating the Technological Singularity: Both Stewart and Alexander reflected on the concept of the technological singularity—the point at which human understanding can no longer predict future technological developments. They discussed its philosophical implications, likening it to a black hole where no one can see beyond its event horizon, emphasizing the profound uncertainty it brings to humanity's trajectory.Balancing Human Connection in an AI-Driven World: The conversation underscored the importance of human connection and shared experiences amidst increasing AI-driven customization. While AI can create tailored virtual worlds and digital interactions, Alexander and Stewart noted the enduring value of real-world activities like engaging with nature, forming authentic relationships, and fostering creativity in a rapidly evolving technological landscape.