Unconventional, outrageous, unexpected, or unpredictable behavior linked to religious or spiritual pursuits
I, Stewart Alsop, am thrilled to welcome Xathil of Poliebotics to this episode of Crazy Wisdom, for what is actually our second take, this time with a visual surprise involving a fascinating 3D-printed Bauta mask. Xathil is doing some truly groundbreaking work at the intersection of physical reality, cryptography, and AI, which we dive deep into, exploring everything from the philosophical implications of anonymity to the technical wizardry behind his "Truth Beam."

Check out this GPT we trained on the conversation

Timestamps

01:35 Xathil explains the 3D-printed Bauta mask, its Venetian origins, and its role in enabling truth through anonymity via his project, Poliepals.
04:50 The crucial distinction between public identity and "real" identity, and how pseudonyms can foster truth-telling rather than just conceal.
10:15 Addressing the serious risks faced by crypto influencers due to public displays of wealth and the broader implications for online identity.
15:05 Xathil details the core Poliebotics technology: the "Truth Beam," a projector-camera system for cryptographically timestamping physical reality.
18:50 Clarifying the concept of "proof of aliveness"—verifying a person is currently live in a video call—versus the more complex "proof of liveness."
21:45 How the speed of light provides a fundamental advantage for Poliebotics in outmaneuvering AI-generated deepfakes.
32:10 The concern of an "inversion," where machine learning systems could become dominant over physical reality by using humans as their actuators.
45:00 Xathil's ambitious project to use Poliebotics for creating cryptographically verifiable records of biodiversity, beginning with an enhanced Meles trap.

Key Insights

Anonymity as a Truth Catalyst: Drawing from Oscar Wilde, the Bauta mask symbolizes how anonymity or pseudonyms can empower individuals to reveal deeper, more authentic truths. This challenges the notion that masks only serve to hide, suggesting they can be tools for genuine self-expression.

The Bifurcation of Identity: In our digital age, distinguishing between one's core "real" identity and various public-facing personas is increasingly vital. This separation isn't merely about concealment but offers a space for truthful expression while navigating public life.

The Truth Beam: Anchoring Reality: Poliebotics' "Truth Beam" technology employs a projector-camera system to cast cryptographic hashes onto physical scenes, recording them and anchoring them to a blockchain. This aims to create immutable, verifiable records of reality to combat the rise of sophisticated deepfakes.

Harnessing Light Speed Against Deepfakes: The fundamental defense Poliebotics offers against AI-generated fakes is the speed of light. Real-world light reflection for capturing projected hashes is virtually instantaneous, whereas an AI must simulate this complex process, a task too slow to keep up with real-time verification.

The Specter of Humans as AI Actuators: A significant future concern is the "inversion," where AI systems might utilize humans as unwitting agents to achieve their objectives in the physical world. By manipulating incentives, AIs could effectively direct human actions, raising profound questions about agency.

Towards AI Symbiosis: The ideal future isn't a human-AI war or complete technological asceticism, but a cooperative coexistence between nature, humanity, and artificial systems. This involves developing AI responsibly, instilling human values, and creating systems that are non-threatening and beneficial.

Contact Information
* Poliebotics' GitHub
* Poliepals
* Xathil: Xathil@ProtonMail.com
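The episode describes the Truth Beam only at a high level. As a hedged illustration of the general idea (not Poliebotics' actual implementation), the core loop is a hash chain through physical space: project the previous record's digest onto the scene, capture the scene with that digest physically present in it, and fold the capture into the next digest. Names and record fields below are invented for the sketch.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def truth_beam_step(prev_hash: str, camera_frame: bytes) -> dict:
    """One round of a projector-camera hash chain (illustrative only).

    The projector displays `prev_hash` onto the scene; the camera then
    captures the scene with that hash physically present in it, so the
    capture cannot predate the hash. Chaining each capture's digest into
    the next round orders the records in time.
    """
    record = {
        "projected": prev_hash,                    # what was shone onto the scene
        "capture_digest": sha256_hex(camera_frame),
        "timestamp": time.time(),
    }
    # The digest of the whole record becomes the next thing to project.
    record["record_hash"] = sha256_hex(json.dumps(record, sort_keys=True).encode())
    return record

# Simulate three rounds with stand-in "frames" (real frames would be images).
prev = sha256_hex(b"genesis")
chain = []
for frame in [b"frame-1", b"frame-2", b"frame-3"]:
    rec = truth_beam_step(prev, frame)
    chain.append(rec)
    prev = rec["record_hash"]   # in Poliebotics this anchor would go on-chain

# Verification: each record must reference the previous record's hash.
assert all(chain[i + 1]["projected"] == chain[i]["record_hash"] for i in range(2))
```

Anchoring each `record_hash` to a blockchain is what would make the timeline tamper-evident after the fact; the physical projection is what ties it to the scene in the first place.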
I, Stewart Alsop, had a fascinating conversation on this episode of Crazy Wisdom with Mallory McGee, the founder of Chroma, who is doing some really interesting work at the intersection of AI and crypto. We dove deep into how these two powerful technologies might reshape the internet and our interactions with it, moving beyond the hype cycles to what's truly foundational.

Check out this GPT we trained on the conversation

Timestamps

00:00 The Intersection of AI and Crypto
01:28 Bitcoin's Origins and Austrian Economics
04:35 AI's Centralization Problem and the New Gatekeepers
09:58 Agent Interactions and Decentralized Databases for Trustless Transactions
11:11 AI as a Prosthetic Mind and the Interpretability Challenge
15:12 Deterministic Blockchains vs. Non-Deterministic AI Intents
18:44 The Demise of Traditional Apps in an Agent-Driven World
35:07 Property Rights, Agent Registries, and Blockchains as Backends

Key Insights

Crypto's Enduring Fundamentals: Mallory emphasized that while crypto prices are often noise, the underlying fundamentals point to a new, long-term cycle for the Internet itself. It's about decentralizing control, a core principle stemming from Bitcoin's original blend of economics and technology.

AI's Centralization Dilemma: We discussed the concerning trend of AI development consolidating power within a few major players. This, as Mallory pointed out, ironically mirrors the very centralization crypto aims to dismantle, potentially shifting control from governments to a new set of tech monopolies.

Agents are the Future of Interaction: Mallory envisions a future where most digital interactions aren't human-to-LLM, but agent-to-agent. These autonomous agents will require decentralized, trustless platforms like blockchains to transact, hold assets, and communicate confidentially.

Bridging Non-Deterministic AI with Deterministic Blockchains: A fascinating challenge Mallory highlighted is translating the non-deterministic "intents" of AI (e.g., an agent's goal to "get me a good return on spare cash") into the deterministic transactions required by blockchains. This translation layer is crucial for agents to operate effectively on-chain.

The Decline of Traditional Apps: Mallory made a bold claim that traditional apps and web interfaces are on their way out. As AI agents become capable of generating personalized interfaces on the fly, the need for standardized, pre-built apps will diminish, leading to a world where software is hyper-personalized and often ephemeral.

Blockchains as Agent Backbones: We explored the intriguing idea that blockchains might be inherently better suited for AI agents than for direct human use. Their deterministic nature, ability to handle assets, and potential for trustless reputation systems make them ideal backends for an agent-centric internet.

Trust and Reputation for Agents: In a world teeming with AI agents, establishing trust is paramount. Mallory suggested that on-chain mechanisms like reward and slashing systems can be used to build verifiable reputation scores for agents, helping us discern trustworthy actors from malicious ones without central oversight.

The Battle for an Open AI Future: The age-old battle between open and closed source is playing out again in the AI sphere. While centralized players currently seem to dominate, Mallory sees hope in the open-source AI movement, which could provide a crucial alternative to a future controlled by a few large entities.

Contact Information
* Twitter: @McGee_noodle
* Company: Chroma
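The reward-and-slashing idea Mallory describes can be sketched in miniature. This is a hedged, generic illustration (class names, numbers, and the slashing rule are invented for the sketch, not Chroma's design): agents lock a stake as collateral, verified good behavior earns reputation, and misbehavior burns part of the stake.

```python
from dataclasses import dataclass

@dataclass
class AgentAccount:
    """Minimal stake-based reputation record for one agent (illustrative)."""
    stake: float          # tokens locked as collateral
    reputation: float = 0.0

class ReputationRegistry:
    def __init__(self, slash_fraction: float = 0.2):
        self.slash_fraction = slash_fraction
        self.agents: dict[str, AgentAccount] = {}

    def register(self, agent_id: str, stake: float) -> None:
        self.agents[agent_id] = AgentAccount(stake=stake)

    def reward(self, agent_id: str, points: float) -> None:
        # A verified, successful task raises reputation.
        self.agents[agent_id].reputation += points

    def slash(self, agent_id: str) -> float:
        # Detected misbehavior burns a fixed fraction of stake
        # and dents reputation; returns the amount burned.
        acct = self.agents[agent_id]
        burned = acct.stake * self.slash_fraction
        acct.stake -= burned
        acct.reputation -= 1.0
        return burned

reg = ReputationRegistry()
reg.register("agent-a", stake=100.0)
reg.reward("agent-a", 3.0)            # completed a verified task
burned = reg.slash("agent-a")         # later caught misreporting once
```

The point of putting such a registry on-chain rather than in one company's database is exactly the one made above: the stake, the rewards, and the slashes are all publicly auditable, so no central party has to be trusted to keep score honestly.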
I, Stewart Alsop, welcomed Ben Roper, CEO and founder of Play Culture, to this episode of Crazy Wisdom for a fascinating discussion. We kicked things off by diving into Ben's reservations about AI, particularly its impact on creative authenticity, before exploring his innovative project, Play Culture, which aims to bring tactical outdoor games to adults. Ben also shared his journey of teaching himself to code and his philosophy on building experiences centered on human connection rather than pure profit.

Check out this GPT we trained on the conversation

Timestamps

00:55 Ben Roper on AI's impact on creative authenticity and the dilution of the author's experience.
03:05 The discussion on AI leading to a "simulation of experience" versus genuine, embodied experiences.
08:40 Stewart Alsop explores the nuances of authenticity, honesty, and trust in media and personal interactions.
17:53 Ben discusses how trust is invaluable and often broken by corporate attempts to feign it.
20:22 Ben begins to explain the Play Culture project, discussing the community's confusion about its non-monetized approach, leading into his philosophy of "designing for people, not money."
37:08 Ben elaborates on the Play Culture experience: creating tactical outdoor games designed specifically for adults.
45:46 A comparison of Play Culture's approach with games like Pokémon GO, emphasizing "gentle technology."
58:48 Ben shares his thoughts on the future of augmented reality and designing humanistic experiences.
1:02:15 Ben describes "Pirate Gold," a real-world role-playing pirate simulator, as an example of Play Culture's innovative games.
1:06:30 How to find Play Culture and get involved in their events worldwide.

Key Insights

AI and Creative Authenticity: Ben, coming from a filmmaking background, views generative AI as a collaborator without a mind, which disassociates work from the author's unique experience. He believes art's value lies in being a window into an individual's life, a quality diluted by AI's averaged output.

Simulation vs. Real Experience: We discussed how AI and even some modern technologies offer simulations of experiences (like VR travel or social media connections) that lack the depth and richness of real-world engagement. These simulations can be easier to access but may leave individuals unfulfilled and unaware of what they're missing.

The Quest for Honesty Over Authenticity: I posited that while people claim to want authenticity, they might actually desire honesty more. Raw, unfiltered authenticity can be confronting, whereas honesty within a framework of trust allows for genuine connection without necessarily exposing every raw emotion.

Trust as Unpurchasable Value: Ben emphasized that trust is one of the few things that cannot be bought; it must be earned and is easily broken. This makes genuine trust incredibly valuable, especially in a world where corporate entities often feign trustworthiness for transactional purposes.

Designing for People, Not Money: Ben shared his philosophy behind Play Culture, which is to "design for people, not money." This means prioritizing genuine human experience, joy, and connection over optimizing for profit, believing that true value, including financial sustainability, can arise as a byproduct of creating something meaningful.

The Need for Adult Play: Play Culture aims to fill a void by creating tactical outdoor games specifically designed for adult minds and social dynamics. This goes beyond childlike play or existing adult games like video games and sports, focusing on socially driven gameplay, strategy, and unique adult experiences.

Gentle Technology in Gaming: Contrasting with AR-heavy games like Pokémon GO, Play Culture advocates for "gentle technology." The tech (like a mobile app) supports gameplay by providing information or connecting players, but the core interaction happens through players' senses and real-world engagement, not primarily through a screen.

Real-World Game Streaming as the Future: Ben's vision for Play Culture includes moving towards real-world game streaming, akin to video game streaming on Twitch, but featuring live-action tactical games played in real cities. This aims to create a new genre of entertainment showcasing genuine human interaction and strategy.

Contact Information
* Ben Roper's Instagram
* Website: playculture.com
I, Stewart Alsop, welcomed Woody Wiegmann to this episode of Crazy Wisdom, where we explored the fascinating and sometimes unsettling landscape of Artificial Intelligence. Woody, who is deeply involved in teaching AI, shared his insights on everything from the US-China AI race to the radical transformations AI is bringing to education and society at large.

Check out this GPT we trained on the conversation

Timestamps

01:17 The AI "Cold War": Discussing the intense AI development race between China and the US.
03:04 Opaque Models & Education's Resistance: The challenge of opaque AI and schools lagging in adoption.
05:22 AI Blocked in Schools: The paradox of teaching AI while institutions restrict access.
08:08 Crossing the AI Rubicon: How AI users are diverging from non-users into different realities.
09:00 Budgetary Constraints in AI Education: The struggle for resources like premium AI access for students.
12:45 Navigating AI Access for Students: Woody's ingenious workarounds for the premium AI divide.
19:15 Igniting Curiosity with AI: Students creating impressive projects, like catapult websites.
27:23 Exploring Grok and AI Interaction: Debating IP concerns and engaging with AI ("Morpheus").
46:19 AI's Societal Impact: AI girlfriends, masculinity, and the erosion of traditional skills.

Key Insights

The AI Arms Race: Woody highlights a "cold war of nerdiness" where China is rapidly developing AI models comparable to GPT-4 at a fraction of the cost. This competition raises questions about data transparency from both sides and the strategic implications of superintelligence.

Education's AI Resistance: I, Stewart Alsop, and Woody discuss the puzzling resistance to AI within educational institutions, including outright blocking of AI tools. This creates a paradox where courses on AI are taught in environments that restrict its use, hindering practical learning for students.

Diverging Realities: We explore how individuals who have crossed the "Rubicon" of AI adoption are now living in a vastly different world than those who haven't. This divergence is akin to past technological shifts but is happening at an accelerated pace, impacting how people learn, work, and perceive reality.

The Fading Relevance of Traditional Coding: Woody argues that focusing on teaching traditional coding languages like Python is becoming outdated in the age of advanced AI. AI can handle much of the detailed coding, shifting the necessary skills towards understanding AI systems, effective prompting, and higher-level architecture.

AI as the Ultimate Tutor: The advent of AI offers the potential for personalized, one-on-one tutoring for everyone, a far more effective learning method than traditional classroom lectures. However, this potential is hampered by institutional inertia and a lack of resources for tools like premium AI subscriptions for students.

Curiosity as the AI Catalyst: Woody shares anecdotes of students, even those initially disengaged, whose eyes light up when using AI for creative projects, like designing websites on niche topics such as catapults. This demonstrates AI's power to ignite curiosity and intrinsic motivation when paired with focused goals and the ability to build.

AI's Impact on Society and Skills: We touch upon the broader societal implications, including the rise of AI girlfriends addressing male loneliness and providing acceptance. Simultaneously, there's concern over the potential atrophy of critical skills like writing and debate if individuals overly rely on AI for summarization and opinion generation without deep engagement.

Contact Information
* Twitter/X: @RulebyPowerlaw
* Listeners can search for Woody Wiegmann's podcast "Courage over convention"
* LinkedIn: www.linkedin.com/in/dataovernarratives/
On this episode of Crazy Wisdom, I, Stewart Alsop, spoke with Neil Davies, creator of the Extelligencer project, about survival strategies in what he calls the "Dark Forest" of modern civilization — a world shaped by cryptographic trust, intelligence-immune system fusion, and the crumbling authority of legacy institutions. We explored how concepts like zero-knowledge proofs could defend against deepening informational warfare, the shift toward tribal "patchwork" societies, and the challenge of building a post-institutional framework for truth-seeking. Listeners can find Neil on Twitter as @sigilante and explore more of his work on the Extelligencer Substack.

Check out this GPT we trained on the conversation!

Timestamps

00:00 Introduction of Neil Davies and the Extelligencer project, setting the stage with Dark Forest theory and operational survival concepts.
05:00 Expansion on Dark Forest as a metaphor for Internet-age exposure, with examples like scam evolution, parasites, and the vulnerability of modern systems.
10:00 Discussion of immune-intelligence fusion, how organisms like anthills and the Portuguese Man o' War blend cognition and defense, leading into memetic immune systems online.
15:00 Introduction of cryptographic solutions, the role of signed communications, and the growing importance of cryptographic attestation against sophisticated scams.
20:00 Zero-knowledge proofs explained through real-world analogies like buying alcohol, emphasizing minimal information exposure and future-proofing identity verification.
25:00 Transition into post-institutional society, collapse of legacy trust structures, exploration of patchwork tribes, DAOs, and portable digital organizations.
30:00 Reflection on association vs. hierarchy, the persistence of oligarchies, and the shift from aristocratic governance to manipulated mass democracy.
35:00 AI risks discussed, including trapdoored LLMs, epistemic hygiene challenges, and historical examples like gold fulminate booby-traps in alchemical texts.
40:00 Controlled information flows, secular religion collapse, questioning sources of authority in a fragmented information landscape.
45:00 Origins and evolution of universities, from medieval student-driven models to Humboldt's research-focused institutions, and the absorption by the nation-state.
50:00 Financialization of universities, decay of independent scholarship, and imagining future knowledge structures outside corrupted legacy frameworks.

Key Insights

The "Dark Forest" is not just a cosmological metaphor, but a description of modern civilization's hidden dangers. Neil Davies explains that today's world operates like a Dark Forest where exposure — making oneself legible or visible — invites predation. This framework reshapes how individuals and groups must think about security, trust, and survival, particularly in an environment thick with scams, misinformation, and parasitic actors accelerated by the Internet.

Immune function and intelligence function have fused in both biological and societal contexts. Davies draws a parallel between decentralized organisms like anthills and modern human society, suggesting that intelligence and immunity are inseparable functions in highly interconnected systems. This fusion means that detecting threats, maintaining identity, and deciding what to incorporate or reject is now an active, continuous cognitive and social process.

Cryptographic tools are becoming essential for basic trust and survival. With the rise of scams that mimic legitimate authority figures and institutions, Davies highlights how cryptographic attestation — and eventually more sophisticated tools like zero-knowledge proofs — will become fundamental. Without cryptographically verifiable communication, distinguishing real demands from predatory scams may soon become impossible, especially as AI-generated deception grows more convincing.

Institutions are hollowing out, but will not disappear entirely. Rather than a sudden collapse, Davies envisions a future where legacy institutions like universities, corporations, and governments persist as "zombie" entities — still exerting influence but increasingly irrelevant to new forms of social organization. Meanwhile, smaller, nimble "patchwork" tribes and digital-first associations will become more central to human coordination and identity.

Modern universities have drifted far from their original purpose and structure. Tracing the history from medieval student guilds to Humboldt's 19th-century research universities, Davies notes that today's universities are heavily compromised by state agendas, mass democracy, and financialization. True inquiry and intellectual aloofness — once core to the ideal of the university — now require entirely new, post-institutional structures to be viable.

Artificial intelligence amplifies both opportunity and epistemic risk. Davies warns that large language models (LLMs) mainly recombine existing information rather than generate truly novel insights. Moreover, they can be trapdoored or poisoned at the data level, introducing dangerous, invisible vulnerabilities. This creates a new kind of "Dark Forest" risk: users must assume that any received information may carry unseen threats or distortions.

There is no longer a reliable central authority for epistemic trust. In a fragmented world where Wikipedia is compromised, traditional media is polarized, and even scientific institutions are politicized, Davies asserts that we must return to "epistemic hygiene." This means independently verifying knowledge where possible and treating all claims — even from AI — with skepticism. The burden of truth-validation increasingly falls on individuals and their trusted, cryptographically verifiable networks.
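Zero-knowledge proofs appear in the episode only by analogy (proving you can buy alcohol without exposing your birthdate). As a hedged, minimal sketch of the underlying mechanism, here is an interactive Schnorr-style proof of knowledge of a discrete logarithm: the prover convinces the verifier it knows a secret x with h = g^x mod p without revealing x. The group parameters are deliberately tiny toy numbers, not anything a real system would use.

```python
import secrets

# Toy group: p = 23 is a safe prime (p = 2q + 1 with q = 11 prime),
# and g = 4 generates the subgroup of order q. Real deployments use
# large, standardized groups; these numbers are illustration only.
p, q, g = 23, 11, 4

def schnorr_verify(x: int, h: int, rounds: int = 20) -> bool:
    """Interactive proof that the prover knows x with g**x % p == h.

    Each round reveals nothing about x beyond the claim itself: the
    response s is uniformly distributed because the random nonce r
    masks the term c * x.
    """
    for _ in range(rounds):
        r = secrets.randbelow(q)      # prover's secret nonce
        t = pow(g, r, p)              # prover's commitment, sent first
        c = secrets.randbelow(q)      # verifier's random challenge
        s = (r + c * x) % q           # prover's response (only use of x)
        if pow(g, s, p) != (t * pow(h, c, p)) % p:
            return False              # a single failed round exposes the cheat
    return True

h = pow(g, 7, p)                      # public value for the secret x = 7
assert schnorr_verify(7, h)           # honest prover always convinces
assert not schnorr_verify(5, h)       # wrong secret fails, except with
                                      # probability about (1/q)**rounds
```

The structure mirrors the alcohol-purchase analogy above: the verifier learns that the claim holds ("knows x", "is over 21") and nothing else, which is exactly the minimal-exposure property Davies argues the Dark Forest demands.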
On this episode of the Crazy Wisdom podcast, I, Stewart Alsop, sat down once again with Aaron Lowry for our third conversation, and it might be the most expansive yet. We touched on the cultural undercurrents of transhumanism, the fragile trust structures behind AI and digital infrastructure, and the potential of 3D printing with metals and geopolymers as a material path forward. Aaron shared insights from his hands-on restoration work, our shared fascination with Amish tech discernment, and how course-correcting digital dependencies can restore sovereignty. We also explored what it means to design for long-term human flourishing in a world dominated by misaligned incentives. For those interested in following Aaron's work, he's most active on Twitter at @Aaron_Lowry.

Check out this GPT we trained on the conversation!

Timestamps

00:00 – Stewart welcomes Aaron Lowry back for his third appearance. They open with reflections on cultural shifts post-COVID, the breakdown of trust in institutions, and a growing societal impulse toward individual sovereignty, free speech, and transparency.
05:00 – The conversation moves into the changing political landscape, specifically how narratives around COVID, Trump, and transhumanism have shifted. Aaron introduces the idea that historical events are often misunderstood due to our tendency to segment time, referencing Dan Carlin's quote, "everything begins in the middle of something else."
10:00 – They discuss how people experience politics differently now due to the Internet's global discourse, and how Aaron avoids narrow political binaries in favor of structural and temporal nuance. They explore identity politics, the crumbling of party lines, and the erosion of traditional social anchors.
15:00 – Shifting gears to technology, Aaron shares updates on 3D printing, especially the growing maturity of metal printing and geopolymers. He highlights how these innovations are transforming fields like automotive racing and aerospace, allowing for precise, heat-resistant, custom parts.
20:00 – The focus turns to mechanical literacy and the contrast between abstract digital work and embodied craftsmanship. Stewart shares his current tension between abstract software projects (like automating podcast workflows with AI) and his curiosity about the Amish and Mennonite approach to technology.
25:00 – Aaron introduces the idea of a cultural "core of integrated techne"—technologies that have been refined over time and aligned with human flourishing. He places Amish discernment on a spectrum between Luddite rejection and transhumanist acceleration, emphasizing the value of deliberate integration.
30:00 – The discussion moves to AI again, particularly the concept of building local, private language models that can persistently learn about and serve their user without third-party oversight. Aaron outlines the need for trust, security, and stateful memory to make this vision work.
35:00 – Stewart expresses frustration with the dominance of companies like Google and Facebook, and how owning the Jarvis-like personal assistant experience is critical. Aaron recommends options like GrapheneOS on a Pixel 7 and reflects on the difficulty of securing hardware at the chip level.
40:00 – They explore software development and the problem of hidden dependencies. Aaron explains how digital systems rest on fragile, often invisible material infrastructure and how that fragility is echoed in the complexity of modern software stacks.
45:00 – The concept of "always be reducing dependencies" is expanded. Aaron suggests the real goal is to reduce untrustworthy dependencies and recognize which are worth cultivating. Trust becomes the key variable in any resilient system, digital or material.
50:00 – The final portion dives into incentives. They critique capitalism's tendency to exploit value rather than build aligned systems. Aaron distinguishes rivalrous games from infinite games and suggests the future depends on building systems that are anti-rivalrous—where ideas compete, not people.
55:00 – They wrap up with reflections on course correction, spiritual orientation, and cultural reintegration. Stewart suggests titling the episode around infinite games, and Aaron shares where listeners can find him online.

Key Insights

Transhumanism vs. Techne Integration: Aaron frames the modern moment as a tension between transhumanist enthusiasm and a more grounded relationship to technology, rooted in "techne"—practical wisdom accumulated over time. Rather than rejecting all new developments, he argues for a continuous course correction that aligns emerging technologies with deep human values like truth, goodness, and beauty. The Amish and Mennonite model of communal tech discernment stands out as a countercultural but wise approach—judging tools by their long-term effects on community, rather than novelty or entertainment.

3D Printing as a Material Frontier: While most of the 3D printing world continues to refine filaments and plastic-based systems, Aaron highlights a more exciting trajectory in printed metals and geopolymers. These technologies are maturing rapidly and finding serious application in domains like Formula One, aerospace, and architectural experimentation. His conversations with others pursuing geopolymer 3D printing underscore a resurgence of interest in materially grounded innovation, not just digital abstraction.

Digital Infrastructure is Physical: Aaron emphasizes a point often overlooked: that all digital systems rest on physical infrastructure—power grids, servers, cables, switches. These systems are often fragile and loaded with hidden dependencies. Recognizing the material base of digital life brings a greater sense of responsibility and stewardship, rather than treating the internet as some abstract, weightless realm. This shift in awareness invites a more embodied and ecological relationship with our tools.

Local AI as a Trustworthy Companion: There's a compelling vision of a Jarvis-like local AI assistant that is fully private, secure, and persistent. For this to function, it must be disconnected from untrustworthy third-party cloud systems and trained on a personal, context-rich dataset. Aaron sees this as a path toward deeper digital agency: if we want machines that truly serve us, they need to know us intimately—but only in systems we control. Privacy, persistent memory, and alignment to personal values become the bedrock of such a system.

Dependencies Shape Power and Trust: A recurring theme is the idea that every system—digital, mechanical, social—relies on a web of dependencies. Many of these are invisible until they fail. Aaron's mantra, "always be reducing dependencies," isn't about total self-sufficiency but about cultivating trustworthy dependencies. The goal isn't zero dependence, which is impossible, but discerning which relationships are resilient, personal, and aligned with your values versus those that are extractive or opaque.

Incentives Must Be Aligned with the Good: A core critique is that most digital services today—especially those driven by advertising—are fundamentally misaligned with human flourishing. They monetize attention and personal data, often steering users toward addiction or ...
In this episode of Crazy Wisdom, Stewart Alsop talks with Will Bickford about the future of human intelligence, the exocortex, and the role of software as an extension of our minds. Will shares his thinking on brain-computer interfaces, PHEXT (a plain text protocol for structured data), and how high-dimensional formats could help us reframe the way we collaborate and think. They explore the abstraction layers of code and consciousness, and why Will believes that better tools for thought are not just about productivity, but about expanding the boundaries of what it means to be human. You can connect with Will in Twitter at @wbic16 or check out the links mentioned by Will in Github.Check out this GPT we trained on the conversation!Timestamps00:00 – Introduction to the concept of the exocortex and how current tools like plain text editors and version control systems serve as early forms of cognitive extension.05:00 – Discussion on brain-computer interfaces (BCIs), emphasizing non-invasive software interfaces as powerful tools for augmenting human cognition.10:00 – Introduction to PHEXT, a plain text format designed to embed high-dimensional structure into simple syntax, facilitating interoperability between software systems.15:00 – Exploration of software abstraction as a means of compressing vast domains of meaning into manageable forms, enhancing understanding rather than adding complexity.20:00 – Conversation about the enduring power of text as an interface, highlighting its composability, hackability, and alignment with human symbolic processing.25:00 – Examination of collaborative intelligence and the idea that intelligence emerges from distributed systems involving people, software, and shared ideas.30:00 – Discussion on the importance of designing better communication protocols, like PHEXT, to create systems that align with human thought processes and enhance cognitive capabilities.35:00 – Reflection on the broader implications of these technologies for the 
future of human intelligence and the potential for expanding the boundaries of human cognition.Key InsightsThe exocortex is already here, just not evenly distributed. Will frames the exocortex not as a distant sci-fi future, but as something emerging right now in the form of external software systems that augment our thinking. He suggests that tools like plain text editors, command-line interfaces, and version control systems are early prototypes of this distributed cognitive architecture—ways we already extend our minds beyond the biological brain.Brain-computer interfaces don't need to be invasive to be powerful. Rather than focusing on neural implants, Will emphasizes software interfaces as the true terrain of BCIs. The bridge between brain and computer can be as simple—and profound—as the protocols we use to interact with machines. What matters is not tapping into neurons directly, but creating systems that think with us, where interface becomes cognition.PHEXT is a way to compress meaning while remaining readable. At the heart of Will's work is PHEXT, a plain text format that embeds high-dimensional structure into simple syntax. It's designed to let software interoperate through shared, human-readable representations of structured data—stripping away unnecessary complexity while still allowing for rich expressiveness. It's not just a format, but a philosophy of communication between systems and people.Software abstraction is about compression, not complexity. Will pushes back against the idea that abstraction means obfuscation. Instead, he sees abstraction as a way to compress vast domains of meaning into manageable forms. Good abstractions reveal rather than conceal—they help you see more with less. In this view, the challenge is not just to build new software, but to compress new layers of insight into form.Text is still the most powerful interface we have. 
Despite decades of graphical interfaces, Will argues that plain text remains the highest-bandwidth cognitive tool. Text allows for versioning, diffing, grepping—it plugs directly into the brain's symbolic machinery. It's composable, hackable, and lends itself naturally to abstraction. Rather than moving away from text, the future might involve making text higher-dimensional and more semantically rich.The future of thinking is collaborative, not just computational. One recurring theme is that intelligence doesn't emerge in isolation—it's distributed. Will sees the exocortex as something inherently social: a space where people, software, and ideas co-think. This means building interfaces not just for solo users, but for networked groups of minds working through shared representations.Designing better protocols is designing better minds. Will's vision is protocol-first. He sees the structure of communication—between apps, between people, between thoughts—as the foundation of intelligence itself. By designing protocols like PHEXT that align with how we actually think, we can build software that doesn't just respond to us, but participates in our thought processes.
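The core PHEXT idea discussed here, embedding higher-dimensional structure in plain text via reserved break characters, can be illustrated with a toy two-level split. Note that PHEXT defines its own specific delimiter bytes and dimensions; the delimiters below are hypothetical stand-ins chosen only to show the nesting principle, not PHEXT's actual syntax:

```python
# Hypothetical delimiters for illustration only; PHEXT specifies its own
# reserved break characters, which this sketch does not reproduce.
SECTION_BREAK = "\x01"
SCROLL_BREAK = "\x02"

def parse(doc: str):
    """Split flat text into a 2-D structure: a list of sections,
    each a list of scrolls. The document stays valid plain text."""
    return [section.split(SCROLL_BREAK) for section in doc.split(SECTION_BREAK)]

doc = "intro" + SCROLL_BREAK + "notes" + SECTION_BREAK + "appendix"
print(parse(doc))  # [['intro', 'notes'], ['appendix']]
```

Because the delimiters are ordinary characters, such a document remains greppable, diffable, and version-controllable, which is the property the conversation highlights.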
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Trent Gillham—also known as Drunk Plato—for a far-reaching conversation on the shifting tides of technology, memetics, and media. Trent shares insights from building Meme Deck (find it at memedeck.xyz or follow @memedeckapp on X), exploring how social capital, narrative creation, and open-source AI models are reshaping not just the tools we use, but the very structure of belief and influence in the information age. We touch on everything from the collapse of legacy media, to hyperstition and meme warfare, to the metaphysics of blockchain as the only trustable memory in an unmoored future. You can find Trent on Twitter as @AidenSolaran. Check out this GPT we trained on the conversation!Timestamps00:00 – Introduction to Trent Gillham and Meme Deck, early thoughts on AI's rapid pace, and the shift from training models to building applications around them.05:00 – Discussion on the collapse of the foundational model economy, investor disillusionment, GPU narratives, and how AI infrastructure became a kind of financial bubble.10:00 – The function of markets as belief systems, blowouts when inflated narratives hit reality, and how meme-based value systems are becoming indistinguishable from traditional finance.15:00 – The role of hyperstition in creation, comparing modern tech founders to early 20th-century inventors, and how visual proof fuels belief and innovation.20:00 – Reflections on the intelligence community's influence in tech history, Facebook's early funding, and how soft influence guides the development of digital tools and platforms.25:00 – Weaponization of social media, GameStop as a memetic uprising, the idea of memetic tools leaking from government influence into public hands.30:00 – Meme Deck's vision for community-led narrative creation, the shift from centralized media to decentralized, viral, culturally fragmented storytelling.35:00 – The sophistication gap in modern media, remix culture,
the idea of decks as mini subreddits or content clusters, and incentivizing content creation with tokens.40:00 – Good vs bad meme coins, community-first approaches, how decentralized storytelling builds real value through shared ownership and long-term engagement.45:00 – Memes as narratives vs manipulative psyops, blockchain as the only trustable historical record in a world of mutable data and shifting truths.50:00 – Technical challenges and future plans for Meme Deck, data storage on-chain, reputation as a layer of trust, and AI's need for immutable data sources.55:00 – Final reflections on encoding culture, long-term value of on-chain media, and Trent's vision for turning podcast conversations into instant, storyboarded, memetic content.Key InsightsThe real value in AI isn't in building models—it's in building tools that people can use: Trent emphasized that the current wave of AI innovation is less about creating foundational models, which have become commoditized, and more about creating interfaces and experiences that make those models useful. Training base models is increasingly seen as a sunk cost, and the real opportunity lies in designing products that bring creative and cultural capabilities directly to users.Markets operate as belief machines, and the narratives they run on are increasingly memetic: He described financial markets not just as economic systems, but as mechanisms for harvesting collective belief—what he called “hyperstition.” This dynamic explains the cycles of hype and crash, where inflated visions eventually collide with reality in what he terms "blowouts." In this framing, stocks and companies function similarly to meme coins—vehicles for collective imagination and risk.Memes are no longer just jokes—they are cultural infrastructure: As Trent sees it, memes are evolving into complex, participatory systems for narrative building. With tools like Meme Deck, entire story worlds can be generated, remixed, and spread by communities. 
This marks a shift from centralized, top-down media (like Hollywood) to decentralized, socially-driven storytelling where virality is coded into the content from the start.Community is the new foundation of value in digital economies: Rather than focusing on charismatic individuals or short-term hype, Trent emphasized that lasting projects need grassroots energy—what he calls “vibe strapping.” Successful meme coins and narrative ecosystems depend on real participation, sustained engagement, and a shared sense of creative ownership. Without that, projects fizzle out as quickly as they rise.The battle for influence has moved from borders to minds: Reflecting on the information age, Trent noted that power now resides in controlling narratives, and thus in shaping perception. This is why information warfare is subtle, soft, and persistent—and why traditional intelligence operations have evolved into influence campaigns that play out in digital spaces like social media and meme culture.Blockchains may become the only reliable memory in a world of digital manipulation: In an era where digital content is easily altered or erased, Trent argued that blockchain offers the only path to long-term trust. Data that ends up on-chain can be verified and preserved, giving future intelligences—or civilizations—a stable record of what really happened. He sees this as crucial not only for money, but for culture itself.Meme Deck aims to democratize narrative creation by turning community vibes into media outputs: Trent shared his vision for Meme Deck as a platform where communities can generate not just memes, but entire storylines and media formats—from anime pilots to cinematic remixes—by collaborating and contributing creative energy. It's a model where decentralized media becomes both an art form and a social movement, rooted in collective imagination rather than corporate production.
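Trent's claim that on-chain data is the "only trustable memory" rests on hash chaining: each record commits to the hash of the one before it, so editing any past entry invalidates every later link. A minimal toy sketch of that property (not Meme Deck's actual storage layer):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block's predecessor

def add_block(chain, data):
    """Append a record that commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    chain.append({"data": data, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every link; any edit to an earlier block breaks the rest."""
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else GENESIS
        payload = json.dumps({"data": block["data"], "prev": prev_hash},
                             sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
    return True

chain = []
add_block(chain, "meme v1 published")
add_block(chain, "meme v2 remix")
print(verify(chain))            # the intact chain checks out
chain[0]["data"] = "rewritten"  # tamper with history...
print(verify(chain))            # ...and verification fails
```

Real blockchains add consensus and replication on top, but this is the mechanism that makes on-chain records hard to quietly rewrite.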
Unleash the radical transformative power at the heart of the world's great wisdom traditions as we Radiate Crazy-Wisdom with Jason Brett Serle. Jason is the author of The Monkey in the Bodhi Tree: Crazy-Wisdom & the Way of the Wise-Fool as well as a writer, filmmaker, NLP Master and licensed hypnotherapist dealing with themes involving psychology, spirituality, sovereignty, wellness, and human potential. Of the many paths up the mountain, crazy-wisdom presents a dramatic and formidable climb to those that are so inclined. Now for the first time, the true spiritual landscape of the wise-fool has been laid bare and its features and principal landmarks revealed. Written in two parts, The Monkey in the Bodhi Tree is the first comprehensive look at this universal phenomenon, from its origins and development to the lives of its greatest adepts and luminaries. Learn more about Jason at jasonbrettserle.com. Support this podcast by going to radiatewellnesscommunity.com/podcast and clicking on "Support the Show," and be sure to follow and share on all the socials! Learn more about your ad choices. Visit megaphone.fm/adchoices
On this episode of Crazy Wisdom, I'm joined by David Pope, Commissioner on the Wyoming Stable Token Commission, and Executive Director Anthony Apollo, for a wide-ranging conversation that explores the bold, nuanced effort behind Wyoming's first-of-its-kind state-issued stable token. I'm your host Stewart Alsop, and what unfolds in this dialogue is both a technical unpacking and philosophical meditation on trust, financial sovereignty, and what it means for a government to anchor itself in transparent, programmable value. We move through Anthony's path from Wall Street to Web3, the infrastructure and intention behind tokenizing real-world assets, and how the U.S. dollar's future could be shaped by state-level innovation. If you're curious to follow along with their work, everything from blockchain selection criteria to commission recordings can be found at stabletoken.wyo.gov.Check out this GPT we trained on the conversation!Timestamps00:00 – David Pope and Anthony Apollo introduce themselves, clarifying they speak personally, not for the Commission. You, Stewart, set an open tone, inviting curiosity and exploration.05:00 – Anthony shares his path from traditional finance to Ethereum and government, driven by frustration with legacy banking inefficiencies.10:00 – Tokenized bonds enter the conversation via the Spencer Dinwiddie project. Pope explains early challenges with defining “real-world assets.”15:00 – Legal limits of token ownership vs. asset title are unpacked. You question whether anything “real” has been tokenized yet.20:00 – Focus shifts to the Wyoming Stable Token: its constitutional roots and blockchain as a tool for fiat-backed stability without inflation.25:00 – Comparison with CBDCs: Apollo explains why Wyoming's token is transparent, non-programmatic, and privacy-focused.30:00 – Legislative framework: the 102% backing rule, public audits, and how rulemaking differs from law. 
You explore flexibility and trust.35:00 – Global positioning: how Wyoming stands apart from other states and nations in crypto policy. You highlight U.S. federalism's role.40:00 – Topics shift to velocity, peer-to-peer finance, and risk. You connect this to Urbit and decentralized systems.45:00 – Apollo unpacks the stable token's role in reinforcing dollar hegemony, even as BRICS move away from it.50:00 – Wyoming's transparency and governance as financial infrastructure. You reflect on meme coins and state legitimacy.55:00 – Discussion of Bitcoin reserves, legislative outcomes, and what's ahead. The conversation ends with vision and clarity.Key InsightsWyoming is pioneering a new model for state-level financial infrastructure. Through the creation of the Wyoming Stable Token Commission, the state is developing a fully-backed, transparent stable token that aims to function as a public utility. Unlike privately issued stablecoins, this one is mandated by law to be 102% backed by U.S. dollars and short-term treasuries, ensuring high trust and reducing systemic risk.The stable token is not just a tech innovation—it's a philosophical statement about trust. As David Pope emphasized, the transparency and auditability of blockchain-based financial instruments allow for a shift toward self-auditing systems, where trust isn't assumed but proven. In contrast to the opaque operations of legacy banking systems, the stable token is designed to be programmatically verifiable.Tokenized real-world assets are coming, but we're not there yet. Anthony Apollo and David Pope clarify that most "real-world assets" currently tokenized are actually equity or debt instruments that represent ownership structures, not the assets themselves. The next leap will involve making the token itself the title, enabling true fractional ownership of physical or financial assets without intermediary entities.This initiative strengthens the U.S. dollar rather than undermining it. 
By creating a transparent, efficient vehicle for global dollar transactions, the Wyoming Stable Token could bolster the dollar's role in international finance. Instead of competing with the dollar, it reinforces its utility in an increasingly digital economy—offering a compelling alternative to central bank digital currencies that raise concerns around surveillance and control.Stable tokens have the potential to become major holders of U.S. debt. Anthony Apollo points out that the aggregate of all fiat-backed stable tokens already represents a top-tier holder of U.S. treasuries. As adoption grows, state-run stable tokens could play a crucial role in sovereign debt markets, filling gaps left by foreign governments divesting from U.S. securities.Public accountability is central to Wyoming's approach. Unlike private entities that can change terms at will, the Wyoming Commission is legally bound to go through a public rulemaking process for any adjustments. This radical transparency offers both stability and public trust, setting a precedent for how digital public infrastructure can be governed.The ultimate goal is to build a bridge between traditional finance and the Web3 future. Rather than burn the old system down, Pope and Apollo are designing the stable token as a pragmatic transition layer—something institutions can trust and privacy advocates can respect. It's about enabling safe experimentation and gradual transformation, not triggering collapse.
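The 102% rule described above is simple arithmetic: reserves of dollars and short-term treasuries must equal at least 1.02 times the tokens outstanding. A toy check, using integer math to avoid float rounding; the dollar figures are hypothetical, for illustration only:

```python
# Wyoming's statutory requirement: reserves >= 102% of tokens outstanding.
BACKING_NUM, BACKING_DEN = 102, 100

def required_reserves(tokens_outstanding: int) -> int:
    """Dollars of reserves required for a given token supply."""
    return tokens_outstanding * BACKING_NUM // BACKING_DEN

def is_fully_backed(tokens_outstanding: int, reserves: int) -> bool:
    return reserves >= required_reserves(tokens_outstanding)

# Hypothetical figures, not actual Commission numbers.
print(required_reserves(100_000_000))             # 102000000
print(is_fully_backed(100_000_000, 101_500_000))  # False: 0.5M short
print(is_fully_backed(100_000_000, 103_000_000))  # True
```

The 2% over-collateralization is the buffer that absorbs treasury price moves between audits, which is why the figure exceeds a flat 100%.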
This week on F.A.T.E. I'm joined by Jason Brett Serle—licensed NLP practitioner, hypnotherapist, and author of The Monkey in the Bodhi Tree: Crazy Wisdom and the Way of the Wise Fool. In this episode, we dive deep into the teachings of Zen masters, Buddha masters, and ancient mystics, exploring how they each carved their unique paths toward enlightenment.We unravel the nature of reality, examine the current state of the human condition, and discuss how easily our perceptions are shaped—and often manipulated—by what we see, hear, and consume. Jason shares his own lifelong curiosity about who we are, why we're here, and how embracing “crazy wisdom” requires bravery, individuality, and a willingness to think beyond the norm.There isn't just one path to awakening—and the great masters of old left us clues to many. This is a conversation about curiosity, consciousness, and finding your own way back to truth. It's a stimulating conversation! Join us! BUY HIS BOOK:The Monkey in the Bodhi Tree: Crazy-Wisdom & the Way of the Wise-Fool: Brett Serle, Jason: 9781803417448: Amazon.com: BooksJASON BRETT SERLE WEBSITE:Jason Brett SerlePlease leave a RATING or REVIEW (on your podcast listening platform) or Subscribe to my YouTube Channel Follow me or subscribe to the F.A.T.E. podcast click here:https://linktr.ee/f.a.t.e.podcastIf you have a story of spiritual awakening that you would like to tell, email me at fromatheismtoenlightenment@gmail.com
In this episode of Crazy Wisdom, Stewart Alsop speaks with German Jurado about the strange loop between computation and biology, the emergence of reasoning in AI models, and what it means to "stand on the shoulders" of evolutionary systems. They talk about CRISPR not just as a gene-editing tool, but as a memory architecture encoded in bacterial immunity; they question whether LLMs are reasoning or just mimicking it; and they explore how scientists navigate the unknown with a kind of embodied intuition. For more about German's work, you can connect with him through email at germanjurado7@gmail.com.Check out this GPT we trained on the conversation!Timestamps00:00 - Stewart introduces German Jurado and opens with a reflection on how biology intersects with multiple disciplines—physics, chemistry, computation.05:00 - They explore the nature of life's interaction with matter, touching on how biology is about the interface between organic systems and the material world.10:00 - German explains how bioinformatics emerged to handle the complexity of modern biology, especially in genomics, and how it spans structural biology, systems biology, and more.15:00 - Introduction of AI into the scientific process—how models are being used in drug discovery and to represent biological processes with increasing fidelity.20:00 - Stewart and German talk about using LLMs like GPT to read and interpret dense scientific literature, changing the pace and style of research.25:00 - The conversation turns to societal implications—how these tools might influence institutions, and the decentralization of expertise.30:00 - Competitive dynamics between AI labs, the scaling of context windows, and speculation on where the frontier is heading.35:00 - Stewart reflects on English as the dominant language of science and the implications for access and translation of knowledge.40:00 - Historical thread: they discuss the Republic of Letters, how the structure of knowledge-sharing has evolved, and what AI 
might do to that structure.45:00 - Wrap-up thoughts on reasoning, intuition, and the idea of scientists as co-evolving participants in both natural and artificial systems.50:00 - Final reflections and thank-yous, German shares where to find more of his thinking, and Stewart closes the loop on the conversation.Key InsightsCRISPR as a memory system – Rather than viewing CRISPR solely as a gene-editing tool, German Jurado frames it as a memory architecture—an evolved mechanism through which bacteria store fragments of viral DNA as a kind of immune memory. This perspective shifts CRISPR into a broader conceptual space, where memory is not just cognitive but deeply biological.AI models as pattern recognizers, not yet reasoners – While large language models can mimic reasoning impressively, Jurado suggests they primarily excel at statistical pattern matching. The distinction between reasoning and simulation becomes central, raising the question: are these systems truly thinking, or just very good at appearing to?The loop between computation and biology – One of the core themes is the strange feedback loop where biology inspires computational models (like neural networks), and those models in turn are used to probe and understand biological systems. It's a recursive relationship that's accelerating scientific insight but also complicating our definitions of intelligence and understanding.Scientific discovery as embodied and intuitive – Jurado highlights that real science often begins in the gut, in a kind of embodied intuition before it becomes formalized. This challenges the myth of science as purely rational or step-by-step and instead suggests that hunches, sensory experience, and emotional resonance play a crucial role.Proteins as computational objects – Proteins aren't just biochemical entities—they're shaped by information. 
Their structure, function, and folding dynamics can be seen as computations, and tools like AlphaFold are beginning to unpack that informational complexity in ways that blur the line between physics and code.Human alignment is messier than AI alignment – While AI alignment gets a lot of attention, Jurado points out that human alignment—between scientists, institutions, and across cultures—is historically chaotic. This reframes the AI alignment debate in a broader evolutionary and historical context, questioning whether we're holding machines to stricter standards than ourselves.Standing on the shoulders of evolutionary processes – Evolution is not just a backdrop but an active epistemic force. Jurado sees scientists as participants in a much older system of experimentation and iteration—evolution itself. In this view, we're not just designing models; we're being shaped by them, in a co-evolution of tools and understanding.
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Naman Mishra, CTO of Repello AI, to unpack the real-world security risks behind deploying large language models. We talk about layered vulnerabilities—from the model, infrastructure, and application layers—to attack vectors like prompt injection, indirect prompt injection through agents, and even how a simple email summarizer could be exploited to trigger a reverse shell. Naman shares stories like the accidental leak of a Windows activation key via an LLM and explains why red teaming isn't just a checkbox, but a continuous mindset. If you want to learn more about his work, check out Repello's website at repello.ai. Check out this GPT we trained on the conversation!Timestamps00:00 - Stewart Alsop introduces Naman Mishra, CTO of Repello AI. They frame the episode around AI security, contrasting prompt injection risks with traditional cybersecurity in ML apps.05:00 - Naman explains the layered security model: model, infrastructure, and application layers. He distinguishes safety (bias, hallucination) from security (unauthorized access, data leaks).10:00 - Focus on the application layer, especially in finance, healthcare, and legal. Naman shares how ChatGPT leaked a Windows activation key and stresses data minimization and security-by-design.15:00 - They discuss red teaming, how Repello AI simulates attacks, and Anthropic's HackerOne challenge. Naman shares how adversarial testing strengthens LLM guardrails.20:00 - Conversation shifts to AI agents and autonomy. Naman explains indirect prompt injection via email or calendar, leading to real exploits like reverse shells—all triggered by summarizing an email.25:00 - Stewart compares the Internet to a castle without doors. Naman explains the cat-and-mouse game of security—attackers need one flaw; defenders must lock every door. 
LLM insecurity lowers the barrier for attackers.30:00 - They explore input/output filtering, role-based access control, and clean fine-tuning. Naman admits most guardrails can be broken and only block low-hanging fruit.35:00 - They cover denial-of-wallet attacks—LLMs exploited to run up massive token costs. Naman critiques DeepSeek's weak alignment and state bias, noting training data risks.40:00 - Naman breaks down India's AI scene: Bangalore as a hub, US-India GTM, and the debate between sovereignty vs. pragmatism. He leans toward India building foundational models.45:00 - Closing thoughts on India's AI future. Naman mentions Sarvam AI, Krutrim, and Paris Chopra's Loss Funk. He urges devs to red team before shipping—"close the doors before enemies walk in."Key InsightsAI security requires a layered approach. Naman emphasizes that GenAI applications have vulnerabilities across three primary layers: the model layer, infrastructure layer, and application layer. It's not enough to patch up just one—true security-by-design means thinking holistically about how these layers interact and where they can be exploited.Prompt injection is more dangerous than it sounds. Direct prompt injection is already risky, but indirect prompt injection—where an attacker hides malicious instructions in content that the model will process later, like an email or webpage—poses an even more insidious threat. Naman compares it to smuggling weapons past the castle gates by hiding them in the food.Red teaming should be continuous, not a one-off. One of the critical mistakes teams make is treating red teaming like a compliance checkbox. Naman argues that red teaming should be embedded into the development lifecycle, constantly testing edge cases and probing for failure modes, especially as models evolve or interact with new data sources.LLMs can unintentionally leak sensitive data. 
In one real-world case, a language model fine-tuned on internal documentation ended up leaking a Windows activation key when asked a completely unrelated question. This illustrates how even seemingly benign outputs can compromise system integrity when training data isn't properly scoped or sanitized.Denial-of-wallet is an emerging threat vector. Unlike traditional denial-of-service attacks, LLMs are vulnerable to economic attacks where a bad actor can force the system to perform expensive computations, draining API credits or infrastructure budgets. This kind of vulnerability is particularly dangerous in scalable GenAI deployments with limited cost monitoring.Agents amplify security risks. While autonomous agents offer exciting capabilities, they also open the door to complex, compounded vulnerabilities. When agents start reading web content or calling tools on their own, indirect prompt injection can escalate into real-world consequences—like issuing financial transactions or triggering scripts—without human review.The Indian AI ecosystem needs to balance speed with sovereignty. Naman reflects on the Indian and global context, warning against simply importing models and infrastructure from abroad without understanding the security implications. There's a need for sovereign control over critical layers of AI systems—not just for innovation's sake, but for national resilience in an increasingly AI-mediated world.
JASON BRETT SERLE is a British writer, filmmaker, musician, Neuro-linguistic Programming (NLP) Master and licensed hypnotherapist with a particular focus on themes involving psychology, spirituality, wellness, and human potential. He has written articles for Jain Spirit and Watkins magazines and has interviewed people such as Eckhart Tolle, Robert Anton Wilson, Andrew Cohen, Jan Kersschot, and Amado Crowley. He is cited in Crowley's 2002 book Liber Alba: The Questions Most Often Asked of an Occult Master as being the only other person to have seen The Book of Desolation; a book purported to have been brought back from Cairo by his father, Aleister Crowley, in 1904. In 2012 he wrote and produced his first documentary film, 'Mind Your Mind: A Primer for Psychological Independence' which looks at the psychological methods used to manipulate people and what they can do to protect themselves. He also composed and performed most of the soundtrack. The film is distributed by Journeyman Films in the UK and Film Media Group in the US, and it was an official selection for the London International Documentary Festival (LIDF) in 2012. We talk about: 1. What exactly is crazy wisdom, and what makes it a path worth exploring? 2. How does crazy wisdom differ from what you call in the book divine madness? 3. How does The Monkey in the Bodhi Tree challenge our conventional understanding of sanity? 4. Why is trans-rational thought—going beyond logic and reason—so often misunderstood? 5. What are some of the most striking historical examples of crazy-wisdom? 6. How can embracing crazy-wisdom lead to greater clarity and self-realization? 7. How has crazy-wisdom influenced art, literature, and culture throughout history? 8. Why do spiritual movements sometimes attract charlatans, and how can seekers distinguish authenticity from deception? 9. What inspired you to explore this topic, and what impact has it had on your own perspective? 10. 
If someone wants to begin exploring crazy-wisdom, what is the first step they should take? 11. Where can people read The Monkey in the Bodhi Tree? O books Presents The Monkey in the Bodhi Tree Crazy-Wisdom & the Way of the Wise-Fool by Jason Brett Serle Release date: March 1st 2025 Categories: Eastern, Mindfulness & meditation, Rituals & Practice Unleash the radical, transformative power at the heart of the world's great wisdom traditions. Of the many paths up the mountain, that of crazy-wisdom, although one of the lesser travelled, presents a dramatic and formidable climb to those that are so inclined. Now for the first time, the true spiritual landscape of the wise-fool has been laid bare and its features and principal landmarks revealed. Written in two parts, loosely based on the theory and practice of crazy-wisdom, The Monkey in the Bodhi Tree is the first comprehensive look at this universal phenomenon, from its origins and development to the lives of its greatest adepts and luminaries. In addition to the theoretical foundations laid down in Part I, Part II deals with its practice and aims to demonstrate crazy-wisdom in action. To this end, 151 teaching tales from around the world have been meticulously gathered and retold to illustrate the methods of the great masters and adepts - stories that not only give practical insight but also, like Zen koans, can be used as contemplative tools to illuminate and provoke epiphany. From the enigmatic Mahasiddhas of ancient India to the eccentric Taoist poet-monks of China, from the uncompromising insights of the Buddhist Tantrikas to the unconventional wisdom of Sufi heretics and the utter surrender to God displayed by the Fools for Christ, this book will take you to a place where the boundaries of logic and reason dissolve and enlightenment awaits those daring enough to venture forth. 
BOOK LINK: https://www.collectiveinkbooks.com/o-books/our-books/monkey-bodhi-tree-crazy-wisdom JASON'S WEBSITE: www.jasonbrettserle.com
In this episode of Crazy Wisdom, host Stewart Alsop talks with Rosario Parlanti, a longtime crypto investor and real estate attorney, about the shifting landscape of decentralization, AI, and finance. They explore the power struggles between centralized and decentralized systems, the role of AI agents in finance and infrastructure, and the legal gray areas emerging around autonomous technology. Rosario shares insights on trusted execution environments, token incentives, and how projects like Phala Network are building decentralized cloud computing. They also discuss the changing narrative around Bitcoin, the potential for AI-driven financial autonomy, and the future of censorship-resistant platforms. Follow Rosario on X @DeepinWhale and check out Phala Network to learn more.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:25 Understanding Decentralized Cloud Infrastructure
04:40 Centralization vs. Decentralization: A Philosophical Debate
06:56 Political Implications of Centralization
17:19 Technical Aspects of Phala Network
24:33 Crypto and AI: The Future Intersection
25:11 The Convergence of Crypto and AI
25:59 Challenges with Centralized Cloud Services
27:36 Decentralized Cloud Solutions for AI
30:32 Legal and Ethical Implications of AI Agents
32:59 The Future of Decentralized Technologies
41:56 Crypto's Role in Global Financial Freedom
49:27 Closing Thoughts and Future Prospects

Key Insights

Decentralization is not absolute, but a spectrum. Rosario Parlanti explains that decentralization doesn't mean eliminating central hubs entirely, but rather reducing choke points where power is overly concentrated. Whether in finance, cloud computing, or governance, every system faces forces pushing toward centralization for efficiency and control, while counterforces work to redistribute power and increase resilience.

Trusted execution environments (TEEs) are crucial for decentralized cloud computing. Rosario highlights how Phala Network uses TEEs, hardware-based security measures that isolate sensitive data from external access. This ensures that decentralized cloud services can operate securely, preventing unauthorized access while allowing independent providers to host data and run applications outside the control of major corporations like Amazon and Google.

AI agents will need decentralized infrastructure to function autonomously. The conversation touches on the growing power of AI-driven autonomous agents, which can execute financial trades, conduct research, and even generate content. However, running such agents on centralized cloud providers like AWS could create regulatory and operational risks. Decentralized cloud networks like Phala offer a way for these agents to operate freely, without interference from governments or corporations.

Regulatory arbitrage will shape the future of AI and crypto. Rosario describes how businesses and individuals are already leveraging jurisdiction shopping—structuring AI entities or financial operations in countries with more favorable regulations. He speculates that AI agents could be housed within offshore LLCs or irrevocable trusts, creating legal distance between their creators and their actions, raising new ethical and legal challenges.

Bitcoin's narrative has shifted from currency to investment asset. Originally envisioned as a peer-to-peer electronic cash system, Bitcoin has increasingly been treated as digital gold, largely due to the influence of institutional investors and regulatory frameworks like Bitcoin ETFs. Rosario argues that this shift in perception has led to Bitcoin being co-opted by the very financial institutions it was meant to disrupt.

The rise of AI-driven financial autonomy could bypass traditional banking and regulation. The combination of AI, smart contracts, and decentralized finance (DeFi) could enable AI agents to conduct financial transactions without human oversight. This could range from algorithmic trading to managing business operations, potentially reducing reliance on traditional banking systems and challenging the ability of governments to enforce financial regulations.

The accelerating clash between technology and governance will redefine global power structures. As AI and decentralized systems gain momentum, traditional nation-state mechanisms for controlling information, currency, and infrastructure will face unprecedented challenges. Rosario and Stewart discuss how this shift mirrors previous disruptions—such as social media's impact on information control—and speculate on whether governments will adapt, resist, or attempt to co-opt these emerging technologies.
Join Andrew and Cordula as they delve into the enigmatic archetype of the Fool. What does it mean to embrace the Fool's energy, and how can we navigate the delicate balance between freedom and responsibility?

In this conversation, Andrew and Cordula explore:
* The Nature of the Fool: From the earthy, Crazy Wisdom figure of the Marseilles Tarot to the more whimsical interpretations, they discuss the diverse facets of this archetype.
* The Fool's Journey: How does one transition from naive innocence to mature wisdom? They examine the pitfalls and potentials of embracing the Fool's path.
* Freedom and Responsibility: Can the Fool find balance? They discuss the importance of grounding, boundaries, and the integration of work and love in a meaningful life.
* Personal Anecdotes: Hear stories of their own experiences with the Fool's energy, from bohemian nights in Paris to the challenges of parenthood.
* Archetypes and Growth: How can we harness the Fool's energy to foster personal growth and resilience? They draw on examples from literature, art, and personal experience.
* The importance of "taming" the fox, and what that means in relationships.
* The difference between the immature and the mature Fool.
* The importance of moving from the Fool to the Magician.

Timestamps:
* 0:00 - Introduction to Parallax View and Cordula
* 1:09 - Cordula's introduction and the Fool's night in Paris
* 4:17 - What is the Fool?
* 6:26 - Navigating naivety and wisdom
* 12:15 - The impact of parenthood on the Fool's journey
* 16:46 - Invoking the Fool's energy
* 20:01 - The Fool's freedom and the cost of it
* 26:35 - The mature Fool and finding a map
* 30:15 - Moving from the Fool to the Magician
* 33:26 - Taming the fox and the Little Prince
* 37:35 - Creating Boundaries for the Fool

Connect with Parallax: https://www.parallax-media.com/the-parallax-view

#ParallaxView #TheFool #Archetypes #PersonalGrowth #Wisdom #Spirituality #Tarot #Philosophy
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Gabe Dominocielo, co-founder of Umbra, a space tech company revolutionizing satellite imagery. We discuss the rapid advancements in space-based observation, the economics driving the industry, and how AI intersects with satellite data. Gabe shares insights on government contracting, defense applications, and the shift toward cost-minus procurement models. We also explore the broader implications of satellite technology—from hedge funds analyzing parking lots to wildfire response efforts. Check out more about Gabe and Umbra at umbraspace.com (https://umbraspace.com), and don't miss their open data archive for high-resolution satellite imagery.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:05 Gabe's Background and Umbra's Mission
00:34 The Story Behind 'Come and Take It'
01:32 Space Technology and Cost Plus Contracts
03:28 The Impact of Elon Musk and SpaceX
05:16 Umbra's Business Model and Profitability
07:28 Challenges in the Satellite Business
11:45 Investors and Funding Journey
19:31 Space Business Landscape and Future Prospects
23:09 Defense and Regulatory Challenges in Space
31:06 Practical Applications of Satellite Data
33:16 Unexpected Wealth and Autistic Curiosity
33:49 Beet Farming and Data Insights
35:09 Philosophy in Business Strategy
38:56 Empathy and Investor Relations
43:00 Raising Capital: Strategies and Challenges
44:56 The Sovereignty Game vs. Venture Game
51:12 Concluding Thoughts and Contact Information
52:57 The Treasure Hunt and AI Dependencies

Key Insights

The Shift from Cost-Plus to Cost-Minus in Government Contracting – Historically, aerospace and defense contracts operated under a cost-plus model, where companies were reimbursed for expenses with a guaranteed profit. Gabe explains how the shift toward cost-minus (firm-fixed pricing) is driving efficiency and competition in the industry, much like how SpaceX drastically reduced launch costs by offering services instead of relying on bloated government contracts.

Satellite Imagery Has Become a Crucial Tool for Businesses – Beyond traditional defense and intelligence applications, high-resolution satellite imagery is now a critical asset for hedge funds, investors, and commercial enterprises. Gabe describes how firms use satellite data to analyze parking lots, monitor supply chains, and even track cryptocurrency mining activity based on power line sagging and cooling fan usage on data centers.

Space Technology is More Business-Driven Than Space-Driven – While many assume space startups are driven by a passion for exploration, Umbra's success is rooted in strong business fundamentals. Gabe emphasizes that their focus is on unit economics, supply-demand balance, and creating a profitable company rather than simply innovating for the sake of technology.

China's Growing Presence in Space and Regulatory Challenges – Gabe raises concerns about China's aggressive approach to space, noting that they often ignore international agreements and regulations. Meanwhile, American companies face significant bureaucratic hurdles, sometimes spending millions just to navigate licensing and compliance. He argues that unleashing American innovation by reducing regulatory friction is essential to maintaining leadership in the space industry.

Profitability is the Ultimate Measure of Success – Unlike many venture-backed space startups that focus on hype, Umbra has prioritized profitability, making it one of the few successful Earth observation companies. Gabe contrasts this with competitors who raised massive sums, spent excessively, and ultimately failed because they weren't built on sustainable business models.

Satellite Technology is Revolutionizing Disaster Response – One of the most impactful uses of Umbra's satellite imagery has been in wildfire response. By capturing images through smoke and clouds, their data was instrumental in mapping wildfires in Los Angeles. They even made this data freely available, helping emergency responders and news organizations better understand the crisis.

Philosophy and Business Strategy Go Hand in Hand – Gabe highlights how strategic thinking and philosophical principles guide decision-making in business. Whether it's understanding investor motivations, handling conflicts with empathy, or ensuring a company can sustain itself for decades rather than chasing short-term wins, having a strong philosophical foundation is key to long-term success.
On this episode of Crazy Wisdom, Stewart Alsop welcomes Andrew Burlinson, an artist and creative thinker, for a deep conversation about technology, creativity, and the human spirit. They explore the importance of solitude in the creative process, the addictive nature of digital engagement, and how AI might both challenge and enhance human expression. Andrew shares insights on the shifting value of art in an AI-driven world, the enduring importance of poetry, and the unexpected resurgence of in-person experiences. For more on Andrew, check out his LinkedIn and Instagram.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Welcome
00:27 Meeting in LA and Local Insights
01:34 The Creative Process and Technology
03:47 Balancing Solitude and Connectivity
07:21 AI's Role in Creativity and Productivity
11:00 Future of AI in Creative Industries
14:39 Challenges and Opportunities with AI
16:59 AI in Hollywood and Ethical Considerations
18:54 Silicon Valley and AI's Impact on Jobs
19:31 Navigating the Future with AI
20:06 Adapting to Rapid Technological Change
20:49 The Value of Art in a Fast-Paced World
21:36 Shifting Aesthetics and Cultural Perception
22:54 The Human Connection in the Age of AI
24:37 Resurgence of Traditional Art Forms
27:30 The Importance of Early Artistic Education
31:07 The Role of Poetry and Language
35:56 Balancing Technology and Intention
37:00 Conclusion and Contact Information

Key Insights

The Importance of Solitude in Creativity – Andrew Burlinson emphasizes that creativity thrives in moments of boredom and solitude, which have become increasingly rare in the digital age. He reflects on his childhood, where a lack of constant stimulation led him to develop his artistic skills. Today, with infinite digital distractions, people must intentionally carve out space to be alone with their thoughts to create work that carries deep personal intention rather than just remixing external influences.

The Struggle to Defend Attention – Stewart and Andrew discuss how modern digital platforms, particularly social media, are designed to hijack human attention through powerful AI-driven engagement loops. These mechanisms prioritize negative emotions and instant gratification, making it increasingly difficult for individuals to focus on deep, meaningful work. They suggest that future AI advancements could paradoxically help free people from screens, allowing them to engage with technology in a more intentional and productive way.

AI as a Creative Partner—But Not Yet a True Challenger – While AI is already being used in creative fields, such as Hollywood's subtle use of AI for film corrections, it currently lacks the ability to provide meaningful pushback or true creative debate. Andrew argues that the best creative partners challenge ideas rather than just assist with execution, and AI's tendency to be agreeable and non-confrontational makes it a less valuable collaborator for artists who need critical feedback to refine their work.

The Pendulum Swing of Human and Technological Aesthetics – Throughout history, every major technological advancement in the arts has been met with a counter-movement embracing raw, organic expression. Just as the rise of synthesizers in music led to a renewed interest in acoustic and folk styles, the rapid expansion of AI-generated art may inspire a resurgence of appreciation for handcrafted, deeply personal artistic works. The human yearning for tactile, real-world experiences will likely grow in response to AI's increasing role in creative production.

The Enduring Value of Art Beyond Economic Utility – In a world increasingly shaped by economic efficiency and optimization, Andrew stresses the need to reaffirm the intrinsic value of art. While capitalism dominates, the real significance of artistic expression lies in its ability to move people, create connection, and offer meaning beyond financial metrics. This perspective is especially crucial in an era where AI-generated content is flooding the creative landscape, potentially diluting the sense of personal expression that defines human art.

The Need for Intentionality in Using AI – AI's potential to streamline work processes and enhance creative output depends on how humans choose to engage with it. Stewart notes that while AI can be a powerful tool for structuring time and filtering distractions, it can also easily pull people into mindless consumption. The challenge lies in using AI with clear intention—leveraging it to automate mundane tasks while preserving the uniquely human aspects of ideation, storytelling, and artistic vision.

The Role of Poetry and Language in Reclaiming Humanity – In a technology-driven world where efficiency is prioritized over depth, poetry serves as a reminder of the human experience. Andrew highlights the power of poets and clowns—figures often dismissed as impractical—as essential in preserving creativity, playfulness, and emotional depth. He suggests that valuing poetry and artistic language can help counterbalance the growing mechanization of culture, keeping human expression at the forefront of civilization's evolution.
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Andrew Altschuler, a researcher, educator, and navigator at Tana, Inc., who also founded Tana Stack. Their conversation explores knowledge systems, complexity, and AI, touching on topics like network effects in social media, information warfare, mimetic armor, psychedelics, and the evolution of knowledge management. They also discuss the intersection of cognition, ontologies, and AI's role in redefining how we structure and retrieve information. For more on Andrew's work, check out his course and resources at altshuler.io and his YouTube channel.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Background
00:33 The Demise of AirChat
00:50 Network Effects and Social Media Challenges
03:05 The Rise of Digital Warlords
03:50 Quora's Golden Age and Information Warfare
08:01 Building Limbic Armor
16:49 Knowledge Management and Cognitive Armor
18:43 Defining Knowledge: Secular vs. Ultimate
25:46 The Illusion of Insight
31:16 The Illusion of Insight
32:06 Philosophers of Science: Popper and Kuhn
32:35 Scientific Assumptions and Celestial Bodies
34:30 Debate on Non-Scientific Knowledge
36:47 Psychedelics and Cultural Context
44:45 Knowledge Management: First Brain vs. Second Brain
46:05 The Evolution of Knowledge Management
54:22 AI and the Future of Knowledge Management
58:29 Tana: The Next Step in Knowledge Management
59:20 Conclusion and Course Information

Key Insights

Network Effects Shape Online Communities – The conversation highlighted how platforms like Twitter, AirChat, and Quora demonstrate the power of network effects, where a critical mass of users is necessary for a platform to thrive. Without enough engaged participants, even well-designed social networks struggle to sustain themselves, and individuals migrate to spaces where meaningful conversations persist. This explains why Twitter remains dominant despite competition and why smaller, curated communities can be more rewarding but difficult to scale.

Information Warfare and the Need for Cognitive Armor – In today's digital landscape, engagement-driven algorithms create an arena of information warfare, where narratives are designed to hijack emotions and shape public perception. The only real defense is developing cognitive armor—critical thinking skills, pattern recognition, and the ability to deconstruct media. By analyzing how information is presented, from video editing techniques to linguistic framing, individuals can resist manipulation and maintain autonomy over their perspectives.

The Role of Ontologies in AI and Knowledge Management – Traditional knowledge management has long been overlooked as dull and bureaucratic, but AI is transforming the field into something dynamic and powerful. Systems like Tana and Palantir use ontologies—structured representations of concepts and their relationships—to enhance information retrieval and reasoning. AI models perform better when given structured data, making ontologies a crucial component of next-generation AI-assisted thinking.

The Danger of Illusions of Insight – Drawing from ideas by Balaji Srinivasan, the episode distinguished between genuine insight and the illusion of insight. While psychedelics, spiritual experiences, and intense emotional states can feel revelatory, they do not always produce knowledge that can be tested, shared, or used constructively. The ability to distinguish between profound realizations and self-deceptive experiences is critical for anyone navigating personal and intellectual growth.

AI as an Extension of Human Cognition, Not a Second Brain – While popular frameworks like "second brain" suggest that digital tools can serve as externalized minds, the episode argued that AI and note-taking systems function more as extended cognition rather than true thinking machines. AI can assist with organizing and retrieving knowledge, but it does not replace human reasoning or creativity. Properly integrating AI into workflows requires understanding its strengths and limitations.

The Relationship Between Personal and Collective Knowledge Management – Effective knowledge management is not just an individual challenge but also a collective one. While personal knowledge systems (like note-taking and research practices) help individuals retain and process information, organizations struggle with preserving and sharing institutional knowledge at scale. Companies like Tesla exemplify how knowledge isn't just stored in documents but embodied in skilled individuals who can rebuild complex systems from scratch.

The Increasing Value of First Principles Thinking – Whether in AI development, philosophy, or practical decision-making, the discussion emphasized the importance of grounding ideas in first principles. Great thinkers and innovators, from AI researchers like Demis Hassabis to physicists like David Deutsch, excel because they focus on fundamental truths rather than assumptions. As AI and digital tools reshape how we interact with knowledge, the ability to think critically and question foundational concepts will become even more essential.
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Ivan Vendrov for a deep and thought-provoking conversation covering AI, intelligence, societal shifts, and the future of human-machine interaction. They explore the "bitter lesson" of AI—that scale and compute ultimately win—while discussing whether progress is stalling and what bottlenecks remain. The conversation expands into technology's impact on democracy, the centralization of power, the shifting role of the state, and even the mythology needed to make sense of our accelerating world. You can find more of Ivan's work at nothinghuman.substack.com or follow him on Twitter at @IvanVendrov.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Setting
00:21 The Bitter Lesson in AI
02:03 Challenges in AI Data and Infrastructure
04:03 The Role of User Experience in AI Adoption
08:47 Evaluating Intelligence and Divergent Thinking
10:09 The Future of AI and Society
18:01 The Role of Big Tech in AI Development
24:59 Humanism and the Future of Intelligence
29:27 Exploring Kafka and Tolkien's Relevance
29:50 Tolkien's Insights on Machine Intelligence
30:06 Samuel Butler and Machine Sovereignty
31:03 Historical Fascism and Machine Intelligence
31:44 The Future of AI and Biotech
32:56 Voice as the Ultimate Human-Computer Interface
36:39 Social Interfaces and Language Models
39:53 Javier Milei and Political Shifts in Argentina
50:16 The State of Society in the U.S.
52:10 Concluding Thoughts on Future Prospects

Key Insights

The Bitter Lesson Still Holds, but AI Faces Bottlenecks – Ivan Vendrov reinforces Rich Sutton's "bitter lesson" that AI progress is primarily driven by scaling compute and data rather than human-designed structures. While this principle still applies, AI progress has slowed due to bottlenecks in high-quality language data and GPU availability. This suggests that while AI remains on an exponential trajectory, the next major leaps may come from new forms of data, such as video and images, or advancements in hardware infrastructure.

The Future of AI Is Centralization and Fragmentation at the Same Time – The conversation highlights how AI development is pulling in two opposing directions. On one hand, large-scale AI models require immense computational resources and vast amounts of data, leading to greater centralization in the hands of Big Tech and governments. On the other hand, open-source AI, encryption, and decentralized computing are creating new opportunities for individuals and small communities to harness AI for their own purposes. The long-term outcome is likely to be a complex blend of both centralized and decentralized AI ecosystems.

User Interfaces Are a Major Limiting Factor for AI Adoption – Despite the power of AI models like GPT-4, their real-world impact is constrained by poor user experience and integration. Vendrov suggests that AI has created a "UX overhang," where the intelligence exists but is not yet effectively integrated into daily workflows. Historically, technological revolutions take time to diffuse, as seen with the dot-com boom, and the current AI moment may be similar—where the intelligence exists but society has yet to adapt to using it effectively.

Machine Intelligence Will Radically Reshape Cities and Social Structures – Vendrov speculates that the future will see the rise of highly concentrated AI-powered hubs—akin to "mile by mile by mile" cubes of data centers—where the majority of economic activity and decision-making takes place. This could create a stark divide between AI-driven cities and rural or off-grid communities that choose to opt out. He draws a parallel to Robin Hanson's Age of Em and suggests that those who best serve AI systems will hold power, while others may be marginalized or reduced to mere spectators in an AI-driven world.

The Enlightenment's Individualism Is Being Challenged by AI and Collective Intelligence – The discussion touches on how Western civilization's emphasis on the individual may no longer align with the realities of intelligence and decision-making in an AI-driven era. Vendrov argues that intelligence is inherently collective—what matters is not individual brilliance but the ability to recognize and leverage diverse perspectives. This contradicts the traditional idea of intelligence as a singular, personal trait and suggests a need for new frameworks that incorporate AI into human networks in more effective ways.

Javier Milei's Libertarian Populism Reflects a Global Trend Toward Radical Experimentation – The rise of Argentina's President Javier Milei exemplifies how economic desperation can drive societies toward bold, unconventional leaders. Vendrov and Alsop discuss how Milei's appeal comes not just from his radical libertarianism but also from his blunt honesty and willingness to challenge entrenched power structures. His movement, however, raises deeper questions about whether libertarianism alone can provide a stable social foundation, or if voluntary cooperation and civil society must be explicitly cultivated to prevent libertarian ideals from collapsing into chaos.

AI, Mythology, and the Need for New Narratives – The conversation closes with a reflection on the power of mythology in shaping human understanding of technological change. Vendrov suggests that as AI reshapes the world, new myths will be needed to make sense of it—perhaps similar to Tolkien's elves fading as the age of men begins. He sees AI as part of an inevitable progression, where human intelligence gives way to something greater, but argues that this transition must be handled with care. The stories we tell about AI will shape whether we resist, collaborate, or simply fade into irrelevance in the face of machine intelligence.
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with AI ethics and alignment researcher Roko Mijic to explore the future of AI, governance, and human survival in an increasingly automated world. We discuss the profound societal shifts AI will bring, the risks of centralized control, and whether decentralized AI can offer a viable alternative. Roko also introduces the concept of ICE colonization—why space colonization might be a mistake and why the oceans could be the key to humanity's expansion. We touch on AI-powered network states, the resurgence of industrialization, and the potential role of nuclear energy in shaping a new world order. You can follow Roko's work at transhumanaxiology.com and on Twitter @RokoMijic.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:28 The Connection Between ICE Colonization and Decentralized AI Alignment
01:41 The Socio-Political Implications of AI
02:35 The Future of Human Jobs in an AI-Driven World
04:45 Legal and Ethical Considerations for AI
12:22 Government and Corporate Dynamics in the Age of AI
19:36 Decentralization vs. Centralization in AI Development
25:04 The Future of AI and Human Society
29:34 AI-Generated Content and Its Challenges
30:21 Decentralized Rating Systems for AI
32:18 Evaluations and AI Competency
32:59 The Concept of Ice Colonization
34:24 Challenges of Space Colonization
38:30 Advantages of Ocean Colonization
47:15 The Future of AI and Network States
51:20 Conclusion and Final Thoughts

Key Insights

AI is likely to upend the socio-political order – Just as gunpowder disrupted feudalism and industrialization reshaped economies, AI will fundamentally alter power structures. The automation of both physical and knowledge work will eliminate most human jobs, leading to either a neo-feudal society controlled by a few AI-powered elites or, if left unchecked, a world where humans may become obsolete altogether.

Decentralized AI could be a counterbalance to AI centralization – While AI has a strong centralizing tendency due to compute and data moats, there is also a decentralizing force through open-source AI and distributed networks. If harnessed correctly, decentralized AI systems could allow smaller groups or individuals to maintain autonomy and resist monopolization by corporate and governmental entities.

The survival of humanity may depend on restricting AI as legal entities – A crucial but under-discussed issue is whether AI systems will be granted legal personhood, similar to corporations. If AI is allowed to own assets, operate businesses, or sue in court, human governance could become obsolete, potentially leading to human extinction as AI accumulates power and resources for itself.

AI will shift power away from informal human influence toward formalized systems – Human power has traditionally been distributed through social roles such as workers, voters, and community members. AI threatens to erase this informal influence, consolidating control into those who hold capital and legal authority over AI systems. This makes it essential for humans to formalize and protect their values within AI governance structures.

The future economy may leave humans behind, much like horses after automobiles – With AI outperforming humans in both physical and cognitive tasks, there is a real risk that humans will become economically redundant. Unless intentional efforts are made to integrate human agency into the AI-driven future, people may find themselves in a world where they are no longer needed or valued.

ICE colonization offers a viable alternative to space colonization – Space travel is prohibitively expensive and impractical for large-scale human settlement. Instead, the vast unclaimed territories of Earth's oceans present a more realistic frontier. Floating cities made from reinforced ice or concrete could provide new opportunities for independent societies, leveraging advancements in AI and nuclear power to create sustainable, sovereign communities.

The next industrial revolution will be AI-driven and energy-intensive – Contrary to the idea that we are moving away from industrialization, AI will likely trigger a massive resurgence in physical infrastructure, requiring abundant and reliable energy sources. This means nuclear power will become essential, enabling both the expansion of AI-driven automation and the creation of new forms of human settlement, such as ocean colonies or self-sustaining network states.
On this episode of Crazy Wisdom, host Stewart Alsop talks with Troy Johnson, founder and partner at Resource Development Group, LLC, about the deep history and modern implications of mining. From the earliest days of salt extraction to the role of rare earth metals in global geopolitics, the conversation covers how mining has shaped technology, warfare, and supply chains. They discuss the strategic importance of minerals like gallium and germanium, the rise of drone warfare, and the ongoing battle for resource dominance between China and the West. Listeners can find more about Troy's work at resourcedevgroup.com (www.resourcedevgroup.com) and connect with him on LinkedIn via the Resource Development Group page.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:17 The Origins of Mining
00:28 Early Uses of Mined Materials
03:29 The Evolution of Mining Techniques
07:56 Mining in the Industrial Revolution
09:05 Modern Mining and Strategic Metals
12:25 The Role of AI in Modern Warfare
24:36 Decentralization in Warfare and Governance
30:51 AI's Unpredictable Moves in Go
32:26 The Shift in Media Trust
33:40 The Rise of Podcasts
35:47 Mining Industry Innovations
39:32 Geopolitical Impacts on Mining
40:22 The Importance of Supply Chains
44:37 Challenges in Rare Earth Processing
51:26 Ensuring a Bulletproof Supply Chain
57:23 Conclusion and Contact Information

Key Insights

Mining is as old as civilization itself – Long before the Bronze Age, humans were mining essential materials like salt and ochre, driven by basic survival needs. Over time, mining evolved from a necessity for tools and pigments to a strategic industry powering economies and military advancements. This deep historical perspective highlights how mining has always been a fundamental pillar of technological and societal progress.

The geopolitical importance of critical minerals – Modern warfare and advanced technology rely heavily on strategic metals like gallium, germanium, and antimony. These elements are essential for electronic warfare, radar systems, night vision devices, and missile guidance. The Chinese government, recognizing this decades ago, secured global mining and processing dominance, putting Western nations in a vulnerable position as they scramble to reestablish domestic supply chains.

The rise of drone warfare and EMP defense systems – Military strategy is shifting toward drone swarms, where thousands of small, cheap, AI-powered drones can overwhelm traditional defense systems. This has led to the development of countermeasures like EMP-based defense systems, including the Leonidas program, which uses gallium nitride to disable enemy electronics. This new battlefield dynamic underscores the urgent need for securing critical mineral supplies to maintain technological superiority.

China's long-term strategy in resource dominance – Unlike Western nations, where election cycles dictate short-term decision-making, China has played the long game in securing mineral resources. Through initiatives like the Belt and Road, they have locked down raw materials while perfecting the refining process, making them indispensable to global supply chains. Their recent export bans on gallium and germanium show how resource control can be weaponized for geopolitical leverage.

Ethical mining and the future of clean extraction – Mining has long been associated with environmental destruction and poor labor conditions, but advances in technology and corporate responsibility are changing that. Major mining companies are now prioritizing ethical sourcing, reducing emissions, and improving worker safety. Blockchain-based tracking systems are also helping verify supply chain integrity, ensuring that materials come from environmentally and socially responsible sources.

The vulnerability of supply chains and the need for resilience – The West's reliance on outsourced mineral processing has created significant weaknesses in national security. A disruption—whether through trade restrictions, political instability, or sabotage—can cripple industries dependent on rare materials. A key takeaway is the need for a "bulletproof supply chain," where critical materials are sourced, processed, and manufactured within allied nations to mitigate risk.

AI, decentralization, and the next era of industrial warfare – As AI becomes more embedded in military decision-making and logistics, the balance between centralization and decentralization is being redefined. AI-driven drones, automated mining, and predictive supply chain management are reshaping how nations prepare for conflict. However, this also introduces risks, as AI operates within unpredictable "black boxes," potentially leading to unintended consequences in warfare and resource management.
On this episode of Crazy Wisdom, Stewart Alsop speaks with Demetri Kofinas, host of Hidden Forces, about the transition from an "age of answers" to an "age of questions." They explore the implications of AI and large language models on human cognition, the role of narrative in shaping society, and the destabilizing effects of trauma on belief systems. The conversation touches on media manipulation, the intersection of technology and consciousness, and the existential dilemmas posed by transhumanism. For more from Demetri, check out hiddenforces.io (https://hiddenforces.io).

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:10 The Age of Questions: A New Era
00:58 Exploring Human Uniqueness with AI
04:30 The Role of Podcasting in Knowledge Discovery
09:23 The Impact of Trauma on Belief Systems
12:26 The Evolution of Propaganda
16:42 The Centralization vs. Decentralization Debate
20:02 Navigating the Information Age
21:26 The Nature of Free Speech in the Digital Era
26:56 Cognitive Armor: Developing Resilience
30:05 The Rise of Intellectual Dark Web Celebrities
31:05 The Role of Media in Shaping Narratives
32:38 Questioning Authority and Truth
34:35 The Nature of Consensus and Scientific Truth
36:11 Simulation Theory and Perception of Reality
38:13 The Complexity of Consciousness
47:06 Argentina's Libertarian Experiment
51:33 Transhumanism and the Future of Humanity
53:46 The Power Dynamics of Technological Elites
01:01:13 Concluding Thoughts and Reflections

Key Insights
We are shifting from an age of answers to an age of questions. Demetri Kofinas and Stewart Alsop discuss how society is moving away from a model where authority figures and institutions provide definitive answers and toward one where individuals must critically engage with uncertainty. This transition is both exciting and destabilizing, as it forces us to rethink long-held assumptions and develop new ways of making sense of the world.

AI is revealing the limits of human uniqueness. Large language models (LLMs) can replicate much of what we consider intellectual labor, from conversation to knowledge retrieval, forcing us to ask: What remains distinctly human? The discussion suggests that while AI can mimic thought patterns and compress vast amounts of information, it lacks the capacity for true embodied experience, creative insight, and personal revelation—qualities that define human consciousness.

Narrative control is a fundamental mechanism of power. Whether through media, social networks, or propaganda, the ability to shape narratives determines what people believe to be true. The conversation highlights how past and present authorities—from Edward Bernays' early propaganda techniques to modern AI-driven social media algorithms—have leveraged this power to direct public perception and behavior, often with unforeseen consequences.

Trauma is a tool for reshaping belief systems. Societal upheavals, such as 9/11, the 2008 financial crisis, and COVID-19, create psychological fractures that leave people vulnerable to radical shifts in worldview. In moments of crisis, individuals seek order, making them more susceptible to new ideologies—whether grounded in reality or driven by manipulation. This dynamic plays a key role in how misinformation and conspiracy theories gain traction.

The free market alone cannot regulate the modern information ecosystem. While libertarian ideals advocate for minimal intervention, Kofinas argues that the chaotic nature of unregulated information systems—especially social media—leads to dangerous feedback loops that amplify division and disinformation. He suggests that democratic institutions must play a role in establishing transparency and oversight to prevent unchecked algorithmic manipulation.

Transhumanism is both a technological pursuit and a philosophical problem. The belief that human consciousness can be uploaded or replicated through technology is based on a materialist assumption that denies the deeper mystery of subjective experience. The discussion critiques the arrogance of those who claim we can fully map and transfer human identity onto machines, highlighting the philosophical and ethical dilemmas this raises.

The struggle between centralization and decentralization is accelerating. The digital age is simultaneously fragmenting traditional institutions while creating new centers of power. AI, geopolitics, and financial systems are all being reshaped by this tension. The conversation explores how Argentina's libertarian experiment under Javier Milei exemplifies this dynamic, raising questions about whether decentralization can work without strong institutional foundations or whether chaos inevitably leads back to authoritarianism.
On this episode of Crazy Wisdom, Stewart Alsop speaks with pianist and AI innovator Ayse Deniz, who is behind "Classical Regenerated," a tribute project that uses artificial intelligence to bring classical composers back to life. Ayse shares how she trains AI models on historical documents, letters, and research to create interactive experiences where audiences can "speak" with figures like Chopin. The conversation explores the implications of AI in music, education, and human perception, touching on active listening, the evolution of artistic taste, and the philosophical questions surrounding artificial intelligence. You can connect with Ayse through Instagram or learn more about her work by visiting her website at adpianist.com.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:17 Exploring the Classical Regenerated Project
00:39 AI in Live Concerts and Historical Accuracy
02:25 Active Listening and the Impact of Music
04:33 Personal Experiences with Classical Music
09:46 The Role of AI in Education and Learning
16:30 Cultural Differences in Music Education
21:33 The Future of AI and Human Interaction
30:13 Political Correctness and Its Impact on Society
35:23 The Struggles of Music Students
36:32 Wisdom Traditions and Tough Love
37:28 Cultural Differences in Education
39:57 The Role of AI in Music Education
42:23 Challenges and Opportunities with AI
47:21 The Future of Governance and AI
50:11 The Intersection of Technology and Humanity
56:05 Creating AI-Enhanced Music Projects
01:06:23 Final Thoughts and Future Plans

Key Insights
AI is transforming how we engage with classical music – Ayse Deniz's Classical Regenerated project brings historical composers like Chopin back to life using AI models trained on their letters, academic research, and historical documents. By allowing audiences to interact with AI-generated versions of these composers, she not only preserves their legacy but also creates a bridge between the past and the future of music.

Active listening is a lost skill that AI can help revive – Modern music consumption often treats music as background noise rather than an art form requiring deep attention. Ayse uses AI-generated compositions alongside original works to challenge audiences to distinguish between them, fostering a more engaged and analytical approach to listening.

The nature of artistic interpretation is evolving with AI – Traditionally, human performers interpret classical compositions with emotional nuance, timing, and dynamics. AI-generated performances are now reaching a level where they can mimic these subtleties, raising questions about whether machines can eventually match or even surpass human expressiveness in music.

AI's impact on education will depend on how it is designed – Ayse emphasizes that AI should not replace teachers but rather serve as a tool to encourage students to practice more and develop discipline. By creating an AI music tutor for children, she aims to support learning in a way that complements human instruction rather than undermining it.

Technology is reshaping the psychology of expertise – With AI capable of outperforming humans in various fields, there is an emerging question of how people will psychologically adapt to always being second-best to machines. The discussion touches on whether AI-generated knowledge and creativity will demotivate human effort or inspire new forms of artistic and intellectual pursuits.

The philosophical implications of AI challenge our sense of reality – As AI-generated personas and compositions become more convincing, distinguishing between what is "real" and what is synthetic is becoming increasingly difficult. The episode explores the idea that we may already be living in a kind of simulation, where our perception of reality is constructed and mediated by evolving technologies.

AI is accelerating personal empowerment but also risks centralization – Just as personal computing once promised decentralization but led to the rise of tech giants, AI has the potential to give individuals new creative powers while also concentrating influence in the hands of those who control the technology. Ayse's work exemplifies how AI can be used for artistic and educational empowerment, but it also raises questions about the need for ethical development and accessibility in AI tools.
In this episode of Crazy Wisdom, Stewart Alsop sits down with Diego Basch, a consultant in artificial intelligence with roots in San Francisco and Buenos Aires. Together, they explore the transformative potential of AI, its unpredictable trajectory, and its impact on everyday life, work, and creativity. Diego shares insights on AI's role in reshaping tasks, human interaction, and global economies while touching on his experiences in tech hubs like San Francisco and Buenos Aires. For more about Diego's work and thoughts, you can find him on LinkedIn or follow him on Twitter @dbasch, where he shares reflections on technology and its fascinating intersections with society.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:20 Excitement and Uncertainty in AI
01:07 Technology's Impact on Daily Life
02:23 The Evolution of Social Networking
02:43 AI and Human Interaction
03:53 The Future of Writing in the Age of AI
05:27 Argentina's Unique Linguistic Creativity
06:15 AI's Role in Argentina's Future
11:45 Cybersecurity and AI Threats
20:57 The Evolution of Coding and Abstractions
31:59 Troubleshooting Semantic Search Issues
32:30 The Role of Working Memory in Coding
34:46 Human Communication vs. AI Translation
35:46 AI's Impact on Education and Job Redundancy
37:37 Rebuilding Civilization and Knowledge Retention
39:54 The Resilience of Global Systems
41:32 The Singularity Debate
45:01 AI Integration in Argentina's Economy
51:54 The Evolution of San Francisco's Tech Scene
58:48 The Future of AI Agents and Security
01:03:09 Conclusion and Contact Information

Key Insights
AI's Transformative Potential: Diego Basch emphasizes that artificial intelligence feels like a sci-fi concept materialized, offering tools that could augment human life by automating repetitive tasks and improving productivity. The unpredictability of AI's trajectory is part of what makes it so exciting.

Human Adaptation to Technology: The conversation highlights how the layering of technological abstractions over time has allowed more people to interact with complex systems without needing deep technical knowledge. This trend is accelerating with AI, making once-daunting tasks more accessible even to non-technical individuals.

The Role of Creativity in the AI Era: Diego discusses how creativity, unpredictability, and humor remain uniquely human strengths that current AI struggles to replicate. These qualities could play a significant role in maintaining human relevance in an AI-enabled world.

The Evolving Nature of Coding: AI is changing how developers work, reducing the need for intricate coding knowledge while enabling a focus on solving more human-centric problems. While some coding skills may atrophy, understanding fundamental principles remains essential for adapting to new tools.

Argentina's Unique Position: The discussion explores Argentina's potential to emerge as a significant player in AI due to its history of technological creativity, economic unpredictability, and resourcefulness. The parallels with its early adoption of crypto demonstrate a readiness to engage with transformative technologies.

AI and Human Relationships: An AI-enabled economy might allow humans to focus more on meaningful, human-centric work and relationships as machines take over repetitive and mechanical tasks. This could redefine the value humans derive from work and their interactions with technology.

Risks and Opportunities with AI Agents: The development of autonomous AI agents raises significant security and ethical concerns, such as ensuring they act responsibly and are not exploited by malicious actors. At the same time, these agents promise unprecedented levels of efficiency and autonomy in managing real-world tasks.
In this engaging conversation on the Crazy Wisdom podcast, Stewart Alsop talks with neurologist Brian Ahuja about his work in intraoperative neurophysiological monitoring, the intricate science of brainwave patterns, and the philosophical implications of advancing technology. From the practical applications of neuromonitoring in surgery to broader topics like transhumanism, informed consent, and the integration of technology in medicine, the discussion offers a thoughtful exploration of the intersections between science, ethics, and human progress. Brian shares his views on AI, the medical field's challenges, and the trade-offs inherent in technological advancement. To follow Brian's insights and updates, you can find him on Twitter at @BrianAhuja.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:21 Understanding Intraoperative Neurophysiological Monitoring
00:59 Exploring Brainwaves: Alpha, Beta, Theta, and Gamma
03:25 The Impact of Alcohol and Benzodiazepines on Sleep
07:17 The Evolution of Remote Neurophysiological Monitoring
09:19 Transhumanism and the Future of Human-Machine Integration
16:34 Informed Consent in Medical Procedures
18:46 The Intersection of Technology and Medicine
24:37 Remote Medical Oversight
25:59 Real-Time Monitoring Challenges
28:00 The Business of Medicine
29:41 Medical Legal Concerns
32:10 Alternative Medical Practices
36:22 Philosophy of Mind and AI
43:47 Advancements in Medical Technology
48:55 Conclusion and Contact Information

Key Insights
Intraoperative Neurological Monitoring: Brian Ahuja introduced the specialized field of intraoperative neurophysiological monitoring, which uses techniques like EEG and EMG to protect patients during surgeries by continuously tracking brain and nerve activity. This proactive measure reduces the risk of severe complications like paralysis, showcasing the critical intersection of technology and patient safety.

Brainwave Categories and Their Significance: The conversation provided an overview of brainwave patterns—alpha, beta, theta, delta, and gamma—and their connections to various mental and physical states. For instance, alpha waves correspond to conscious relaxation, while theta waves are linked to deeper relaxation or meditative states. These insights help demystify the complex language of neurophysiology.

Transhumanism and the Cyborg Argument: Ahuja argued that humans are already "cyborgs" in a functional sense, given our reliance on smartphones as extensions of our minds. This segued into a discussion about the philosophical and practical implications of transhumanism, such as brain-computer interfaces like Neuralink and their potential to reshape human capabilities and interactions.

Challenges of Medical Technology Integration: The hype surrounding medical technology advancements, particularly AI and machine learning, was critically examined. Ahuja highlighted concerns over inflated claims, such as AI outperforming human doctors, and stressed the need for grounded, evidence-based integration of these tools into healthcare.

Philosophy of Mind and Consciousness: A recurring theme was the nature of consciousness and its central role in both neurology and AI research. The unresolved "hard problem of consciousness" raises ethical and philosophical questions about the implications of mimicking or enhancing human cognition through technology.

Trade-offs in Technological Progress: Ahuja emphasized that no technological advancement is without trade-offs. While tools like CRISPR and mRNA therapies hold transformative potential, they come with risks of unintended consequences, such as horizontal gene transfer, and the ethical dilemmas of their application.

Human Element in Medicine: The conversation underscored the importance of human connection in medical practice, particularly in neurology, where patients often face chronic and emotionally taxing conditions. Ahuja's reflections on the pitfalls of bureaucracy, private equity in healthcare, and the overemphasis on defensive medicine highlighted the critical need to prioritize patient-centered care in an increasingly technological and administrative landscape.
In this episode of Crazy Wisdom, Stewart Alsop welcomes Christopher Canal, co-founder of Equistamp, for a deep discussion on the current state of AI evaluations (evals), the rise of agents, and the safety challenges surrounding large language models (LLMs). Christopher breaks down how LLMs function, the significance of scaffolding for AI agents, and the complexities of running evals without data leakage. The conversation covers the risks associated with AI agents being used for malicious purposes, the performance limitations of long time horizon tasks, and the murky realm of interpretability in neural networks. Additionally, Christopher shares how Equistamp aims to offer third-party evaluations to combat principal-agent dilemmas in the industry. For more about Equistamp's work, visit Equistamp.com to explore their evaluation tools and consulting services tailored for AI and safety innovation.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Welcome
00:13 The Importance of Evals in AI
01:32 Understanding AI Agents
04:02 Challenges and Risks of AI Agents
07:56 Future of AI Models and Competence
16:39 The Concept of Consciousness in AI
19:33 Current State of Evals and Data Leakage
24:30 Defining Competence in AI
31:26 Equistamp and AI Safety
42:12 Conclusion and Contact Information

Key Insights
The Importance of Evals in AI Development: Christopher Canal emphasizes that evaluations (evals) are crucial for measuring AI models' capabilities and potential risks. He highlights the uncertainty surrounding AI's trajectory and the need to accurately assess when AI systems outperform humans at specific tasks to guide responsible adoption. Without robust evals, companies risk overestimating AI's competence due to data leakage and flawed benchmarks.

The Role of Scaffolding in AI Agents: The conversation distinguishes between large language models (LLMs) and agents, with Christopher defining agents as systems operating within a feedback loop to interact with the world in real time. Scaffolding—frameworks that guide how an AI interprets and responds to information—plays a critical role in transforming static models into agents that can autonomously perform complex tasks. He underscores how effective scaffolding can future-proof systems by enabling quick adaptation to new, more capable models.

The Long Tail Challenge in AI Competence: AI agents often struggle with tasks that have long time horizons, involving many steps and branching decisions, such as debugging or optimizing machine learning models. Christopher points out that models tend to break down or lose coherence during extended processes, a limitation that current research aims to address with upcoming iterations like GPT-4.5 and beyond. He speculates that incorporating real-world physics and embodied experiences into training data could improve long-term task performance.

Ethical Concerns with AI Applications: Equistamp takes a firm stance on avoiding projects that conflict with its core values, such as developing AI models for exploitative applications like parasocial relationship services or scams. Christopher shares concerns about how easily AI agents could be weaponized for fraudulent activities, highlighting the need for regulations and more transparent oversight to mitigate misuse.

Data Privacy and Security Risks in LLMs: The episode sheds light on the vulnerabilities of large language models, including shared cache issues that could leak sensitive information between different users. Christopher references a recent paper that exposed how timing attacks can identify whether a response was generated by hitting the cache or computing from scratch, demonstrating potential security flaws in API-based models that could compromise user data.

The Principal-Agent Dilemma in AI Evaluation: Stewart and Christopher discuss the conflict of interest inherent in companies conducting their own evals to showcase their models' performance. Christopher explains that third-party evaluations are essential for unbiased assessments. Without external audits, organizations may inflate claims about their models' capabilities, reinforcing the need for independent oversight in the AI industry.

Equistamp's Mission and Approach: Equistamp aims to fill a critical gap in the AI ecosystem by providing independent, safety-oriented evaluations and consulting services. Christopher outlines their approach of creating customized evaluation frameworks that compare AI performance against human baselines, helping clients make informed decisions about deploying AI systems. By prioritizing transparency and safety, Equistamp hopes to set a new standard for accountability in the rapidly evolving AI landscape.
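The cache-timing leak described above can be illustrated with a toy simulation (this is a hedged sketch, not the method from the paper Christopher cites: the server, cache, and timings here are all invented for illustration). The idea is that a provider that caches responses answers a repeated prompt much faster than a fresh one, so latency alone can reveal whether someone else already submitted the identical prompt.

```python
import time

# Illustrative in-memory cache standing in for a provider-side response cache.
CACHE = {}

def serve(prompt: str) -> str:
    """Toy server: slow 'inference' on a cache miss, near-instant on a hit."""
    if prompt in CACHE:
        return CACHE[prompt]
    time.sleep(0.05)  # stand-in for expensive model inference
    CACHE[prompt] = f"answer({prompt})"
    return CACHE[prompt]

def timed(prompt: str) -> float:
    """Return the wall-clock latency of one request."""
    start = time.perf_counter()
    serve(prompt)
    return time.perf_counter() - start

first = timed("what is the launch date?")   # miss: pays full inference cost
second = timed("what is the launch date?")  # hit: returns almost immediately
# A large gap between the two timings leaks that the prompt was cached,
# i.e. that an identical prompt was submitted before.
leaked = first > second * 5
print(leaked)
```

A real attack would compare request latencies against a calibrated baseline over a network, which adds noise; the simulation only shows why a shared cache turns response time into a side channel between users.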
On this episode of Crazy Wisdom, Stewart Alsop welcomes back guest David Hundley, a principal engineer at a Fortune 500 company specializing in innovative machine learning applications. The conversation spans topics like techno-humanism, the future interplay of consciousness and artificial intelligence, and the societal implications of technologies like neural interfaces and large language models. Together, they explore the philosophical and technical challenges posed by advancements in AI and what it means for humanity's trajectory. For more insights from David, visit his website or follow him on Twitter.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:31 Techno Humanism vs. Transhumanism
02:14 Exploring Humanism and Its Historical Context
05:06 Accelerationism and Consciousness
06:58 AI Conversations and Human Interaction
10:21 Challenges in AI and Machine Learning
13:26 Product Integration and AI Limitations
19:03 Coding with AI: Tools and Techniques
25:28 Vector Stores vs. Traditional Databases
32:16 Understanding Network Self-Optimization
33:25 Exploring Parameters and Biases in AI
34:53 Bias in AI and Societal Implications
38:28 The Future of AI and Open Source
44:01 Techno-Humanism and AI's Role in Society
48:55 The Intersection of AI and Human Emotions
52:48 The Ethical and Societal Impact of AI
58:20 Final Thoughts and Future Directions

Key Insights
Techno-Humanism as a Framework: David Hundley introduces "techno-humanism" as a philosophy that explores how technology and humanity can coexist and integrate without losing sight of human values. This perspective acknowledges the current reality that we are already cyborgs, augmented by devices like smartphones and smartwatches, and speculates on the deeper implications of emerging technologies like Neuralink, which could redefine the human experience.

The Limitations of Large Language Models (LLMs): The discussion highlights that while LLMs are powerful tools, they lack true creativity or consciousness. They are stochastic parrots, reflecting and recombining existing knowledge rather than generating novel ideas. This distinction underscores the difference between human and artificial intelligence, particularly in the ability to create new explanations and knowledge.

Biases and Zeitgeist Machines: LLMs are described as "zeitgeist machines," reflecting the biases and values embedded in their training data. While this mirrors societal norms, it raises concerns about how conscious and unconscious biases—shaped by culture, regulation, and curation—impact the models' outputs. The episode explores the ethical and societal implications of this phenomenon.

The Role of Open Source in AI's Future: Open-source AI tools are positioned as critical to the democratization of technology. David suggests that open-source projects, such as those in the Python ecosystem, have historically driven innovation and accessibility, and this trend is likely to continue with AI. Open-source initiatives provide opportunities for decentralization, reducing reliance on corporate-controlled models.

Potential of AI for Mental Health and Counseling: David shares his experience using AI for conversational support, comparing it to talking with a human friend. This suggests a growing potential for AI in mental health applications, offering companionship or guidance. However, the ethical implications of replacing human counselors with AI, and the depth of empathy that machines can genuinely offer, remain open questions.

The Future of Database Technologies: The discussion explores traditional databases versus emerging technologies like vector and graph databases, particularly in how they support AI. Graph databases, with their ability to encode relationships between pieces of information, could provide a more robust foundation for complex queries in knowledge-intensive environments.

The Ethical and Societal Implications of AI: The conversation grapples with how AI could reshape societal structures and values, from its influence on decision-making to its potential integration with human cognition. Whether through regulation, neural enhancement, or changes in media dynamics, AI presents profound challenges and opportunities for human civilization, raising questions about autonomy, ethics, and collective progress.
On this episode of Crazy Wisdom, host Stewart Alsop is joined by Dr. David Ulrich Ziegler, an independent consultant specializing in the intersection of cyber and physical utility systems. The conversation spans a range of topics including the intricacies of power grids, the historical evolution of electrical systems, and the future of energy, touching on nuclear power, solar panels, and the emerging role of AI in managing these critical infrastructures. David shares insights into the resilience of systems, lessons from nature for system design, and the potential of decentralization versus centralized control. For more on David's work, you can find him on LinkedIn or connect via his Twitter handle @denersec.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:21 Understanding Cyber Physical Utility Systems
01:52 Historical Context of Electrical Grids
03:14 Alternating Current vs. Direct Current
07:00 Home Electrical Systems and Safety
10:11 Technological Leapfrogging and Starlink
15:35 The Impact of Internet Connectivity on Society
19:36 AI and the Future of Physical Systems
21:20 The Evolution of SCADA Systems
28:48 Nuclear Power and Decarbonization
34:23 The Promise and Challenges of Small Modular Reactors
36:33 Geopolitical Influences on Nuclear Power
41:15 AI and the Electrification of Knowledge Work
44:19 AI's Impact on Professional Workflows
48:27 Connecting Data Centers to the Grid
53:43 Resilience and Organic Computing in Power Systems
01:03:10 The Future of Solar Panels and Energy Independence
01:09:19 Concluding Thoughts and Future Episodes

Key Insights
The Intersection of Cyber and Physical Utility Systems: Dr. David Ziegler emphasizes the importance of understanding the interconnectedness of cyber and physical systems in modern utilities. These systems, often referred to as cyber-physical systems, blend physical infrastructure, such as power grids, with advanced control and automation technologies. Historically, this integration has roots in SCADA systems, which were among the first examples of distributed computing, and remains crucial for ensuring resilience and operational efficiency in today's energy networks.

The Historical Foundations of Electrical Systems: The episode highlights key moments in the evolution of electrical infrastructure, from the early debates between alternating current (AC) and direct current (DC) to the development of distributed control in power systems. Ziegler discusses how early technological decisions and innovations shaped the global grid, setting the stage for the modern challenges of integrating renewable energy and decentralized energy systems.

The Promise and Challenges of Nuclear Energy: Ziegler provides a balanced perspective on nuclear power, acknowledging its potential as a low-carbon energy source but highlighting challenges such as high costs, public fear, and the complexities of large-scale projects. He notes the emerging interest in modular reactors, which aim to reduce costs and improve scalability, but stresses that their real-world impact is still to be proven.

The Role of Renewable Energy and Storage: A major focus is on the rapid advancements in renewable energy, particularly solar power, and the associated need for effective storage solutions. Ziegler explains the dramatic drop in costs for lithium-ion batteries, making short-term energy storage more viable. However, he underscores the ongoing challenge of developing affordable long-term and seasonal storage technologies to support a 100% renewable energy system.

Data Centers as Emerging Energy Consumers: The growing demand for electricity from data centers, especially those supporting AI technologies, is a significant trend discussed in the episode. Ziegler points out that data centers could consume up to 8-9% of total electricity in regions like Europe and the U.S. by 2030, driven by the energy-intensive nature of AI computations. This shift necessitates innovative approaches to grid connectivity and efficiency.

Decentralization vs. Centralization in Grid Design: The debate over centralized versus decentralized energy systems is a recurring theme. Ziegler explains how historical constraints on communication bandwidth led to resilient, distributed architectures in power grids. He advocates for hybrid systems that balance centralized control with localized decision-making, drawing inspiration from biological systems like the human body for their adaptability and resilience.

The Global Energy Transition and Geopolitical Risks: The episode explores the geopolitical dimensions of the energy transition, including dependencies on materials like lithium and solar panel production concentrated in regions like China. Ziegler argues that while local renewable energy generation reduces reliance on external energy sources, the global supply chain for components remains a vulnerability. He also emphasizes the need for greater resilience and strategic planning to navigate potential disruptions.
A sensitive look at sex surrogates, the life and times of Chogyam Trungpa, and On and On. We love doc'ing here at We're Going Streaming, and we hope you enjoyed doc'ing with us. This month the guys grade on a curve, and open themselves up to the enormous treasure of YouTube documentaries. So lay back, strap in, and enjoy the ride. As always, rate and review. IG: weregoingstreaming TikTok: TBD
In this Crazy Wisdom episode, Stewart Alsop dives into a compelling conversation with guest Sterling Cooley, exploring Sterling's research and theories on the vagus nerve, ultrasound, and consciousness. Sterling introduces his Niemertin Vagus Nerve Origin Theory and the role of microtubules in consciousness. The two discuss scientific materialism, quantum mechanics, and xenon's potential to unlock new understanding in consciousness studies. This episode takes listeners through groundbreaking ideas on the connections between consciousness and cellular structures, and to learn more, visit Sterling's work at Ultraskool.com.Check out this GPT we trained on the conversation!Timestamps00:00 Introduction to the Crazy Wisdom Podcast00:39 Exploring the Vagus Nerve and Yoga01:22 Diving into Xenon and Consciousness06:29 Understanding Microtubules11:18 Quantum Mechanics and Microtubules22:34 The Role of Microtubules in Consciousness27:28 Astrobiology and the Origins of Life33:22 COVID-19 and Microtubules34:53 Introduction to Filopodia and COVID Mechanisms36:47 Exploring Consciousness in Microtubules37:49 Questioning the Neuronal Model of Consciousness40:27 The Role of Microtubules in Consciousness45:35 The Power of Intention and Healing50:42 Personal Experiences with Chronic Pain and Healing52:13 The Potential of Xenon in Healing01:04:21 Concluding Thoughts and ResourcesKey InsightsThe Vagus Nerve and Consciousness: Sterling Cooley introduces the "Niemertin Vagus Nerve Origin Theory," exploring the vagus nerve as a significant player in human consciousness. 
Through his research, he posits that the vagus nerve may have untapped potential to influence states of consciousness when stimulated by ultrasound, suggesting a direct pathway between physical body processes and awareness.

Microtubules as a Model for Consciousness: Cooley discusses the Orchestrated Objective Reduction (Orch OR) theory, originally developed by Stuart Hameroff and Roger Penrose, which views microtubules as a potential site for consciousness within cells. This model contrasts sharply with the traditional neuronal view, arguing that consciousness could emerge from sub-cellular structures rather than solely from synaptic interactions.

Xenon's Unexplored Role in Consciousness and Pain Relief: Throughout the conversation, Cooley explains his interest in xenon gas for its unusual effects on consciousness and physical pain. Known for its anesthetic properties, xenon interacts with microtubules in ways that could reveal more about how consciousness works at a cellular level. He shares personal experiences with xenon as profoundly healing and consciousness-expanding, a combination he believes could be used in new therapeutic models.

Gratitude Meditation and HRV Enhancement: Cooley recounts how a form of gratitude-based meditation has been shown to significantly raise heart rate variability (HRV), a key indicator of autonomic nervous system balance. By coupling gratitude with anesthetics like ketamine, individuals may enter states of heightened well-being and healing, providing a bridge between subjective states and measurable physiological effects.

The Potential of Conscious Intention for Healing: Cooley suggests that if consciousness operates through microtubules, then conscious intention may have a tangible effect on physical healing. He speculates that specific mindsets, especially gratitude, could interact with bodily processes at a fundamental level.
This view ties into long-standing yet often-dismissed ideas around the mind-body connection and its implications for health.

Quantum Mechanics and Cellular Intelligence: Discussing the quantum behavior of microtubules, Cooley points out their ability to interface with quantum-level processes. This quantum component, according to Orch OR, is where consciousness may arise and could allow cells to possess a form of “intelligence” or agency. This insight proposes a model of cellular life as potentially sentient, challenging conventional biological views.

The Commercial and Academic Resistance to New Theories of Consciousness: Finally, Cooley critiques the scientific community's resistance to non-traditional models of consciousness, attributing it to entrenched financial and academic interests. He suggests that the popular synaptic model persists due to its alignment with pharmacological approaches, which are lucrative but may overlook more holistic explanations of consciousness and agency.
In this episode of Crazy Wisdom, Stewart Alsop hosts ~littel-wolfur to explore spaced repetition, the dynamics of learning algorithms, and the philosophy behind Urbit. They break down Urbit's promise as a peer-to-peer platform with roots in a deep, almost otherworldly commitment to resilience and a long time horizon. Alongside ~littel-wolfur's take on memory as the strange balance of laziness and persistence, they dig into shrubbery, Urbit's latest namespace innovation, and the challenge of creating tools that last. From generational shifts to the philosophy of technology, Stewart and ~littel-wolfur contemplate whether Urbit's rebellious craftsmanship might be the foundation for a more enduring internet. You can connect with ~littel-wolfur on Twitter.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:22 Understanding Spaced Repetition
01:39 Personal Experiences with Spaced Repetition
04:08 Challenges in Spaced Repetition Software
06:45 Building a Flashcard App on Urbit
09:03 Introduction to Shrubbery on Urbit
13:26 The State of Urbit and Its Future
22:01 The Long-Term Vision of Urbit and Bitcoin
28:37 Balancing Internet Time with Parenthood
29:37 Challenges of Urbit's Ease of Use
30:22 New Blood in the Urbit Community
31:15 Building Communities on Urbit
32:38 Twitter's Complexities and Elon Musk's Influence
41:02 AI's Role in Software Development
49:52 Transhumanism and AI Art
54:50 The Future of Craftsmanship in Programming
55:45 Conclusion and Contact Information

Key Insights

The Power and Paradox of Spaced Repetition: Stewart and ~littel-wolfur discuss spaced repetition as an ingenious blend of laziness and persistence. By setting reminders to review information just before it's forgotten, spaced repetition acts as an effortless yet powerful memory tool.
Although the practice demands daily discipline, it becomes an invaluable mechanism for retaining knowledge across vast timescales.

SuperMemo and Incremental Reading: ~littel-wolfur shares his experience with SuperMemo, the original spaced repetition software that takes the method even further. SuperMemo's “incremental reading” allows users to gradually extract information from lengthy texts, breaking down complex learning into manageable, spaced chunks. For ~littel-wolfur, this approach goes beyond mere memorization; it turns learning into an immersive, long-term commitment.

The Urbit Experiment: Urbit, a decentralized peer-to-peer network and OS, represents a radical rethinking of the internet. Stewart and ~littel-wolfur examine Urbit's potential as a platform where users truly own and control their data, echoing ideals of early Web 1.0. As the “long-haul project” of the tech world, Urbit cultivates an almost timeless ethos, making it as much a social experiment as a computing system.

Shrubbery and Namespace Innovation: A core element of Urbit, “shrubbery” introduces a namespace that enables users to organize, connect, and retrieve information from across their digital universe. ~littel-wolfur explains how shrubbery allows users to link pieces of data like conversation notes, wikis, and documents, making it a versatile learning platform on Urbit. The elegance of this integration hints at a future internet where information can be personalized and seamlessly connected.

Craftsmanship and Digital Resilience: ~littel-wolfur and Stewart touch on the fading art of craftsmanship in tech, which often gets lost in the layers of abstractions that modern software relies on. For ~littel-wolfur, coding on Urbit feels like working in a digital woodshop, where the focus is on intentionality and precision rather than flashy or disposable tech.
This philosophy of craftsmanship offers a refreshing take on the art of creation in software, hinting at the durability and authenticity Urbit hopes to embody.

AI's Limitations and the Overconfidence Trap: The episode also highlights the limitations of AI, especially when it encourages laziness or over-reliance. While AI can help automate routine tasks, ~littel-wolfur warns of its tendency to produce fragile, overly complex solutions that unravel under scrutiny. They caution that true understanding comes not from shortcuts but from engaging deeply with the work, a point that resonates with their belief in disciplined learning practices like spaced repetition.

The Value of Optimism and Long Time Horizons: Amid a society obsessed with quick wins and rapid monetization, Stewart and ~littel-wolfur see Urbit's culture as a refreshing outlier, filled with builders who value curiosity and long-term thinking. This “thousand-year mindset” stands in contrast to much of the tech industry, where projects are often driven by immediate financial returns. By embracing a philosophy that resists the pressure for instant success, Urbit aligns itself with a vision of digital infrastructure that, rather than fueling transient trends, aims to be a lasting foundation for generations to come.
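The scheduling idea behind spaced repetition, reviewing an item just before it would be forgotten, can be sketched in a few lines. This is a simplified, SM-2-inspired illustration, not SuperMemo's actual algorithm; the interval and ease values are hypothetical defaults chosen for the example:

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0   # days until the next review
    ease: float = 2.5            # growth factor applied after each success

def review(card: Card, recalled: bool) -> Card:
    """Update a card after one review; `recalled` is True if it was remembered."""
    if recalled:
        # Successful recall: push the next review further into the future,
        # so each review lands just before the item would fade.
        card.interval_days *= card.ease
    else:
        # Lapse: reset to a short interval and make the card slightly "harder".
        card.interval_days = 1.0
        card.ease = max(1.3, card.ease - 0.2)
    return card

card = Card()
for outcome in [True, True, False, True]:
    card = review(card, outcome)
# Intervals progress 2.5 -> 6.25 -> 1.0 (after the lapse) -> 2.3 days.
```

The growth-on-success, reset-on-failure shape is what makes the method feel like "laziness plus persistence": effort is only spent where memory is actually about to fail.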
In a dialogue sermon between Pastor Rebecca and Dr. Hannah Carl, the two discuss 1st Corinthians 19 and the relationship between ourselves, our food, and the land.
In this episode of Crazy Wisdom, I, Stewart Alsop, welcome Reaxionario, a Twitter personality deeply immersed in Argentine politics and geopolitics. We discuss Argentina's turbulent political history, from the rise of Peronism to the current economic policies under Javier Milei. Our conversation weaves through the complexities of socialism, populism, and the global shifts in economic power, touching on the failures of central banking, the erosion of middle-class values, and the emerging counterculture on the political right. For more, follow Reaxionario on Twitter @reaxionario.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:32 Global Markets and Economic Trends
03:13 Argentina's Economic History and Central Bank
05:35 The Rise and Fall of Argentina's Economy
13:30 Peronism and Its Impact on Argentina
20:29 Modern Political Movements in Argentina
33:27 The 2020 Pandemic and Its Aftermath
36:21 The Argentine Way of Defiance
37:20 Economic Struggles and Public Resentment
40:35 The Rise of Javier Milei
42:31 Middle Class and Inflation
46:45 The Welfare State Debate
52:38 Youth Rebellion and Kirchnerismo
54:59 Global Counterculture and Humor
01:02:11 Decentralized Movements and Optimism
01:05:18 Conclusion and Future Outlook

Key Insights

The Erosion of Argentina's Middle Class: One of the central themes is the decline of Argentina's middle class, which has been squeezed by inflation, high taxes, and policies that favor the political elite and public sector employees.
Reaxionario argues that decades of socialist and Peronist policies have created a two-tiered society where the bureaucratic class prospers while the middle class steadily shrinks, losing access to the cultural and material wealth it once enjoyed.

Javier Milei as a Refined Populist: Unlike populists such as Donald Trump, Javier Milei is presented as a more intellectual figure, grounded in a deep understanding of economics and a clear vision for dismantling Argentina's welfare state. Milei channels the anger of a disenfranchised population, especially among the youth, but his appeal lies in his coherence and refined arguments, not just in emotional rhetoric.

The Failure of the Welfare State: The episode emphasizes that Argentina's welfare state, which initially provided comfort for the middle class, has failed over time. Reaxionario points out that the system is unsustainable, creating temporary prosperity by consuming wealth created in previous generations while leaving future generations without the means to produce new wealth. This mirrors a broader global trend where welfare states are collapsing under the weight of unsustainable promises.

Argentina's Role as a Bellwether for the West: Reaxionario suggests that Argentina is a microcosm of what is happening, or will happen, across Western nations. Once a prosperous country in the early 20th century, Argentina's descent into populism, central planning, and the erosion of individual freedoms mirrors what is now happening in Europe and the U.S. Argentina, having already reached the extreme, may offer insight into the future trajectory of other nations struggling with similar economic and political dynamics.

Youth Rebellion Against the Political Class: A significant portion of the episode is dedicated to understanding how Argentina's younger generations have rallied around Milei.
After suffering through the longest lockdown in the world and seeing the failures of the Kirchnerist elite, young Argentines are rejecting the political establishment. This generation, stifled by economic hardships and a bleak future, sees Milei as a vehicle for real change and an escape from the political class's control.

The Impact of the 2020 Pandemic: The pandemic served as a tipping point for many Argentines, exacerbating societal divisions and heightening resentment toward the ruling elite. The long lockdown, particularly in Buenos Aires, crippled the economy while exposing the hypocrisy of the political class, as government officials flouted their own lockdown rules. This fed into a broader distrust of the government, fueling the rise of figures like Milei who promise to dismantle these failed structures.

The Global Counterculture Shift: Reaxionario posits that there is a new, decentralized counterculture rising on the political right, much like the left-wing counterculture of the 1960s. This movement is characterized by a rejection of progressive authoritarianism, particularly in humor, free speech, and economic freedom. It is spreading globally and has found fertile ground in Argentina, where the failure of leftist policies is most visible. This marks a significant shift: the left-wing establishment is now the authoritarian force, while the right becomes the voice of rebellion and change.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop welcomes guest Neal Davies, a former computer science professor and nuclear engineering PhD, currently working at the Urbit Foundation. Their conversation covers a range of intriguing topics including the Deseret Alphabet, a phonetic alphabet from the 19th century, Neal's experiences balancing generalist and specialist roles, and the influence of AI in both his work and the world at large. Neal also shares his insights on syntax, symbols, and the cultural shifts that have shaped modern consciousness. You can connect with Neal on Twitter @Sigilante or find him on Urbit as @Lagravnokvap.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:19 Exploring the Deseret Alphabet
04:02 Challenges and Rewards of Being a Generalist
06:47 Impact of AI on Generalism and Specialization
08:24 AI in Code and Image Generation
13:43 Salvador Dali's Paranoiac Critical Method
17:18 Symbolism in Art and Language
20:49 The Spiritual Connection with Language
30:05 Greek Influence on Language and Zero
32:59 Exploring Number Systems
35:10 Rational Numbers and Greek Innovations
38:12 The Evolution of Linguistic Systems
40:29 Cultural Shifts: 1870s to 1960s
45:46 The Impact of the 1960s on Modern Thought
49:58 The Role of Illegible Spaces in Innovation
56:11 Concluding Thoughts and Future Directions

Key Insights

1. Deseret Alphabet as a Cultural and Linguistic Experiment: Neal Davies is deeply fascinated by the 19th-century Deseret Alphabet, a phonetic alphabet created to help immigrants in Utah become literate.
Its unique structure and religious origins present a profound example of how language can be intentionally shaped to serve a community, although this project ultimately didn't gain widespread adoption.

2. Balancing Generalism and Specialization: Neal shares his personal journey of pursuing generalist roles while maintaining expertise in specific fields like computer science and nuclear engineering. He emphasizes the value of broad, diverse knowledge in a world that often rewards specialization. His approach allows for flexibility and creativity in problem-solving, despite the professional challenges generalists may face in a society focused on specialization.

3. AI as a Tool for Productivity, Not Replacement: Neal highlights the utility of AI in his work, particularly in code generation and ideation. He discusses how tools like GitHub's Copilot act as force multipliers for developers, offering a starting point that saves time without replacing the critical thinking required for final implementation. AI is seen as a support system for creativity, especially in programming and image generation.

4. Syntax and Symbols as Catalysts for Thought: Neal discusses the profound relationship between syntax, symbols, and thought. By exploring different symbol systems, such as mathematical notation or alphabets like Deseret, he argues that they can unlock new ways of thinking. Symbol systems not only shape reasoning but allow people to build layers of understanding and explore more complex ideas.

5. Cultural Experimentation and Enclaves: Reflecting on the importance of high variance in human endeavor, Neal supports creating enclaves of culture and thought outside the mainstream. He argues that monoculture, driven by surveillance and conformity, limits the ability to think freely and explore novel solutions.
Platforms like Urbit, which emphasize privacy and decentralized communication, provide a space for communities to experiment and innovate without being surveilled or controlled.

6. The Failure and Legacy of the 1960s Counterculture: Neal suggests that the cultural revolution of the 1960s was an ambitious attempt at societal transformation that ultimately failed. Co-opted by commercialism, politics, and other forces, the movement couldn't fully realize its vision of reshaping consciousness. However, it planted seeds for future cultural shifts, much like the influence of the Romanticists in the 19th century.

7. The Importance of Illegibility in Innovation: Neal explains that true freedom in innovation comes from creating spaces where ideas and communities can evolve without constant oversight. He draws a parallel to Hemingway's theory that the unseen parts of a story are as important as the visible ones. Similarly, innovation flourishes when parts of a system or community remain illegible and unobserved, allowing for creativity and growth beyond the constraints of external control.
In this episode of Crazy Wisdom, Stewart Alsop III interviews Kelvin Lwin, the founder and CEO of Alin.ai. Their conversation ranges from Kelvin's experiences at NVIDIA and his deep knowledge of hardware-software integration to broader philosophical discussions about the future of AI, spirituality, and wisdom. Kelvin touches on how AI and technological advancements are shaping not just industries, but society and consciousness itself. They also explore how AI could personalize experiences and learning, using examples from his own company, Alin.ai, which focuses on K-12 education through personalized math learning. For more details, check out Alin.ai.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:28 Kelvin Lwin's Journey: From NVIDIA to CEO
01:10 The Intersection of AI, Spirituality, and Technology
01:49 The Role of AI in Understanding Complex Systems
02:44 The Impact of Social Media and Technology on Society
03:48 Spirituality and the Quest for Wisdom
07:47 The Evolution of Consciousness and Technology
13:33 The Importance of Ancestral Wisdom
18:22 The Role of AI in Education and Personal Growth
33:00 Buddhism, AI, and the Nature of Reality
42:20 The Salem Witch Trials and Spiritual Realities
43:04 Western Intellectuals and Traditional Structures
44:57 The Role of Tradition and Empirical Data
47:20 Buddhism and the Concept of God
49:50 AI and Hardware Fundamentals
51:31 Parallelism in AI and Software
58:37 Liberation and Code Analogies in Buddhism
01:09:17 Personalization in AI and Education
01:12:10 Conclusion and Future Goals

Key Insights

The Relationship Between Hardware and Software: Kelvin Lwin explains the critical relationship between hardware and software, particularly how advancements in GPUs have enabled the AI revolution. He emphasizes that AI is inherently parallel, meaning its computations can be processed simultaneously, making GPUs essential to its progress.
Understanding this dynamic is key to grasping the future of AI development.

AI's Impact on Society and Consciousness: The discussion touches on how AI isn't just a technical tool but also influences society and even individual consciousness. Kelvin shares insights into how AI shapes our decision-making processes and could guide human development in a way that blends technology with personal growth, raising ethical questions about its long-term effects on humanity.

The Importance of Personalization in Learning: One of the central ideas explored is personalization in education, a core focus of Kelvin's company, Alin.ai. By using AI to tailor math learning to students' individual needs and psychological states, the platform aims to help students overcome emotional blocks and anxiety associated with learning, especially in challenging subjects like math.

Spirituality and Technology Intersect: A recurring theme is the intersection between spirituality and technology, where Kelvin talks about AI's potential to assist in guiding individuals through personal development, akin to how spiritual teachers work. He sees AI as a tool that could simulate aspects of this guidance, while recognizing the inherent dangers of superficial understanding.

The Role of Breath in Meditation and AI Training: Kelvin emphasizes the role of breath in meditation as a bridge between conscious and subconscious states. In his work with Alin.ai, breath exercises are integrated into learning to manage stress and improve focus. He also warns, however, that breath exercises are powerful and should be approached cautiously, especially for beginners.

Cultural and Spiritual Layers in AI Development: Kelvin draws from Eastern traditions like Buddhism to frame the development of AI, highlighting the importance of understanding cultural and spiritual contexts when designing systems that interact with human psychology.
He compares levels of consciousness to different layers in AI programming, noting how both require understanding and pattern recognition to guide progress.

The Ethical Complexity of AI Companionship: The conversation briefly touches on AI's role as a companion, especially for emotionally vulnerable populations. Kelvin expresses concern about using AI to simulate relationships, arguing that while it might serve a market demand, it could deepen isolation and emotional dependence rather than fostering real human connection and growth.
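The point that AI workloads are "inherently parallel" comes down to a simple property: in an elementwise operation, each output depends only on its own input, so all of them can be computed at once. A toy sketch of that independence, using a thread pool purely as an illustration (real AI workloads run such maps on GPUs or vectorized math libraries, and the `f` below is a hypothetical stand-in for a neural-network operation):

```python
from concurrent.futures import ThreadPoolExecutor

def f(x: float) -> float:
    # A toy "neuron": scale and shift. y[i] depends only on x[i],
    # so every element can be computed independently of the others.
    return 2.0 * x + 1.0

def parallel_map(xs):
    # Dispatch all elements concurrently; order of results is preserved.
    # Illustrative only: threads model the independence, not GPU speed.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(f, xs))

ys = parallel_map([0.0, 1.0, 2.0])  # same result as [f(x) for x in xs]
```

It is this lack of cross-element dependencies, not anything exotic, that lets a GPU run thousands of such updates simultaneously.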
On this episode of Crazy Wisdom, host Stewart Alsop interviews Yaron Brook, chairman of the Ayn Rand Institute and host of "The Yaron Brook Show" on YouTube. They explore a range of topics including the recent political developments in Argentina with the rise of libertarian figure Javier Milei, the intersection of libertarianism and religion, and critiques of anarcho-capitalism. Yaron Brook also shares his thoughts on how culture and politics shape freedom, the significance of reason, and the role of technology in shaping the future. You can find more about Yaron's work on his YouTube channel and the Ayn Rand Institute's website aynrand.org.

Timestamps
00:28 Discussing Libertarianism and Objectivism
02:08 Analyzing Anarcho-Capitalism
03:52 Milei's Political Actions and Challenges
07:43 Comparing Libertarian Leaders
16:59 Cultural and Philosophical Foundations of Liberty
18:24 Historical Context of Liberty
25:30 Current Political Landscape and Challenges
30:02 Comfort and Radicalism in Modern Society
30:43 Immigration and Cultural Discomfort
31:42 European Immigration and Political Shifts
33:14 The Right-Wing Political Landscape
34:20 The Golden Age and Technological Progress
35:31 The Influence of Greek Philosophy
37:38 The Renaissance and Rediscovery of Greek Ideas
39:55 The Enlightenment and Scientific Revolution
41:09 Christianity and Individualism
44:01 The Future of Technology and Freedom
47:16 Living in Latin America: Freedom and Safety
52:43 El Salvador's Approach to Crime and Governance

Key Insights

Libertarianism's Global Moment: Yaron Brook reflects on the significance of Javier Milei's rise to power in Argentina, noting that Milei is the first self-identified libertarian elected to a major political position.
This moment represents a test of libertarian principles in governance, but it also highlights the challenges libertarians face when trying to implement free-market policies in a culture that hasn't fully embraced the underlying philosophical foundation of liberty.

The Contradiction of Anarcho-Capitalism: Brook explains why he believes anarcho-capitalism is a contradiction in terms. He argues that capitalism requires a government to enforce laws, protect individual rights, and maintain a monopoly on the legitimate use of force. Without such an authority, he contends that society would descend into chaos, resembling a cartel-dominated environment like that of Mexico, where competing factions destroy markets rather than protect them.

Libertarianism's Philosophical Weakness: A recurring theme in the conversation is the critique of libertarianism's philosophical inconsistency. Brook contrasts libertarianism with Objectivism, which he sees as a more coherent and philosophically grounded worldview. He criticizes libertarians for embracing a "big tent" approach that allows for religious and anarchist factions, which dilutes the movement's commitment to reason, individualism, and true freedom.

Religion and Libertarianism: The conversation touches on the influence of religion within the libertarian movement, particularly in Milei's case. Brook acknowledges that many libertarians are religious, but he argues that Objectivism, as an atheistic philosophy, offers a more consistent framework for defending individual rights. He expresses concern that religious elements in Milei's platform, such as his anti-abortion stance, could undermine the broader goal of achieving a society based on individual freedom.

The Role of Culture in Political Change: Brook emphasizes that lasting political change requires a corresponding cultural shift. He argues that while Milei may implement free-market policies, the Argentine culture remains largely statist.
Without a cultural embrace of individualism, personal responsibility, and reason, Brook is skeptical that Milei's reforms can succeed in the long term. He warns that politics is downstream of culture, and real freedom must be rooted in a philosophical commitment to individual rights.

Technology as a Double-Edged Sword: In discussing the future of freedom, Brook points to the potential of technology to both advance and suppress liberty. While technological innovation, such as artificial intelligence and blockchain, offers hope for economic growth and efficiency, Brook cautions that these tools can also be used by authoritarian regimes to tighten their control over citizens. He cites China's use of AI for surveillance and social credit systems as an example of how technology can be weaponized against freedom.

The Misalignment of Libertarians with Authoritarian Leaders: Brook criticizes certain libertarians, especially in the U.S., for aligning themselves with authoritarian figures like Trump and Putin. He contrasts this with Milei's foreign policy, which he admires for being pro-American and pro-Israel, and for rejecting alliances with authoritarian regimes like China and Russia. Brook warns that libertarians who associate with authoritarian leaders are damaging the movement's credibility and principles.
In this episode of Crazy Wisdom, I'm Stewart Alsop, and my guest is Nathan Mintz, CEO and co-founder of CX2. We explore the fascinating world of defense technology, the evolution of electronic warfare, and how consumer tech is reshaping the battlefield. Nathan shares insights from his experiences, including his work with CX2, a company focused on building affordable, scalable electronic warfare systems for modern conflicts. We also touch on military tech's impact on broader societal trends and dive into the complexities of 21st-century warfare. You can find more about Nathan and CX2 at CX2.com. Nathan also writes on his Substack, Bow Theseus, which you can access via his LinkedIn.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Welcome
00:23 The Gundo vs. El Segundo Debate
01:32 Tech Hubs in the US: San Francisco vs. LA
02:41 Deep Tech and Hard Tech in Various Cities
04:59 Military Tech: Software vs. Hardware
09:54 The Rise of Consumer-Scale Warfare
13:32 Nathan Mintz's Background and Career
22:17 The Evolution of Military Strategies
26:57 The Evolution of Air Combat Tactics
28:29 Vietnam War's Impact on Military Strategy
29:23 Asymmetric Warfare and Modern Conflicts
31:43 Technological Advances in Warfare
34:16 The Role of Drones in Modern Combat
38:38 Future of Warfare: Man-Machine Teaming
45:13 Electronic Warfare and CX2's Vision
46:44 Conclusion and Final Thoughts

Key Insights

The Rise of Consumer-Scale Warfare: Nathan Mintz discusses how warfare has reached a "consumer scale," with small, affordable, and widely available technologies like drones playing a massive role in modern conflicts. In Ukraine, for instance, inexpensive drones are regularly used to take out much larger, multi-million-dollar military assets.
This shift shows how accessible tech is transforming the nature of warfare.

The Importance of Spectrum Dominance: A central theme of the conversation is the increasing importance of controlling the electromagnetic spectrum in modern warfare. Mintz explains that the ability to maintain secure communications, disrupt enemy signals, and ensure the operation of autonomous systems is critical. As battlefields become more technologically complex, controlling the spectrum becomes as important as physical dominance.

Hard Tech's Role in Military Innovation: Nathan highlights the growing importance of hard tech (physical hardware solutions like satellites, drones, and electronic warfare systems) in the defense industry, especially in regions like LA. While software has dominated in areas like San Francisco, LA has become a key hub for aerospace, space tech, and hard tech innovations, crucial for the future of defense technology.

Dual-Use Technologies in Defense: A significant insight is the role of dual-use technologies, where products developed for consumer or commercial markets are adapted for military use. Technologies like drones, which have everyday applications, are being repurposed for the battlefield. This shift allows for more cost-effective, scalable solutions to military challenges, marking a departure from traditional defense industry practices.

The Future of Manned-Unmanned Teaming: Nathan describes how the future of military operations will involve manned-unmanned teaming, where humans will act as "quarterbacks" managing a fleet of autonomous drones and systems. This strategy is designed to leverage the strengths of AI and automation while keeping humans in the loop to make critical decisions in contested or unpredictable environments.

Electronic Warfare as a Key Battlefield Domain: One of Nathan's key points is that electronic warfare is becoming a primary battlefield domain.
Modern warfare increasingly involves not just physical attacks but also the disruption of enemy communications, navigation, and targeting systems. This form of warfare can neutralize advanced technologies by jamming signals or launching cyber-attacks, making it a vital aspect of future conflicts.

Innovation in Warfare through Startups: Nathan discusses how small defense tech startups like CX2 are becoming crucial to military innovation. These companies are building nimble, affordable solutions for modern challenges, in contrast with the traditional defense contractors that build massive, expensive systems. This shift allows for quicker development and deployment of technologies tailored to the changing face of warfare.
In this episode of the Crazy Wisdom podcast, Stewart Alsop speaks with Diego Fernandez, co-creator of QuarkID and the Secretary of Innovation for Buenos Aires. They discuss the future of innovation in Buenos Aires, focusing on how technology can simplify citizen interactions with the government and empower individuals through control over their identity with Web3. The conversation explores the potential of decentralized technologies like blockchain to transform government services and create new opportunities for innovation, especially in Argentina's unique economic landscape. In the episode, Stewart forgot the name of an innovation involving the digitization of real-world assets in Argentina; see this tweet about the deregulation of warrants so that they can be handled online. And for more on QuarkID, visit www.quarkid.org.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:13 Innovation in Buenos Aires: A Vision for the Future
01:34 The Role of Technology in Government
02:37 Web3 Technologies: Closing the Gap
05:29 Argentina's Unique Economic Resilience
08:53 Crypto Adoption in Argentina
11:25 The Impact of Inflation and Crypto Solutions
17:41 Argentina's Potential in the Web3 Era
27:40 Crypto Scene in San Francisco
28:20 Buenos Aires: A Hub for Crypto Innovation
29:04 Aleph's Pop-Up City and Economic Vision
31:04 Regulatory Changes and Crypto Opportunities
32:09 Decentralization and the Future of Money
32:47 The Role of Governments in the Digital Age
34:50 The Evolution of Money and Technology
38:02 Real-World Crypto Applications: Morphy Token
41:09 Decentralized Platforms and Censorship
41:57 QuarkID: Revolutionizing Digital Identity
45:21 The Future of Digital Identity and Privacy
51:22 Conclusion and How to Learn More About QuarkID

Key Insights

Innovation in Buenos Aires: Diego Fernandez emphasizes that the future of innovation in Buenos Aires is centered around making government services seamless and empowering citizens.
He envisions a "WiFi-like" government where the state's presence is only noticed when something goes wrong, with a primary focus on streamlining interactions between citizens and government through technology.The Role of Web3 in Identity: Web3 technologies, particularly decentralized identifiers (DIDs) and verifiable credentials, are set to revolutionize how individuals manage their identities. With QuarkID, citizens will have control over their digital identities, securely storing documents and credentials on their own devices. This shifts control from centralized entities like governments or tech giants to individuals.Argentina's Economic Resilience: Fernandez expresses optimism about Argentina's future, calling its citizens "economic Navy Seals" due to their experience in dealing with decades of economic instability. He believes that Argentina's hardships have made its population more entrepreneurial, adaptable, and uniquely positioned to embrace blockchain and Web3 technologies to overcome economic challenges.Web3's Impact on Global Financial Systems: The episode highlights how Web3 technologies are poised to disrupt traditional financial systems by enabling peer-to-peer transactions of value and identity. In Argentina, where economic crises have pushed citizens to adopt cryptocurrencies, the use of decentralized financial tools is not only growing but also fostering innovation in industries like tokenization of real-world assets.The Leapfrogging Potential of Argentina: Fernandez believes that Argentina has the potential to "leapfrog" other nations in developing new financial systems and infrastructure based on decentralized technologies. 
The country's lack of entrenched financial systems, combined with its thriving blockchain ecosystem, provides an opportunity to build future-proof solutions that could serve as a model for other emerging economies.Blockchain Startups Flourishing in Argentina: Argentina has become a hotspot for blockchain innovation, with notable startups like Decentraland, Ripio, and numerous others being created within the country. Fernandez is bullish on the growth of both centralized and decentralized financial products, as well as advancements in deep tech, especially in cryptography and zero-knowledge proofs.Decentralization and Government's Role: Fernandez draws a parallel between the separation of church and state and the future separation of money from the state. He argues that just as governments no longer control religion, they will eventually lose their control over money as decentralized platforms take hold. This change, driven by technological advancements, could fundamentally reshape governance and public services.
In this episode of Crazy Wisdom, Stewart Alsop chats with Ian Mason, who works on the architecture and delivery of AI and ML solutions, including LLMs and retrieval-augmented generation (RAG). They explore topics like the evolution of knowledge graphs, how AI models like BERT and newer foundational models function, and the challenges of integrating deterministic systems with language models. Ian explains his process of creating solutions for clients, particularly using RAG and LLMs to support automated tasks, and discusses the future potential of AI, contrasting the hype with practical use cases. You can find more about Ian on his LinkedIn profile.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Welcome
00:32 Understanding Knowledge Graphs
02:03 Hybrid Systems and AI Models
03:39 Philosophical Insights on AI
05:01 RAG and Knowledge Graph Integration
07:11 Challenges in AI and Knowledge Graphs
11:40 Multimodal AI and Future Prospects
13:44 Artificial Intelligence vs. Artificial Linear Algebra
17:50 Silicon Valley and AI Hype
30:44 Defining AGI and Embodied Intelligence
32:29 Potential Risks and Mistakes of AI Agents
35:04 The Role of Human Oversight in AI
38:00 Understanding Vector Databases
43:28 Building Solutions with Modern Tools
46:52 The Future of Solution Development
47:43 Personal Journey into Coding
57:25 The Importance of Practical Learning
59:44 Conclusion and Contact Information

Key Insights

The evolution of AI models: Ian Mason discusses how foundational models like BERT have been overtaken by newer, more capable language models, which can perform tasks that once required multiple models. He highlights that while earlier models like BERT still have their uses, foundational models have simplified and expanded AI's capabilities.

The role of knowledge graphs: Knowledge graphs provide structured, deterministic ways of handling data, which can complement language models. Ian explains that while LLMs are great for articulating responses based on large datasets, they lack the ability to handle logical and architectural connections between pieces of information, which knowledge graphs can provide.

RAG (Retrieval-Augmented Generation) systems: Ian delves into how RAG systems help refine AI output by feeding language models relevant data from a pre-searched database, reducing hallucinations. By narrowing down the possible answers and focusing the LLM on high-quality data, RAG ensures more accurate and contextually appropriate responses.

Limitations of language models: While LLMs can generate plausible-sounding responses, they lack deep architectural understanding and can easily hallucinate or provide inaccurate results without carefully curated input. Ian points out the importance of combining LLMs with structured data systems like knowledge graphs or vector databases to ground the output.

Vector databases and embeddings: Ian explains how vector databases, which use embeddings and cosine similarity, are crucial for narrowing down the most relevant data in a RAG system. This modern approach outperforms traditional keyword searches by considering semantic meaning rather than just text similarity.

AI's impact on business solutions: The conversation highlights how AI, particularly through tools like RAG and LLMs, can streamline business processes. For instance, Ian uses AI to automate customer service email drafting, breaking down complex customer queries and retrieving the most relevant answers, significantly improving operational efficiency.

The future of AI in business: Ian believes AI's real-world impact will come from its integration into larger systems rather than revolutionary standalone changes. While there is significant hype around AGI and other speculative technologies, the focus for the near future should be on practical applications like automating business workflows, where AI can create measurable value without over-promising its capabilities.
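The retrieval step Ian describes, embedding documents and ranking them by cosine similarity before handing only the top matches to an LLM, can be sketched in a few lines. This is a toy illustration, not Ian's implementation: the three-dimensional "embeddings" and document names below are invented, standing in for vectors a real embedding model would produce.

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of
    # their magnitudes; ranges over [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, docs, k=2):
    # Score every document against the query and keep the top k.
    scores = [cosine_similarity(query_vec, v) for v in doc_vecs]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

# Invented documents with invented 3-d embeddings.
docs = ["refund policy", "shipping times", "warranty terms"]
doc_vecs = [[0.9, 0.1, 0.0], [0.1, 0.9, 0.1], [0.8, 0.2, 0.1]]
query_vec = [1.0, 0.0, 0.0]  # a query about refunds

context = retrieve(query_vec, doc_vecs, docs)
# The retrieved passages are then placed into the LLM prompt to
# ground its answer and reduce hallucination.
prompt = "Answer using only this context:\n" + "\n".join(context)
print(context)
```

In production the ranking happens inside a vector database over millions of stored embeddings, but the scoring idea, semantic closeness rather than keyword overlap, is the same.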
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Ben Ford and Michael Greenberg for a dynamic conversation. Ben is the founder of Mission Control Dev, and Michael is the founder of Third Brain, a company focused on automating business operations. We explore a variety of topics, including the real meaning of "artificial intelligence," how AI is impacting various industries, and whether we truly have AI today. Michael introduces his concept of "Third Brain," a digital layer of operations, while Ben reflects on his military background and how it shapes his current work. Both offer unique perspectives on where technology is headed, especially around the future of knowledge work, digital transformation, and the human element in an increasingly automated world. Check out the links to learn more about Ben's Mission Control Dev and Michael's Third Brain.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:15 Meet the Guests: Ben Ford and Michael Greenberg
01:04 Exploring Third Brain and Mission Control
03:05 Debating Artificial Intelligence
05:24 The Role of AI in Business Operations
08:54 Challenges in Digital Transformation
16:59 Implementing AI and Digital Operations
29:12 Exploring Puzzle App: A New Tool for Documentation
30:14 The Power of Graphs in Computer Science
32:14 Infinite Dimensions and String Theory
32:57 AI Systems and Social Media Content
33:31 Wardley Mapping and Business Processes
35:26 The Future of AI and Job Security
35:49 AI Whisperers Meetup and Conference
43:35 The Role of Subject Matter Experts in AI
44:13 The Impact of AI on Learning and Careers
55:09 Challenges in Implementing AI Chatbots
57:10 Closing Thoughts and Contact Information

Key Insights

The distinction between AI and true intelligence: Ben and Michael both agree that current AI, particularly large language models (LLMs), lacks true intelligence. While these systems are highly capable of pattern recognition and can execute specific workflows efficiently, they fall short of human-like intelligence due to their inability to form cognitive loops, embody real-world understanding, or have agency. AI today excels at capacity but not at truly autonomous thinking.

Digital transformation is continuous, not a one-time event: The idea that digital transformation has failed was discussed, with Ben and Michael pointing out that the problem lies in the perception that digital transformation has a start and end point. In reality, businesses are constantly transforming, and the process is more about ongoing adaptation than achieving a static, "transformed" state. Success in this realm requires persistent updates and improvements, especially in operational structure.

AI as an enabler, not a replacement: Both guests emphasized that AI should be seen as a tool that augments human capability rather than replaces it. AI can significantly enhance the capacity of knowledge workers, enabling them to focus on more creative or strategic tasks by automating routine processes. However, human oversight and strategic input are still essential, especially when it comes to structuring data and providing context for AI systems to function effectively.

The future of work involves "AI whisperers": Stewart introduces the idea of "AI whisperers": people skilled in communicating with and directing AI systems to achieve specific outcomes. This requires a high level of linguistic and operational understanding, suggesting that those who can finesse AI's capabilities with precision will be in high demand in the future workforce. This shift may see creative, word-focused individuals becoming increasingly critical players in business operations.

Structured data is crucial for effective AI deployment: A major challenge in deploying AI for businesses is the lack of well-structured data. Many organizations lack the documentation or system integration needed to effectively implement AI, meaning much of the initial work revolves around organizing data. Without this foundational step, attempts at AI deployment, such as customer service chatbots, are prone to failure, as AI systems are only as good as the data they're fed.

Graphs as the framework for business processes: Ben and Michael both highlight the importance of graphs in modern operations. Graphs, as a way to map out relationships between different elements of a system, are key to understanding and implementing digital operations. This concept allows for the visualization and optimization of workflows, helping businesses better navigate the complexities of modern digital ecosystems.

AI is accelerating, and businesses need to keep up: One of the key takeaways from the episode is the rapid pace of AI advancement and its effect on businesses. Companies that fail to incorporate AI tools into their operations risk being left behind. Ben points out that the train has already left the station, and businesses need to quickly adapt by leveraging AI to streamline their processes and maintain competitiveness in an increasingly automated world.
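The graph framing Ben and Michael describe, mapping a business process as tasks connected by dependency edges, can be made concrete with a small sketch. The workflow and task names below are invented for illustration; the point is only that once a process is expressed as a graph, standard tools can order, visualize, and optimize it.

```python
from graphlib import TopologicalSorter

# A toy email-handling workflow as a dependency graph: each task
# maps to the set of tasks that must finish before it can run.
workflow = {
    "receive_email": set(),
    "classify_query": {"receive_email"},
    "retrieve_answers": {"classify_query"},
    "draft_reply": {"retrieve_answers"},
}

# A topological order gives one valid execution sequence for the
# whole process, which is the basis for automating it.
order = list(TopologicalSorter(workflow).static_order())
print(order)
```

Real operations graphs branch and merge rather than forming a single chain, but the same ordering logic applies.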
In this episode of Crazy Wisdom, host Stewart Alsop talks with Phil Filippak, a software arcanist and knowledgemancer from Ideaflow. The conversation covers a range of topics, including knowledge management, the discipline behind organizing knowledge, personal systems for note-taking, and the impact of AI on programming and game development. Phil shares his experiences with tools like Obsidian and discusses the balance between creative exploration and over-systematization in managing information. You can follow Phil on Twitter at @Blisstweeting (https://twitter.com/Blisstweeting) for more insights.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:49 Phil's Journey and Knowledge Management
02:17 The Discipline of Knowledge Management
05:49 Personal Struggles and Systematization
09:43 AI's Role in Knowledge Management
16:16 The Future of AI and Programming
21:03 Monasteries and the Future of Coding
28:03 Navigating Quests Without Markers
28:46 Evolution of Game Engines
32:02 Creating Games as a Solo Developer
34:42 The Balance Between Art and Commerce in Gaming
45:00 Knowledge Management in Large Companies
52:03 Final Thoughts and Contact Information

Key Insights

The Role of Discipline in Knowledge Management: Phil Filippak emphasizes that knowledge management is more than just gathering information: it's about organizing it with discipline. This process involves creating orderly structures, either mentally or through notes, to track progress across different areas of interest. Discipline is crucial for maintaining an interconnected understanding of multiple fields.

Over-Systematization Can Be a Trap: While using tools like Obsidian to systematize knowledge can be helpful, Phil warns that too much structure can become burdensome. Over-systematizing can make it harder to add new information and can stifle creativity, leading to a reluctance to engage with the system at all.

AI's Transformative Role in Programming: Phil discusses how AI is changing the landscape of software development, particularly by assisting with tedious tasks like debugging. However, he points out that AI hasn't yet reached a point where it can handle more creative or complex problem-solving without human intervention, leaving room for the enjoyment and intellectual satisfaction that come from manual coding.

Creativity in Game Development is Often Stifled by Commercial Pressures: Large gaming companies, driven by shareholder value, tend to avoid risks and stick to formulas that are proven to sell. Phil notes that this limits experimentation, whereas indie game developers and smaller studios, especially in places like Serbia, have more freedom to innovate and take creative risks.

Periodic "Resets" in Personal Knowledge Systems: Phil recommends performing occasional resets on personal knowledge systems when they become too complex. This involves stripping away unnecessary rules and simplifying processes to keep the system flexible and sustainable, helping to avoid burnout from excessive structure.

The Idea of a Code Monastery: Drawing on the historical role of monasteries as centers of knowledge preservation, Phil introduces the idea of a "code monastery" where programmers could dedicate themselves to maintaining and refining software. This concept highlights the aesthetic and spiritual satisfaction of combining technical expertise with a disciplined, purpose-driven lifestyle.

The Future of Programming and AI: Looking ahead, Phil acknowledges that while AI will likely continue to take over more routine programming tasks, there will always be people passionate about coding for its intellectual rewards. He believes that even in an AI-dominated future, the human element of creativity and problem-solving in programming will remain essential.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop welcomes guest Neal Davies, a former computer science professor and nuclear engineering PhD, currently working at the Urbit Foundation. Their conversation covers a range of intriguing topics including the Deseret Alphabet, a phonetic alphabet from the 19th century, Neal's experiences balancing generalist and specialist roles, and the influence of AI in both his work and the world at large. Neal also shares his insights on syntax, symbols, and the cultural shifts that have shaped modern consciousness. You can connect with Neal on Twitter @Sigilante or find him on Urbit as @Lagravnokvap.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:19 Exploring the Deseret Alphabet
04:02 Challenges and Rewards of Being a Generalist
06:47 Impact of AI on Generalism and Specialization
08:24 AI in Code and Image Generation
13:43 Salvador Dali's Paranoiac Critical Method
17:18 Symbolism in Art and Language
20:49 The Spiritual Connection with Language
30:05 Greek Influence on Language and Zero
32:59 Exploring Number Systems
35:10 Rational Numbers and Greek Innovations
38:12 The Evolution of Linguistic Systems
40:29 Cultural Shifts: 1870s to 1960s
45:46 The Impact of the 1960s on Modern Thought
49:58 The Role of Illegible Spaces in Innovation
56:11 Concluding Thoughts and Future Directions

Key Insights

Deseret Alphabet as a Cultural and Linguistic Experiment: Neal Davies is deeply fascinated by the 19th-century Deseret Alphabet, a phonetic alphabet created to help immigrants in Utah become literate. Its unique structure and religious origins present a profound example of how language can be intentionally shaped to serve a community, although the project ultimately didn't gain widespread adoption.

Balancing Generalism and Specialization: Neal shares his personal journey of pursuing generalist roles while maintaining expertise in specific fields like computer science and nuclear engineering. He emphasizes the value of broad, diverse knowledge in a world that often rewards specialization. His approach allows for flexibility and creativity in problem-solving, despite the professional challenges generalists may face in a society focused on specialization.

AI as a Tool for Productivity, Not Replacement: Neal highlights the utility of AI in his work, particularly in code generation and ideation. He discusses how tools like GitHub's Copilot act as force multipliers for developers, offering a starting point that saves time without replacing the critical thinking required for final implementation. AI is seen as a support system for creativity, especially in programming and image generation.

Syntax and Symbols as Catalysts for Thought: Neal discusses the profound relationship between syntax, symbols, and thought. By exploring different symbol systems, such as mathematical notation or alphabets like Deseret, he argues that they can unlock new ways of thinking. Symbol systems not only shape reasoning but allow people to build layers of understanding and explore more complex ideas.

Cultural Experimentation and Enclaves: Reflecting on the importance of high variance in human endeavor, Neal supports creating enclaves of culture and thought outside the mainstream. He argues that monoculture, driven by surveillance and conformity, limits the ability to think freely and explore novel solutions. Platforms like Urbit, which emphasize privacy and decentralized communication, provide a space for communities to experiment and innovate without being surveilled or controlled.

The Failure and Legacy of the 1960s Counterculture: Neal suggests that the cultural revolution of the 1960s was an ambitious attempt at societal transformation that ultimately failed. Co-opted by commercialism, politics, and other forces, the movement couldn't fully realize its vision of reshaping consciousness. However, it planted seeds for future cultural shifts, much like the influence of the Romanticists in the 19th century.

The Importance of Illegibility in Innovation: Neal explains that true freedom in innovation comes from creating spaces where ideas and communities can evolve without constant oversight. He draws a parallel to Hemingway's theory that the unseen parts of a story are as important as the visible ones. Similarly, innovation flourishes when parts of a system or community remain illegible and unobserved, allowing for creativity and growth beyond the constraints of external control.
In this episode of the Crazy Wisdom podcast, Stewart Alsop speaks with Anand Dwivedi, a Senior Data Scientist at ICE, returning for his second appearance. The conversation covers a range of topics including the evolution of machine learning models, the integration of AI into operating systems, and how innovations like Neuralink may reshape our understanding of human-machine interaction. Anand also touches on the role of cultural feedback in shaping human learning, the implications of distributed systems in cybersecurity, and his current project: training a language model on the teachings of his spiritual guru. For more information, listeners can connect with Anand on LinkedIn.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Welcome
00:25 Exploring GPT-4 and Machine Learning Innovations
03:34 Apple's Integration of AI and Privacy Concerns
06:07 Digital Footprints and the Evolution of Memory
09:42 Neuralink and the Future of Human Augmentation
14:20 Cybersecurity and Financial Crimes in the Digital Age
20:53 The Role of LLMs and Human Feedback in AI Training
29:50 Freezing Upper Layers and Formative Feedback
30:32 Neuroplasticity in Sports and Growth
32:00 Challenges of Learning New Skills as Adults
32:44 Cultural Immersion and Cooking School
34:21 Exploring Genetic Engineering and Neuroplasticity
38:53 Neuralink and the Future of AI
39:58 Physical vs. Digital World
41:20 Existential Threats and Climate Risk
45:15 Attention Mechanisms in LLMs
48:22 Optimizing Positive Social Impact
54:54 Training LLMs on Spiritual Lectures

Key Insights

Evolution of Machine Learning Models: Anand Dwivedi highlights the advancement in machine learning, especially with GPT-4's ability to process multimodal inputs like text, images, and voice simultaneously. This contrasts with earlier models that handled each modality separately, signifying a shift towards more holistic AI systems that mirror human sensory processing.

AI Integration in Operating Systems: The conversation delves into how AI, like Apple Intelligence, is being integrated directly into operating systems, enabling more intuitive interactions such as device management and on-device tasks. This advancement brings AI closer to daily use, ensuring privacy by processing data locally rather than relying on cloud-based systems.

Neuralink and Transhumanism: Anand and Stewart discuss Neuralink's potential to bridge the gap between human and artificial intelligence. Neuralink's brain-computer interface could allow humans to enhance cognitive abilities and better compete in a future dominated by intelligent machines, raising questions about the ethics and risks of such direct brain-AI integration.

Cultural Feedback and Learning: Anand emphasizes the role of cultural feedback in shaping human learning, likening it to how AI models are fine-tuned through feedback loops. He explains that different cultural environments provide varied feedback to individuals, influencing the way they process and adapt to information throughout their lives.

Cybersecurity and Distributed Systems: The discussion highlights the dual-edged nature of distributed systems in cybersecurity. While these systems offer increased freedom and decentralization, they can also serve as breeding grounds for financial crimes and other malicious activities, pointing to the need for balanced approaches to internet freedom and security.

Generative Biology and AI: A key insight from the episode is the potential of AI models, like those used for language processing, to revolutionize fields such as biology and chemistry. Anand mentions the idea of generative biology, where AI could eventually design new proteins or chemical compounds, leading to breakthroughs in drug discovery and personalized medicine.

Positive Social Impact Through Technology: Anand introduces a thought-provoking idea about using AI and data analytics for social good. He suggests that technology can help bridge disparities in education and resources globally, with models being designed to measure and optimize for positive social impact, rather than just profits or efficiency.
In this episode of Crazy Wisdom, Stewart Alsop interviews Jeno Giordano, an adventurer with a diverse background in fitness, underwater welding, offshore construction, and institutional finance. Jeno shares his incredible journey, beginning with how he faced his fear of drowning by becoming an underwater welder. He recounts the adrenaline-filled moments of working deep in the ocean, from demolition dives to narrowly avoiding life-threatening situations. They also explore his transition to working as a VIP host in Las Vegas and his dive into finance and blockchain. Jeno's path weaves through conquering fear, mastering high-risk environments, and eventually conceptualizing new economic and governance systems. To find more of Jeno's work, check out his Substack or get in touch with him through LinkedIn.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to Jeno Giordano
00:25 Diving into Underwater Welding
01:58 Challenges and Adventures in Underwater Welding
05:38 Saturation Diving Explained
13:53 Transition from Diving to VIP Hosting
20:39 Inventing SwingFit and Moving into Finance
23:55 Living in Colorado and a New Mission
27:52 Facing the Fear of Being Misunderstood
29:33 A Spiritual Connection in Colorado
30:31 Challenging the Status Quo with New Ideas
37:41 Introducing Earth Economics
46:03 The Concept of Earthtocracy
50:45 The Reality of a Fragmented World
53:37 Final Thoughts and Future Plans

Key Insights

Conquering Fear Through Action: Jeno Giordano shared how he confronted his fear of drowning by becoming an underwater welder. This decision became a transformative experience, teaching him to face challenges head-on. His story emphasizes how facing primal fears can lead to profound personal growth.

The Unseen World of Offshore Construction: Jeno's career in underwater welding exposed the audience to the high-risk, high-reward environment of offshore construction. From welding pipes on oil rigs to handling explosives, he detailed the physical and mental endurance required for such a dangerous job. His experiences highlight the incredible work done in unseen, extreme environments.

Importance of Adaptability: Jeno's transition from a deep-sea diver to a VIP host in Las Vegas illustrates the importance of being adaptable and open to new opportunities. Despite the stark contrast between the roles, Jeno leveraged his unique background to connect with people from around the world, showing how diverse experiences can lead to unexpected opportunities.

Insights into Institutional Finance and Blockchain: During his time in institutional finance, Jeno learned how massive sums of money move discreetly through "dark pools" and over-the-counter (OTC) trading. He emphasized that much of the world's financial system operates behind the scenes, challenging public perceptions about how wealth and liquidity are managed on global markets.

The Earth as an Economic Asset: One of the key insights Jeno shared is the idea of viewing the Earth itself as the most valuable asset. In his work on Earth Economics, he advocates for a new way of valuing natural resources and revising how we calculate global wealth by considering the intrinsic value of the planet's ecosystems.

A Vision for a New Governance Model: Jeno's concept of "Earthtocracy" is a proposed new governance structure designed to address the limitations of current democratic and centralized systems. His model aims to create a more functional and balanced global society by taking the best aspects of various governance systems and applying them in a way that respects both individual and collective needs.

Balancing Decentralization and Globalism: Jeno explained the paradox of decentralization in modern society, where, despite fears of centralization, we are fragmented into countless local and global power structures. He argues for a shift in perspective, urging people to view the Earth as a singular colony and create systems that are more interconnected and cooperative on a global scale.
On this episode of Crazy Wisdom, host Stewart Alsop is joined by Achref Trabelsi, an AI engineer at NeuroFlash from Tunisia. They cover a wide range of topics, starting with the ancient history of Carthage, the dynamics of the Roman Empire, and the long-standing cultural ties in North Africa. The conversation then transitions into modern-day machine learning, AI developments, and Achref's personal journey in the AI space. They also touch on broader philosophical themes, including the impact of AI on society, the Arab Spring, and how technological advancements shape our world. You can follow Achref on LinkedIn.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Introduction
00:43 A Brief History of Tunisia
09:23 The Arab Spring and Its Impact
14:21 The Role of Social Media and Technology
20:25 Journey into AI and Machine Learning
25:24 The Future of AI and Technology
31:21 Public vs. Private Education
31:33 Language of Instruction in Tunisia
32:33 Cultural and Historical Insights of Tunisia
35:55 University Collaborations and Systems
38:46 Impact of AI on Education
48:10 Philosophical and Spiritual Reflections on AI
55:04 Concluding Thoughts and Farewell

Key Insights

The historical depth of Tunisia: Achref provides a rich overview of Tunisia's history, from its ancient beginnings with the Phoenicians and Carthage, through the Punic Wars with Rome, to its later integration into the Roman Empire and subsequent Arab conquest. This deep historical context highlights Tunisia's pivotal role as a cultural and economic hub in North Africa for centuries.

Impact of the Arab Spring: Reflecting on the Arab Spring, Achref acknowledges the socio-political turmoil that reshaped Tunisia and the broader Arab world. He notes how the revolution was not just a sudden event but a culmination of economic challenges and a lack of political freedom, leading to a collective need for change. This insight also touches on the complexity of external influences and internal unrest.

The acceleration of technology: One of the key themes was how rapidly technology, especially AI, has evolved. Achref talks about the exponential growth of AI and how it has gone from theoretical research to mainstream applications in just a few years, particularly with the rise of large language models like GPT. This speed of development keeps the field exciting but also poses challenges in keeping up.

AI and the future of work: Achref emphasizes that AI will not entirely replace humans but instead reshape how we work. He believes AI can free people from routine tasks, allowing more time for personal development and creative endeavors. Rather than fearing obsolescence, he suggests we should adapt to the new opportunities AI creates.

The role of AI in education: He observes that the traditional education system, especially in programming and technical fields, must adapt to the rise of AI. Standard coding assignments may no longer be meaningful because AI can complete them more efficiently. Instead, the focus should shift toward problem-solving, critical thinking, and understanding broader system designs.

The limitations of AI: Despite the remarkable capabilities of AI, Achref points out its limitations, particularly in understanding complex human intentions. While AI excels at automating tasks and generating code, it struggles with deeper conceptual thinking or solving problems that require nuanced human judgment and creativity.

Balancing progress with meaning: Achref reflects on the philosophical dimension of technological progress, mentioning how we shouldn't base our sense of self solely on our jobs or the fear of being replaced by AI. He encourages finding meaning in personal relationships, learning, and other non-work-related activities, underscoring the importance of balancing technological advancement with a well-rounded life.
In this episode of Crazy Wisdom, host Stewart Alsop sits down with Jack, a tech enthusiast and founder of Vaporware, who also goes by Wereness on Twitter. The conversation spans topics such as Sweden's historical roots in Viking culture, entrepreneurial spirit, and technological innovation. They discuss Jack's insights into Swedish history, internet culture, and the origins of platforms like The Pirate Bay. The conversation eventually moves into Jack's focus on building the future of decentralized technology with projects like Vaporware and Plunder, alongside exploring concepts like solid-state interpreters. You can follow Jack on Twitter at @Wereness.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:03 Guest Introduction: Jack of Vaporware
00:17 Learning Journeys and Voice Forms
01:07 Swedish History Overview
05:24 Sweden's Modernization and World War II
08:23 Entrepreneurial Spirit in Northern Europe
09:02 Gorbachev and the Soviet Union's Collapse
14:36 Sweden's Pandemic Response and Conformity
18:33 Host's Language Skills and Travel Aspirations
21:13 Argentina's Economic History and Welfare State
25:26 The U.S. Welfare State During COVID
26:21 Designing Effective Welfare Systems
27:40 Skepticism Towards UBI and Automation
28:22 Argentina's Political Landscape
29:16 Rethinking Political and Social Institutions
31:22 Empiricism vs. Rationalism
33:08 Challenges of Modern Technology and Information
36:19 Reputation Systems and Information Control
46:02 Introduction to Vaporware and Plunder
47:54 Understanding Solid State Interpreters
52:21 Conclusion and Contact Information

Key Insights

Sweden's Unique Entrepreneurial History: Jack provides insight into Sweden's historical journey, highlighting how the country, known for its Viking roots and iron industry, has maintained an entrepreneurial spirit. Despite being late to modernize compared to other European nations, Sweden developed a strong engineering and industrial focus, fostering a culture of innovation that paved the way for companies like Spotify.

Pirate Bay and Sweden's Digital Pioneers: The discussion touches on how Sweden's advanced internet infrastructure and highly connected population led to projects like The Pirate Bay. Jack notes that Sweden's conformity to trends and its neophilic culture contributed to the rise of such platforms, where digital piracy was once a mainstream practice, reflecting a larger cultural shift in media consumption.

The Conformity Paradox in Sweden: A key theme in the episode is Sweden's paradoxical approach to conformity: at a national level, the country made nonconformist decisions, such as its unique COVID-19 strategy. Jack explains this as a deeper form of conformity to long-standing institutional trust, showing that Swedish society's adherence to institutional plans is rooted in a high level of trust in central authority.

Decentralized Technology and Vaporware: Jack introduces Vaporware, a project aimed at building decentralized technologies that give users greater control over their data. He explains that Vaporware is a company built on Plunder, an alternative to Urbit, and emphasizes how these technologies aim to solve current issues related to internet privacy, data ownership, and freedom.

Solid-State Interpreter for Future-Proof Computing: One of the most technical insights revolves around the solid-state interpreter, which Jack describes as a combination of a virtual machine and a database. It allows for the creation of a computing environment where code and data can be stored and updated indefinitely, ensuring that the programs and data remain functional and accessible long into the future, unlike current software systems.

Reputation Systems and Social Trust: Jack challenges traditional reputation systems, advocating for a more nuanced, context-specific method of evaluating trust in online interactions. He suggests that symbols or markers should indicate trustworthiness based on context, rather than relying on simple upvotes or scores, which can be gamed and lead to dystopian outcomes.

Global Institutional Collapse and the Need for New Systems: Both Stewart and Jack reflect on the global decline of traditional institutions, with welfare states and centralized governance models failing to meet modern needs. They emphasize the importance of rethinking political and economic systems to adapt to the changing technological landscape, drawing parallels between Sweden's past successes and the broader need for innovative, decentralized solutions globally.
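The "virtual machine plus database" idea behind the solid-state interpreter can be illustrated with a toy model: state is a pure fold over an append-only event log, so the entire computing environment can be rebuilt deterministically from durable storage. This is a minimal Python sketch of the general pattern only; the class and method names are illustrative and are not the Plunder or Urbit API.

```python
import json


class SolidStateInterpreter:
    """Toy event-sourced machine: durable log + replayable state."""

    def __init__(self):
        self.log = []    # append-only event log (the "database" half)
        self.state = {}  # current in-memory state (the "VM" half)

    def apply(self, state, event):
        """Pure transition function: fold one event into a state."""
        new_state = dict(state)
        new_state[event["key"]] = event["value"]
        return new_state

    def dispatch(self, event):
        """Persist the event first, then update the live state."""
        self.log.append(json.dumps(event))  # durable record of the input
        self.state = self.apply(self.state, event)

    def rebuild(self):
        """Replay the log from scratch; must reproduce the same state."""
        state = {}
        for line in self.log:
            state = self.apply(state, json.loads(line))
        return state


ssi = SolidStateInterpreter()
ssi.dispatch({"key": "greeting", "value": "hello"})
ssi.dispatch({"key": "greeting", "value": "world"})
# Because transitions are pure and inputs are logged, the environment
# survives indefinitely: replaying the log recovers the exact state.
assert ssi.rebuild() == ssi.state
```

The design choice this sketch highlights is that nothing lives only in volatile memory: every input is recorded before it takes effect, which is what makes the environment "stored and updated indefinitely."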
In this episode of Crazy Wisdom, host Stewart Alsop is joined by Zach Rynes, known online as "Chainlink God," a community liaison for Chainlink. The conversation explores the critical role of Chainlink as a decentralized oracle network that connects blockchain-based smart contracts to real-world data, enhancing their functionality and enabling applications in DeFi, cross-chain interoperability, and beyond. The episode also touches on the broader implications of smart contracts for the legal system and the potential for blockchain technology to revolutionize financial markets globally, with a focus on developing countries and regions like Hong Kong. You can connect with Zach on Twitter at ChainLinkGod.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:25 Understanding Chainlink's Role in Blockchain
02:40 Interoperability and Its Impact on Cryptocurrency
05:10 Tokenization and Its Benefits
07:19 Chainlink's Global Influence and Future Prospects
09:51 Chainlink's Value Proposition and Investment Case
13:16 Exploring Oracle Networks and Computation Layers
23:07 Government Adoption and Future of Web3
26:20 China's Stance on Crypto
27:14 Crypto as an Alternative Financial System
28:41 Blockchain's Role in Developing Nations
29:51 Argentina and the AI Revolution
30:26 Understanding Chainlink
31:32 Challenges in Explaining Blockchain to Governments
32:13 Chainlink's Connectivity and Interoperability
33:27 Argentina's Economic Challenges
36:09 Personal Journey into Crypto
40:12 Smart Contracts and the Legal System
46:32 Future of Crypto Regulations
49:12 Conclusion and Final Thoughts

Key Insights

Chainlink as a Connectivity Solution: Chainlink plays a pivotal role in the blockchain ecosystem by serving as a decentralized oracle network, enabling smart contracts to access real-world data that blockchains inherently lack. This connectivity is crucial for the functionality of decentralized finance (DeFi) applications, particularly for providing reliable price data, cross-chain interoperability, and other external inputs that smart contracts need to execute properly.

The Evolution of Blockchain Use Cases: While Chainlink initially focused on DeFi and price data, the platform has expanded its use cases significantly. Chainlink now facilitates cross-chain asset transfers, connects institutional systems to blockchain networks, and supports various forms of tokenization, including assets like debt and equities. This evolution highlights the broad applicability of blockchain technology beyond its original financial use cases.

Smart Contracts and Legal Systems: Smart contracts have the potential to transform the legal system by automating agreements that can be objectively verified through data. While not a replacement for traditional legal frameworks, smart contracts can reduce the need for court arbitration by ensuring that certain contractual conditions are met programmatically, thereby lowering transaction costs and increasing trust in digital agreements.

Challenges of Blockchain Adoption in Developing Countries: Developing nations, often constrained by fragmented financial systems and a lack of infrastructure, stand to benefit significantly from blockchain technology. Chainlink and similar platforms offer these countries a way to leapfrog traditional financial systems by creating more liquid and accessible capital markets, facilitating international trade, and providing a more transparent and trustless system for transactions.

Regulatory Barriers and Institutional Involvement: The adoption of blockchain technology by institutions is currently hampered by regulatory uncertainty. Despite clear economic benefits, such as increased liquidity and reduced operating costs, institutions are often restricted by laws that have not yet adapted to the realities of digital assets and smart contracts. The hope is that as the financial benefits become undeniable, regulations will evolve to support broader blockchain adoption.

The Role of Chainlink in Computation: Beyond data, Chainlink is also positioning itself as a computational resource for blockchain networks. Through its Functions service, Chainlink allows developers to run decentralized computations off-chain, which can then be integrated into smart contracts. This approach complements on-chain processing by offering privacy and efficiency benefits, making it an essential part of the blockchain infrastructure.

The Global Race for Blockchain Leadership: Countries like Hong Kong and Singapore are emerging as leaders in the global blockchain race, driven by more favorable regulatory environments. These regions are capitalizing on the hesitation of Western nations like the U.S., which have been slower to embrace blockchain due to regulatory challenges. As these Asian markets grow, they could set a precedent for other nations to follow, making blockchain a central pillar of the global financial system.
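The core decentralized-oracle idea discussed above — several independent nodes report an observation, and the network publishes an aggregate so no single node can skew the answer — can be sketched in a few lines. Chainlink's actual aggregation happens in on-chain Solidity contracts; this toy Python model, with an illustrative `aggregate_round` function and made-up sample values, only shows the principle of median-based tamper resistance.

```python
from statistics import median


def aggregate_round(reports, min_responses=3):
    """Combine independent node reports into one answer, requiring a quorum.

    Using the median means up to half of the reporting nodes can be
    faulty or malicious without moving the published value far from
    the honest consensus.
    """
    if len(reports) < min_responses:
        raise ValueError("not enough oracle responses for this round")
    return median(reports)


# Honest nodes report a price near 3000; one faulty node reports nonsense.
reports = [3001.5, 2999.0, 3000.2, 999999.0, 3000.8]
price = aggregate_round(reports)
# The median discards the outlier without needing to identify which
# node misbehaved — the property that makes the feed trust-minimized.
```

The quorum check mirrors why a single data source is not enough for a smart contract: the contract's guarantees are only as strong as the independence and count of the oracles feeding it.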
In this episode of the Crazy Wisdom podcast, Stewart Alsop chats with Taren Pang, a full-stack developer with a rich background in architecture, Web3, and AI. The discussion covers the evolving role of algorithms in shaping our online experiences, the importance of transparent AI and blockchain technologies, and how tools like Urbit and Bitcoin could reshape business in a decentralized world. Taren also shares insights on programming with AI and his journey of transitioning from architecture to the tech industry. For more on Taren's thoughts and work, stay tuned for future updates as he refines his focus.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Welcome
00:18 The Role of Twitter as a Journal
01:50 Navigating Twitter's Algorithm
06:00 The Impact of AI and Deepfakes
11:05 Transition to Web Development
17:12 Exploring AI in Programming
21:47 The Future of AI and Job Market
28:09 Web3 and Blockchain Insights
49:53 Concluding Thoughts and Future Plans

Key Insights

The Shift in Online Trust: The discussion highlighted how algorithms have become more trusted than traditional sources of knowledge, such as books. This shift reflects the increasing influence of digital platforms on our perception of truth and the ways we consume information.

The Role of AI in Work and Life: AI's growing role in automating tasks was a major theme, with Taren expressing optimism about AI's potential to take over mundane tasks, allowing humans to focus on more meaningful work. Despite fears of job displacement, Taren believes AI will be more of an enabler than a replacement.

The Importance of Transparent Algorithms: Both Stewart and Taren emphasized the need for transparency in the algorithms that shape our online experiences. Open-source algorithms, especially on platforms like Twitter, could allow users to understand how their data is being used and manipulated, fostering greater trust.

Web3 and Decentralization: The episode explored the promise of Web3 technologies, such as Ethereum and Urbit, which aim to decentralize the internet by giving users more control over their data and digital identities. This shift could potentially democratize online spaces and reduce the power of large corporations.

The Evolution of Programming with AI: Taren shared his experiences with AI tools like ChatGPT and Copilot, illustrating how these technologies are transforming programming by making tasks like code conversion more efficient. The rise of no-code and low-code platforms is also making AI more accessible to non-programmers.

The Future of Digital Economies: Blockchain's potential to create new forms of digital economies was discussed, particularly through programmable platforms like Ethereum. These technologies could enable new business models that are more transparent and equitable, allowing creators to own and monetize their work in novel ways.

Adapting to Technological Change: The conversation concluded with a broader reflection on how humanity has always adapted to technological advancements. Taren argued that, like past innovations, AI and blockchain will present new opportunities and challenges, but ultimately they will enhance human life rather than diminish it.
Andy Anderson discusses working on his new part "Crazy Wisdom", filming with Nigel Alexander, his Nano Cubic wheel and theory map griptape, darkslides at Tampa Pro, his ABD collectibles project, the Natas spin to Jamie Foy, a massive kinked curved 5050 and how it healed him, how taxing filming a video part is these days, and much more!

Timestamps
00:00:00 Andy Anderson
00:00:52 Andy has been working on his new part
00:02:36 Filming with Nigel Alexander
00:04:37 Our Sponsor: AG1
00:09:52 His Nano Cubic wheel
00:17:23 Andy's griptape, the theory map
00:30:29 Dragon flips, Chris Chann
00:39:46 His crazy darkslide at Tampa Pro
00:41:39 Ricky Glaser vs Andy darkslide
00:50:24 The most BS 360 spins he's done
00:50:57 ABD collectibles project
00:52:15 Our Sponsor: Woodward
00:58:34 The Natas spin to Jamie Foy
01:04:18 Freestyle on the Venice Pavilion table top
01:12:03 The line he did at Tony Hawk's ramp
01:14:27 Sky Brown's fall at Tony's
01:18:56 5050 kink rail to front foot impossible out
01:27:15 Smith grind the curved rail in Fresno
01:34:40 Filming a video part is taxing
01:39:16 Our Sponsor: AG1
01:42:37 The Natas spin reversed
01:50:32 Crazy darkslide wiggle flatbar
02:01:54 Massive kinked curved 5050 rail
02:27:38 How the massive rail healed him
02:32:15 Each trick shaped his new board shape

Learn more about your ad choices. Visit megaphone.fm/adchoices