Artificiality


Artificiality is dedicated to understanding the emerging community of humans and machines. We combine AI and big data with decision science, psychology, and design to help you understand how to work better with machines and your fellow humans.

Sonder Studio


    • Latest episode: Apr 5, 2025
    • New episodes: weekly
    • Average duration: 50m
    • Episodes: 95



    Latest episodes from Artificiality

    David Wolpert: The Thermodynamics of Meaning

    Apr 5, 2025 · 76:19


    In this episode, we welcome David Wolpert, a Professor at the Santa Fe Institute renowned for his groundbreaking work across multiple disciplines—from physics and computer science to game theory and complexity. (Note: If you enjoy our podcast conversations, please join us for the Artificiality Summit on October 23-25 in Bend, Oregon for many more in-person conversations like these! Learn more about the Summit at www.artificiality.world/summit.)

    We reached out to David to explore the mathematics of meaning—a concept that's becoming crucial as we live more deeply with artificial intelligences. If machines can hold their own mathematical understanding of meaning, how does that reshape our interactions, our shared reality, and even what it means to be human?

    David takes us on a journey through his paper "Semantic Information, Autonomous Agency and Non-Equilibrium Statistical Physics," co-authored with Artemy Kolchinsky. While mathematically rigorous in its foundation, our conversation explores these complex ideas in accessible terms.

    At the core of our discussion is a novel framework for understanding meaning itself—not just as a philosophical concept, but as something that can be mathematically formalized. David explains how we can move beyond Claude Shannon's syntactic information theory (which focuses on the transmission of bits) to a deeper understanding of semantic information (what those bits actually mean to an agent). Drawing from Judea Pearl's work on causality, Schrödinger's insights on life, and stochastic thermodynamics, David presents a unified framework where meaning emerges naturally from an agent's drive to persist into the future. This approach provides a mathematical basis for understanding what makes certain information meaningful to living systems—from humans to single cells.

    Our conversation ventures into:
    - How AI might help us understand meaning in ways we cannot perceive ourselves
    - What a mathematically rigorous definition of meaning could mean for AI alignment
    - How contexts shape our understanding of what's meaningful
    - The distinction between causal information and mere correlation

    We finish by talking about David's current work on a potentially concerning horizon: how distributed AI systems interacting through smart contracts could create scenarios beyond our mathematical ability to predict—a "distributed singularity" that might emerge in as little as five years. We wrote about this work here. For anyone interested in artificial intelligence, complexity science, or the fundamental nature of meaning itself, this conversation offers rich insights from one of today's most innovative interdisciplinary thinkers.

    About David Wolpert: David Wolpert is a Professor at the Santa Fe Institute and one of the modern era's true polymaths. He received his PhD in physics from UC Santa Barbara but has made seminal contributions across numerous fields. His research spans machine learning (where he formulated the "No Free Lunch" theorems), statistical physics, game theory, distributed intelligence, and the foundations of inference and computation. Before joining SFI, Wolpert held positions at NASA and Stanford. His work consistently bridges disciplinary boundaries to address fundamental questions about complex systems, computation, and the nature of intelligence.

    Thanks again to Jonathan Coulton for our music.

    Blaise Aguera y Arcas and Michael Levin: The Computational Foundations of Life and Intelligence

    Mar 12, 2025 · 70:14


    In this remarkable conversation, Michael Levin (Tufts University) and Blaise Aguera y Arcas (Google) examine what happens when biology and computation collide at their foundations. Their recent papers—arriving simultaneously yet from distinct intellectual traditions—illuminate how simple rules generate complex behaviors that challenge our understanding of life, intelligence, and agency.

    Michael's "Self-Sorting Algorithm" reveals how minimal computational models demonstrate unexpected problem-solving abilities resembling basal intelligence—where just six lines of deterministic code exhibit dynamic adaptability we typically associate with living systems. Meanwhile, Blaise's "Computational Life" investigates how self-replicating programs emerge spontaneously from random interactions in digital environments, evolving complexity without explicit design or guidance.

    Their parallel explorations suggest a common thread: information processing underlies both biological and computational systems, forming an endless cycle where information → computation → agency → intelligence → information. This cyclical relationship transcends the traditional boundaries between natural and artificial systems.

    The conversation unfolds around several interwoven questions:
    - How does genuine agency emerge from simple rule-following components?
    - Why might intelligence be more fundamental than life itself?
    - How do we recognize cognition in systems that operate unlike human intelligence?
    - What constitutes the difference between patterns and the physical substrates expressing them?
    - How might symbiosis between humans and synthetic intelligence reshape both?

    Perhaps most striking is their shared insight that we may already be surrounded by forms of intelligence we're fundamentally blind to—our inherent biases limiting our ability to recognize cognition that doesn't mirror our own. As Michael notes, "We have a lot of mind blindness based on our evolutionary firmware."

    The timing of their complementary work isn't mere coincidence but reflects a cultural inflection point where our understanding of intelligence is expanding beyond anthropocentric models. Their dialogue offers a conceptual framework for navigating a future where the boundaries between biological and synthetic intelligence continue to dissolve, not as opposing forces but as variations on a universal principle of information processing across different substrates.

    For anyone interested in the philosophical and practical implications of emergent intelligence—whether in cells, code, or consciousness—this conversation provides intellectual tools for understanding the transformed relationship between humans and technology that lies ahead.

    ------

    Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

    Thanks again to Jonathan Coulton for our music.

    Maggie Jackson: Embracing Uncertainty

    Mar 7, 2025 · 60:08


    In this episode, we welcome Maggie Jackson, whose latest book, Uncertain, has become essential reading for navigating today's complex world. Known for her groundbreaking work on attention and distraction, Maggie now turns her focus to uncertainty—not as a problem to be solved, but as a skill to be cultivated. (Note: Uncertain won an Artificiality Book Award in 2024—check out our review here: https://www.artificiality.world/artificiality-book-awards-2024/)

    In the interview, we explore the neuroscience of uncertainty, the cultural biases that make us crave certainty, and why our discomfort with the unknown may be holding us back. Maggie unpacks the two core types of uncertainty—what we can't know and what we don't yet know—and explains why understanding this distinction is crucial for thinking well in the digital age.

    Our conversation also explores the implications of AI—as technology increasingly mediates our reality, how do we remain critical thinkers? How do we resist the illusion of certainty in a world of algorithmically generated answers?

    Maggie's insights challenge us to reframe uncertainty—not as fear, but as an opportunity for discovery, adaptability, and even creativity. If you've ever felt overwhelmed by ambiguity or pressured to always have the "right" answer, this episode offers a refreshing perspective on why being uncertain might be one of our greatest human strengths.

    Links:
    Maggie: https://www.maggie-jackson.com/
    Uncertain: https://www.prometheusbooks.com/9781633889194/uncertain/

    Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

    Thanks again to Jonathan Coulton for our music.

    Greg Epstein: Tech Agnostic

    Mar 6, 2025 · 58:40


    In this episode, we talk with Greg Epstein—humanist chaplain at Harvard and MIT, bestselling author, and a leading voice on the intersection of technology, ethics, and belief systems. Greg's latest book, Tech Agnostic, offers a provocative argument: Silicon Valley isn't just a powerful industry—it has become the dominant religion of our time. (Note: Tech Agnostic won an Artificiality Book Award in 2024—check out our review here.)

    In this interview, we explore the deep parallels between big tech and organized religion, from sacred texts and prophets to digital congregations and AI-driven eschatology. The conversation explores digital Puritanism, the "unwitting worshipers" of tech's altars, and the theological implications of AI doomerism.

    But this isn't just a critique—it's a call for a Reformation. Greg lays out a path toward a more humane and ethical future for technology, one that resists unchecked power and prioritizes human values over digital dogma.

    Join us for a thought-provoking conversation on faith, fear, and the future of being human in an age where technology defines what we believe in.

    Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

    Thanks again to Jonathan Coulton for our music.

    Chris Messina: Reimagining AI

    Feb 28, 2025 · 53:20


    In this episode, we sit down with the ever-innovative Chris Messina—creator of the hashtag, top product hunter on Product Hunt, and trusted advisor to startups navigating product development and market strategy.

    Recording from Ciel Media's new studio in Berkeley, we explore the evolving landscape of generative AI and the widening gap between its immense potential and real-world usability. Chris introduces a compelling framework, distinguishing AI as a *tool* versus a *medium*, which helps explain the stark divide in how different users engage with these technologies.

    Our conversation examines key challenges: How do we build trust in AI? Why is transparency in computational reasoning critical? And how might community collaboration shape the next generation of AI products? Drawing from his deep experience in social media and emerging tech, Chris offers striking parallels between early internet adoption and today's AI revolution, suggesting that meaningful integration will require both time and a generational shift in thinking.

    What makes this discussion particularly valuable is Chris's vision for the future of AI interaction—where technology moves beyond query-response models to become a truly collaborative medium, transforming how we create, problem-solve, and communicate.

    Links:
    Chris: https://chrismessina.me
    Ciel Media: https://cielcreativespace.com

    D. Graham Burnett: Attention and much more...

    Feb 27, 2025 · 72:25


    D. Graham Burnett will tell you his day job is as a professor of science history at Princeton University. He is also co-founder of the Strother School of Radical Attention and has been associated with the Friends of Attention since 2018. But none of those positions adequately describe Graham. His bio says that he "works at the intersection of historical inquiry and artistic practice." He writes, he performs, he makes things. He describes himself as an attention activist. Perhaps most importantly for us, Graham helps you see the world differently—and more clearly.

    Graham has powerful views on the effect of technology on our attention. We often riff on his idea that technology has fracked our attention into little commoditizable bits. His work has strongly influenced our concern about what might happen if the same extractive practices of the attention economy are applied to the future AI-powered intimacy economy.

    We were thrilled to have Graham on the pod for a wide-ranging conversation about attention, intimacy, and much more.

    Links:
    https://dgrahamburnett.net
    https://www.schoolofattention.org
    https://www.friendsofattention.net

    ---

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email: https://www.artificiality.world
    Thanks to Jonathan Coulton for our music.

    Michael Levin—The Future of Intelligence: Synthbiosis

    Feb 5, 2025 · 78:01


    At the Artificiality Summit 2024, Michael Levin, distinguished professor of biology at Tufts University and associate at Harvard's Wyss Institute, gave a lecture about the emerging field of diverse intelligence and his frameworks for recognizing and communicating with the unconventional intelligence of cells, tissues, and biological robots. This work has led to new approaches to regenerative medicine, cancer, and bioengineering, but also to new ways to understand evolution and embodied minds. He sketched out a space of possibilities—freedom of embodiment—which facilitates imagining a hopeful future of "synthbiosis," in which AI is just one of a wide range of new bodies and minds.

    Bio: Michael Levin, Distinguished Professor in the Biology department and Vannevar Bush Chair, serves as director of the Tufts Center for Regenerative and Developmental Biology. Recent honors include the Scientist of Vision award and the Distinguished Scholar Award. His group's focus is on understanding the biophysical mechanisms that implement decision-making during complex pattern regulation, and harnessing endogenous bioelectric dynamics toward rational control of growth and form. The lab's current main directions are:
    - Understanding how somatic cells form bioelectrical networks for storing and recalling pattern memories that guide morphogenesis;
    - Creating next-generation AI tools for helping scientists understand top-down control of pattern regulation (a new bioinformatics of shape); and
    - Using these insights to enable new capabilities in regenerative medicine and engineering.

    www.artificiality.world/summit

    Artificiality Keynote at the Imagining Summit 2024

    Jan 28, 2025 · 14:46


    Our opening keynote from the Imagining Summit held in October 2024 in Bend, Oregon. Join us for the next Artificiality Summit on October 23-25, 2025!

    Read about the 2024 Summit here: https://www.artificiality.world/the-imagining-summit-we-imagined-and-hoped-and-we-cant-wait-for-next-year-2/
    Join us for the 2025 Summit here: https://www.artificiality.world/summit/

    DeepSeek: What Happened, What Matters, and Why It's Interesting

    Jan 28, 2025 · 25:58


    First: Apologies for the audio! We had a production error…

    What's New
    - DeepSeek has created breakthroughs in both how AI systems are trained (making it much more affordable) and how they run in real-world use (making them faster and more efficient).

    Details
    - FP8 Training (working with less precise numbers): Traditional AI training requires extremely precise numbers. DeepSeek found you can use less precise numbers (like rounding $10.857643 to $10.86), cutting memory and computation needs significantly with minimal impact. It's like teaching someone math using rounded numbers instead of carrying every decimal place.
    - Learning from Other AIs (distillation): The traditional approach has an AI learn everything from scratch by studying massive amounts of data. DeepSeek's approach uses existing AI models as teachers, like having experienced programmers mentor new developers.
    - Trial & Error Learning (for their R1 model): Started with some basic "tutoring" from advanced models, then let the model practice solving problems on its own. When it found good solutions, these were fed back into training, leading to "aha moments" where R1 discovered better ways to solve problems. Finally, it polished its ability to explain its thinking clearly to humans.
    - Smart Team Management (Mixture of Experts): Instead of one massive system that does everything, DeepSeek built a team of specialists: like running a software company with 256 specialists who focus on different areas, one generalist who helps with everything, and a smart project manager who assigns work efficiently. For each task, only 8 specialists plus the generalist are needed, which is more efficient than having everyone work on everything. (A minimal code sketch follows below.)
    - Efficient Memory Management (Multi-head Latent Attention): Traditional AI is like keeping complete transcripts of every conversation; DeepSeek's approach is like taking smart meeting minutes, capturing key information in a compressed format, similar to how JPEG compresses images.
    - Looking Ahead (Multi-Token Prediction): Traditional AI reads one word at a time; DeepSeek looks ahead and predicts two words at once, like a skilled reader who can read ahead while maintaining comprehension.

    Why This Matters
    - Cost Revolution: A training cost of $5.6M (vs hundreds of millions) suggests a future where AI development isn't limited to tech giants.
    - Working Around Constraints: Shows how limitations can drive innovation—DeepSeek achieved state-of-the-art results without access to the most powerful chips (at least that's the best conclusion at the moment).

    What's Interesting
    - Efficiency vs Power: Challenges the assumption that advancing AI requires ever-increasing computing power; sometimes smarter engineering beats raw force.
    - Self-Teaching AI: R1's ability to develop reasoning capabilities through pure reinforcement learning suggests AIs can discover problem-solving methods on their own.
    - AI Teaching AI: The success of distillation shows how knowledge can be transferred between AI models, potentially leading to compounding improvements over time.
    - IP for Free: If DeepSeek can be such a fast follower through distillation, what advantage is left for OpenAI, Google, or another company in releasing a novel model?
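    To make the Mixture of Experts idea above concrete, here is a minimal sketch of top-k expert routing in Python. It is an illustrative toy under stated assumptions, not DeepSeek's implementation: the counts (256 specialists, one always-on generalist, 8 active experts per token) come from the summary above, while the linear experts, softmax gating, and dimensions are simplifications of our own.

```python
# Toy sketch of Mixture-of-Experts routing (illustrative only, not DeepSeek's code).
import numpy as np

rng = np.random.default_rng(0)

D = 64            # token representation size (assumed for the toy)
N_EXPERTS = 256   # specialist experts
TOP_K = 8         # experts activated per token

# Each "expert" is a random linear map here; in a real model they are small MLPs.
experts = [rng.normal(scale=0.02, size=(D, D)) for _ in range(N_EXPERTS)]
shared_expert = rng.normal(scale=0.02, size=(D, D))   # the always-on generalist
router = rng.normal(scale=0.02, size=(D, N_EXPERTS))  # gating ("project manager")

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token through the top-k experts plus the shared expert."""
    scores = x @ router                # one affinity score per expert
    top = np.argsort(scores)[-TOP_K:]  # indices of the k best-matching experts
    weights = np.exp(scores[top])
    weights /= weights.sum()           # softmax over the selected experts only
    out = x @ shared_expert            # the generalist sees every token
    for w, idx in zip(weights, top):
        out += w * (x @ experts[idx])  # only k of the 256 experts do any work
    return out

token = rng.normal(size=D)
print(moe_forward(token).shape)  # -> (64,)
```

    The efficiency claim is visible in the loop: each token touches only 8 of the 256 expert matrices, so per-token compute scales with k rather than with the total number of experts, even though overall model capacity grows with all of them.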

    Hans Block & Moritz Riesewieck: Eternal You

    Jan 25, 2025 · 44:51


    We're excited to welcome writers and directors Hans Block and Moritz Riesewieck to the podcast. Their debut film, 'The Cleaners,' about the shadow industry of digital censorship, premiered at the Sundance Film Festival in 2018 and has since won numerous international awards and been screened at more than 70 international film festivals.

    We invited Hans and Moritz to the podcast to talk about their latest film, Eternal You, which examines the story of people who live on as digital replicants—and the people who keep them living on. We found the film to be quite powerful, at times inspiring and at others disturbing and distressing. Can a generative ghost help people through their grief or trap them in it? Is falling for a digital replica healthy or harmful? Are the companies creating these technologies benefiting their users or extracting from them?

    Eternal You is a powerful and important film. We highly recommend taking the time to watch it—and allowing for time to digest and consider. Hans and Moritz have done a brilliant job exploring a challenging and delicate topic with kindness and care. Bravo.

    ------------

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    How AI Affects Critical Thinking and Cognitive Offloading

    Jan 25, 2025 · 30:44


    Briefing: How AI Affects Critical Thinking and Cognitive Offloading

    What This Paper Highlights
    - The study explores the growing reliance on AI tools and its effects on critical thinking, specifically through cognitive offloading—delegating mental tasks to AI systems.
    - Key finding: Frequent AI tool use is strongly associated with reduced critical thinking abilities, especially among younger users, as they increasingly rely on AI for decision-making and problem-solving.
    - Cognitive offloading acts as a mediating factor, reducing opportunities for deep, reflective thinking.

    Why This Is Important
    - Shaping Minds: Critical thinking is central to decision-making, problem-solving, and navigating misinformation. If AI reliance erodes these skills, it has profound implications for education, work, and citizenship.
    - Generational Divide: Younger users show higher dependence on AI, suggesting that future generations may grow less capable of independent thought unless deliberate interventions are made.
    - Education and Policy: There's an urgent need for strategies to balance AI integration with fostering cognitive skills, ensuring users remain active participants rather than passive consumers.

    What's Curious and Interesting
    - Cognitive Shortcuts: Participants increasingly trust AI to make decisions, yet this trust fosters "cognitive laziness," with many users skipping steps like verifying or analyzing information.
    - AI's Double-Edged Sword: While AI improves efficiency and provides tailored solutions, it also reduces engagement in activities that develop critical thinking, like analyzing arguments or synthesizing diverse viewpoints.
    - Education as a Buffer: People with higher educational attainment are better at critically engaging with AI outputs, suggesting that education plays a key role in mitigating these risks.

    What This Tells Us About the Future
    - Critical Thinking at Risk: AI tools will only grow more pervasive. Without proactive efforts to maintain cognitive engagement, critical thinking could erode further, leaving society more vulnerable to misinformation and manipulation.
    - Educational Reforms Needed: Active learning strategies and media literacy are essential to counterbalance AI's convenience, teaching people how to engage critically even when AI offers "easy answers."
    - Shifting Cognitive Norms: As AI takes over more routine tasks, we may need to redefine what skills are critical for thriving in an AI-driven world, focusing more on judgment, creativity, and ethical reasoning.

    Paper: AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking by Michael Gerlich
    https://www.mdpi.com/2075-4698/15/1/6

    J. Craig Wheeler: The Path to Singularity

    Jan 19, 2025 · 51:28


    We're excited to welcome Craig Wheeler to the podcast. Craig is an astrophysicist and Professor at the University of Texas at Austin. Over his career, he has made significant contributions to our understanding of supernovae, black holes, and the nature of the universe itself.

    Craig's new book, The Path to Singularity: How Technology Will Challenge the Future of Humanity, offers an exploration of how exponential technological change could upend life as we know it. Drawing on his background as an astrophysicist, Craig examines how humanity's current trajectory is shaped by forces like AI, robotics, neuroscience, and space exploration—all of which are advancing at speeds that may outpace our ability to adapt.

    The book is an extension of a course Craig taught at UT Austin, where he challenged students to project humanity's future over the next 100, 1,000, and even 100,000 years. His students explored ideas about AI, consciousness, and human evolution, ultimately shaping the themes that inspired the book. We found it fascinating, as he says in the interview, that the majority of the scenarios projected into the future were not positive for humanity. We wonder: Who wants to live in a dystopian future? And, for those of us who don't: What can we do about it? This led to our interest in talking with Craig.

    We hope you enjoy our conversation with Craig Wheeler.

    ---------------

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    AI Agents & the Future of Human Experience + Always On AI Wearables + Artificiality Updates for 2025

    Jan 17, 2025 · 27:14


    Science Briefing: What AI Agents Tell Us About the Future of Human Experience

    What These Papers Highlight
    - AI agents are improving but far from capable of replacing human tasks. Even the best models fail at simple things humans find intuitive, like handling social interactions or navigating pop-ups.
    - One paper benchmarks agent performance in workplace-like tasks, showing just 24% success on even simple tasks. The other argues that agents alone aren't enough—we need a broader system to make them useful.

    Why This Matters
    - Human Compatibility: Agents don't just need to complete tasks—they need to work in ways that humans trust and find relatable.
    - New Ecosystems: Instead of relying on better agents alone, we might need personalized digital "Sims" that act as go-betweens, understanding us and adapting to our preferences.
    - Humor in Failure: From renaming a coworker to "solve" a problem to endlessly struggling with pop-ups, these failures highlight how far AI still is from grasping human context.

    What's Interesting
    - Humans vs. Machines: AI performs better on coding than on "easier" tasks like scheduling or teamwork. Why? It's great at structure, bad at messiness.
    - Sims as a Bridge: The idea of digital versions of ourselves (Sims) managing agents for us could change how we relate to technology, making it feel less like a tool and more like a collaborator.
    - Impact on Trust: The future of agents will hinge on whether they can align with human values, privacy, and quirks—not just perform better technically.

    What's Next for Agents
    - Can agents learn to navigate our complexity, like social norms or context-sensitive decisions?
    - Will ecosystems with Sims and Assistants make AI feel more human—and less robotic?
    - How will trust and personalization shape whether people actually adopt these systems?

    Product Briefing: Always On AI Wearables

    What's New
    - New AI wearables launched at CES 2025 that continuously listen. From earbuds (HumanPods) to wristbands (Bee Pioneer) to stick-it-to-your-head pods (Omi), these cheap hardware devices are attempting to be your always-listening assistants.

    Why This Matters
    - From Wake Words to Always-On: These devices listen passively—no activation required—requiring the user to opt out by muting rather than opting in.
    - Privacy? Pfft: These devices are small enough to hide and record without anyone knowing, and the Omi only turns on a light when it is not recording.
    - Razor-Razorblade Model: With hardware prices below $100, these devices are priced to allow for easy experimentation—the value is in the software subscription.

    What's Interesting
    - Mind-reading?: Omi claims to detect brain signals, allowing users to think their commands instead of speaking.
    - It's About Apps: The app store is back as a business model. But are these startups ready for the challenge?
    - Memory Prosthetics: These devices record, transcribe, and summarize everything—generating to-do lists and more.

    The Human Experience
    - AI as a Second Self?: These devices don't just assist; they remember, organize, and anticipate—how will that reshape how we interact with and recall our own experiences?
    - Can We Still Forget?: If everything in our lives is logged and searchable, do we lose the ability to let go?
    - Context Collapse: AI may summarize what it hears, but can it understand the complexity of human relationships, emotions, and social cues?

    Doyne Farmer: Making Sense of Chaos

    Dec 12, 2024 · 55:46


    We're excited to welcome Doyne Farmer to the podcast. Doyne is a pioneering complexity scientist and a leading thinker on economic systems, technological change, and the future of society. Doyne is a Professor of Complex Systems at the University of Oxford, an external professor at the Santa Fe Institute, and Chief Scientist at Macrocosm. Doyne's work spans an extraordinary range of topics, from agent-based modeling of financial markets to exploring how innovation shapes the long-term trajectory of human progress.

    At the heart of Doyne's thinking is a focus on prediction—not in the narrow sense of forecasting next week's market trends, but in understanding the deep, generative forces that shape the evolution of technology and society. His new book, Making Sense of Chaos: A Better Economics for a Better World, is a reflection on the limitations of traditional economics and a call to embrace the tools of complexity science. In it, Doyne argues that today's economic models often fall short because they assume simplicity where there is none.

    What's especially compelling about Doyne's perspective is how he uses complexity science to challenge conventional economic assumptions. While traditional economics often treats markets as rational and efficient, Doyne reveals the messy, adaptive, and unpredictable nature of real-world economies. His ideas offer a powerful framework for rethinking how we approach systemic risk, innovation policy, and the role of AI-driven technologies in shaping our future.

    We believe Doyne's ideas are essential for anyone trying to understand the uncertainties we face today. He doesn't just highlight the complexity—he shows how to navigate it. By tracking the hidden currents that drive change, he helps us see the bigger picture of where we might be headed.

    We hope you enjoy our conversation with Doyne Farmer.

    ------------------------------

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    James Boyle: The Line—AI And the Future of Personhood

    Sep 28, 2024 · 58:04


    We're excited to welcome Jamie Boyle to the podcast. Jamie is a law professor and author of the thought-provoking book The Line: AI and the Future of Personhood. In The Line, Jamie challenges our assumptions about personhood and humanity, arguing that these boundaries are more fluid than traditionally believed. He explores diverse contexts like animal rights, corporate personhood, and AI development to illustrate how debates around personhood permeate philosophy, law, art, and morality.

    Jamie uses fascinating examples from science fiction, legal history, and philosophy to illustrate the challenges we face in defining the rights and moral status of artificial entities. He argues that grappling with these questions may lead to a profound re-examination of human identity and consciousness. What's particularly compelling about Jamie's approach is how he frames this as a journey of moral expansion, drawing parallels to how we've expanded our circle of empathy in the past. He also offers surprising insights into legal history, revealing how corporate personhood emerged more by accident than design—a cautionary tale as we consider AI rights.

    We believe this book is both ahead of its time and right on time. It sharpens our understanding of difficult concepts—namely, that the boundaries between organic and synthetic are blurring, creating profound existential challenges we need to prepare for now. To quote Jamie from The Line: "Grappling with the question of synthetic others may bring about a reexamination of the nature of human identity and consciousness. I want to stress the potential magnitude of that reexamination. This process may offer challenges to our self-conception unparalleled since secular philosophers declared that we would have to learn to live with a god-shaped hole at the center of the universe."

    Let's dive into our conversation with Jamie Boyle.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Shannon Vallor: The AI Mirror

    Sep 13, 2024 · 56:34


    We're excited to welcome to the podcast Shannon Vallor, professor of ethics and technology at the University of Edinburgh and the author of The AI Mirror. In her book, Shannon invites us to rethink AI—not as a futuristic force propelling us forward, but as a reflection of our past, capturing both our human triumphs and flaws in ways that shape our present reality.

    In The AI Mirror, Shannon uses the powerful metaphor of a mirror to illustrate the nature of AI. She argues that AI doesn't represent a new intelligence; rather, it reflects human cognition in all its complexity, limitations, and distortions. Like a mirror, AI is backward-looking, constrained by the data we've already provided it. It amplifies our biases and misunderstandings, giving us back a shallow, albeit impressive, reflection of our intelligence.

    We think this is one of the best books on AI for a general audience published this year. Shannon's mirror metaphor does more than just critique AI—it reassures. By casting AI as a reflection rather than an independent force, she validates a crucial distinction: AI may be an impressive tool, but it's still just that—a mirror of our past. Humanity, Shannon suggests, remains something separate, capable of innovation and growth beyond the confines of what these systems can reflect. This insight offers a refreshing confidence amidst the usual AI anxieties: the real power, and responsibility, remains with us.

    Let's dive into our conversation with Shannon Vallor.

    -----------------

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Matt Beane: The Skill Code

    Aug 30, 2024 · 55:34


    We're excited to welcome to the podcast Matt Beane, Assistant Professor at UC Santa Barbara and the author of the book "The Skill Code: How to Save Human Ability in an Age of Intelligent Machines." Matt's research investigates how AI is changing the traditional apprenticeship model, creating a tension between short-term performance gains and long-term skill development.

    His work has particularly focused on the relationship between junior and senior surgeons in the operating theater. As he told us, "In robotic surgery, I was seeing that the way technology was being handled in the operating room was assassinating this relationship." He observed that junior surgeons now often just set up the robot and watch the senior surgeon operate for hours, epitomizing a broader trend where AI and advanced technologies are reshaping how we transfer skills from experts to novices.

    In "The Skill Code," Matt argues that three key elements are essential for developing expertise: challenge, complexity, and connection. He points out that real learning often involves discomfort, saying, "Everyone intuitively knows when you really learned something in your life. It was not exactly a pleasant experience, right?" Matt's research shows that while AI can significantly boost productivity, it may be undermining critical aspects of skill development. He warns that the traditional model of "See one, do one, teach one" is becoming "See one, and if-you're-lucky do one, and not-on-your-life teach one."

    In our conversation, we explore these insights and discuss how we might preserve human ability in an age of intelligent machines. Let's dive into our conversation with Matt Beane on the future of human skill in an AI-driven world.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Emily M. Bender: AI, Linguistics, Parrots, and more!

    Aug 2, 2024 · 57:18


    We're excited to welcome to the podcast Emily M. Bender, professor of computational linguistics at the University of Washington. As our listeners know, we enjoy tapping expertise in fields adjacent to the intersection of humans and AI. We find Emily's expertise in linguistics particularly important for understanding the capabilities and limitations of large language models—and that's why we were eager to talk with her.

    Emily is perhaps best known in the AI community for coining the term "stochastic parrots" to describe these models, highlighting their ability to mimic human language without true understanding. In her paper "On the Dangers of Stochastic Parrots," Emily and her co-authors raised crucial questions about the environmental, financial, and social costs of developing ever-larger language models. Emily has been a vocal critic of AI hype, and her work has been pivotal in sparking critical discussions about the direction of AI research and development.

    In this conversation, we explore the issues of current AI systems with a particular focus on Emily's view as a computational linguist. We also discuss Emily's recent research on the challenges of using AI in search engines and information retrieval systems, and her description of large language models as synthetic text extruding machines.

    Let's dive into our conversation with Emily Bender.

    ----------------------

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    John Havens: Heartificial Intelligence

    Jul 13, 2024 · 62:29


    We're excited to welcome to the podcast John Havens, a multifaceted thinker at the intersection of technology, ethics, and sustainability. John's journey has taken him from professional acting to becoming a thought leader in AI ethics and human wellbeing.

    In his 2016 book, "Heartificial Intelligence: Embracing Our Humanity to Maximize Machines," John presents a thought-provoking examination of humanity's relationship with AI. He introduces the concept of "codifying our values" - our crucial need as a species to define and understand our own ethics before we entrust machines to make decisions for us. Through an interplay of fictional vignettes and real-world examples, the book illuminates the fundamental interplay between human values and machine intelligence, arguing that while AI can measure and improve wellbeing, it cannot automate it. John advocates for greater investment in understanding our own values and ethics to better navigate our relationship with increasingly sophisticated AI systems.

    In this conversation, we dive into the key ideas from "Heartificial Intelligence" and their profound implications for the future of both human and artificial intelligence. We explore questions like: What are the core components of human values that AI systems need to understand? How can we design AI systems to augment rather than replace human decision-making? Why has the field of AI ethics lagged behind technological development, and what role can positive psychology play in bridging this gap? Should we be concerned about AI systems usurping our ability to define our own values, or are there inherent limits to what machines can understand about human ethics?

    Let's dive into our conversation with John Havens.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Leslie Valiant: Educability

    Jun 22, 2024 · 56:41


    We're excited to welcome to the podcast Leslie Valiant, a pioneering computer scientist and Turing Award winner renowned for his groundbreaking work in machine learning and computational learning theory. In his seminal 1983 paper, Leslie introduced the concept of Probably Approximately Correct or PAC learning, kick-starting a new era of research into what machines can learn.

    Now, in his latest book, The Importance of Being Educable: A New Theory of Human Uniqueness, Leslie builds upon his previous work to present a thought-provoking examination of what truly sets human intelligence apart. He introduces the concept of "educability" - our unparalleled ability as a species to absorb, apply, and share knowledge. Through an interplay of abstract learning algorithms and relatable examples, the book illuminates the fundamental differences between human and machine learning, arguing that while learning is computable, today's AI is still a far cry from human-level educability. Leslie advocates for greater investment in the science of learning and education to better understand and cultivate our species' unique intellectual gifts.

    In this conversation, we dive deep into the key ideas from The Importance of Being Educable and their profound implications for the future of both human and artificial intelligence. We explore questions like: What are the core components of educability that make human intelligence special? How can we design AI systems to augment rather than replace human learning? Why has the science of education lagged behind other fields, and what role can AI play in accelerating pedagogical research and practice? Should we be concerned about a potential "intelligence explosion" as machines grow more sophisticated, or are there limits to the power of AI?

    Let's dive into our conversation with Leslie Valiant.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Jonathan Feinstein: The Context of Creativity

    Jun 8, 2024 · 53:31


    We're excited to welcome to the podcast Jonathan Feinstein, professor at the Yale School of Management and author of Creativity in Large-Scale Contexts: Guiding Creative Engagement and Exploration.

    Our interest in creativity is broader than the context of the creative professions like art, design, and music. We see creativity as the foundation of how we move ahead as a species, including our culture, science, and innovation. We're interested in the huge combinatorial space of creativity, linked together by complex networks. And that interest led us to Jonathan.

    Through his research and interviews with a wide range of creative individuals, from artists and writers to scientists and entrepreneurs, Jonathan has developed a framework for understanding the creative process as an unfolding journey over time. He introduces key concepts such as guiding conceptions, guiding principles, and the notion of finding "golden seeds" amidst the vast landscape of information and experiences that shape our creative context. By looking at creativity mathematically, Jonathan has exposed the tremendous beauty of the creative process as being intuitive, exploratory, and supported by math and machines and knowledge and structure. He shows how creativity is much broader and more interesting than the stereotypical idea of creativity as simply a singular lightbulb moment.

    In our conversation, we explore some of the most surprising and counterintuitive findings from Jonathan's work, how his ideas challenge conventional wisdom about creativity, and the implications for individuals and organizations seeking to innovate in an increasingly AI-driven world.

    Let's dive into our conversation with Jonathan Feinstein.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Karaitiana Taiuru: Indigenous AI

    May 25, 2024 · 48:01


    We're excited to welcome to the podcast Karaitiana Taiuru. Dr Taiuru is a leading authority and a highly accomplished visionary Māori technology ethicist specialising in Māori rights with AI, Māori Data Sovereignty and Governance with emerging digital technologies and biological sciences. Karaitiana has been a champion for Māori cultural and intellectual property rights in the digital space since the late 1990s.

    With the recent emergence of AI into the mainstream, Karaitiana sees both opportunities and risks for indigenous peoples like the Māori. He believes AI can either be a tool for further colonization and cultural appropriation, or it can be harnessed to empower and revitalize indigenous languages, knowledge, and communities.

    In our conversation, Karaitiana shares his vision for incorporating Māori culture, values, and knowledge into the development of AI technologies in a way that respects data sovereignty. We explore the importance of Māori representation in the tech sector, the role of AI in language and cultural preservation, and how indigenous peoples around the world can collaborate to shape the future of AI.

    Karaitiana offers a truly unique and thought-provoking perspective that I believe is crucial as we grapple with the societal implications of artificial intelligence. I learned a tremendous amount from our conversation and I'm sure you will too.

    Let's dive into our conversation with Karaitiana Taiuru.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Omri Allouche: Gong AI

    May 4, 2024 · 40:47


    We're excited to welcome to the podcast Omri Allouche, the VP of Research at Gong, an AI-driven revenue intelligence platform for B2B sales teams. Omri has had a fascinating career journey, with a PhD in computational ecology before moving into the world of AI startups. At Gong, Omri leads research into how AI and machine learning can transform the way sales teams operate.

    In our conversation, we explore Omri's perspective on managing AI research and innovation. We discuss Gong's approach to analyzing sales conversations at scale, and the challenges of building AI systems that sales reps can trust. Omri shares how Gong aims to empower sales professionals by automating mundane tasks so they can focus on building relationships and thinking strategically.

    Let's dive into our conversation with Omri Allouche.

    About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. Subscribe for free at https://www.artificiality.world.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Learn more about Sonder Studio
    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world.

    #ai #artificialintelligence #generativeai #airesearch #complexity #futureofai

    Susannah Fox: Rebel Health

    Apr 20, 2024 · 48:52


    We're excited to welcome to the podcast Susannah Fox, a renowned researcher who has spent over 20 years studying how patients and caregivers use the internet to gather information and support each other. Susannah has collected countless stories from the frontlines of healthcare and has keen insights into how patients are stepping into their power to drive change.

    Susannah recently published a book called "Rebel Health: A Field Guide to the Patient-Led Revolution in Medical Care." In it, she introduces four key personas that represent different ways patients and caregivers are shaking up the status quo in healthcare: seekers, networkers, solvers, and champions. The book aims to bridge the divide between the leaders at the top of the healthcare system and the patients, survivors, and caregivers on the ground who often have crucial information and ideas that go unnoticed. By profiling examples of patient-led innovation, Susannah hopes to inspire healthcare to become more participatory.

    In our conversation, we dive into the insights from Susannah's decades of research, hear some compelling stories of patients, and discuss how medicine can evolve to embrace the power of peer-to-peer healthcare. As you'll hear, this is a highly personal episode, as Susannah's work resonates with both of us and our individual and shared health experiences.

    Let's dive into our conversation with Susannah Fox.

    ---------------------

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Angel Acosta: Contemplation, Healing, and AI

    Mar 2, 2024 · 43:22


    We're excited to welcome to the podcast Dr. Angel Acosta, an expert on healing-centered education and leadership. Angel runs the Acosta Institute, which helps communities process trauma and build environments for people to thrive. He also facilitates leadership programs at the Garrison Institute that support the next generation of contemplative leaders. With his background in social sciences, curriculum design, and adult education, Angel has been thinking deeply about how artificial intelligence intersects with mindfulness, social justice, and education.

    In our conversation, we explore how AI can help or hinder our capacity for contemplation and healing. For example, does offloading cognitive tasks to AI tools like GPT create more mental space for mindfulness? How do we ensure these technologies don't increase anxiety and threaten our sense of self?

    We also discuss the promise and perils of AI for transforming education. What roles might AI assistants play in helping educators be more present with students? How can we design assignments that account for AI without compromising learning? What would a decolonized curriculum enabled by AI look like?

    And we envision more grounded, humanistic uses of rapidly evolving AI—from thinking of it as "ecological technology" interdependent with the natural world, to leveraging its pattern recognition in service of collective healing and wisdom. What guiding principles do we need for AI that enhances both efficiency and humanity? How can we consciously harness it to create the conditions for people and communities to thrive holistically?

    We'd like to thank our friends at the House of Beautiful Business for sparking our relationship with Angel—we highly recommend you check out their events and join their community.

    Let's dive into our conversation with Angel Acosta.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Doug Belshaw: Serendipity Surface & AI

    Feb 23, 2024 · 45:28


    We're excited to welcome Doug Belshaw to the show today. Doug is a founding member of the We Are Open Co-op, which helps organizations with sensemaking and digital transformation. Doug coined the term "serendipity surface" to describe cultivating an attitude of curiosity and increasing the chance encounters we have by putting ourselves out there. We adopted the term quite some time ago and were eager to talk with Doug about how he thinks about serendipity surfaces in the age of generative AI.

    As former Head of Web Literacy at Mozilla, now pursuing a master's degree in systems thinking, Doug has a wealth of knowledge on topics spanning education, technology, productivity, and more. In our conversation today, we explore concepts like productive ambiguity, cognitive ease, and rewilding your attention. Doug shares perspectives from his unique career journey as well as personal stories and projects exemplifying the creative potential of AI. We think you'll find this a thought-provoking discussion on human-AI collaboration, lifelong learning, digital literacy, ambiguity, and the future of work.

    Let's dive into our conversation with Doug Belshaw.

    Key points:
    - Doug coined the term "serendipity surface" to describe cultivating curiosity, increasing random encounters and possibilities by putting ourselves out there. He sees it as the opposite of reducing "attack surface" in security; it's about expanding opportunities.
    - Doug shares an example of prompting ChatGPT extensively over 24 hours with a flood risk report, personas, and perspectives to decide on a complex house purchase. This shows the creative potential of using AI tools to augment human thinking and decisions.
    - Doug discusses the sweet spot of productive ambiguity, where concepts resonate with a common meaning yet leave room for interpretation by individuals based on their contexts. It encourages engagement and spreading of ideas.
    - As an educator, Doug advocates thoughtfully adopting emerging tech to develop engaged, literate, and curious learners rather than reactively banning tools. Friction facilitates learning.
    - Ultimately, Doug sees potential for AI collaboration that brings our humanity, empathy, creativity, and curiosity to the forefront if we prompt and apply these tools judiciously.

    Links for Doug Belshaw:
    - Dr Doug Belshaw
    - We Are Open Cooperative
    - Thought Shrapnel
    - Open Thinkering
    - Ambiguiti.es

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    Richard Kerris of NVIDIA: AI, Creators, and Developers

    Play Episode Listen Later Feb 17, 2024 50:07


    We're excited to welcome Richard Kerris, Vice President of Developer Relations and GM of Media & Entertainment at NVIDIA, to the show today. Richard has had an extensive career working with creators and developers across film, music, gaming, and more. He offers valuable insights into how AI and machine learning are transforming creative tools and workflows. In particular, Richard shares his perspective on how these advanced technologies are democratizing access to high-end capabilities, putting more power into the hands of a broader range of storytellers. He discusses the implications of this for the media industry—will it displace roles or expand opportunities? And we explore Richard's vision for what the future may look like in 5-10 years in terms of applications being auto-generated to meet specialized user needs. We think you'll find the wide-ranging conversation fascinating as we explore topics from AI-enhanced content creation to digital twins and AI assistants. Let's dive into our discussion with Richard Kerris. Key Points: Democratization of creative tools is enabling more people to produce high-quality media content. This expands opportunities, rather than displacing roles. Future apps may be auto-generated via AI to meet specialized user needs, rather than created as generic solutions. The industrial metaverse allows manufacturers to optimize workflows through virtual prototyping. Open USD provides a common language for 3D tools to communicate, improving collaboration. AI agents will become commonplace assistants customized to individual interests and needs. Computing power growth is enabling complex digital twins, like the human body, to improve health outcomes. Generative AI introduces new considerations around rights of trained content and output. Education benefits from AI's ability to showcase different artistic styles and techniques. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music

    Tyler Marghetis: The Leaps of Human Imagination

    Play Episode Listen Later Feb 6, 2024 50:42


    We're excited to welcome Tyler Marghetis, Assistant Professor of Cognitive & Information Sciences at the University of California, Merced, to the show today. Tyler studies what he calls the "lulls and leaps" or "ruts and ruptures" of human imagination and experience. He's fascinated by how we as humans can get stuck in certain patterns of thinking and acting, but then also occasionally experience radical transformations in our perspectives. In our conversation, Tyler shares with us some of his lab's fascinating research into understanding and even predicting these creative breakthroughs and paradigm shifts. You'll hear about how he's using AI tools to analyze patterns in things like Picasso's entire body of work over his career. Tyler explains why he believes isolation and slowness are actually key ingredients for enabling many of history's greatest creative leaps. And he shares with us how his backgrounds in high-performance sports and in the LGBTQ community shape his inclusive approach to running his university research lab. It's a wide-ranging and insightful discussion about the complexity of human creativity and innovation. Let's dive into our interview with Tyler Marghetis. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music

    James Evans: Scientific Progress

    Play Episode Listen Later Jan 30, 2024 62:33


    Why is scientific progress slowing down? That's a question that's been on the minds of many. But before we dive into that, let's ponder this—how do we even know that scientific progress is decelerating? And in an era where machines are capable of understanding complexities that sometimes surpass human cognition, how should we approach the future of knowledge? Joining us in this exploration is Professor James Evans from the University of Chicago. As the director of the Knowledge Lab at UChicago, Professor Evans is at the forefront of integrating machine learning into the scientific process. His work is revolutionizing how new ideas are conceived, shared, and developed, not just in science but in all knowledge and creative processes. Our conversation today is a journey through various intriguing landscapes. We'll delve into questions like: Why has the pace of scientific discovery slowed, and can AI be the catalyst to reaccelerate it? How does the deluge of information and the battle for attention affect the evolution of ideas? What are the predictive factors for disruption and novelty in the idea landscape? How should teams be structured to foster disruptive innovation? What risks do homogeneity in thinking and AI model concentration pose? How can we redefine diversity in the context of both human and AI collaboration? What does the evolving nature of knowledge mean for the next generation of scientists? Can tools like ChatGPT enhance diversity and innovative capabilities in research? Is it time to debunk the myth of the lone genius and focus instead on the collective intelligence of humans and AI? We're also thrilled to discuss Professor Evans' upcoming book, "Knowing," which promises to be a groundbreaking addition to our understanding of these topics. So, whether you're a scientist, a creative, a business leader, a data scientist or just someone fascinated by the interplay of human intelligence and artificial technology, this episode is sure to offer fresh perspectives and insights.

    Ed Sim: AI Venture Capital

    Play Episode Listen Later Jan 23, 2024 50:28


    Few understand how to anticipate major technology shifts in the enterprise better than today's guest, Ed Sim. Ed is a pioneer in the world of venture capital, specifically focusing on enterprise software and infrastructure since 1996. He founded Boldstart in 2010 to invest at the earliest stages of enterprise software companies, growing the firm from $1M to around $375M today. So where does an experienced investor who has seen countless tech waves come and go place his bets in this new AI-first future? That's the key topic we dive into today. While AI forms a core part of our dialogue, Ed emphasizes that he doesn't look at pitches and go "Oh, AI, I need to invest in that." Rather, he tries to see if founders have identified a real pain point, have a unique approach to solving it, and can clearly articulate how they will provide a significant improvement over the status quo. AI is an important component, of course, but it isn't, on its own, a reason to invest. With that framing in mind, Ed shares where he is most excited to invest in light of recent generative AI breakthroughs. Unsurprisingly, AI security ranks high on his list given enterprises' skittishness around adopting any technology that could compromise sensitive data or infrastructure. Ed saw this need early, backing a startup called Protect AI in March 2022 that focuses specifically on monitoring and certifying the security of AI systems. The implications of AI have branched into virtually every sector, but Ed reminds us that as investors and builders, we must stay grounded in solving real problems versus just chasing the shiny new thing. Key Points: Ed Sim started Boldstart Ventures in 2010 to provide early stage funding for enterprise startups, writing smaller checks than typical VC firms. The firm now manages a nearly $200 million main fund and a $175 million opportunity fund. Generative AI is an exciting new technology, but the key is backing founders who are solving real problems for end users in a unique way that is 10x better than current solutions. AI is just the underlying technology. AI security is critical for enterprise adoption. Ed invested early in Protect AI, which helps monitor AI models for security, privacy, and compliance issues. AI security will be key to scaling adoption. There are still open questions around data governance with large language models that access sensitive company data. Approaches that check governance policies before providing answers are the safest for now. Factors like inference cost, subscription fatigue, and proving ROI will impact how quickly some of the consumer generative AI applications gain traction. Creative solutions around caching, pricing models, and hybrid human+AI loops can help. There will be opportunities related to embedding expertise into systems to empower junior and senior employees. Tools like GitHub Copilot show potential to augment technical skills. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music

    Rodrigo Liang: SambaNova and Generative AI in the Enterprise

    Play Episode Listen Later Jan 16, 2024 56:53


    One of our research obsessions is Edge AI, through which we study opportunities to build and deploy AI on a computing device at the edge of a network. The premise is that AI in the cloud benefits from scale but is challenged by cost and privacy, and that Edge AI solves many of these challenges by eliminating cloud computing costs and keeping data within secure environments. Given this interest, we were excited to talk with Rodrigo Liang, the Co-Founder and CEO of SambaNova Systems, which has built a platform to deliver enterprise-grade chips, software, and models in a fully integrated system, purpose-built for AI. In this interview, Rodrigo discusses how his company is enabling enterprises to adopt AI in a secure, customizable way that builds long-term value by building AI assets. Their full-stack solutions aim to simplify AI model building and deployment, especially by leveraging open source frameworks and using modular, fine-tuned expert models tailored to clients' private data. Key Points: SambaNova Systems aims to help companies adopt AI technology, particularly in enterprise environments. It provides full-stack AI solutions including hardware, software, models, etc. to simplify adoption. The company's offerings are designed to enable companies to leverage AI while maintaining data privacy and security. A modular approach provides the flexibility to adapt to diverse enterprise needs. SambaNova takes an "AI asset" approach focused on creating long-term value rather than just providing "AI tools." A focus on open source models provides diversity of technology while reducing vendor lock-in. The company's software stack enables fine-tuning of granular models on customer data, creating a multitude of AI experts to serve the enterprise. Unlimited use encourages experimentation without the same cost challenges as public cloud AI. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music

    Best of: Barbara Tversky & Spatial Cognition

    Play Episode Listen Later Jan 11, 2024 66:19


    One of our long-time subscribers recently said to us: "What I love about you is that you're regularly talking about things three years ahead of everyone else." That inspired us to look back through our catalog of conversations to see which ones we think are most relevant now. Today, we're revisiting one of our most thought-provoking episodes, originally recorded in April 2022, featuring Barbara Tversky, the author of "Mind in Motion: How Action Shapes Thought." This episode is a great way to start 2024 because we are all about to experience what are known as Large Multimodal Models, or LMMs: models that go beyond text and bring in more sensory modalities, including spatial information. Tversky's insights into spatial reasoning and embodied cognition are more relevant than ever in the era of multimodal models in AI. These models, which combine text, images, and other data types, mirror our human ability to process information across various sensory inputs. The parallels between Tversky's research and Large Multimodal Models (LMMs) in AI are striking. Just as our physical interactions with the world shape our cognitive processes, these AI models learn and adapt by integrating diverse data types, offering a more holistic understanding of the world. Her work sheds light on how we might improve AI's ability to 'think' and 'reason' spatially, enhancing its application in fields ranging from navigation systems to virtual reality. As we revisit our interview with Tversky, we're reminded of the importance of considering human-like spatial reasoning and embodied cognition in advancing AI technology. Join us as we explore these intriguing concepts with Barbara Tversky, uncovering the essential role of spatial reasoning in both human cognition and artificial intelligence. Barbara Tversky is an emerita professor of psychology at Stanford University and a professor of psychology at Teachers College at Columbia University. She is also the President of the Association for Psychological Science. Barbara has published over 200 scholarly articles about memory, spatial thinking, design, and creativity, and regularly speaks about embodied cognition at interdisciplinary conferences and workshops around the world. She lives in New York. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music

    Stephen Fleming: Consciousness and AI

    Play Episode Listen Later Dec 13, 2023 62:01


    In this episode, we speak with cognitive neuroscientist Stephen Fleming about theories of consciousness and how they relate to artificial intelligence. We discuss key concepts like global workspace theory, higher order theories, computational functionalism, and how neuroscience research on consciousness in humans can inform our understanding of whether machines may ever achieve consciousness. In particular, we talk with Steve about a recent research paper, Consciousness in Artificial Intelligence, which he co-authored with Patrick Butlin, Robert Long, Yoshua Bengio, and several others. Steve provides an overview of different perspectives from philosophy and psychology on what mechanisms may give rise to consciousness. He explains global and local theories, the idea of a higher order system monitoring lower level representations, and similarities and differences between human and machine intelligence. The conversation explores current limitations in neuroscience for studying consciousness empirically and opportunities for interdisciplinary collaboration between neuroscientists and AI researchers. Key Takeaways: Consciousness and intelligence are separate concepts—you can have one without the other. Global workspace theory proposes consciousness arises when information is broadcast to widespread brain areas. Higher order theories suggest that a higher system monitoring lower representations enables consciousness. Computational functionalism looks at information processing rather than biological substrate. Attributing intelligence versus attributing experience/consciousness invokes different dimensions of social perception. More research is needed in neuroscience and social psychology around people's intuitions about machine consciousness. Stephen Fleming is Professor of Cognitive Neuroscience at the Department of Experimental Psychology, University College London. Steve's work aims to understand the mechanisms supporting human subjective experience and metacognition by employing a combination of psychophysics, brain imaging, and computational modeling. He is the author of Know Thyself, a book on the science of metacognition, about which we interviewed him on Artificiality in December of 2021. Episode Notes: 2:13 - Origins of the paper Stephen co-authored on consciousness in artificial intelligence 5:17 - Discussion of demarcating intelligence vs phenomenal consciousness in AI 6:34 - Explanation of computational functionalism and mapping functions between humans and machines 13:42 - Examples of theories like global workspace theory and higher order theories 19:27 - Clarifying when sensory information reaches consciousness under global theories 23:02 - Challenges in precisely defining aspects like the global workspace computationally 28:35 - Connections between higher order theories and generative adversarial networks 30:43 - Ongoing empirical evidence still needed to test higher order theories 36:52 - Iterative process needed to update theories based on advancing neuroscience 40:40 - Open questions remaining despite foundational research on consciousness 46:14 - Mismatch between public perceptions and indicators from neuroscience theories 50:30 - Experiments probing anthropomorphism and consciousness attribution 56:17 - Surprising survey results on public views of AI experience 59:36 - Ethical issues raised if public acceptance diverges from scientific consensus If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music

    Steven Sloman: LLMs and Deliberative Reasoning

    Play Episode Listen Later Dec 6, 2023 61:06


    If you've used a large language model, you've likely had one or more moments of amazement as the tool immediately responded with impressive content from the massive data cosmos of its training set. But you've likely also had moments of confusion or disillusionment as the tool responded with irrelevant or incorrect responses, displaying a lack of reasoning. A recent research paper from Meta caught our eye because it proposes a new mechanism called System 2 Attention, which "leverages the ability of LLMs to reason in natural language and follow instructions in order to decide what to attend to." The name System 2 is derived from the work of Daniel Kahneman, who in his 2011 book, Thinking, Fast and Slow, differentiated between System 1 thinking as intuitive and near-instantaneous and System 2 thinking as slower and effortful. The Meta paper also references our friend Steven Sloman, who in 1996 made the case for two systems of reasoning—associative and deliberative or rule-based. Given our interest in the idea of LLMs being able to help people make better decisions—which often requires more deliberative thinking—we asked Steve to come back on the podcast to get his reaction to this research and to generative AI in general. Yet again, we had a dynamic conversation about human cognition and modern AI, which field is learning what from the other, and a few speculations about the future. We're grateful to Steve for taking the time to talk with us again and hope that he'll join us for a third time when his next book is released sometime in 2024. Steven Sloman is a professor of cognitive, linguistic, and psychological sciences at Brown University, where he has taught since 1992. He studies how people think, including how we think as a community, a topic he wrote a fantastic book about with Philip Fernbach called The Knowledge Illusion: Why We Never Think Alone. For more about that work, please check out our first interview with Steve from June of 2021. About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our essays, events, newsletter, and podcast interviews with academics, authors, entrepreneurs, and executives. Subscribe at www.artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music
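
    To make the mechanism concrete, here is a minimal sketch of the System 2 Attention pattern as the Meta paper describes it: one model pass rewrites the input to keep only what is relevant, and a second pass answers from that filtered context. The two-pass structure is from the paper; the function names and prompt wording are our own illustrative stand-ins, and call_llm is a hypothetical placeholder for whichever model client you use, not an API from the paper.

        def call_llm(prompt: str) -> str:
            # Hypothetical stand-in for a single chat-completion call to any LLM client.
            raise NotImplementedError("plug in your model client here")

        def s2a_answer(context: str, question: str) -> str:
            # Pass 1, the deliberate "System 2" step: ask the model to regenerate
            # the context, keeping only material relevant to the question and
            # dropping irrelevant or leading text.
            filtered = call_llm(
                "Rewrite the following text, keeping only the parts that are "
                "relevant and unbiased for answering the question.\n\n"
                f"Text: {context}\n\nQuestion: {question}"
            )
            # Pass 2: answer while attending only to the regenerated context.
            return call_llm(f"Context: {filtered}\n\nQuestion: {question}\n\nAnswer:")

    The point of the first pass is to spend extra, deliberate computation deciding what to attend to before answering, which is what earns the mechanism its Kahneman-inspired name.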

    Julia Rhodes Davis: Advancing Racial Equity

    Play Episode Listen Later Oct 29, 2023 47:01


    In this episode, we speak with Julia Rhodes Davis, a Senior Advisor at Data & Society, about her recent report "Advancing Racial Equity Through Technology Policy," published by the AI Now Institute. This comprehensive report provides an in-depth examination of how the technology industry impacts racial inequity, along with concrete policy recommendations for reform. A critical insight from the report is that advancing racial equity requires a holistic approach. The report provides policy recommendations to reform antitrust law, ensure algorithmic accountability, and support tech entrepreneurship for people of color. In our interview, Julia explains how advancing racial equity requires policy change as well as coalition-building with impacted communities. She discusses the urgent need to reform practices of algorithmic discrimination that restrict opportunities for marginalized groups. Julia highlights some positive momentum from federal and state policy efforts, and she encourages people to get involved with local organizations, providing a great list of organizations you might consider. Links: AI Now Institute, Advancing Racial Equity Through Technology Policy report, Algorithmic Justice League, Athena, Color of Change, Data for Black Lives, Data & Society, Media Justice, Our Data Bodies. About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world

    Alicia Juarrero: Context Changes Everything

    Play Episode Listen Later Oct 22, 2023 64:12


    Grounding her work in the problem of causation, Alicia Juarrero challenges previously held beliefs that only forceful impacts are causes. Constraints, she claims, bring about effects as well, and they enable the emergence of coherence. Alicia is the author of multiple books, most recently Context Changes Everything: How Constraints Create Coherence. Helen says in this interview that it feels like this book is from the future. It's about using the tools of complexity science to understand identity, hierarchy, and top-down causation, and in so doing, it presents a new way of thinking about the natural world but also the artificial world. In this interview, we discuss how to use concepts from complexity—including the important role of constraints—to enlighten our perspectives on the community of humans and machines. About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world

    Jai Vipra: Computational Power and AI

    Play Episode Listen Later Oct 15, 2023 40:39


    Jai Vipra is a research fellow at the AI Now Institute, where she focuses on competition issues in frontier AI models. She recently published the report Computational Power and AI, which focuses on compute as a core dependency in building large-scale AI. We found this report to be an important addition to the work covering the generative AI industry because compute is incredibly important but not very well understood. In the report, Jai breaks down the key components of compute, analyzes the supply chain and competitive dynamics, and aggregates all the known economics. In this interview, we talk with Jai about the report, its implications, and her recommendations for industry and policy responses. About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world

    Wendy Wong: We, the Data, and Human Rights

    Play Episode Listen Later Oct 8, 2023 58:02


    Wendy Wong is a professor of political science and principal's research chair at the University of British Columbia, where she researches and teaches about the governance of emerging technologies, human rights, and civil society/non-state actors. In this interview, we talk with Wendy about her new book We, the Data: Human Rights in the Digital Age, which is described as "a rallying call for extending human rights beyond our physical selves—and why we need to reboot rights in our data-intensive world." Given the explosion of generative AI and the mass data capture that fuels generative AI models, Wendy's argument for extending human rights to the digital age seems very timely. We talk with her about how human rights might be applied to the age of data, the datafication by big tech, individuals as stakeholders in the digital world, and our awe of the human contributions that enable generative AI. About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world

    Chris Summerfield: Natural General Intelligence

    Play Episode Listen Later Oct 1, 2023 56:15


    Chris Summerfield is a Professor of Cognitive Science at the University of Oxford. His work is concerned with understanding how humans learn and make decisions. He is interested in how humans acquire new concepts or patterns in data, and how they use this information to make decisions in novel settings. He's also a research scientist at DeepMind. Earlier this year, Chris released a book called Natural General Intelligence: How understanding the brain can help us build AI. This couldn't be more timely given all the talk of AGI, and in this episode we talk with Chris about his work: what he's learned about humans from studying AI and what he's learned about AI by studying humans. We talk about his aim to provide a bridge between the theories of those who study biological brains and the practice of those who are seeking to build artificial brains, something we find perpetually fascinating. About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world

    Michael Bungay Stanier: How to Work with (Almost) Anyone

    Play Episode Listen Later Aug 13, 2023 57:23


    Michael Bungay Stanier has an extraordinary talent for distilling the complexity of human relationships into frameworks that are easy to remember and follow—doing so with just the right amount of Australian humor and plenty of vulnerability. Despite his remarkable success with books like The Coaching Habit, The Advice Trap, and How to Begin, Michael never comes across as one of those gurus who thinks they have all the answers. That mindset comes through perfectly in the title of his newest book, How to Work with (Almost) Anyone—not absolutely anyone, almost anyone. The book is built around five questions for building the best possible relationships, which we have found to be very helpful in our working relationship. We have grown to be friends with Michael through our repeated gatherings at the House of Beautiful Business. I know all three of us would encourage all of our listeners and readers to join us at the next House as well. In this interview, we talk about Michael's new book, how to use a keystone conversation to build the best possible relationship, and we even consider how to apply Michael's frameworks to working with generative AI. About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com. About Sonder Studio: We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    Synthesis & Generative AI

    Play Episode Listen Later Aug 5, 2023 32:43


    An exploration of how we might conceptualize the design of AGI within the context of the human left and right brains. The tension between AI and human functioning highlights a unique form of cooperation: AI uses language, abstraction, and analysis, while humans rely on experience, empathy, and metaphor. AI manipulates static elements well but struggles in a changing world. This leads to two distinct design approaches for future AI and considerations for "artificial general intelligence" (AGI). One approach focuses on "left-brained" AI—controlling facts with internal consistency, while relying on humans for context, meaning, and care. Here, machines serve humans. This path is popular due to the challenge of developing AI that mimics human right-hemisphere functions. However, we want machines that can correct contextual mistakes and understand our intended meanings. The design challenge here lies in connecting highly "left-brained" AI to holistic humans in a way that enhances human capabilities. Alternatively, we could design AI with asymmetry, mirroring the human brain's evolution. Such AI would provide a synthesized perspective before interacting with a human, applying computational power to intuition and addressing human paradoxes. Some envision this as AGI—an all-knowing synthesis machine. About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com. About Sonder Studio: We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    Jonathan Coulton: Generative AI, songwriting, and creativity

    Play Episode Listen Later Jul 29, 2023 60:59


    If you've read our show notes, you'll know that our music was written and performed by Jonathan Coulton. I've known Jonathan for more than 30 years, dating back to when we sang together in college. But that's a story for another day, or perhaps never. Jonathan spent his first decade post-college as a software coder and then, through a bit of happenstance and throwing care to the wind, he transitioned to music. In the mid-2000s, he blazed a trail of creating his own career on the internet—without a label or any of the support that musicians normally have. While he was pushing out a new song each week as part of his Thing-A-Week Project, he became known as the "internet music-business guy" since he had successfully used the internet to build his career and a dedicated fanbase. He has since released several albums, toured plenty, and launched an annual cruise for his fans. Throughout his career, technology, and specifically AI, has been a theme—starting with his song Chiron Beta Prime in 2006, about a planet where all humans have been enslaved by uncaring and violent robots. During this interview we talk about his 2017 album Solid State, which is well described by writer Emily Nussbaum, who wrote: "Coulton's latest album, 'Solid State,' is, like so many breakthrough albums, the product of a raging personal crisis—one that is equally about making music and living online, getting older, and worrying about the apocalypse. A concept album about digital dystopia, it's Coulton's warped meditation on the ugly ways the internet has morphed since 2004. At the same time, it's a musical homage to his earliest Pink Floyd fanhood, a rock-opera about artificial intelligence. It's a worried album by a man hunting for a way to stay hopeful." In this interview, we talk with Jonathan about how he feels about Solid State now, his reaction to generative AI, and his experiences trying to use generative AI in songwriting. We're grateful we were able to grab Jonathan just before he left on tour with Aimee Mann. We hope you all take time to listen to Solid State and catch him live. About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com. About Sonder Studio: We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    Is it possible for AI to be meaningful?

    Play Episode Listen Later Jul 22, 2023 35:52


    An exploration of the intersection of AI, meaning, and human relationships. In this episode, we dive deep into the role of AI in our lives and how it can influence our perception of meaning. We explore how AI, and specifically generative AI, is impacting our collective experiences and the ways we make authentic choices. We discuss the idea of intimacy with AI and the future trajectory of human-AI interaction. We consider the possibility of AI enabling more time for meaningful experiences by taking over less meaningful tasks, but we also wonder if it's possible for AI to truly have a place in human meaning. Note: According to our research, Doug Belshaw is the original author of the term "serendipity surface." You can find his first post here and a follow-up here. Apologies to Doug for forgetting your name during recording! About Artificiality from Helen & Dave Edwards: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com. About Sonder Studio: We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    Existential risk of AI

    Play Episode Listen Later Jul 14, 2023 31:59


    ChatGPT, Dall-e, Midjourney, Bard, and Bing are here. Many others are coming. At every dinner conversation, classroom lesson, and business meeting, people are talking about AI. There are some big questions that everyone is now seeking answers for, not just the philosophers, ethicists, researchers, and venture folks. It's noisy, complicated, technical, and often agenda-driven. In this episode, we tackle the question of existential risk. Will AI kill us all? We start by talking about why this question is important at all, and why we are finally tackling it ourselves (since we've largely avoided it for quite some time). We talk about the scenarios that people are worried about and the three premises that underlie this risk:
    * We will build an intelligence that will outsmart us
    * We will not be able to control it
    * It will do things we don't want it to
    Join us as we talk about the risk that AI might end humanity. And, if you'd like to dig deeper, subscribe to Artificiality at https://artificiality.substack.com to get all of our content on this topic, including our weekly essay, a gallery of AI images, book recommendations, and more. (Note: the essay will be emailed to subscribers a couple of days after this podcast first airs—thanks for your patience!) About Artificiality: Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com. About Sonder Studio: We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    Values & Generative AI

    Play Episode Listen Later Jul 9, 2023 24:27


    As Silicon Valley lunges towards creating AI that is considered superior to humans (at times called Artificial General Intelligence or Super-intelligent AI), it does so with the premise that it is possible to encode values in AI so that the AI won't harm us. But values are individual, elusive, and ever-changing. They resist being mathematized. Join us as we discuss human values, how they form, how they change, and why trying to encode them in algorithms is so difficult, if not impossible. About Sonder Studio: We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    Culture & Generative AI

    Play Episode Listen Later Jul 2, 2023 31:51


    Culture plays a vital role in connecting individuals and communities, enabling us to leverage our unique talents, share knowledge, and solve problems together. However, the rise of an intelligentsia of machine soothsayers highlights the need to consciously design new coherence strategies for the age of machines. Why? Because generative AI is a cultural technology that produces different outcomes depending on its cultural context. Who will take on this challenge, and how will culture evolve in response to the growing influence of machines? This is the essential question that requires careful consideration as we navigate the complex interplay between human culture and technology, seeking to preserve sonder as for humans only. Listen in as we discuss human culture and the impact of generative AI. About Sonder Studio: We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com. Check out some of our recent publications:
    * Mind for our Minds: Culture
    * Announcing [Your Team's] Generative AI Summit
    * Research brief: C-Suite Strategy Playbook for Generative AI
    * Mind for our Minds: Meaning
    * Mind for our Minds: Introduction
    * Research brief: aiOS—Foundation Models
    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    Mind for our Minds: Introduction

    Play Episode Listen Later Jun 18, 2023 25:50


    This episode is the first in our summer series based on our thesis for designing AI to be a Mind for our Minds. We recently presented this idea for the first time at our favorite event of the year, hosted by The House of Beautiful Business. We are grateful for our long-term relationship with the House and its founders, Tim Leberecht and Till Grusche, and its head of curation and community, Monika Jiang. The House puts on public and corporate events that are like none you've ever experienced. We encourage everyone to consider attending a public event and bringing the House to your organization. We always meet fascinating people at the House—too many to mention in one podcast. During this episode we highlight Hannah Critchlow and her book Joined-Up Thinking and Michael Bungay Stanier and his book How to Work with (Almost) Anyone. Check them both out: we are big fans. Stay tuned over the summer as we dig deeper into how to design AI to be a Mind for our Minds. About Sonder Studio: We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com. Check out some of our recent publications:
    * Mind for our Minds: Culture
    * Announcing [Your Team's] Generative AI Summit
    * Research brief: C-Suite Strategy Playbook for Generative AI
    * Mind for our Minds: Meaning
    * Mind for our Minds: Introduction
    * Research brief: aiOS—Foundation Models
    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    C. Thi Nguyen: Metrification

    Play Episode Listen Later May 21, 2023 67:00


    AI is based on data. And data is frequently collected with the intent to be quantified, understood, and used across contexts. That's why we have things like grade point averages that translate across subject matters and educational institutions. That's why we perform cost-benefit analyses to normalize the forecasted value of projects—no matter the details. As we deploy more AI that is based on a metrified world, we're encouraging the quantification of our lives and risk losing the context and subjective value that create meaning. In this interview, we talk with C. Thi Nguyen about these large-scale metrics, about objectivity and judgment, and about how this quantification removes the nuance, contextual sensitivity, and variability to make these measurements legible to the state. And that's just scratching the surface of this interview. Thi Nguyen used to be a food writer and is now a philosophy professor at the University of Utah. His research focuses on how social structures and technology can shape our rationality and our agency. He writes about trust, art, games, and communities. His book, Games: Agency as Art, was awarded the American Philosophical Association's 2021 Book Prize. About Sonder Studio: We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com. Check out some of our recent publications:
    * Mind for our Minds: Culture
    * Announcing [Your Team's] Generative AI Summit
    * Research brief: C-Suite Strategy Playbook for Generative AI
    * Mind for our Minds: Meaning
    * Mind for our Minds: Introduction
    * Research brief: aiOS—Foundation Models
    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    Harpreet Sareen: Cyborg Botany

    Play Episode Listen Later May 14, 2023 48:06


    We are deeply interested in the intersection of the digital and material worlds, both living and not. Most of our interviews focus on the intersection of humans and machines—how the digital world affects humans and how humans affect the digital world. This interview, however, is about the intersection of plants and machines.

    Harpreet Sareen works at the intersection of digital and material, plant and machine, and art and science. His work challenges people to consider the life of plants, what we can learn from them, what we can see and what we can't. His art and science projects challenge us to wonder whether we can actually believe what we're seeing.

    We moved to the Cascade Mountains to be able to spend more time in the wilderness, and we likely spend quite a bit more time in nature than most people. Despite our strong connection to nature, Harpreet's work accomplishes his goal of encouraging us to reconsider this relationship and to consider what an increased symbiosis might be.

    Harpreet Sareen is a designer, researcher, and artist creating mediated digital interactions through the living world, with growable electronics, organic robots, and bionic materials. His work has been shown in museums, featured in media in 30+ countries, published in academic conferences, viewed on social media 5M+ times, and used by thousands of people around the world. He has also worked professionally in museums, corporations, and international research centers in five countries. He is currently an Assistant Professor at Parsons School of Design in New York City and directs the Synthetic Ecosystems Lab, which focuses on post-human and non-human design.

    Learn more about Harpreet Sareen

    Interesting links:

    * What biodesign means to me
    * Bionic plants, from PopSci
    * Elephant project: Hybrid Enrichment System (ACM article)
    * Elowan: A Robot-Plant Hybrid -- Plant with a robotic body
    * Cyborg Botany: Electronics grown inside plants
    * Cyborg Botany: In-Planta Cybernetic Systems

    Most recent papers:

    * Helibots at CAADRIA 2023, and related exhibition in ADM Gallery, Singapore
    * BubbleTex at CHI 2023, and related exhibition at Ars Electronica, Austria
    * Algaphon: Sounds of macroalgae under water (installation at Ars Electronica, Austria)

    About Sonder Studio:

    We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

    Check out some of our recent publications:

    * Announcing [Your Team's] Generative AI Summit
    * Research brief: C-Suite Strategy Playbook for Generative AI
    * Mind for our Minds: Meaning
    * Mind for our Minds: Introduction
    * Research brief: aiOS—Foundation Models

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Learn more about Sonder Studio
    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

    Arvind Jain: Glean, Enterprise Search, and Generative AI

    Play Episode Listen Later May 7, 2023 47:24


    Anyone working in a large organization has likely asked this question: why is it that I can seemingly find anything on the internet, but I can't seem to find anything inside my organization? It is counter-intuitive that it's easier to organize the vast quantity of information on the public internet than the smaller amount of information inside a single organization.

    The reality is that enterprise knowledge management and search is very difficult. Data does not reside in easily organized forms. It is spread across systems that provide varying levels of access. Knowledge can be fleetingly exchanged in communication systems. And each individual person has their own access rights, creating a complex challenge.

    These challenges may be amplified by large language models in the enterprise, which seek to help people with analytical and creative tasks by tapping into an organization's knowledge. How can these systems access enough enterprise data to develop a useful level of understanding? How can they provide the best answers to each individual while following data access governance requirements?

    To answer these questions, we talked with Arvind Jain, the CEO of Glean, which provides AI-powered workplace search. Glean searches across an organization's applications to build a trusted knowledge model that respects data access governance when presenting information to users. Glean's knowledge models also give enterprises a way to introduce the power of generative AI within boundaries that would otherwise be challenging to create.

    Prior to founding Glean, Arvind co-founded Rubrik, one of the fastest growing companies in cloud data management. For more than a decade before that, Arvind worked at Google as a Distinguished Engineer, leading teams in Search, Maps, and YouTube.

    About Sonder Studio:

    We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Learn more about Sonder Studio
    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com
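    The access-governance point in this episode is worth making concrete. Glean's implementation is proprietary, so what follows is only a minimal sketch of the general pattern it describes: search results are filtered against the caller's permissions before they ever reach the user or a generative model's prompt. Every name here (Document, PermissionAwareIndex, the principal strings) is a hypothetical illustration, not Glean's API.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    # Users/groups allowed to read this document, e.g. {"user:alice", "group:eng"}.
    allowed_principals: set = field(default_factory=set)

class PermissionAwareIndex:
    """Toy index that enforces per-user access rights at retrieval time."""

    def __init__(self):
        self.docs = []

    def add(self, doc: Document):
        self.docs.append(doc)

    def search(self, query: str, principals: set) -> list:
        # Naive keyword match stands in for a real inverted or vector index.
        hits = [d for d in self.docs if query.lower() in d.text.lower()]
        # Filter by access rights *before* results leave the retrieval layer,
        # so downstream generative answers cannot leak restricted documents.
        return [d for d in hits if d.allowed_principals & principals]

index = PermissionAwareIndex()
index.add(Document("hr-1", "Compensation bands for 2023", {"group:hr"}))
index.add(Document("eng-7", "Search latency postmortem", {"group:eng"}))

print([d.doc_id for d in index.search("search", {"user:alice", "group:eng"})])  # ['eng-7']
print([d.doc_id for d in index.search("compensation", {"user:alice"})])         # []
```

    The essential design choice, under these assumptions, is that the permission check lives inside retrieval rather than being delegated to the model: a generated answer can only draw on documents the asking user could have opened directly.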

    Lukas Egger: Generative AI, a view from SAP

    Play Episode Listen Later Apr 30, 2023 45:57


    The world has been upended by the introduction of generative AI. We think this could be the largest advance in technology—ever. All of our clients are trying to figure out what to do, how to de-risk the introduction of these technologies, and how to design new, innovative solutions.

    To get a perspective on these changes, we talked with Lukas Egger, who leads the Innovation Office & Strategic Projects team at SAP Signavio, where he focuses on de-risking new product ideas and establishing best-in-class product discovery practices. With a successful track record in team building and managing challenging projects, Lukas has expertise in data-driven technology and cloud-native development, and has created and implemented new product discovery methodologies. Excelling at bridging the gap between technical and business teams, he has worked in AI, operations, and product management in fast-growth environments. Lukas has movie credits for his work in Computer Graphics research, has published a book on philosophy, and is passionate about the intersection of technology and people, regularly speaking on how to improve organizations.

    We love Lukas' concept that we are in the peacock phase of generative AI, in which everyone is trying to show off their colorful feathers—and not yet showing off new value creation. We enjoyed talking with Lukas about his views on the realities of today and his forecasts and speculations about the future.

    If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

    Learn more about Sonder Studio
    Subscribe to get Artificiality delivered to your email
    Learn about our book Make Better Decisions and buy it on Amazon
    Thanks to Jonathan Coulton for our music

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com
