Podcasts about artificiality

  • 20 podcasts
  • 81 episodes
  • 50m avg duration
  • 1 episode every other week
  • Latest: Mar 12, 2025



Latest podcast episodes about artificiality

Artificiality
Blaise Aguera y Arcas and Michael Levin: The Computational Foundations of Life and Intelligence

Mar 12, 2025 · 70:14


In this remarkable conversation, Michael Levin (Tufts University) and Blaise Aguera y Arcas (Google) examine what happens when biology and computation collide at their foundations. Their recent papers—arriving simultaneously yet from distinct intellectual traditions—illuminate how simple rules generate complex behaviors that challenge our understanding of life, intelligence, and agency.

Michael's "Self-Sorting Algorithm" reveals how minimal computational models demonstrate unexpected problem-solving abilities resembling basal intelligence—where just six lines of deterministic code exhibit dynamic adaptability we typically associate with living systems. Meanwhile, Blaise's "Computational Life" investigates how self-replicating programs emerge spontaneously from random interactions in digital environments, evolving complexity without explicit design or guidance.

Their parallel explorations suggest a common thread: information processing underlies both biological and computational systems, forming an endless cycle where information → computation → agency → intelligence → information. This cyclical relationship transcends the traditional boundaries between natural and artificial systems.

The conversation unfolds around several interwoven questions:
- How does genuine agency emerge from simple rule-following components?
- Why might intelligence be more fundamental than life itself?
- How do we recognize cognition in systems that operate unlike human intelligence?
- What constitutes the difference between patterns and the physical substrates expressing them?
- How might symbiosis between humans and synthetic intelligence reshape both?

Perhaps most striking is their shared insight that we may already be surrounded by forms of intelligence we're fundamentally blind to—our inherent biases limiting our ability to recognize cognition that doesn't mirror our own. As Michael notes, "We have a lot of mind blindness based on our evolutionary firmware."

The timing of their complementary work isn't mere coincidence but reflects a cultural inflection point where our understanding of intelligence is expanding beyond anthropocentric models. Their dialogue offers a conceptual framework for navigating a future where the boundaries between biological and synthetic intelligence continue to dissolve—not as opposing forces but as variations on a universal principle of information processing across different substrates.

For anyone interested in the philosophical and practical implications of emergent intelligence—whether in cells, code, or consciousness—this conversation provides intellectual tools for understanding the transformed relationship between humans and technology that lies ahead.

------

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon—at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.
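The show notes don't reproduce Michael's six lines of code, but the spirit of such minimal models is easy to sketch: a sort with no central controller, where each element repeatedly applies a single local rule. Here is a hypothetical Python illustration (ours, not Levin's actual algorithm—the function name and the randomized activation order are our assumptions):

```python
import random

def self_sort(cells):
    """Decentralized bubble sort: each 'cell' acts locally, in random order.

    A hypothetical illustration of the kind of minimal model discussed in
    the episode--not the code from the paper itself.
    """
    moved = True
    while moved:
        moved = False
        # Each cell takes a turn in random order; there is no central controller.
        for i in random.sample(range(len(cells) - 1), len(cells) - 1):
            if cells[i] > cells[i + 1]:  # local rule: swap with right neighbor
                cells[i], cells[i + 1] = cells[i + 1], cells[i]
                moved = True
    return cells

print(self_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```

Run as-is, it sorts any list; the interest for the episode's discussion is that the ordered outcome emerges entirely from local interactions, with no element ever having a global view.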

Artificiality
Maggie Jackson: Embracing Uncertainty

Mar 7, 2025 · 60:08


In this episode, we welcome Maggie Jackson, whose latest book, Uncertain, has become essential reading for navigating today's complex world. Known for her groundbreaking work on attention and distraction, Maggie now turns her focus to uncertainty—not as a problem to be solved, but as a skill to be cultivated.

Note: Uncertain won an Artificiality Book Award in 2024—check out our review here: https://www.artificiality.world/artificiality-book-awards-2024/

In the interview, we explore the neuroscience of uncertainty, the cultural biases that make us crave certainty, and why our discomfort with the unknown may be holding us back. Maggie unpacks the two core types of uncertainty—what we can't know and what we don't yet know—and explains why understanding this distinction is crucial for thinking well in the digital age.

Our conversation also explores the implications of AI: as technology increasingly mediates our reality, how do we remain critical thinkers? How do we resist the illusion of certainty in a world of algorithmically generated answers?

Maggie's insights challenge us to reframe uncertainty—not as fear, but as an opportunity for discovery, adaptability, and even creativity. If you've ever felt overwhelmed by ambiguity or pressured to always have the "right" answer, this episode offers a refreshing perspective on why being uncertain might be one of our greatest human strengths.

Links:
Maggie: https://www.maggie-jackson.com/
Uncertain: https://www.prometheusbooks.com/9781633889194/uncertain/

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon—at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.

Artificiality
Greg Epstein: Tech Agnostic

Mar 6, 2025 · 58:40


In this episode, we talk with Greg Epstein—humanist chaplain at Harvard and MIT, bestselling author, and a leading voice on the intersection of technology, ethics, and belief systems. Greg's latest book, Tech Agnostic, offers a provocative argument: Silicon Valley isn't just a powerful industry—it has become the dominant religion of our time.

Note: Tech Agnostic won an Artificiality Book Award in 2024—check out our review here.

In this interview, we explore the deep parallels between big tech and organized religion, from sacred texts and prophets to digital congregations and AI-driven eschatology. The conversation digs into digital Puritanism, the "unwitting worshipers" at tech's altars, and the theological implications of AI doomerism.

But this isn't just a critique—it's a call for a Reformation. Greg lays out a path toward a more humane and ethical future for technology, one that resists unchecked power and prioritizes human values over digital dogma.

Join us for a thought-provoking conversation on faith, fear, and the future of being human in an age where technology defines what we believe in.

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon—at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.

Artificiality
D. Graham Burnett: Attention and much more...

Feb 27, 2025 · 72:25


D. Graham Burnett will tell you his day job is professor of the history of science at Princeton University. He is also co-founder of the Strother School of Radical Attention and has been associated with the Friends of Attention since 2018. But none of those positions adequately describes Graham.

His bio says that he "works at the intersection of historical inquiry and artistic practice." He writes, he performs, he makes things. He describes himself as an attention activist. Perhaps most importantly for us, Graham helps you see the world differently—and more clearly.

Graham has powerful views on the effect of technology on our attention. We often riff on his idea that technology has fracked our attention into little commoditizable bits. His work has strongly influenced our concern about what might happen if the same extractive practices of the attention economy are applied to the future AI-powered intimacy economy.

We were thrilled to have Graham on the pod for a wide-ranging conversation about attention, intimacy, and much more.

Links:
https://dgrahamburnett.net
https://www.schoolofattention.org
https://www.friendsofattention.net

---

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email: https://www.artificiality.world

Thanks to Jonathan Coulton for our music.

Artificiality
Artificiality Keynote at the Imagining Summit 2024

Jan 28, 2025 · 14:46


Our opening keynote from the Imagining Summit held in October 2024 in Bend, Oregon. Join us for the next Artificiality Summit on October 23-25, 2025!

Read about the 2024 Summit here: https://www.artificiality.world/the-imagining-summit-we-imagined-and-hoped-and-we-cant-wait-for-next-year-2/

And join us for the 2025 Summit here: https://www.artificiality.world/summit/

Artificiality
Hans Block & Moritz Riesewieck: Eternal You

Jan 25, 2025 · 44:51


We're excited to welcome writers and directors Hans Block and Moritz Riesewieck to the podcast. Their debut film, 'The Cleaners,' about the shadow industry of digital censorship, premiered at the Sundance Film Festival in 2018 and has since won numerous international awards and been screened at more than 70 international film festivals.

We invited Hans and Moritz to the podcast to talk about their latest film, Eternal You, which examines the story of people who live on as digital replicants—and the people who keep them living on.

We found the film to be quite powerful—at times inspiring, at others disturbing and distressing. Can a generative ghost help people through their grief, or trap them in it? Is falling for a digital replica healthy or harmful? Are the companies creating these technologies benefiting their users or extracting from them?

Eternal You is a powerful and important film. We highly recommend taking the time to watch it—and allowing time to digest and consider. Hans and Moritz have done a brilliant job exploring a challenging and delicate topic with kindness and care. Bravo.

------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
AI Agents & the Future of Human Experience + Always On AI Wearables + Artificiality Updates for 2025

Jan 17, 2025 · 27:14


Science Briefing: What AI Agents Tell Us About the Future of Human Experience

* What These Papers Highlight
- AI agents are improving but remain far from capable of replacing humans at most tasks. Even the best models fail at simple things humans find intuitive, like handling social interactions or navigating pop-ups.
- One paper benchmarks agent performance on workplace-like tasks, showing just 24% success even on simple tasks. The other argues that agents alone aren't enough—we need a broader system to make them useful.

* Why This Matters
- Human Compatibility: Agents don't just need to complete tasks—they need to work in ways that humans trust and find relatable.
- New Ecosystems: Instead of relying on better agents alone, we might need personalized digital "Sims" that act as go-betweens, understanding us and adapting to our preferences.
- Humor in Failure: From renaming a coworker to "solve" a problem to endlessly struggling with pop-ups, these failures highlight how far AI still is from grasping human context.

* What's Interesting
- Humans vs. Machines: AI performs better on coding than on "easier" tasks like scheduling or teamwork. Why? It's great at structure, bad at messiness.
- Sims as a Bridge: The idea of digital versions of ourselves (Sims) managing agents for us could change how we relate to technology, making it feel less like a tool and more like a collaborator.
- Impact on Trust: The future of agents will hinge on whether they can align with human values, privacy, and quirks—not just perform better technically.

* What's Next for Agents
- Can agents learn to navigate our complexity, like social norms or context-sensitive decisions?
- Will ecosystems with Sims and Assistants make AI feel more human—and less robotic?
- How will trust and personalization shape whether people actually adopt these systems?

Product Briefing: Always On AI Wearables

* What's New
- New AI wearables launched at CES 2025 that continuously listen. From earbuds (HumanPods) to wristbands (Bee Pioneer) to stick-it-to-your-head pods (Omi), these cheap hardware devices are attempting to be your always-listening assistants.

* Why This Matters
- From Wake Words to Always-On: These devices listen passively—no activation required—so the user must opt out by muting rather than opting in.
- Privacy? Pfft: These devices are small enough to hide and record without anyone knowing—and the Omi only turns on a light when it is not recording.
- Razor-Razorblade Model: With hardware priced below $100, these devices are priced to allow easy experimentation—the value is in the software subscription.

* What's Interesting
- Mind-Reading?: Omi claims to detect brain signals, allowing users to think their commands instead of speaking.
- It's About Apps: The app store is back as a business model. But are these startups ready for the challenge?
- Memory Prosthetics: These devices record, transcribe, and summarize everything—generating to-do lists and more.

* The Human Experience
- AI as a Second Self?: These devices don't just assist; they remember, organize, and anticipate—how will that reshape how we interact with and recall our own experiences?
- Can We Still Forget?: If everything in our lives is logged and searchable, do we lose the ability to let go?
- Context Collapse: AI may summarize what it hears, but can it understand the complexity of human relationships, emotions, and social cues?

Artificiality
James Boyle: The Line—AI And the Future of Personhood

Sep 28, 2024 · 58:04


We're excited to welcome Jamie Boyle to the podcast. Jamie is a law professor and author of the thought-provoking book The Line: AI and the Future of Personhood.

In The Line, Jamie challenges our assumptions about personhood and humanity, arguing that these boundaries are more fluid than traditionally believed. He explores diverse contexts like animal rights, corporate personhood, and AI development to illustrate how debates around personhood permeate philosophy, law, art, and morality.

Jamie uses fascinating examples from science fiction, legal history, and philosophy to illustrate the challenges we face in defining the rights and moral status of artificial entities. He argues that grappling with these questions may lead to a profound re-examination of human identity and consciousness.

What's particularly compelling about Jamie's approach is how he frames this as a journey of moral expansion, drawing parallels to how we've expanded our circle of empathy in the past. He also offers surprising insights into legal history, revealing how corporate personhood emerged more by accident than design—a cautionary tale as we consider AI rights.

We believe this book is both ahead of its time and right on time. It sharpens our understanding of difficult concepts—namely, that the boundaries between organic and synthetic are blurring, creating profound existential challenges we need to prepare for now.

To quote Jamie from The Line: "Grappling with the question of synthetic others may bring about a reexamination of the nature of human identity and consciousness. I want to stress the potential magnitude of that reexamination. This process may offer challenges to our self-conception unparalleled since secular philosophers declared that we would have to learn to live with a god-shaped hole at the center of the universe."

Let's dive into our conversation with Jamie Boyle.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Shannon Vallor: The AI Mirror

Sep 13, 2024 · 56:34


We're excited to welcome to the podcast Shannon Vallor, professor of ethics and technology at the University of Edinburgh, and the author of The AI Mirror. In her book, Shannon invites us to rethink AI—not as a futuristic force propelling us forward, but as a reflection of our past, capturing both our human triumphs and flaws in ways that shape our present reality.

In The AI Mirror, Shannon uses the powerful metaphor of a mirror to illustrate the nature of AI. She argues that AI doesn't represent a new intelligence; rather, it reflects human cognition in all its complexity, limitations, and distortions. Like a mirror, AI is backward-looking, constrained by the data we've already provided it. It amplifies our biases and misunderstandings, giving us back a shallow, albeit impressive, reflection of our intelligence.

We think this is one of the best books on AI for a general audience published this year. Shannon's mirror metaphor does more than just critique AI—it reassures. By casting AI as a reflection rather than an independent force, she validates a crucial distinction: AI may be an impressive tool, but it's still just that—a mirror of our past. Humanity, Shannon suggests, remains something separate, capable of innovation and growth beyond the confines of what these systems can reflect. This insight offers a refreshing confidence amidst the usual AI anxieties: the real power, and responsibility, remains with us.

Let's dive into our conversation with Shannon Vallor.

-----------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Matt Beane: The Skill Code

Aug 30, 2024 · 55:34


We're excited to welcome to the podcast Matt Beane, Assistant Professor at UC Santa Barbara and the author of the book "The Skill Code: How to Save Human Ability in an Age of Intelligent Machines."

Matt's research investigates how AI is changing the traditional apprenticeship model, creating a tension between short-term performance gains and long-term skill development. His work has particularly focused on the relationship between junior and senior surgeons in the operating theater. As he told us, "In robotic surgery, I was seeing that the way technology was being handled in the operating room was assassinating this relationship." He observed that junior surgeons now often just set up the robot and watch the senior surgeon operate for hours, epitomizing a broader trend where AI and advanced technologies are reshaping how we transfer skills from experts to novices.

In "The Skill Code," Matt argues that three key elements are essential for developing expertise: challenge, complexity, and connection. He points out that real learning often involves discomfort, saying, "Everyone intuitively knows when you really learned something in your life. It was not exactly a pleasant experience, right?" Matt's research shows that while AI can significantly boost productivity, it may be undermining critical aspects of skill development. He warns that the traditional model of "See one, do one, teach one" is becoming "See one, and if-you're-lucky do one, and not-on-your-life teach one."

In our conversation, we explore these insights and discuss how we might preserve human ability in an age of intelligent machines. Let's dive into our conversation with Matt Beane on the future of human skill in an AI-driven world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Emily M. Bender: AI, Linguistics, Parrots, and more!

Aug 2, 2024 · 57:18


We're excited to welcome to the podcast Emily M. Bender, professor of computational linguistics at the University of Washington. As our listeners know, we enjoy tapping expertise in fields adjacent to the intersection of humans and AI. We find Emily's expertise in linguistics particularly important for understanding the capabilities and limitations of large language models—and that's why we were eager to talk with her.

Emily is perhaps best known in the AI community for coining the term "stochastic parrots" to describe these models, highlighting their ability to mimic human language without true understanding. In her paper "On the Dangers of Stochastic Parrots," Emily and her co-authors raised crucial questions about the environmental, financial, and social costs of developing ever-larger language models. Emily has been a vocal critic of AI hype, and her work has been pivotal in sparking critical discussions about the direction of AI research and development.

In this conversation, we explore the issues of current AI systems with a particular focus on Emily's view as a computational linguist. We also discuss Emily's recent research on the challenges of using AI in search engines and information retrieval systems, and her description of large language models as synthetic text extruding machines.

Let's dive into our conversation with Emily Bender.

----------------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Tech Talk with Mathew Dickerson
Nascar Next-Gen Green Machine, Spot Authentic or Artificial AI in Videos and AI Animates the Afterlife.

Jul 28, 2024 · 53:29


Electric Excitement: Nascar's Next-Gen Green Machine Gears Up.
AI Analysis: Authenticity or Artificiality in Videos?
Famous Figures' Voices: AI Animates the Afterlife.
AI Accelerates Ahead: Gran Turismo's Technological Triumph.
Thermal Tech Triumph: Transforming Traffic Safety.
Skyward Surfing: Aerolane's Airborne Innovation.
Laser-Loaded 3D Printing: Mizzou's Marvel of Multi-Material Manufacturing.
Mind-Mapping Mystery: Macaques, MRI and Miraculous Image Reconstructions.
Brain Bots: Brilliant Bioengineered Brains Boost Robotic Brilliance.

Artificiality
John Havens: Heartificial Intelligence

Jul 13, 2024 · 62:29


We're excited to welcome to the podcast John Havens, a multifaceted thinker at the intersection of technology, ethics, and sustainability. John's journey has taken him from professional acting to becoming a thought leader in AI ethics and human wellbeing.

In his 2016 book, "Heartificial Intelligence: Embracing Our Humanity to Maximize Machines," John presents a thought-provoking examination of humanity's relationship with AI. He introduces the concept of "codifying our values"—our crucial need as a species to define and understand our own ethics before we entrust machines to make decisions for us. Through an interplay of fictional vignettes and real-world examples, the book illuminates the fundamental interplay between human values and machine intelligence, arguing that while AI can measure and improve wellbeing, it cannot automate it. John advocates for greater investment in understanding our own values and ethics to better navigate our relationship with increasingly sophisticated AI systems.

In this conversation, we dive into the key ideas from "Heartificial Intelligence" and their profound implications for the future of both human and artificial intelligence. We explore questions like:
- What are the core components of human values that AI systems need to understand?
- How can we design AI systems to augment rather than replace human decision-making?
- Why has the field of AI ethics lagged behind technological development, and what role can positive psychology play in bridging this gap?
- Should we be concerned about AI systems usurping our ability to define our own values, or are there inherent limits to what machines can understand about human ethics?

Let's dive into our conversation with John Havens.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Leslie Valiant: Educability

Jun 22, 2024 · 56:41


We're excited to welcome to the podcast Leslie Valiant, a pioneering computer scientist and Turing Award winner renowned for his groundbreaking work in machine learning and computational learning theory. In his seminal 1983 paper, Leslie introduced the concept of Probably Approximately Correct or PAC learning, kick-starting a new era of research into what machines can learn.

Now, in his latest book, The Importance of Being Educable: A New Theory of Human Uniqueness, Leslie builds upon his previous work to present a thought-provoking examination of what truly sets human intelligence apart. He introduces the concept of "educability"—our unparalleled ability as a species to absorb, apply, and share knowledge. Through an interplay of abstract learning algorithms and relatable examples, the book illuminates the fundamental differences between human and machine learning, arguing that while learning is computable, today's AI is still a far cry from human-level educability. Leslie advocates for greater investment in the science of learning and education to better understand and cultivate our species' unique intellectual gifts.

In this conversation, we dive deep into the key ideas from The Importance of Being Educable and their profound implications for the future of both human and artificial intelligence. We explore questions like:
- What are the core components of educability that make human intelligence special?
- How can we design AI systems to augment rather than replace human learning?
- Why has the science of education lagged behind other fields, and what role can AI play in accelerating pedagogical research and practice?
- Should we be concerned about a potential "intelligence explosion" as machines grow more sophisticated, or are there limits to the power of AI?

Let's dive into our conversation with Leslie Valiant.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.
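For listeners new to PAC learning, the formal idea Valiant introduced is compact: a concept class is learnable if some algorithm, given polynomially many random examples, outputs a hypothesis that is probably (with confidence 1 − δ) approximately (within error ε) correct. A standard textbook formulation (the notation is ours, not from the episode):

```latex
% C is PAC-learnable if there exist an algorithm A and a polynomial p such that
% for all eps, delta in (0,1), every distribution D, and every target c in C,
% given m >= p(1/eps, 1/delta, n, size(c)) i.i.d. examples S drawn from D,
\[
\Pr_{S \sim D^{m}}\!\left[\operatorname{err}_{D}\big(A(S)\big) \le \varepsilon\right] \ge 1 - \delta .
\]
```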

Artificiality
Jonathan Feinstein: The Context of Creativity

Jun 8, 2024 · 53:31


We're excited to welcome to the podcast Jonathan Feinstein, professor at the Yale School of Management and author of Creativity in Large-Scale Contexts: Guiding Creative Engagement and Exploration.

Our interest in creativity is broader than the context of the creative professions like art, design, and music. We see creativity as the foundation of how we move ahead as a species, including our culture, science, and innovation. We're interested in the huge combinatorial space of creativity, linked together by complex networks. And that interest led us to Jonathan.

Through his research and interviews with a wide range of creative individuals, from artists and writers to scientists and entrepreneurs, Jonathan has developed a framework for understanding the creative process as an unfolding journey over time. He introduces key concepts such as guiding conceptions, guiding principles, and the notion of finding "golden seeds" amidst the vast landscape of information and experiences that shape our creative context.

By looking at creativity mathematically, Jonathan has exposed the tremendous beauty of the creative process as being intuitive, exploratory, and supported by math and machines and knowledge and structure. He shows how creativity is much broader and more interesting than the stereotypical idea of creativity as simply a singular lightbulb moment.

In our conversation, we explore some of the most surprising and counterintuitive findings from Jonathan's work, how his ideas challenge conventional wisdom about creativity, and the implications for individuals and organizations seeking to innovate in an increasingly AI-driven world.

Let's dive into our conversation with Jonathan Feinstein.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Karaitiana Taiuru: Indigenous AI

May 25, 2024 · 48:01


We're excited to welcome to the podcast Karaitiana Taiuru. Dr Taiuru is a leading authority and a highly accomplished visionary Māori technology ethicist specialising in Māori rights with AI, Māori Data Sovereignty and Governance with emerging digital technologies and biological sciences. Karaitiana has been a champion for Māori cultural and intellectual property rights in the digital space since the late 1990s.

With the recent emergence of AI into the mainstream, Karaitiana sees both opportunities and risks for indigenous peoples like the Māori. He believes AI can either be a tool for further colonization and cultural appropriation, or it can be harnessed to empower and revitalize indigenous languages, knowledge, and communities.

In our conversation, Karaitiana shares his vision for incorporating Māori culture, values, and knowledge into the development of AI technologies in a way that respects data sovereignty. We explore the importance of Māori representation in the tech sector, the role of AI in language and cultural preservation, and how indigenous peoples around the world can collaborate to shape the future of AI.

Karaitiana offers a truly unique and thought-provoking perspective that I believe is crucial as we grapple with the societal implications of artificial intelligence. I learned a tremendous amount from our conversation and I'm sure you will too.

Let's dive into our conversation with Karaitiana Taiuru.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Omri Allouche: Gong AI

May 4, 2024 · 40:47


We're excited to welcome to the podcast Omri Allouche, the VP of Research at Gong, an AI-driven revenue intelligence platform for B2B sales teams. Omri has had a fascinating career journey, with a PhD in computational ecology before moving into the world of AI startups. At Gong, Omri leads research into how AI and machine learning can transform the way sales teams operate.

In our conversation today, we'll explore Omri's perspective on managing AI research and innovation. We'll discuss Gong's approach to analyzing sales conversations at scale, and the challenges of building AI systems that sales reps can trust. Omri will share how Gong aims to empower sales professionals by automating mundane tasks so they can focus on building relationships and thinking strategically.

Let's dive into our conversation with Omri Allouche.

About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. Subscribe for free at https://www.artificiality.world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world.

#ai #artificialintelligence #generativeai #airesearch #complexity #futureofai

Artificiality
Susannah Fox: Rebel Health

Apr 20, 2024 · 48:52


We're excited to welcome to the podcast Susannah Fox, a renowned researcher who has spent over 20 years studying how patients and caregivers use the internet to gather information and support each other. Susannah has collected countless stories from the frontlines of healthcare and has keen insights into how patients are stepping into their power to drive change.

Susannah recently published a book called "Rebel Health: A Field Guide to the Patient-Led Revolution in Medical Care." In it, she introduces four key personas that represent different ways patients and caregivers are shaking up the status quo in healthcare: seekers, networkers, solvers, and champions. The book aims to bridge the divide between the leaders at the top of the healthcare system and the patients, survivors, and caregivers on the ground who often have crucial information and ideas that go unnoticed. By profiling examples of patient-led innovation, Susannah hopes to inspire healthcare to become more participatory.

In our conversation, we dive into the insights from Susannah's decades of research, hear some compelling stories of patients, and discuss how medicine can evolve to embrace the power of peer-to-peer healthcare. As you'll hear, this is a highly personal episode, as Susannah's work resonates with both of us and our individual and shared health experiences.

Let's dive into our conversation with Susannah Fox.

---------------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Angel Acosta: Contemplation, Healing, and AI

Mar 2, 2024 · 43:22


We're excited to welcome to the podcast Dr. Angel Acosta, an expert on healing-centered education and leadership. Angel runs the Acosta Institute, which helps communities process trauma and build environments for people to thrive. He also facilitates leadership programs at the Garrison Institute that support the next generation of contemplative leaders. With his background in social sciences, curriculum design, and adult education, Angel has been thinking deeply about how artificial intelligence intersects with mindfulness, social justice, and education.

In our conversation, we explore how AI can help or hinder our capacity for contemplation and healing. For example, does offloading cognitive tasks to AI tools like GPT create more mental space for mindfulness? How do we ensure these technologies don't increase anxiety and threaten our sense of self?

We also discuss the promise and perils of AI for transforming education. What roles might AI assistants play in helping educators be more present with students? How can we design assignments that account for AI without compromising learning? What would a decolonized curriculum enabled by AI look like?

And we envision more grounded, humanistic uses of rapidly evolving AI—from thinking of it as "ecological technology" interdependent with the natural world, to leveraging its pattern recognition in service of collective healing and wisdom. What guiding principles do we need for AI that enhances both efficiency and humanity? How can we consciously harness it to create the conditions for people and communities to thrive holistically?

We'd like to thank our friends at the House of Beautiful Business for sparking our relationship with Angel—we highly recommend you check out their events and join their community.

Let's dive into our conversation with Angel Acosta.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Board Game Faith
Episode 49: Book Club: 4,000 Weeks

Feb 25, 2024 · 62:38


Oliver Burkeman's 4,000 Weeks: Time Management for Mortals (2022) is our pick for our monthly book club. We loved how it made us think about our modern drive to master time and efficiency, and how this debilitates human happiness. Rethinking our lives and our use of time means more time for flourishing, games, and play, even if we don't get everything done (because we never will).

We explore the concept of time and our relationship with it, highlighting the illusion of time management and the artificiality of modern time. We also discuss the idea of embracing our limits and the futility of trying to battle against time. Overall, the book challenges the notion that we can control time and encourages a deeper reflection on how we spend our limited time on Earth. It delves into the flawed attempts to be efficient and the instrumentalization of time in modern society.

The conversation also highlights the importance of living in the present moment and the dangers of constantly living for the future. It discusses the measurement of time and how it contributes to impatience and restlessness. The conversation draws from various spiritual traditions and emphasizes the need to let go of future expectations. It explores the joy of settling and the joy of missing out, as well as the pressure to choose a path and the depth of commitment. Finally, it emphasizes the importance of focusing on the next step rather than waiting for the perfect opportunity.

We emphasize the need to make time for play and challenge societal expectations that prioritize work over play. We explore the idea that play is an end in itself and can resist the Protestant work ethic. We also discuss the value of hobbies and the role of play in grounding us in the present moment. Finally, we reflect on the importance of using our time and talents well to make life more luminous for others.

Takeaways:
- Embrace the nature of time and avoid trying to make it something it's not.
- Beware of the dangers of efficiency as an idol and the instrumentalization of time.
- Learn to live in the present moment and let go of future expectations.
- Develop a curiosity and openness towards challenges and problems.
- Settle and commit to a path, finding joy in depth and commitment.
- Break down projects into smaller steps and focus on taking the next right step.
- Make time for play and challenge societal expectations that prioritize work over play.
- Recognize that play is an end in itself and can resist the Protestant work ethic.
- Engage in hobbies and embrace the value of weird and unique interests.
- Use your time and talents well to make life more luminous for others.
Chapters:
00:00 Introduction: The Battle with Time
03:13 Lent and Time
08:23 Animals and Time
11:27 The Illusion of Time Management
13:29 4,000 Weeks: Time Management for Mortals
19:36 The Artificiality of Time
21:20 The Battle with Time
22:43 Embracing the Nature of Time
23:19 The Flawed Attempt of Efficiency
24:26 The Instrumentalization of Time
25:33 Living for the Future
26:37 The Present Moment
27:31 The Measurement of Time
28:38 Impatience and Restlessness
29:52 Expectations and Frustrations
30:50 Drawing from Spiritual Traditions
31:47 Letting Go of Future Expectations
32:28 The Joy of Settling
35:20 The Joy of Missing Out
36:42 The Pressure to Choose a Path
39:38 The Depth of Commitment
40:55 Focusing on the Next Step
41:47 Taking the Next Right Step
42:21 Breaking Down Projects into Smaller Steps
43:04 Making Time for Play
43:35 Play as an End in Itself
44:02 Letting Go of Societal Expectations
45:18 The Importance of Hobbies
46:16 The Present Moment in Play
47:26 Resisting the Protestant Work Ethic
48:37 The Value of AT-like Activities
49:24 Embracing Weird Hobbies
56:56 Using Time and Talents Well

CALL TO ACTION:
- Subscribe to our newsletter (https://buttondown.email/BoardGameFaith)
- Support us on Patreon (https://www.patreon.com/boardgamefaith/)
- Interact with us on Instagram (https://www.instagram.com/boardgamefaith/)
- Join us on Discord (https://discord.gg/MRqDXEJZ)
- Chat with us on Wavelength (iOS, MacOS, and iPadOS only) (https://wavelength.app/invite/AGSmNhIYS5B#ABhy7aXOO04TO6HTS4lelw--)

Artificiality
Doug Belshaw: Serendipity Surface & AI

Feb 23, 2024 · 45:28


We're excited to welcome Doug Belshaw to the show today. Doug is a founding member of the We Are Open Co-op, which helps organizations with sensemaking and digital transformation. Doug coined the term "serendipity surface" to describe cultivating an attitude of curiosity and increasing the chance encounters we have by putting ourselves out there. We adopted the term quite some time ago and were eager to talk with Doug about how he thinks about serendipity surfaces in the age of generative AI.

As former Head of Web Literacy at Mozilla, now pursuing a master's degree in systems thinking, Doug has a wealth of knowledge on topics spanning education, technology, productivity, and more. In our conversation today, we explore concepts like productive ambiguity, cognitive ease, and rewilding your attention. Doug shares perspectives from his unique career journey as well as personal stories and projects exemplifying the creative potential of AI. We think you'll find this a thought-provoking discussion on human-AI collaboration, lifelong learning, digital literacy, ambiguity, and the future of work.

Let's dive into our conversation with Doug Belshaw.

Key points:
- Doug coined the term Serendipity Surface to describe cultivating curiosity, increasing random encounters and possibilities by putting ourselves out there. He sees it as the opposite of reducing "attack surface" in security; it's about expanding opportunities.
- Doug shares an example of prompting ChatGPT extensively over 24 hours with a flood risk report, personas, and perspectives to decide on a complex house purchase. This shows the creative potential of using AI tools to augment human thinking and decisions.
- Doug discusses the sweet spot of productive ambiguity, where concepts resonate with a common meaning yet leave room for interpretation by individuals based on their contexts. It encourages engagement and spreading of ideas.
- As an educator, Doug advocates thoughtfully adopting emerging tech to develop engaged, literate, and curious learners rather than reactively banning tools. Friction facilitates learning.
- Ultimately, Doug sees potential for AI collaboration that brings our humanity, empathy, creativity, and curiosity to the forefront if we prompt and apply these tools judiciously.

Links for Doug Belshaw:
- Dr Doug Belshaw
- We Are Open Cooperative
- Thought Shrapnel
- Open Thinkering
- Ambiguiti.es

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

The Schlock and Awe Podcast
S&A 157 The Beauty of Artificiality: Jupiter Ascending & Matrix Resurrections W/ Chris Barreras

Feb 13, 2024 · 169:20


This week on S&A, Lindsay is joined by Imperial Scum co-host Chris Barreras as they plug themselves in for a later-era Wachowski double feature: Lana and Lilly's Jupiter Ascending (2015) and Lana's The Matrix Resurrections. This is a Double of Romance and Beautiful Artificiality. Listen to Schlock & Awe on your favourite podcast app.

Artificiality
Tyler Marghetis: The Leaps of Human Imagination

Feb 6, 2024 · 50:42


We're excited to welcome Tyler Marghetis, Assistant Professor of Cognitive & Information Sciences at the University of California, Merced, to the show today. Tyler studies what he calls the "lulls and leaps" or "ruts and ruptures" of human imagination and experience. He's fascinated by how we as humans can get stuck in certain patterns of thinking and acting, but then also occasionally experience radical transformations in our perspectives.

In our conversation, Tyler shares some of his lab's fascinating research into understanding and even predicting these creative breakthroughs and paradigm shifts. You'll hear about how he's using AI tools to analyze patterns in things like Picasso's entire body of work over his career. Tyler explains why he believes isolation and slowness are actually key ingredients for enabling many of history's greatest creative leaps. And he shares how his backgrounds in high-performance sports and in the LGBTQ community shape his inclusive approach to running his university research lab.

It's a wide-ranging and insightful discussion about the complexity of human creativity and innovation. Let's dive into our interview with Tyler Marghetis.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Ed Sim: AI Venture Capital

Jan 23, 2024 · 50:28


Few understand how to anticipate major technology shifts in the enterprise better than today's guest, Ed Sim. Ed is a pioneer in the world of venture capital, specifically focusing on enterprise software and infrastructure since 1996. He founded Boldstart in 2010 to invest at the earliest stages of enterprise software companies, growing the firm from $1M to around $375M today.

So where does an experienced investor who has seen countless tech waves come and go place his bets in this new AI-first future? That's the key topic we dive into today. While AI forms a core part of our dialogue, Ed emphasizes that he doesn't look at pitches and go, "Oh, AI, I need to invest in that." Rather, he tries to see if founders have identified a real pain point, have a unique approach to solving it, and can clearly articulate how they will provide a significant improvement over the status quo. AI is an important component, of course, but it isn't a reason to invest on its own.

With that framing in mind, Ed shares where he is most excited to invest in light of recent generative AI breakthroughs. Unsurprisingly, AI security ranks high on his list, given enterprises' skittishness around adopting any technology that could compromise sensitive data or infrastructure. Ed saw this need early, backing a startup called Protect AI in March 2022 that focuses specifically on monitoring and certifying the security of AI systems.

The implications of AI have branched into virtually every sector, but Ed reminds us that as investors and builders, we must stay grounded in solving real problems rather than just chasing the shiny new thing.

Key Points:
- Ed Sim started Boldstart Ventures in 2010 to provide early-stage funding for enterprise startups, writing smaller checks than typical VC firms. The firm now manages a nearly $200 million main fund and a $175 million opportunity fund.
- Generative AI is an exciting new technology, but the key is backing founders who are solving real problems for end users in a unique way that is 10x better than current solutions. AI is just the underlying technology.
- AI security is critical for enterprise adoption. Ed invested early in Protect AI, which helps monitor AI models for security, privacy, and compliance issues. AI security will be key to scaling adoption.
- There are still open questions around data governance with large language models that access sensitive company data. Approaches that check governance policies before providing answers are the safest for now.
- Factors like inference cost, subscription fatigue, and proving ROI will impact how quickly some consumer generative AI applications gain traction. Creative solutions around caching, pricing models, and hybrid human+AI loops can help.
- There will be opportunities related to embedding expertise into systems to empower junior and senior employees. Tools like GitHub Copilot show potential to augment technical skills.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Rodrigo Liang: SambaNova and Generative AI in the Enterprise

Jan 16, 2024 · 56:53


One of our research obsessions is Edge AI, through which we study opportunities to build and deploy AI on computing devices at the edge of a network. The premise is that AI in the cloud benefits from scale but is challenged by cost and privacy; Edge AI solves many of these challenges by eliminating cloud computing costs and keeping data within secure environments.

Given this interest, we were excited to talk with Rodrigo Liang, the Co-Founder and CEO of SambaNova Systems, which has built a platform to deliver enterprise-grade chips, software, and models in a fully integrated system, purpose-built for AI.

In this interview, Rodrigo discusses how his company is enabling enterprises to adopt AI in a secure, customizable way that builds long-term value by building AI assets. Their full-stack solutions aim to simplify AI model building and deployment, especially by leveraging open source frameworks and using modular, fine-tuned expert models tailored to clients' private data.

Key Points:
- SambaNova Systems aims to help companies adopt AI technology, particularly in enterprise environments. It provides full-stack AI solutions—hardware, software, models, and more—to simplify adoption.
- The company's offerings are designed to enable companies to leverage AI while maintaining data privacy and security. A modular approach provides the flexibility to adapt to diverse enterprise needs.
- SambaNova takes an "AI asset" approach focused on creating long-term value rather than just providing "AI tools."
- A focus on open source models provides diversity of technology while reducing vendor lock-in.
- The company's software stack enables fine-tuning of granular models on customer data, creating a multitude of AI experts to serve the enterprise.
- Unlimited use encourages experimentation without the cost challenges of public cloud AI.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Best of: Barbara Tversky & Spatial Cognition

Jan 11, 2024 · 66:19


One of our long-time subscribers recently said to us: "What I love about you is that you're regularly talking about things three years ahead of everyone else." That inspired us to look back through our catalog of conversations to see which ones we think are most relevant now.

Today, we're revisiting one of our most thought-provoking episodes, originally recorded in April 2022, featuring Barbara Tversky, the author of "Mind in Motion: How Action Shapes Thought." This episode is a great way to start 2024 because we are all about to experience what are known as Large Multimodal Models, or LMMs—models which go beyond text and bring in more sensory modalities, including spatial information.

Tversky's insights into spatial reasoning and embodied cognition are more relevant than ever in the era of multimodal models in AI. These models, which combine text, images, and other data types, mirror our human ability to process information across various sensory inputs. The parallels between Tversky's research and Large Multimodal Models (LMMs) in AI are striking. Just as our physical interactions with the world shape our cognitive processes, these AI models learn and adapt by integrating diverse data types, offering a more holistic understanding of the world. Her work sheds light on how we might improve AI's ability to 'think' and 'reason' spatially, enhancing its application in fields ranging from navigation systems to virtual reality.

As we revisit our interview with Tversky, we're reminded of the importance of considering human-like spatial reasoning and embodied cognition in advancing AI technology. Join us as we explore these intriguing concepts with Barbara Tversky, uncovering the essential role of spatial reasoning in both human cognition and artificial intelligence.

Barbara Tversky is an emerita professor of psychology at Stanford University and a professor of psychology at Teachers College, Columbia University. She is also the President of the Association for Psychological Science. Barbara has published over 200 scholarly articles about memory, spatial thinking, design, and creativity, and regularly speaks about embodied cognition at interdisciplinary conferences and workshops around the world. She lives in New York.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.

Artificiality
Stephen Fleming: Consciousness and AI

Dec 13, 2023 · 62:01


In this episode, we speak with cognitive neuroscientist Stephen Fleming about theories of consciousness and how they relate to artificial intelligence. We discuss key concepts like global workspace theory, higher order theories, computational functionalism, and how neuroscience research on consciousness in humans can inform our understanding of whether machines may ever achieve consciousness. In particular, we talk with Steve about a recent ⁠research paper⁠, Consciousness in Artificial Intelligence, which he co-authored with Patrick Butlin, Robert Long, Yoshua Bengio, and several others. Steve provides an overview of different perspectives from philosophy and psychology on what mechanisms may give rise to consciousness. He explains global and local theories, the idea of a higher order system monitoring lower level representations, and similarities and differences between human and machine intelligence. The conversation explores current limitations in neuroscience for studying consciousness empirically and opportunities for interdisciplinary collaboration between neuroscientists and AI researchers. Key Takeaways: Consciousness and intelligence are separate concepts—you can have one without the other Global workspace theory proposes consciousness arises when information is broadcast to widespread brain areas Higher order theories suggest a higher system monitoring lower representations enables consciousness Computational functionalism looks at information processing rather than biological substrate Attributing intelligence versus attributing experience/consciousness invoke different dimensions of social perception More research needed in neuroscience and social psychology around people's intuitions about machine consciousness ⁠Stephen Fleming⁠ is Professor of Cognitive Neuroscience at the Department of Experimental Psychology, University College London. Steve's work aims to understand the mechanisms supporting human subjective experience and metacognition by employing a combination of psychophysics, brain imaging and computational modeling. He is the author of *⁠Know Thyself*,⁠ a book on the science of metacognition, about which we interviewed him on Artificiality in December of 2021. Episode Notes:  2:13 - Origins of the paper Stephen co-authored on consciousness in artificial intelligence 5:17 - Discussion of demarcating intelligence vs phenomenal consciousness in AI 6:34 - Explanation of computational functionalism and mapping functions between humans and machines 13:42 - Examples of theories like global workspace theory and higher order theories 19:27 - Clarifying when sensory information reaches consciousness under global theories 23:02 - Challenges in precisely defining aspects like the global workspace computationally 28:35 - Connections between higher order theories and generative adversarial networks 30:43 - Ongoing empirical evidence still needed to test higher order theories 36:52 - Iterative process needed to update theories based on advancing neuroscience 40:40 - Open questions remaining despite foundational research on consciousness 46:14 - Mismatch between public perceptions and indicators from neuroscience theories 50:30 - Experiments probing anthropomorphism and consciousness attribution 56:17 - Surprising survey results on public views of AI experience 59:36 - Ethical issues raised if public acceptance diverges from scientific consensus If you enjoy our podcasts, please subscribe and leave a positive rating or comment. 
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Steven Sloman: LLMs and Deliberative Reasoning

Artificiality

Play Episode Listen Later Dec 6, 2023 61:06


If you've used a large language model, you've likely had one or more moments of amazement as the tool immediately responded with impressive content drawn from its massive training set. But you've likely also had moments of confusion or disillusionment as the tool responded with irrelevant or incorrect answers, displaying a lack of reasoning.

A recent research paper from Meta caught our eye because it proposes a new mechanism called System 2 Attention which "leverages the ability of LLMs to reason in natural language and follow instructions in order to decide what to attend to." (A rough sketch of the idea follows at the end of these notes.) The name System 2 is derived from the work of Daniel Kahneman, who in his 2011 book, Thinking, Fast and Slow, differentiated between System 1 thinking as intuitive and near-instantaneous and System 2 thinking as slower and effortful. The Meta paper also references our friend Steven Sloman, who in 1996 made the case for two systems of reasoning: associative and deliberative (or rule-based).

Given our interest in the idea of LLMs being able to help people make better decisions—which often requires more deliberative thinking—we asked Steve to come back on the podcast to get his reaction to this research and to generative AI in general. Yet again, we had a dynamic conversation about human cognition and modern AI, which field is learning what from the other, and a few speculations about the future. We're grateful to Steve for taking the time to talk with us again and hope that he'll join us for a third time when his next book is released sometime in 2024.

Steven Sloman is a professor of cognitive, linguistic, and psychological sciences at Brown University, where he has taught since 1992. He studies how people think, including how we think as a community, a topic he wrote a fantastic book about with Philip Fernbach called The Knowledge Illusion: Why We Never Think Alone. For more about that work, please check out our first interview with Steve from June of 2021.

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our essays, events, newsletter, and podcast interviews with academics, authors, entrepreneurs, and executives. Subscribe at www.artificiality.world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
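As we read the Meta paper, System 2 Attention works in two passes: the model first regenerates the input, keeping only material relevant to the query, and then answers from that filtered rewrite. Below is a rough sketch of that two-pass pattern; the `llm` function is a stand-in for any chat-completion API, and the prompt wording is ours, not Meta's.

```python
def llm(prompt: str) -> str:
    """Stand-in for a chat-completion call to any large language model.
    Replace the body with a real API call; it just echoes here."""
    return f"[model output for: {prompt[:48]}...]"

def system2_attention(context: str, query: str) -> str:
    # Pass 1: ask the model to regenerate the context, keeping only
    # material that is relevant to the query and not leading or opinionated.
    filtered = llm(
        "Extract the parts of the following text that are relevant "
        "and unbiased for answering the question.\n\n"
        f"Text: {context}\n\nQuestion: {query}"
    )
    # Pass 2: answer from the regenerated context only --
    # the slower, deliberate, 'System 2' step.
    return llm(f"Context: {filtered}\n\nQuestion: {query}\n\nAnswer:")

context = "Reviewer A insists the answer is Paris. The 2024 meeting was held in Vienna."
print(system2_attention(context, "Where was the 2024 meeting held?"))
```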

Artificiality
Jai Vipra: Computational Power and AI

Artificiality

Play Episode Listen Later Oct 15, 2023 40:39


Jai Vipra is a research fellow at the AI Now Institute where she focuses on competition issues in frontier AI models. She recently published the report Computational Power and AI, which focuses on compute as a core dependency in building large-scale AI. We found this report to be an important addition to the work covering the generative AI industry because compute is incredibly important but not very well understood. In the report, Jai breaks down the key components of compute, analyzes the supply chain and competitive dynamics, and aggregates all the known economics. In this interview, we talk with Jai about the report, its implications, and her recommendations for industry and policy responses.

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Wendy Wong: We, the Data, and Human Rights

Artificiality

Play Episode Listen Later Oct 8, 2023 58:02


Wendy Wong is a professor of political science and Principal's Research Chair at the University of British Columbia, where she researches and teaches about the governance of emerging technologies, human rights, and civil society/non-state actors.

In this interview, we talk with Wendy about her new book We, the Data: Human Rights in the Digital Age, which is described as "a rallying call for extending human rights beyond our physical selves—and why we need to reboot rights in our data-intensive world." Given the explosion of generative AI and the mass data capture that fuels generative AI models, Wendy's argument for extending human rights to the digital age seems very timely. We talk with her about how human rights might be applied to the age of data, datafication by big tech, individuals as stakeholders in the digital world, and our awe of the human contributions that enable generative AI.

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Chris Summerfield: Natural General Intelligence

Artificiality

Play Episode Listen Later Oct 1, 2023 56:15


Chris Summerfield is a Professor of Cognitive Science at the University of Oxford. His work is concerned with understanding how humans learn and make decisions. He is interested in how humans acquire new concepts or patterns in data, and how they use this information to make decisions in novel settings. He's also a research scientist at DeepMind.

Earlier this year, Chris released a book called Natural General Intelligence: How understanding the brain can help us build AI. This couldn't be more timely given all the talk of AGI. In this episode, we talk with Chris about his work, what he's learned about humans from studying AI, and what he's learned about AI by studying humans. We talk about his aim to provide a bridge between the theories of those who study biological brains and the practice of those who are seeking to build artificial brains, something we find perpetually fascinating.

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe at artificiality.world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Michael Bungay Stanier: How to Work with (Almost) Anyone

Artificiality

Play Episode Listen Later Aug 13, 2023 57:23


Michael Bungay Stanier has an extraordinary talent for distilling the complexity of human relationships into frameworks that are easy to remember and follow—doing so with just the right amount of Australian humor and plenty of vulnerability. Despite his remarkable success with books like The Coaching Habit, The Advice Trap, and How to Begin, Michael never comes across as one of those gurus who think they have all the answers. That mindset comes through perfectly in the title of his newest book, How to Work with (Almost) Anyone—not absolutely anyone, almost anyone. The book is built around five questions for building the best possible relationships, which we have found to be very helpful in our own working relationship.

We have grown to be friends with Michael through our repeated gatherings at the House of Beautiful Business. I know all three of us would encourage all of our listeners and readers to join us at the next House as well.

In this interview, we talk about Michael's new book and how to use a keystone conversation to build the best possible relationship, and we even consider how to apply Michael's frameworks to working with generative AI.

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Synthesis & Generative AI

Artificiality

Play Episode Listen Later Aug 5, 2023 32:43


An exploration of how we might conceptualize the design of AGI within the context of the human left and right brains. The tension between AI and human functioning highlights a unique opportunity for cooperation. AI uses language, abstraction, and analysis, while humans rely on experience, empathy, and metaphor. AI manipulates static elements well but struggles in a changing world.

This leads to two distinct design approaches for future AI and considerations for "artificial general intelligence" (AGI).

One approach focuses on "left-brained" AI—controlling facts with internal consistency while relying on humans for context, meaning, and care. Here, machines serve humans. This path is popular due to the challenge of developing AI that mimics human right hemisphere functions.

However, we want machines that can correct contextual mistakes and understand our intended meanings. The design challenge here lies in connecting highly "left-brained" AI to holistic humans in a way that enhances human capabilities.

Alternatively, we could design AI with asymmetry, mirroring the human brain's evolution. Such AI would provide a synthesized perspective before interacting with a human, applying computational power to intuition and addressing human paradoxes. Some envision this as AGI—an all-knowing synthesis machine.

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Jonathan Coulton: Generative AI, songwriting, and creativity

Artificiality

Play Episode Listen Later Jul 29, 2023 60:59


If you've read our show notes, you'll know that our music was written and performed by Jonathan Coulton. I've known Jonathan for more than 30 years, dating back to when we sang together in college. But that's a story for another day, or perhaps never.

Jonathan spent his first decade after college as a software coder and then, through a bit of happenstance and throwing care to the wind, transitioned to music. In the mid 2000s, he blazed a trail of creating his own career on the internet—without a label or any of the support that musicians normally have. While he was pushing out a new song each week as part of his Thing-A-Week project, he became known as the "internet music-business guy" since he had successfully used the internet to build his career and a dedicated fanbase. He has since released several albums, toured plenty, and launched an annual cruise for his fans.

Throughout his career, technology, and specifically AI, has been a theme—starting with his song Chiron Beta Prime in 2006 about a planet where all humans have been enslaved by uncaring and violent robots. During this interview we talk about his 2017 album Solid State, which is well described by writer Emily Nussbaum: "Coulton's latest album, 'Solid State,' is, like so many breakthrough albums, the product of a raging personal crisis—one that is equally about making music and living online, getting older, and worrying about the apocalypse. A concept album about digital dystopia, it's Coulton's warped meditation on the ugly ways the internet has morphed since 2004. At the same time, it's a musical homage to his earliest Pink Floyd fanhood, a rock-opera about artificial intelligence. It's a worried album by a man hunting for a way to stay hopeful."

In this interview, we talk with Jonathan about how he feels about Solid State now, his reaction to generative AI, and his experiences trying to use generative AI in songwriting. We're grateful we were able to grab Jonathan just before he left on tour with Aimee Mann. We hope you all take time to listen to Solid State and catch him live.

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Is it possible for AI to be meaningful?

Artificiality

Play Episode Listen Later Jul 22, 2023 35:52


An exploration of the intersection of AI, meaning, and human relationships. In this episode, we dive deep into the role of AI in our lives and how it can influence our perception of meaning. We explore how AI, and specifically generative AI, is impacting our collective experiences and the ways we make authentic choices. We discuss the idea of intimacy with AI and the future trajectory of human-AI interaction. We consider the possibility of AI enabling more time for meaningful experiences by taking over less meaningful tasks, but also wonder whether it's possible for AI to truly have a place in human meaning.

Note: According to our research, Doug Belshaw is the original author of the term "serendipity surface." You can find his first post here and a follow-up here. Apologies to Doug for forgetting your name during recording!

About Artificiality from Helen & Dave Edwards:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Existential risk of AI

Artificiality

Play Episode Listen Later Jul 14, 2023 31:59


ChatGPT, DALL-E, Midjourney, Bard, and Bing are here. Many others are coming. At every dinner conversation, classroom lesson, and business meeting, people are talking about AI. There are some big questions that everyone is now seeking answers to, not just the philosophers, ethicists, researchers, and venture folks. It's noisy, complicated, technical, and often agenda-driven.

In this episode, we tackle the question of existential risk. Will AI kill us all? We start by talking about why this question is important at all, and why we are finally tackling it ourselves (since we've largely avoided it for quite some time). We talk about the scenarios that people are worried about and the three premises that underlie this risk:

* We will build an intelligence that will outsmart us
* We will not be able to control it
* It will do things we don't want it to

Join us as we talk about the risk that AI might end humanity. And, if you'd like to dig deeper, subscribe to Artificiality at https://artificiality.substack.com to get all of our content on this topic, including our weekly essay, a gallery of AI images, book recommendations, and more. (Note: the essay will be emailed to subscribers a couple of days after this podcast first airs—thanks for your patience!)

About Artificiality:
Artificiality is dedicated to understanding the collective intelligence of humans and machines. We are grounded in the sciences of artificial intelligence, collective intelligence, complexity, data science, neuroscience, and psychology. We absorb the research at the frontier of the industry so that you can see the future, faster. We bring our knowledge to you through our newsletter, podcast interviews with academics and authors, and video interviews with AI innovators. Subscribe for free at https://artificiality.substack.com.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Values & Generative AI

Artificiality

Play Episode Listen Later Jul 9, 2023 24:27


As Silicon Valley lunges towards creating AI that is considered superior to humans (at times called Artificial General Intelligence or Super-intelligent AI), it does so with the premise that it is possible to encode values in AI so that the AI won't harm us. But values are individual, elusive, and ever-changing. They resist being mathematized. Join us as we discuss human values, how they form, how they change, and why trying to encode them in algorithms is so difficult, if not impossible.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Proper Lookout Podcast
#137 - Art and Artificiality: The rise of ChatGPT

Proper Lookout Podcast

Play Episode Listen Later Jul 5, 2023 14:09


In this episode of The Proper Lookout Podcast, Principal Peter Hunt joins Associate Professor Joseph Suttie to explore the impact of artificial intelligence tools like ChatGPT in the workplace. NOTE: No AI was deployed in the recording of this podcast episode. All thoughts are those of the participants!

Artificiality
Culture & Generative AI

Artificiality

Play Episode Listen Later Jul 2, 2023 31:51


Culture plays a vital role in connecting individuals and communities, enabling us to leverage our unique talents, share knowledge, and solve problems together. However, the rise of an intelligentsia of machine soothsayers highlights the need to consciously design new coherence strategies for the age of machines. Why? Because generative AI is a cultural technology that produces different outcomes depending on its cultural context.

Who will take on this challenge, and how will culture evolve in response to the growing influence of machines? This is the essential question that requires careful consideration as we navigate the complex interplay between human culture and technology, seeking to preserve sonder as something for humans only.

Listen in as we discuss human culture and the impact of generative AI.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

Check out some of our recent publications:

* Mind for our Minds: Culture
* Announcing [Your Team's] Generative AI Summit
* Research brief: C-Suite Strategy Playbook for Generative AI
* Mind for our Minds: Meaning
* Mind for our Minds: Introduction
* Research brief: aiOS—Foundation Models

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Mind for our Minds: Introduction

Artificiality

Play Episode Listen Later Jun 18, 2023 25:50


This episode is the first in our summer series based on our thesis for designing AI to be a Mind for our Minds. We recently presented this idea for the first time at our favorite event of the year, hosted by The House of Beautiful Business. We are grateful for our long-term relationship with the House and its founders, Tim Leberecht and Till Grusche, and head of curation and community, Monika Jiang. The House puts on public and corporate events that are like none you've ever experienced. We encourage everyone to consider attending a public event and bringing the House to your organization.

We always meet fascinating people at the House—too many to mention in one podcast. During this episode we highlight Hannah Critchlow and her book Joined-Up Thinking and Michael Bungay Stanier and his book How to Work with (Almost) Anyone. Check them both out: we are big fans.

Stay tuned over the summer as we dig deeper into how to design AI to be a Mind for our Minds.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

Check out some of our recent publications:

* Mind for our Minds: Culture
* Announcing [Your Team's] Generative AI Summit
* Research brief: C-Suite Strategy Playbook for Generative AI
* Mind for our Minds: Meaning
* Mind for our Minds: Introduction
* Research brief: aiOS—Foundation Models

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
C. Thi Nguyen: Metrification

Artificiality

Play Episode Listen Later May 21, 2023 67:00


AI is based on data. And data is frequently collected with the intent to be quantified, understood, and used across contexts. That's why we have things like grade point averages that translate across subject matters and educational institutions. That's why we perform cost-benefit analyses to normalize the forecasted value of projects—no matter the details. As we deploy more AI that is based on a metrified world, we encourage the quantification of our lives and risk losing the context and subjective value that create meaning. (A trivial worked example of this collapse follows at the end of these notes.)

In this interview, we talk with C. Thi Nguyen about these large-scale metrics, about objectivity and judgment, and about how this quantification removes the nuance, contextual sensitivity, and variability needed to make these measurements legible to the state. And that's just scratching the surface of this interview.

Thi Nguyen used to be a food writer and is now a philosophy professor at the University of Utah. His research focuses on how social structures and technology can shape our rationality and our agency. He writes about trust, art, games, and communities. His book, Games: Agency as Art, was awarded the American Philosophical Association's 2021 Book Prize.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

Check out some of our recent publications:

* Mind for our Minds: Culture
* Announcing [Your Team's] Generative AI Summit
* Research brief: C-Suite Strategy Playbook for Generative AI
* Mind for our Minds: Meaning
* Mind for our Minds: Introduction
* Research brief: aiOS—Foundation Models

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
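Here is a trivial worked example of the point about metrics stripping context: two transcripts that could hardly be more different collapse to the same grade point average. The courses and grades below are invented for illustration.

```python
# Two invented transcripts: (course, grade points on a 4.0 scale).
steady = [("Algebra", 3.0), ("History", 3.0), ("Biology", 3.0), ("Art", 3.0)]
uneven = [("Algebra", 4.0), ("History", 2.0), ("Biology", 4.0), ("Art", 2.0)]

def gpa(transcript):
    """Averaging makes grades legible across contexts -- and erases the contexts."""
    return sum(points for _, points in transcript) / len(transcript)

print(gpa(steady), gpa(uneven))  # 3.0 3.0 -- identical, by design of the metric
```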

Artificiality
Harpreet Sareen: Cyborg Botany

Artificiality

Play Episode Listen Later May 14, 2023 48:06


We are deeply interested in the intersection of the digital and material worlds, both living and not. Most of our interviews focus on the intersection of humans and machines: how does the digital world affect humans, and how do humans affect the digital world? This interview, however, is about the intersection of plants and machines.

Harpreet Sareen works at the intersection of digital and material, plant and machine, and art and science. His work challenges people to consider the life of plants, what we can learn from them, what we can see, and what we can't. His art and science projects challenge us to wonder if we can actually believe what we're seeing.

We moved to the Cascade Mountains to be able to spend more time in the wilderness, and we likely spend quite a bit more time in nature than most people. Despite our strong connections to nature, Harpreet's work accomplishes his goal of encouraging us to reconsider this relationship and to consider what an increased symbiosis might be.

Harpreet Sareen is a designer, researcher, and artist creating mediated digital interactions through the living world, with growable electronics, organic robots, and bionic materials. His work has been shown in museums, featured in media in 30+ countries, published in academic conferences, viewed on social media 5M+ times, and used by thousands of people around the world. He has also worked professionally in museums, corporates, and international research centers in five countries. He is currently an Assistant Professor at Parsons School of Design in New York City and directs the Synthetic Ecosystems Lab, which focuses on post-human and non-human design.

Learn more about Harpreet Sareen

Interesting links:

* What biodesign means to me
* Bionic plants, from PopSci
* Elephant project: Hybrid Enrichment System (ACM article)
* Elowan: A Robot-Plant Hybrid -- Plant with a robotic body
* Cyborg Botany: Electronics grown inside plants
* Cyborg Botany: In-Planta Cybernetic Systems

Most recent papers:

* Helibots at CAADRIA 2023, and related exhibition in ADM Gallery, Singapore
* BubbleTex at CHI 2023, and related exhibition in Ars Electronica, Austria
* Algaphon: Sounds of macroalgae under water (Installation at Ars Electronica, Austria)

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

Check out some of our recent publications:

* Announcing [Your Team's] Generative AI Summit
* Research brief: C-Suite Strategy Playbook for Generative AI
* Mind for our Minds: Meaning
* Mind for our Minds: Introduction
* Research brief: aiOS—Foundation Models

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Arvind Jain: Glean, Enterprise Search, and Generative AI

Artificiality

Play Episode Listen Later May 7, 2023 47:24


Anyone working in a large organization has likely asked this question: why is it that I can seemingly find anything on the internet but can't seem to find anything inside my organization? It is counter-intuitive that it's easier to organize the vast quantity of information on the public internet than the smaller amount of information inside a single organization.

The reality is that enterprise knowledge management and search is very difficult. Data does not reside in easily organized forms. It is spread across systems which provide varying levels of access. Knowledge can be fleetingly exchanged in communication systems. And each individual person has their own access rights, creating a complex challenge.

These challenges may be amplified by large language models in the enterprise, which seek to help people with analytical and creative tasks by tapping into an organization's knowledge. How can these systems access enough enterprise data to develop a useful level of understanding? How can they provide the best answers to each individual while following data access governance requirements?

To answer these questions, we talked with Arvind Jain, the CEO of Glean, which provides AI-powered workplace search. Glean searches across an organization's applications to build a trusted knowledge model that respects data access governance when presenting information to users. (A minimal sketch of this general pattern follows at the end of these notes.) Glean's knowledge models also provide a way for enterprises to introduce the power of generative AI with boundaries around its use that would otherwise be challenging to create.

Prior to founding Glean, Arvind co-founded Rubrik, one of the fastest growing companies in cloud data management. For more than a decade, Arvind worked at Google, serving as a Distinguished Engineer and leading teams in Search, Maps, and YouTube.

About Sonder Studio:
We created Sonder Studio to empower humans in our complex age of machines, data, and AI. Through our strategy, innovation, and change services, we help organizations activate the collective intelligence of humans and AI. We work with leaders in tech, data, and analytics to co-create AI strategies, design innovative AI products and services, and craft change management programs that help their people succeed in an AI-powered, data-centric, complex world. We leverage the new world of foundation models, generative AI, and low-code environments to create an amplified human-machine experience centered on machines that can be a mind for our minds. You can learn more about us at getsonder.com.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
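For readers who want to see the general pattern, here is a minimal sketch of permission-trimmed search: every document carries a list of who may see it, and results are filtered against the querying user before anything reaches ranking or a generative model. This is our illustration of the access-governance idea, not Glean's implementation, and every name in it is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    text: str
    allowed: set = field(default_factory=set)  # user IDs with access

def search(index, user: str, query: str):
    """Naive keyword search that enforces access control before returning hits."""
    visible = [d for d in index if user in d.allowed]  # trim by permissions first
    return [d for d in visible if query.lower() in d.text.lower()]

index = [
    Document("Q3 plan", "hiring plan for Q3", allowed={"alice", "bob"}),
    Document("Payroll", "salary bands by level and plan", allowed={"alice"}),
]
print([d.title for d in search(index, "bob", "plan")])  # ['Q3 plan'] only
```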

Artificiality
Lukas Egger: Generative AI, a view from SAP

Artificiality

Play Episode Listen Later Apr 30, 2023 45:57


The world has been upended by the introduction of generative AI. We think this could be the largest advance in technology—ever. All of our clients are trying to figure out what to do, how to de-risk the introduction of these technologies, and how to design new, innovative solutions.

To get a perspective on these changes created by AI, we talked with Lukas Egger, who leads the Innovation Office & Strategic Projects team at SAP Signavio, where he focuses on de-risking new product ideas and establishing best-in-class product discovery practices. With a successful track record in team building and managing challenging projects, Lukas has expertise in data-driven technology and cloud-native development, and has created and implemented new product discovery methodologies. Excelling at bridging the gap between technical and business teams, he has worked in AI, operations, and product management in fast-growth environments. Lukas has movie credits for his work in computer graphics research, has published a book on philosophy, and is passionate about the intersection of technology and people, regularly speaking on how to improve organizations.

We love Lukas' concept that we are in the peacock phase of generative AI, when everyone is trying to show off their colorful feathers—and not yet showing off new value creation. We enjoyed talking with Lukas about his views on the realities of today and his forecasts and speculations about the future.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Katie Davis: Technology's Child

Artificiality

Play Episode Listen Later Apr 23, 2023


Is technology good or bad for children? How should parents think about technology in their children's lives? Are there different answers depending on the age of the child and their stage of development? What can we apply from what we know about children's play and activity in the analog world to the digital world? How should product designers think about designing technology to be good for kids? How do AI and generative AI affect the answers to these questions, if at all?

To answer some of these questions, we talked with Katie Davis about her recent book, Technology's Child: Digital Media's Role in the Ages and Stages of Growing Up. In her book, Katie shares her research on how children engage with technology at each stage of development, from toddler to twentysomething, and how they can best be supported.

As parents of five kids, we're interested in these questions both personally and professionally. We are particularly interested in Katie's concept of "loose parts" and how we might apply this idea to digital product design, especially AI design. We think anyone who has children or has an interest in technology's impact on children will find Katie's book highly informative and a great read.

Katie Davis is Associate Professor at the University of Washington Information School, where she is a founding member and Co-Director of the UW Digital Youth Lab. She is the coauthor of The App Generation: How Today's Youth Navigate Identity, Intimacy, and Imagination in a Digital World and Writers in the Secret Garden: Fanfiction, Youth, and New Forms of Mentoring.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Andrew Blum: The Weather Machine

Artificiality

Play Episode Listen Later Apr 9, 2023 52:20


Weather forecasting is fascinating. It involves making predictions about the complex, natural world, using a global infrastructure, for people who have varying needs and desires. Some of us just want to know whether to carry an umbrella today. Others want to know how to prepare for a week-long trip. And then there are those who use the weather forecast to make decisions that can have significant, even critical, consequences.

We also think weather forecasting is an interesting topic given the parallels to what we are experiencing in AI. Weather forecasting and AI systems are both black box prediction systems, supported by a global infrastructure that is transitioning from public to private control. In weather, our satellite industry is transitioning from publicly funded and controlled to private. And in AI, the major models and data are transitioning from academia (which we would argue is essentially public, given its interest in publishing and sharing knowledge) to corporate control.

Given this backdrop, and the fact that Helen is an avid weather forecasting nerd, we talked with Andrew Blum about his book The Weather Machine: A Journey Inside the Forecast. The book is a fascinating narrative about how the weather forecast works, based on a surprising tour of the infrastructure and people behind it. It's a great book and we highly recommend it.

Andrew Blum is an author and journalist, writing about technology, infrastructure, architecture, design, cities, art, and travel. In addition to The Weather Machine, Andrew also wrote Tubes: A Journey to the Center of the Internet, which was the first ever book-length look at the physical infrastructure of the internet—all the data centers, undersea cables, and tubes filled with light. You can also find Andrew's writing in many publications and hear him talk at various conferences, universities, and corporations. At the end of our interview, we talk with Andrew about his current research, and we're very much looking forward to his next book.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Juan Noguera: Generative AI in Industrial Design

Artificiality

Play Episode Listen Later Mar 26, 2023 38:47


We've heard a lot about how generative AI may negatively impact careers in design. But we wonder: how might generative AI have a positive impact on designers? How might it be used as a tool that helps designers rather than as a replacement for them? How might we use generative AI in design education? How do design educators and their students feel about generative AI? How else might generative AI help designers in ways that we haven't uncovered yet?

To answer these questions, we talked with Juan Noguera about his individual design work, his teaching at the Rochester Institute of Technology, and his recent article in The Conversation entitled "DALL-E 2 and Midjourney can be a boon for industrial designers." Juan proposes that AI image generation programs can be a fantastic way to improve the design process. His story about using generative AI while working with bronze artisans in Guatemala is particularly compelling.

Juan Noguera is an Assistant Professor of Industrial Design at the Rochester Institute of Technology. A Guatemalan, he was raised in a colorful and vivid culture. He quickly developed an interest in how things were made, tearing everything he owned apart and putting it back together, often with a few leftover pieces.

We enjoyed talking with Juan about his teaching, his students' projects, and his ideas for how AI might be able to help designers even more in the future.

Learn more about Juan Noguera.
Read Juan Noguera's article in The Conversation.
Learn more about Juan Noguera's work on AI in Design.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Don Norman: Design for a Better World

Artificiality

Play Episode Listen Later Mar 12, 2023 63:13


What role does design have in solving the world's biggest problems? What can designers add? Some would say that designers played a role in getting us into our current mess. Can they also get us out of it? How can we design solutions for problems in complex systems that are evolving, emerging, and changing?

To answer these questions, we talked with Don Norman about his book, Design for a Better World: Meaningful, Sustainable, Humanity Centered. In the book, Don proposes a new way of thinking, one that recognizes our place in a complex global system where even simple behaviors affect the entire world. He identifies the economic metrics that contribute to the harmful effects of commerce and manufacturing and proposes a recalibration of what we consider important in life.

Don Norman is Distinguished Professor Emeritus of Cognitive Science and Psychology and founding director of the Design Lab at the University of California, San Diego, from which he has retired twice. Don is also retired from, and holds emeritus titles at, Northwestern University, the Nielsen Norman Group, and a few other organizations. He was an Apple Vice President, has been an advisor and board member for numerous companies, and has three honorary degrees. His numerous books have been translated into over 20 languages, including The Design of Everyday Things and Living with Complexity.

It was a true pleasure to talk with Don, someone whom we have read and followed for decades. His work is central to much of today's design practice, and we loved talking with him about where he hopes design may take us.

Learn more about Don Norman.
Learn more about Don's book Design for a Better World.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Sonder Studio
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
Jamer Hunt: Not to Scale

Artificiality

Play Episode Listen Later Mar 5, 2023 67:53


What are the causes and effects of my actions? How do I know the effect of the small acts in my life? How can I identify opportunities to have an impact that is much larger than myself? How can we make problems that seem overwhelmingly complex feel more manageable and knowable? How might we use the scaling tools of designers to tackle some of the world's largest and most complex problems?

To answer these questions, we talked with Jamer Hunt about his book Not to Scale: How the Small Becomes Large, the Large Becomes Unthinkable, and the Unthinkable Becomes Possible. The book repositions scale as a practice-based framework for navigating social change in complex systems.

Jamer is Professor of Transdisciplinary Design and Program Director for University Curriculum at the New School's Parsons School of Design. He was the founding director of the Transdisciplinary Design graduate program at Parsons, which was created to emphasize collaborative design-led research and a systems-oriented approach to social change.

We're big fans of Jamer's book and have incorporated his concept of scalar framing into our work. We encourage you to check out his book as well and see how zooming in and out can help you frame complex problems in a way that makes them more addressable.

Learn more about Jamer Hunt
Learn more about Jamer's book Not to Scale
Learn more about the Transdisciplinary Design program at Parsons
Watch the Powers of Ten by Charles & Ray Eames

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Connect with Helen and Dave on LinkedIn
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Artificiality
ChatGPT: Why does it matter, how special is it, and what might be ahead?

Artificiality

Play Episode Listen Later Feb 26, 2023 34:08


Why does ChatGPT matter?

* People always get excited about AI advances, and this one is accessible in a way that others weren't in the past.
* People can use natural language to prompt a natural language response.
* It's seductive because it feels like synthesis.
* And it can feel serendipitous.

But…

* We need to remember that ChatGPT and all other generative AI are tools, and tools can fail us.
* While it may feel serendipitous, that serendipity is more constrained than it may feel.

Some other ideas we cover:

* The research at Google, OpenAI, Microsoft, and Apple gives us some context for evaluating how special ChatGPT actually is and what might be ahead.
* The current craze about prompt engineering (a tiny illustration of what the term means follows the reading list below).

What we're reading:

* Raghuveer Parthasarathy's So Simple a Beginning
* Don Norman's Design for a Better World
* Jamer Hunt's Not to Scale
* Ann Pendleton-Jullian & John Seely Brown's Design Unbound

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn about our book Make Better Decisions and buy it on Amazon
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music
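Since prompt engineering comes up in this episode, here is a tiny illustration of what the term usually means in practice: wrapping a bare question with a role, an audience, a format, and constraints before sending it to the model. The wording is ours and purely illustrative.

```python
def engineer(question: str) -> str:
    """Wrap a bare question with a role, audience, format, and constraints --
    the everyday meaning of 'prompt engineering'. Wording is illustrative only."""
    return (
        "You are a patient teacher writing for a high-school student.\n"
        f"Task: {question}\n"
        "Format: exactly three short bullet points.\n"
        "Constraint: avoid jargon and math notation."
    )

print(engineer("Explain how ChatGPT predicts the next word."))
```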

Artificiality
David Krakauer: Complexity

Artificiality

Play Episode Listen Later Feb 19, 2023 93:55


We're always looking for new ideas from science that we can use in our work. Over the past few years, we have been researching new ways to handle increasing complexity in the world and how to solve complex problems. Why do we seem to see emergent, adaptive, open, and networked problems more often? And why don't they yield to traditional problem solving techniques?

Our research has centered on complexity science and understanding how to apply its lessons to problem solving. Complexity science teaches us about the nature of complex systems including the nervous system, ecosystems, economies, social communities, and the internet. It teaches us ways to identify opportunities for change through metaphor, models, and math and ways to synchronize change through incentives.

The Santa Fe Institute has been at the center of our complexity research journey. Founded in 1984, SFI is the leading research institute on complexity science. Its researchers endeavor to understand and unify the underlying, shared patterns in complex physical, biological, social, cultural, technological, and even possible astrobiological worlds. We encourage anyone interested in this topic to wander through the ample and diverse resources on the SFI website, SFI publications, and SFI courses.

We had the pleasure of digging into complexity science and its applications with one of the leading minds in complexity, David Krakauer, who is President and William H. Miller Professor of Complex Systems at SFI. David's research explores the evolution of intelligence and stupidity on Earth. This includes studying the evolution of genetic, neural, linguistic, social, and cultural mechanisms supporting memory and information processing, and exploring their shared properties. He served as the founding director of the Wisconsin Institutes for Discovery, the co-director of the Center for Complexity and Collective Computation, and professor of mathematical genetics, all at the University of Wisconsin, Madison. He has been a visiting fellow at the Genomics Frontiers Institute at the University of Pennsylvania, a Sage Fellow at the Sage Center for the Study of the Mind at the University of California, Santa Barbara, a long-term fellow of the Institute for Advanced Study, and visiting professor of evolution at Princeton University. A graduate of the University of London, where he went on to earn degrees in biology and computer science, Dr. Krakauer received his D.Phil. in evolutionary theory from Oxford University.

Learn more about SFI.
Learn more about David Krakauer.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn about our book Make Better Decisions and buy it on Amazon
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Kees Dorst: Frame Innovation

Artificiality

Play Episode Listen Later Jan 29, 2023 62:12


What can we learn from the practice of design? What might we learn if we had an insight into top designers' minds? How might we apply the best practices of designers beyond the field of design itself? Most of our listeners are likely familiar with design thinking—what other practices should we learn about and understand?

To answer these questions, we talked with Kees Dorst about his books, Frame Innovation and Notes on Design, to discover his views on the creative processes of top designers and understand his practice of frame innovation. We enjoyed both books and find insights that extend well beyond design into all areas of problem solving. We are particularly interested in applying frame innovation in our complex problem-solving sprints and consulting practice.

Kees Dorst is Professor of Transdisciplinary Innovation at the University of Technology Sydney's TD School. He is considered one of the lead thinkers developing the field of design, valued for his ability to connect a philosophical understanding of the logic of design with hands-on practice. As a bridge-builder between these two worlds, his writings on design as a way of thinking are read by both practitioners and academics. He has written several bestselling books in the field: ‘Understanding Design' (2003, 2006), ‘Design Expertise' (with Bryan Lawson, 2013), 'Frame Innovation' (2015), ‘Designing for the Common Good' (2016) and ‘Notes on Design – How Creative Practice Works' (2017).

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn about our book Make Better Decisions and buy it on Amazon
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Sous On Fridays The WeekendMix
Neuro D - Artificiality (Marco Aurelio Remix)

Sous On Fridays The WeekendMix

Play Episode Listen Later Nov 23, 2022 5:06


What started as a spontaneous thing during a live set at one of my techno performances became a remix. Enjoy the OG vibe and the 3/4 beats in your eardrums.

Conspiracy Theories
Future Tech: Artificiality

Conspiracy Theories

Play Episode Listen Later Nov 14, 2022 46:26


They're modern marvels, all around us — artificial products we can eat (lab-grown meat), transplant (bioprinted organs), and even inject into our bodies (synthetic vaccines). But not everyone's convinced they're miracles. Top of mind for critics? Are they safe, are they sustainable — and what does expanding the frontier of artificiality mean for our future? Learn more about your ad choices. Visit podcastchoices.com/adchoices

Artificiality
Marina Nitze and Nick Sinai: Hack Your Bureaucracy

Artificiality

Play Episode Listen Later Oct 30, 2022 56:12


We all likely want to improve the organizations we work in. We might want to improve the employee experience, improve the customer experience, or be more efficient and effective. But we all likely have had the experience of feeling like our organizations are too difficult, too entrenched, and too complex to change. Any organization—large or small, public or private—can feel like a faceless bureaucracy that is resistant to change. So what can people do who want to effect change? How do you accomplish things that can seem impossible?

To answer these questions, we talked with Marina Nitze and Nick Sinai about their recently published book, Hack Your Bureaucracy: Get Things Done No Matter What Your Role on Any Team. Marina and Nick have deep experience in one of the largest, most complex bureaucracies in the world: the U.S. government. As technology leaders in the Obama White House, Marina and Nick undertook large change programs. Their book contains their stories and their advice for anyone who wants to effect change.

We find the hacks in their book quite valuable, and we wish this book had been available early in our career when we were both in much larger organizations. We love the fact that their hacks focus on the people and working within a system for change—not the move fast & break things mentality of Silicon Valley. Above all, we appreciate that it's clear that Marina and Nick thought deeply about what they would have wanted to know when they embarked on the significant technology change programs they undertook in the White House and Veterans Administration.

Marina Nitze is currently a partner at Layer Aleph, a crisis response firm that specializes in restoring complex software systems to service. Marina was most recently Chief Technology Officer of the U.S. Department of Veterans Affairs under President Obama, after serving as Senior Advisor on technology in the Obama White House and as the first Entrepreneur-in-Residence at the U.S. Department of Education.

Nick Sinai is a Senior Advisor at Insight Partners, a VC and private equity firm, and is also Adjunct Faculty at Harvard Kennedy School and a Senior Fellow at the Belfer Center for Science and International Affairs. Nick served as U.S. Deputy Chief Technology Officer in the Obama White House, and prior, played a key role in crafting the National Broadband Plan at the FCC.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn about our book Make Better Decisions and buy it on Amazon
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Tom Davenport and Steve Miller: Working with AI

Artificiality

Play Episode Listen Later Oct 16, 2022 51:59


How will AI change our jobs? Will it replace humans and eliminate jobs? Will it help humans get things done? Will it create new opportunities for new jobs? People often speculate on these topics, doing their best to predict the somewhat unpredictable.

To help us get a better understanding of the current state of humans and AI working together, we talked with Tom Davenport and Steve Miller about their recently-released book, Working with AI. The book is centered around 29 detailed and deeply-researched case studies about human-AI collaboration in real-world work settings. What they show is that AI isn't a job destroyer but a technology that changes the way we work.

Tom is Distinguished Professor of Information Technology and Management at Babson College, Visiting Professor at Oxford's Saïd Business School, Fellow of the MIT Initiative on the Digital Economy, and Senior Advisor to Deloitte's AI practice. He is the author of The AI Advantage and coauthor of Only Humans Need Apply and other books.

Steve is Professor Emeritus of Information Systems at Singapore Management University, where he previously served as Founding Dean of the School of Computing and Information Systems and Vice Provost for Research. He is coauthor of Robotics Applications and Social Implications.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn about our book Make Better Decisions and buy it on Amazon
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Helen Edwards and Dave Edwards: Make Better Decisions

Artificiality

Play Episode Listen Later Oct 2, 2022 36:26


We humans make a lot of decisions. Apparently, 35,000 of them every day! So how do we improve our decisions? Is there a process to follow? Who are the experts to learn from? Do big data and AI make decisions easier or harder? Is there any way to get better at making decisions in this complex, modern world we live in?

To dig into these questions we talked with…ourselves! We recently published our first book, Make Better Decisions: How to Improve Your Decision-Making in the Digital Age. In this book, we've provided a guide to practicing the cognitive skills needed for making better decisions in the age of data, algorithms, and AI. Make Better Decisions is structured around 50 nudges that have their lineage in scholarship from behavioral economics, cognitive science, computer science, decision science, design, neuroscience, philosophy, and psychology. Each nudge prompts the reader to use their beautiful, big human brain to notice when our automatic decision-making systems will lead us astray in our complex, modern world, and when they'll lead us in the right direction.

In this conversation, we talk about our book, our favorite nudges at the moment, and some of the Great Minds who we have interviewed on Artificiality including Barbara Tversky, Jevin West, Michael Bungay Stanier, Stephen Fleming, Steven Sloman and Tania Lombrozo.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn about our book Make Better Decisions and buy it on Amazon
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Kat Cizek and William Uricchio: Co-Creation

Artificiality

Play Episode Listen Later Sep 18, 2022 55:05


We all do things with other people. We design things, we write things, we create things. Despite the fact that co-creation is all around us, it can be easy to miss because creation gets assigned to individuals all too often. We're quick to assume that one person should get credit, thereby erasing the contributions of others.

The two of us have a distinct interest in co-creation because we co-create everything we do. We co-created Sonder Studio, our speaking engagements, our workshops, our design projects, and our soon-to-be-published book, Make Better Decisions. We're also interested in how humans can co-create with technology, specifically artificial intelligence, and when that is a good thing and when that might be something to avoid.

To dig into these interests and questions we talked with Kat Cizek and William Uricchio, whose upcoming book Collective Wisdom offers the first guide to co-creation as a concept and as a practice. Kat, William, and a lengthy list of co-authors have presented a wonderful tracing of the history of co-creation across many disciplines and societies. The book is based on interviews with 166 people and includes nearly 200 photographs that should not be missed. We hope that you all have a chance to experience their collective work.

Kat is an Emmy and Peabody-winning documentarian who is the Artistic Director and Cofounder of the Co-Creation Studio at MIT Open Documentary Lab. William is Professor of Comparative Media Studies at MIT, where he is also Founder and Principal Investigator of the MIT Open Documentary Lab and Principal Investigator of the Co-Creation Studio. Their book is scheduled to be published by MIT Press on November 1st.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Gerd Gigerenzer: Staying Smart

Artificiality

Play Episode Listen Later Sep 4, 2022 61:02


How should we respond and react to artificial intelligence and its impact on the world and each other? How should we handle the risk and uncertainty caused by the permeation of AI throughout our lives?

To tackle these questions, we talked with Gerd Gigerenzer about his recent book, How to Stay Smart in a Smart World. We talk with Gerd about the impacts of big data on making decisions, the increasing use of AI for surveillance, the risks of trusting smart technology too much, and the broader impact of technology on our human dignity.

Gerd is the Director Emeritus at the Max Planck Institute for Human Development and the author of several books, including Calculated Risks, Gut Feelings, and Risk Savvy, and the coeditor of Better Doctors, Better Patients, Better Decisions and Classification in the Wild. He has trained judges, physicians, and managers in decision-making and understanding risk.

We thoroughly enjoyed Gerd's book and recommend it both to those new to AI who may be looking for an approachable introduction and to those expert in AI who may be looking for a new perspective to think about the future of our digital world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Eric Pliner: Difficult Decisions

Artificiality

Play Episode Listen Later Aug 14, 2022 58:42


We all want decision-making to be easier. We want simple tools and frameworks that provide a process for no-regrets decisions. But it just isn't that easy. Despite how much we understand about the science of decision-making, the act of making decisions is frequently quite difficult. And the quantity of data we can now access to support decision-making doesn't make decisions easier, it actually makes them more complex.

So what to do? In his book, Difficult Decisions: How Leaders Make the Right Call with Insight, Integrity, and Empathy, Eric Pliner argues that the best way to approach complex, subjective decisions is to first understand your own subjectivity, morals, and ethics.

In this episode, we talk with Eric about his book, how he advises leaders to make decisions, the importance of aligning intent with impact in the world, and how to think about the role of data in decision-making.

In addition to being an author, Eric is CEO of YSC Consulting where he works with leaders and organizations on leadership development, organizational culture, and strategic diversity and inclusion initiatives.

As frequent listeners know, we spend a lot of time working with people on how to make better decisions and it was a true pleasure to talk with Eric about how he approaches this topic and how he helps leaders tackle difficult decisions.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Tom Hale: Oura Ring and the New Data of Health

Artificiality

Play Episode Listen Later Jul 31, 2022 56:30


We'd all like to be healthier—to sleep longer, have lower stress, and have more energy. But is it possible for an AI to help us accomplish this? And how would that experience feel? What data would we need to provide? How would the AI encourage the behavior changes required? Would it feel like a friend or a bully? Would it work at all?

To answer some of these questions, we talked with Tom Hale, the new CEO at Oura. Oura makes a fascinating device that monitors a long list of signals from your body, all through a ring on your finger. That ring connects with an app on your phone that gives you lots of data about your health. Perhaps most interestingly, in addition to the facts about your health, the app provides suggestions for what you might do differently. And it provides those suggestions in a way that seems cautious about making too many conclusions, leaving the true agency with you.

Neither of us owned Oura rings before our conversation so we couldn't bring that experience to the podcast. But after our conversation we both decided to buy one and give it a try. Our sizing kits are on the way and the rings will follow soon after. We're planning to record our reactions to the rings so subscribe, if you haven't already, to get an alert when we publish our experience.

Prior to joining Oura, Tom was President of MomentiveAI, previously called SurveyMonkey, Chief Product and Operating Officer at HomeAway, and a long-time executive at Adobe Systems.

Tom's personal experience with the Oura Ring before becoming CEO is what tipped the balance and got us to be some of his newest customers. We'll be interested to hear if any of our listeners do the same.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Oura.
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Frank Rose: Storytelling in a Data-Driven World

Artificiality

Play Episode Listen Later Jul 17, 2022 58:39


We all love stories—they are one of the most important ways that humans communicate. Stories create heroes to root for and villains to revile. Stories create realities and help us align our values and objectives with others. But how do stories change in a world that is awash with data and is overwhelmed by large tech companies that try to motivate—or manipulate—us with stories using data that we don't see and can't comprehend?

To help answer these questions, we talked with Frank Rose about his recent book The Sea We Swim In: How Stories Work in a Data-Driven World. Frank's book is inspired by his Strategic Storytelling seminar at Columbia University and is a wonderful resource to help understand the power of narrative thinking.

In addition to being a senior fellow at Columbia University School of the Arts, Frank is the director of Columbia's pioneering Digital Storytelling Lab and a frequent speaker on narrative thinking and on the power of immersive storytelling. Frank's writing and journalism career started in the punk scene at CBGB for The Village Voice and continued as a contributing editor at Esquire and then Wired. He has written several books including West of Eden, about the early days of Apple Computer, and The Art of Immersion, about how the digital generation changed storytelling.

We greatly enjoyed talking with Frank about one of our favorite subjects: telling stories in a data-driven world.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Learn more about Frank Rose
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Ben Shneiderman: Human-Centered AI

Artificiality

Play Episode Listen Later Jul 3, 2022 63:14


Many of our listeners will be familiar with human-centered design and human-computer interaction. These fields of research and practice have driven technology product design and development for decades. Today, however, these fields are changing to adapt to the increasing use of artificial intelligence, leading to an emerging field called human-centered AI.

Prior to the widespread use of AI, technology products were powerful, yet predictable—they operated based on the rules created by their designers. With AI, however, machines respond to data, providing predictions that may not be anticipated when the product is designed or programmed. This is incredibly powerful but can also create unintended consequences.

This challenge leads to the questions: How can we design AI-based products that provide benefits to humans? How can we create AI systems that learn and change with new data but still produce the outcomes intended by the system's designers?

These questions led us to interview Ben Shneiderman, an Emeritus Distinguished University Professor in the department of Computer Science at the University of Maryland. Ben recently published a wonderfully approachable book, Human-Centered AI, which provides a guide to how AI can be used to augment and enhance humans' lives. As the founding director of the Human-Computer Interaction Laboratory, Ben has a 40-year history in researching how humans and computers interact, making him an ideal source to talk with about how humans and AI interact.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Julio Mario Ottino: The Nexus

Artificiality

Play Episode Listen Later Jun 5, 2022 59:03


“How can we augment our thinking spaces to increase creative solutions? How can we make those solutions real by mastering complexity?” Julio Mario Ottino and Bruce Mau ask and answer these questions in their ambitious and visually stunning work, The Nexus.

In their book, Ottino and Mau take on a big subject—how to augment your thinking by integrating art, technology, and science. It is a thought-provoking and curiosity-enhancing book—perfect for rewilding your attention with its glorious footnotes and gorgeous visuals.

Our takeaways (not to plot bust) for being a Nexus thinker:
* Experiment. The world is too uncertain to spend too much energy and time overly planning and analyzing, whether it's from data or from intuition. We have to learn to dance between data and intuition, to be in both the rational and emotional at once.
* Develop the art of coexistence. We are trained (and like to think) in terms of black and white, A versus B. We have to learn how to hold opposing ideas at the same time and yet still be able to act. This is hard, but artists do it all the time and leaders can learn.
* Complex systems require us to think more and more in terms of tradeoffs. And complex systems exhibit a property called emergence, where behaviors we can't predict emerge as a result of the system. The job of leaders is now to create conditions that allow for successful emergence.
* The best opportunity to tackle the world's greatest problems, those of unprecedented complexity, is by working at the Nexus, where art, technology and science converge.

Ottino and Mau challenge us to think beyond the boundaries of our specialties and training, to be curious about how others in unrelated fields discover knowledge and find their creativity. It is thinking for our age, where design becomes the method for discovery.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificiality
Mark Nitzberg: Human-Compatible AI

Artificiality

Play Episode Listen Later May 15, 2022 53:33


We hear a lot about harm from AI and how the big platforms are focused on using AI and user data to enhance their profits. What about developing AI for good for the rest of us? What would it take to design AI systems that are beneficial to humans?

In this episode, we talk with Mark Nitzberg, who is Executive Director of CHAI, the UC Berkeley Center for Human-Compatible AI, and head of strategic outreach for Berkeley AI Research. Mark began studying AI in the early 1980s and completed his PhD in Computer Vision and Human Perception under David Mumford at Harvard. He has built companies and products in various AI fields including The Blindsight Corporation, a maker of assistive technologies for low vision and active aging, which was acquired by Amazon. Mark is also co-author of The AI Generation, which examines how AI reshapes human values, trust, and power around the world.

We talk with Mark about CHAI's goal of reorienting AI research towards provably beneficial systems, why it's hard to develop beneficial AI, variability in human thinking and preferences, the parallels between management OKRs and AI objectives, human-centered AI design, and how AI might help humans realize the future we prefer.

Links:
Learn more about UC Berkeley CHAI
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio

P.S. Thanks to Jonathan Coulton for our music

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

The Mike Hosking Breakfast
Mike's Minute: Gender pay gap solutions are artificial

The Mike Hosking Breakfast

Play Episode Listen Later May 5, 2022 1:49


Another of these strange made-up claims this week masquerading as a report that suggests the solving of a problem, or perceived problem, can only happen if we changed the way we did things.

A group called Mind the Gap likes the idea of forcing companies to publish their wages on a gender basis, thus embarrassing them into paying more to women. The claim is, if we did this, we could increase females' pay by up to $35 a week.

The gap on average is currently 9 percent. The reason it's 9 percent is not because people who employ other people don't like women. It's because women, on the whole, choose different jobs than men, on the whole. The key there is “on the whole.” This is where you get a distorted view of the world when you average everything out.

Women frequent the aged care sector, for example, more than men. We have been here before, of course. The famous pay equity case involving aged care where we ended up comparing aged care women with mechanics who are men and pretending apples were apples. There is an inquiry currently underway in Australia looking at the same thing. The warnings are out over whether it addresses anything. In our case, some in the aged care sector got more money. But it still didn't solve the overall problem, and that was attracting people to the industry.

Artificiality is almost always a mistake. Do you hire women on skill and talent? Or do you hire women so you can close a gap on a chart? The same mad argument applies to the debate over numbers of female CEOs and board members.

We must also remember a couple of important things. People must choose what they want to do for work, and money is not always a driving force. If women, on average, chose professions that don't pay as much as other professions, that's not automatically a problem. And if, on average, the person who happens to be female earns less than she could if only she changed jobs, but is happy, then that's not a problem either.

Trying to ratchet your square peg into the round hole you think is fair and equitable is only going to cause trouble because it's a false economy. Remuneration is based on demand, supply, and skills, not gender.

See omnystudio.com/listener for privacy information.

Best of Business
Mike's Minute: Gender pay gap solutions are artificial

Best of Business

Play Episode Listen Later May 5, 2022 1:49


Another of these strange made-up claims this week masquerading as a report that suggests the solving of a problem, or perceived problem, can only happen if we changed the way we did things.

A group called Mind the Gap likes the idea of forcing companies to publish their wages on a gender basis, thus embarrassing them into paying more to women. The claim is, if we did this, we could increase females' pay by up to $35 a week.

The gap on average is currently 9 percent. The reason it's 9 percent is not because people who employ other people don't like women. It's because women, on the whole, choose different jobs than men, on the whole. The key there is “on the whole.” This is where you get a distorted view of the world when you average everything out.

Women frequent the aged care sector, for example, more than men. We have been here before, of course. The famous pay equity case involving aged care where we ended up comparing aged care women with mechanics who are men and pretending apples were apples. There is an inquiry currently underway in Australia looking at the same thing. The warnings are out over whether it addresses anything. In our case, some in the aged care sector got more money. But it still didn't solve the overall problem, and that was attracting people to the industry.

Artificiality is almost always a mistake. Do you hire women on skill and talent? Or do you hire women so you can close a gap on a chart? The same mad argument applies to the debate over numbers of female CEOs and board members.

We must also remember a couple of important things. People must choose what they want to do for work, and money is not always a driving force. If women, on average, chose professions that don't pay as much as other professions, that's not automatically a problem. And if, on average, the person who happens to be female earns less than she could if only she changed jobs, but is happy, then that's not a problem either.

Trying to ratchet your square peg into the round hole you think is fair and equitable is only going to cause trouble because it's a false economy. Remuneration is based on demand, supply, and skills, not gender.

See omnystudio.com/listener for privacy information.

The Convivial Society
Thresholds of Artificiality

The Convivial Society

Play Episode Listen Later Jul 6, 2021 11:50


The story of a human retreat from this world, either to the stars above or the virtual realm within, can mask a disregard for or resignation about what is done with the world we do have, both in terms of the structures of human societies and the non-human world within which they are rooted. Get full access to The Convivial Society at theconvivialsociety.substack.com/subscribe

Zawia Ebrahim Discourses
Pretence and Artificiality

Zawia Ebrahim Discourses

Play Episode Listen Later Apr 24, 2021 16:55


Dars delivered by Shaykh Ebrahim Schuitema at the zawia

Blizzard of the World
To Live and Die in Technopolis, Part II

Blizzard of the World

Play Episode Listen Later Oct 21, 2020 95:30


On the conclusion of To Live and Die in Technopolis, our inaugural series on technique, we ask a single question that'll get us going for 90 minutes, we insist that we're realists, not pessimists, we paint two very different portraits of two very different phenomena of the same name, and we get a bit excited there at the end.

1. Introduction (0:50)
2. "Continuation or Revolution?" through characteristics of technique (3:42)
3. Four characteristics of pre-18th century technique (10:14)
3.1 Only applied to limited fields, and to a limited number of fields;
3.2 Economy of tools and means used;
3.3 Spread limited in space and time;
3.4 The possibility of choice.
4. Seven characteristics of post-18th century technique (35:59)
4.1 Rationality;
4.2 Artificiality;
4.3 An obligation to default to technique;
4.4 Self-expanding;
4.5 Unicity and indivisibility;
4.6 Universalism;
4.7 Autonomy.
5. Conclusion (1:32:26)

http://blizzardoftheworld.buzzsprout.com
Twitter: @blizzardofworld
Email: blizzardoftheworld@gmail.com

"Electric Blues" by Nikitsan Music
Artwork by Pgeshan

Support the show (https://www.buymeacoffee.com/botwpod)

The Cosmic Keys Podcast
S2 Ep4: Humanity Vs. Artificiality with Niish

The Cosmic Keys Podcast

Play Episode Listen Later Sep 28, 2020 138:51


(Discussion begins at 15:20) Dan opens the episode with an astrology forecast for the week of September 28, 2020 to October 4, 2020, followed by a discussion with podcaster and artist Niish. Niish is the host of the podcasts Nox Mente, The Obelisk, and The Cosmic Salon, and we discuss a wide range of "woo" topics involving AI, humanity, organic life, the singularity, the fourth industrial revolution, and a whole lot more. Patrons of the show will get an episode extension with more behind-the-scenes discussion. To get longer episodes for just $5 a month and keep the show rolling, check out patreon.com/cosmickeys!

Check out Niish's links below:
Nox Mente YouTube: https://www.youtube.com/channel/UCLgoMEJbyIWRadCVl8weY2A
The Cosmic Salon Podcast: https://podcasts.apple.com/us/podcast/the-cosmic-salon/id1526036371
Niish's YouTube Channel: https://www.youtube.com/channel/UCBk3MjXsRH6L9S3IRhASklw

Artificiality
Ep. 02: Another paradox: this time, explainability.

Artificiality

Play Episode Listen Later Mar 4, 2020 27:34


In this episode, we dive into the paradox of autonomy. Some of the topics we explore include:
* Why are there so many paradoxical observations in AI?
* What is the autonomy paradox?
* Is there any way that giving up more information can be autonomy-enhancing?
* What are principal reason and counter-factual explanations?
* How can we deal with the autonomy paradox through AI UX design?

For further reading, check out the most recent Artificiality article, the research paper we reference and the Buzzfeed article on Clearview AI.

Special thanks to our friend, Jonathan Coulton, for our theme music.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Between The Rows
Weed control challenges, a Climate Atlas for farmers and Maple Leaf’s move to remove ‘artificiality’

Between The Rows

Play Episode Listen Later May 17, 2018 24:23


Robin Booker of the Western Producer on the challenge of weed control in dry conditions, Lorraine Stevenson of the Manitoba Co-operator reports on the ‘Climate Atlas,’ a new online tool to help farmers adapt to climate change, Adam Grogan of Maple Leaf Foods on the company’s move to remove artificial […]

42 Minutes of Reality
Episode 15 - Toddlers & Tiaras

42 Minutes of Reality

Play Episode Listen Later Sep 18, 2017 70:01


Intro/outro music: “Gay Bar Videogame” by The Wildbunch
http://freemusicarchive.org/music/The_Wildbunch/Gay_Bar/Gay_Bar_videogame

1:31 Introducing this week’s show
2:21 JS - Worst show he’s ever seen
2:52 Mike’s alternate take
4:41 Show concept
7:08 Not the first ‘rodeo’ for many of these families
9:28 Awards ceremony (and ridiculous award names)
10:40 These parents don’t accept second place
12:00 Pageant judges and directors
14:15 Show’s POV
14:52 Editing choices were revealing
16:41 Money is a frequent topic
17:13 Bribing kids with sugar and caffeine
19:33 Parsing difference b/w disapproving of parents and pageant
22:27 Mike’s theory on why pageant footage is edited differently
24:02 Intended audience
25:09 Possibly geared towards moms
27:19 Beauty/gender standards
28:08 Artificiality of beauty standard was revealing
29:04 Amplifying problematic messages of adult beauty pageants
31:24 Two objections to child beauty pageants: consent and sexualization
34:12 Ritualistic aspects of child beauty pageantry
36:17 JS poses a question to Mike
37:38 Coming back to traditionalist gender roles and Southern regional aspect
39:07 Relationship b/w social conservatism and beauty pageant culture
41:45 Our (limited) experience with (adult) beauty pageants
43:24 Social class and economics
44:40 Mike noticed positive correlation b/w wealth and winning
45:15 JS begrudgingly gives show his one kudos
46:06 Vast amounts of money spent on dresses
46:47 Seems to be no real monetary return
49:03 Economics of holding a beauty pageant (WARNING: Baseless speculation ahead)
51:08 We’d call it a con, but parents seem to have no illusions of wealth
52:02 Exploring the motivations: validation, living vicariously, and ‘winning’ (not ‘confidence’)
55:45 Does this show have social value or is it wallowing in titillation?
57:15 JS thought social value was held back by bigger problems in the world
59:40 Is there inherent tension between sensationalism and exposé?
1:01:17 Response to this show would depend on the viewer (and could be quite disturbing)
1:01:55 Mike found show both more interesting and more depressing than expected; thought it would be goofy camp
1:02:53 Comparing to other modeling reality shows; objectionable in how it puts children into an adult setting
1:05:12 Announcing the next episode
1:08:50 The usual: email us, rate/review, and subscribe

A Star to Steer Her By
Episode 05 Artificiality

A Star to Steer Her By

Play Episode Listen Later Oct 6, 2016 76:30


This week: McCoy talks to his junk in "Mudd's Women," Kirk goes on the shittiest carnival ride ever in "What Are Little Girls Made of?" and Jake foreshadows how much he hates Nazis. Also a serious discussion of Grace Lee Whitney's biography, her forced exit from the series, and the aftermath. Content warning: adult language, humor, discussion of sexual assault. Timestamps: synopses: 0:29; Mudd's Women: 1:45; What Are Little Girls Made Of?: 41:10

Six Degrees of Rumination
Episode 23: 6/25/14 The artificiality of life, and when life gets too real

Six Degrees of Rumination

Play Episode Listen Later Jul 13, 2014 54:05


Join Nina, Reno, and producer Mike to explore the many faces of salad, as well as what you might consume if you give up food altogether. Would you eat a colorless, nearly tasteless sludge in the name of efficiency and sustainability?

What do you think of AI? Could it ever be so well done that it fools real humans? Reno, Nina, and Mike discuss. As Eugene Goostman might say, "Oh, what a fruitful conversation ;-)"

Harvard is reasonably sure their university library houses a book that's bound in human skin. Arsène Houssaye's French treatise, "Des destinées de l'âme" (On the Destiny of the Soul), is an example of anthropodermic bibliopegy. It came to the library in 1934, and the skin was supposedly taken from a woman's back.

Beware the fungus among us--although it's not humungous. Ophiocordyceps unilateralis is a parasite that lodges itself inside ants' brains, controlling their movements and turning them into tiny zombies.

http://www.npr.org/blogs/thesalt/2014/06/10/320321553/the-salad-frontier-why-astronauts-need-to-grow-lettuce-in-space
http://www.npr.org/blogs/thesalt/2014/06/25/325189711/kandinsky-on-a-plate-art-inspired-salad-just-tastes-better?sc=tw
http://robrhinehart.com/?p=298
http://en.wikipedia.org/wiki/Eugene_Goostman
http://www.scottaaronson.com/blog/?p=1858
http://mobile.nytimes.com/blogs/artsbeat/2014/06/05/harvard-confirms-book-is-bound-in-human-skin/?_php=true&_type=blogs&smid=nytimesarts&_r=0
http://www.sciencechannel.com/tv-shows/through-the-wormhole/videos/this-fungus-turns-victims-into-the-crawling-dead.htm
http://www.npr.org/blogs/alltechconsidered/2014/06/25/325188747/these-bathroom-lights-tell-you-where-its-ok-to-go

*Special edition: Episode 23.5 at the Santa Cruz Mystery Spot!
https://www.youtube.com/watch?v=NglZqExcRQk

Critical Social Psychology - for iPad/Mac/PC
Transcript -- Field Experiments

Critical Social Psychology - for iPad/Mac/PC

Play Episode Listen Later Apr 25, 2009


Transcript -- Professor Tom Postmes and Professor Jolanda Jetten look at field experiments in cognitive social psychology, focusing on Daisy Brook and her study on how to make sport teams more effective.


Critical Social Psychology - for iPod/iPhone
Transcript -- Field Experiments

Critical Social Psychology - for iPod/iPhone

Play Episode Listen Later Apr 25, 2009


Transcript -- Professor Tom Postmes and Professor Jolanda Jetten look at field experiments in cognitive social psychology, focusing on Daisy Brook and her study on how to make sport teams more effective.


Views on Vue
UnoCSS with Erik Hanchett - VUE 213

Views on Vue

Play Episode Listen Later Jan 1, 1970 54:56


Erik Hanchett is a Front End Engineer at Amazon Web Services. He joins the show with Steve to talk about UnoCSS. He begins by explaining what it is. They also discuss the differences between UnoCSS, Tailwind CSS, and WindiCSS. He shares his own experience of using UnoCSS and its useful features.

On YouTube
UnoCSS with Erik Hanchett - VUE 213

Sponsors
Chuck's Resume Template
Developer Book Club
Become a Top 1% Dev with a Top End Devs Membership

Links
UnoCSS
unocss/unocss
AWS Amplify - Develop Apps With AWS Amplify

Socials
programwitherik.com
Program With Erik | YouTube
LinkedIn: Erik Hanchett
Twitter: ErikCH

Picks
Erik - Bard
Steve - Defamed by ChatGPT: My Own Bizarre Experience with Artificiality of “Artificial Intelligence”

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
