Podcast appearances and mentions of Melanie Mitchell

American scientist

  • 128 podcasts
  • 195 episodes
  • 54m average episode duration
  • 1 episode every other week
  • Latest episode: May 1, 2025

POPULARITY

2017–2024


Best podcasts about Melanie Mitchell

Latest podcast episodes about Melanie Mitchell

This Week in Google (MP3)
IM 817: Wheel 101 - AI's Impact on Human Behavior

This Week in Google (MP3)

Play Episode Listen Later May 1, 2025 162:53 Transcription Available


Interview with Jacob Ward. Topics:
  • Duolingo launches 148 courses created with AI after sharing plans to replace contractors with AI
  • The Age of Realtime Deepfake Fraud Is Here
  • The Hottest AI Job of 2023 Is Already Obsolete
  • Bridges and Tunnels in Colorado Are Helping Animals Commute
  • A rare glimpse inside the mountain tunnel that carries water to Southern California
  • You can't lick a badger twice
  • Mark Zuckerberg is planning a premium tier and ads for Meta's AI app
  • Zuck video promoting Meta AI learning all about you and your loved ones
  • Melanie Mitchell: LLMs & World Models part 1
  • Universities Have a Computer-Science Problem
  • Take It Down Act heads to Trump's desk
  • Attention Kmart Shoppers
  • Digital creator jobs jump 7.5x since pandemic
  • Rabbit ears
  • Orb.net

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau. Guest: Jacob Ward.

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for ad-free podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: zscaler.com/security bigid.com/im


The Chris Voss Show
The Chris Voss Show Podcast – More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity by Adam Becker

The Chris Voss Show

Play Episode Listen Later Apr 23, 2025 64:22


More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity by Adam Becker

This "wild and utterly engaging narrative" (Melanie Mitchell) shows why Silicon Valley's heartless, baseless, and foolish obsessions—with escaping death, building AI tyrants, and creating limitless growth—are about oligarchic power, not preparing for the future. Tech billionaires have decided that they should determine our futures for us. According to Elon Musk, Jeff Bezos, Sam Altman, and more, the only good future for humanity is one powered by technology: trillions of humans living in space, functionally immortal, served by superintelligent AIs. In More Everything Forever, science journalist Adam Becker investigates these wildly implausible and often profoundly immoral visions of tomorrow—and shows why, in reality, there is no good evidence that they will, or should, come to pass. Nevertheless, these obsessions fuel fears that overwhelm reason—for example, that a rogue AI will exterminate humanity—at the expense of essential work on solving crucial problems like climate change. What's more, these futuristic visions cloak a hunger for power under dreams of space colonies and digital immortality. The giants of Silicon Valley claim that their ideas are based on science, but the reality is darker: they come from a jumbled mix of shallow futurism and racist pseudoscience. More Everything Forever exposes the powerful and sinister ideas that dominate Silicon Valley, challenging us to see how foolish, and dangerous, these visions of the future are.

About the author: Adam Becker is a science writer with a PhD in astrophysics from the University of Michigan and a BA in philosophy and physics from Cornell. He has written for the New York Times, the BBC, NPR, Scientific American, New Scientist, Quanta, Undark, Aeon, and others. He has also recorded a video series with the BBC, and has appeared on numerous radio shows and podcasts, including Ologies, The Story Collider, and KQED Forum. He lives in California.

The Object-Oriented UX Podcast
069 - World Models, OOUX, & AI Consciousness

The Object-Oriented UX Podcast

Play Episode Listen Later Mar 6, 2025 32:56


In this solo episode of the podcast, Sophia discusses Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell, a professor at the Santa Fe Institute. Are world models the answer to creating human-like intelligence in AI? What are the arguments for and against world models in training AI? And can OOUX help in creating these models?

LINKS:
  • Join Melanie's Substack
  • Register for UX Level-Up
  • OOUX for AI Course

Science Friday
'Artificial General Intelligence' Is Apparently Coming. What Is It?

Science Friday

Play Episode Listen Later Jan 16, 2025 17:44


For years, artificial intelligence companies have heralded the coming of artificial general intelligence, or AGI. OpenAI, which makes the chatbot ChatGPT, has said that their founding goal was to build AGI that "benefits all of humanity" and "gives everyone incredible new capabilities." Google DeepMind cofounder Dr. Demis Hassabis has described AGI as a system that "should be able to do pretty much any cognitive task that humans can do." Last year, OpenAI CEO Sam Altman said AGI will arrive sooner than expected, but that it would matter much less than people think. And earlier this week, Altman said in a blog post that the company knows how to build AGI as we've "traditionally understood it."

But what is artificial general intelligence supposed to be, anyway?

Ira Flatow is joined by Dr. Melanie Mitchell, a professor at the Santa Fe Institute who studies cognition in artificial intelligence and machine systems. They talk about the history of AGI, how biologists study animal intelligence, and what could come next in the field.

Transcripts for each segment will be available after the show airs on sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

People in Transition
122. Melanie Mitchell Wexler - Career Coach | Resume, LinkedIn™, Job Search & Interview Specialist | Former Recruiter

People in Transition

Play Episode Listen Later Jan 12, 2025 37:17


Discover the secrets to job search success with career coach Melanie Wexler. Welcome to another informative episode of "People in Transition", the podcast where we explore stories and strategies to help you navigate life's pivotal job changes with confidence.

In this episode, I'm thrilled to introduce Melanie Wexler—a dynamic career coach, job search expert, and LinkedIn strategist with over 20 years of experience in recruitment. Melanie has dedicated her career to empowering mid- to executive-level professionals to not just land jobs, but build careers that align with their unique values, strengths, and aspirations. With her deep expertise in resume optimization, LinkedIn branding, and interview preparation, Melanie equips her clients with the tools they need to stand out in today's competitive job market. But her approach goes beyond the technical; she also helps clients build the confidence and mindset needed for long-term success.

Here's what you'll learn in our conversation:
  • How recruiters think: their role is to find the best candidate for the company, not the other way around.
  • The power of LinkedIn: did you know 90% of recruiters look at your LinkedIn profile before your resume? Is your profile telling your story?
  • Reflect before you act: your job strategy should begin with understanding your likes, dislikes, and aspirations to build a targeted career plan.
  • Embrace AI and technology: don't fear tools like applicant tracking systems or AI—use them to your advantage.
  • Showcase your skills effectively: it's not enough to list hard and soft skills; you need to show how you've applied them to achieve results.
  • Build a support network: talking to other job seekers can provide invaluable insights and encouragement.
  • Bet on yourself: remember, your next great opportunity is out there—you've got what it takes to find it.

If you're looking to transition into a new role, re-enter the workforce, or simply find more meaningful work, this episode offers actionable advice and encouragement to help you succeed. Listen now to gain Melanie's insights, and feel free to reach out with any questions for Melanie or me. Don't forget to share this episode with others who could benefit from these valuable tips!

Many Minds
The rise of machine culture

Many Minds

Play Episode Listen Later Oct 31, 2024 80:17


The machines are coming. Scratch that—they're already here: AIs that propose new combinations of ideas; chatbots that help us summarize texts or write code; algorithms that tell us who to friend or follow, what to watch or read. For a while the reach of intelligent machines may have seemed somewhat limited. But not anymore—or, at least, not for much longer. The presence of AI is growing, accelerating, and, for better or worse, human culture may never be the same.

My guest today is Dr. Iyad Rahwan. Iyad directs the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin. Iyad is a bit hard to categorize. He's equal parts computer scientist and artist; one magazine profile described him as "the Anthropologist of AI." Labels aside, his work explores the emerging relationships between AI, human behavior, and society. In a recent paper, Iyad and colleagues introduced a framework for understanding what they call "machine culture." The framework offers a way of thinking about the different routes through which AI may transform—is transforming—human culture.

Here, Iyad and I talk about his work as a painter and how he brings AI into the artistic process. We discuss whether AIs can make art by themselves and whether they may eventually develop good taste. We talk about how AlphaGo Zero upended the world of Go and about how LLMs might be changing how we speak. We consider what AIs might do to cultural diversity. We discuss the field of cultural evolution and how it provides tools for thinking about this brave new age of machine culture. Finally, we discuss whether any spheres of human endeavor will remain untouched by AI influence.

Before we get to it, a humble request: If you're enjoying the show—and it seems that many of you are—we would be ever grateful if you could let the world know. You might do this by leaving a rating or review on Apple Podcasts, or maybe a comment on Spotify. You might do this by giving us a shout out on the social media platform of your choice. Or, if you prefer less algorithmically mediated avenues, you might do this just by telling a friend about us face-to-face. We're hoping to grow the show and the best way to do that is through listener endorsements and word of mouth. Thanks in advance, friends.

Alright, on to my conversation with Iyad Rahwan. Enjoy!

A transcript of this episode will be available soon.

Notes and links
3:00 – Images from Dr. Rahwan's 'Faces of Machine' portrait series. One of the portraits from the series serves as our tile art for this episode.
11:30 – The "stochastic parrots" term comes from an influential paper by Emily Bender and colleagues.
18:30 – A popular article about DALL-E and the "avocado armchair."
21:30 – Ted Chiang's essay, "Why A.I. isn't going to make art."
24:00 – An interview with Boris Eldagsen, who won the Sony World Photography Awards in March 2023 with an image that was later revealed to be AI-generated.
28:30 – A description of the concept of "science fiction science."
29:00 – Though widely attributed to different sources, Isaac Asimov appears to have developed the idea that good science fiction predicts not the automobile, but the traffic jam.
30:00 – The academic paper describing the Moral Machine experiment. You can judge the scenarios for yourself (or design your own scenarios) here.
30:30 – An article about the Nightmare Machine project; an article about the Deep Empathy project.
37:30 – An article by Cesar Hidalgo and colleagues about the relationship between television/radio and global celebrity.
41:30 – An article by Melanie Mitchell (former guest!) on AI and analogy. A popular piece about that work.
42:00 – A popular article describing the study of whether AIs can generate original research ideas. The preprint is here.
46:30 – For more on AlphaGo (and its successors, AlphaGo Zero and AlphaZero), see here.
48:30 – The study finding that the novelty of human Go playing increased due to the influence of AlphaGo.
51:00 – A blogpost delving into the idea that ChatGPT overuses certain words, including "delve." A recent preprint by Dr. Rahwan and colleagues, presenting evidence that "delve" (and other words overused by ChatGPT) are now being used more in human spoken communication.
55:00 – A paper using simulations to show how LLMs can "collapse" when trained on data that they themselves generated.
1:01:30 – A review of the literature on filter bubbles, echo chambers, and polarization.
1:02:00 – An influential study by Dr. Chris Bail and colleagues suggesting that exposure to opposing views might actually increase polarization.
1:04:30 – A book by Geoffrey Hodgson and Thorbjørn Knudsen, who are often credited with developing the idea of "generalized Darwinism" in the social sciences.
1:12:00 – An article about Google's NotebookLM podcast-like audio summaries.
1:17:30 – An essay by Ursula Le Guin on children's literature and the Jungian "shadow."

Recommendations
The Secret of Our Success, Joseph Henrich
"Machine Behaviour," Iyad Rahwan et al.

Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.
For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).
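The notes above mention a preprint finding that words ChatGPT overuses, like "delve," are now appearing more in human speech. A toy Python sketch of the basic measurement behind such a claim — a word's rate per 1,000 words, compared across two corpora. The mini-corpora here are invented for illustration, not the study's actual data:

```python
# Toy illustration: how often does a marker word like "delve" appear
# per 1,000 words in two different text samples?

def rate_per_1000(word, text):
    """Occurrences of `word` per 1,000 whitespace-separated tokens."""
    tokens = text.lower().split()
    return 1000 * tokens.count(word) / len(tokens)

# Invented mini-corpora standing in for pre- and post-ChatGPT speech.
before = "we will examine the data and then discuss the results in detail"
after = "let us delve into the data and delve deeper into the results"

print(rate_per_1000("delve", before))  # 0.0
print(rate_per_1000("delve", after))   # 2 of 12 tokens, i.e. ~166.7
```

A real analysis would of course use large transcript corpora and control for overall vocabulary drift; this only shows the shape of the metric.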

Machine Learning Street Talk
Decompiling Dreams: A New Approach to ARC? - Alessandro Palmarini

Machine Learning Street Talk

Play Episode Listen Later Oct 19, 2024 51:34


Alessandro Palmarini is a post-baccalaureate researcher at the Santa Fe Institute working under the supervision of Melanie Mitchell. He completed his undergraduate degree in Artificial Intelligence and Computer Science at the University of Edinburgh. Palmarini's current research focuses on developing AI systems that can efficiently acquire new skills from limited data, inspired by François Chollet's work on measuring intelligence. His work builds upon the DreamCoder program synthesis system, introducing a novel approach called "dream decompiling" to improve library learning in inductive program synthesis. Palmarini is particularly interested in addressing the Abstraction and Reasoning Corpus (ARC) challenge, aiming to create AI systems that can perform abstract reasoning tasks more efficiently than current approaches. His research explores the balance between computational efficiency and data efficiency in AI learning processes.

Do you want to work on ARC with the MindsAI team (current ARC winners)? MLST is sponsored by Tufa Labs. Focus: ARC, LLMs, test-time compute, active inference, system-2 reasoning, and more. Future plans: expanding to complex environments like Warcraft 2 and Starcraft 2. Interested? Apply for an ML research position: benjamin@tufa.ai

TOC:
1. Intelligence Measurement in AI Systems
[00:00:00] 1.1 Defining Intelligence in AI Systems
[00:02:00] 1.2 Research at Santa Fe Institute
[00:04:35] 1.3 Impact of Gaming on AI Development
[00:05:10] 1.4 Comparing AI and Human Learning Efficiency
2. Efficient Skill Acquisition in AI
[00:06:40] 2.1 Intelligence as Skill Acquisition Efficiency
[00:08:25] 2.2 Limitations of Current AI Systems in Generalization
[00:09:45] 2.3 Human vs. AI Cognitive Processes
[00:10:40] 2.4 Measuring AI Intelligence: Chollet's ARC Challenge
3. Program Synthesis and ARC Challenge
[00:12:55] 3.1 Philosophical Foundations of Program Synthesis
[00:17:14] 3.2 Introduction to Program Induction and ARC Tasks
[00:18:49] 3.3 DreamCoder: Principles and Techniques
[00:27:55] 3.4 Trade-offs in Program Synthesis Search Strategies
[00:31:52] 3.5 Neural Networks and Bayesian Program Learning
4. Advanced Program Synthesis Techniques
[00:32:30] 4.1 DreamCoder and Dream Decompiling Approach
[00:39:00] 4.2 Beta Distribution and Caching in Program Synthesis
[00:45:10] 4.3 Performance and Limitations of Dream Decompiling
[00:47:45] 4.4 Alessandro's Approach to ARC Challenge
[00:51:12] 4.5 Conclusion and Future Discussions

Refs: full reference list in the YouTube video description, show notes, and MP3 metadata. Show notes: https://www.dropbox.com/scl/fi/x50201tgqucj5ba2q4typ/Ale.pdf?rlkey=0ubvk7p5gtyx1gpownpdadim8&st=5pniu3nq&dl=0
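The episode centers on inductive program synthesis: searching over compositions of library primitives for a program consistent with a handful of input/output examples. A minimal Python sketch of that idea on toy ARC-style grids — the four-primitive DSL and the task below are invented for illustration and are vastly simpler than DreamCoder or the real ARC corpus:

```python
# Minimal inductive program synthesis: enumerate sequences of grid
# primitives, shortest first, until one explains all training pairs.
from itertools import product

# Tiny invented DSL: each primitive maps a grid (tuple of row-tuples)
# to a new grid.
def identity(g):  return g
def flip_h(g):    return tuple(row[::-1] for row in g)   # mirror left-right
def flip_v(g):    return g[::-1]                          # mirror top-bottom
def transpose(g): return tuple(zip(*g))                   # swap rows/columns

PRIMITIVES = [identity, flip_h, flip_v, transpose]

def run(program, grid):
    """Apply a sequence of primitives to a grid, left to right."""
    for f in program:
        grid = f(grid)
    return grid

def induce(train_pairs, max_len=2):
    """Return the first (shortest) program consistent with every
    training pair, or None if no program up to max_len works."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in train_pairs):
                return program
    return None

# One training pair: the output is the input mirrored left-right.
train = [(((1, 2), (3, 4)), ((2, 1), (4, 3)))]
prog = induce(train)
print([f.__name__ for f in prog])  # ['flip_h']
```

Library learning (DreamCoder's contribution, and the target of "dream decompiling") goes a step further: reusable subprograms discovered during search are added back into `PRIMITIVES`, so later searches get shorter. This sketch omits that loop entirely.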

Recovery Recharged with Ellen Stewart: The Pushy Broad From The Bronx®
The Job Jungle: AI Innovations and the Art of Scam Spotting with Melanie Mitchell Wexler

Recovery Recharged with Ellen Stewart: The Pushy Broad From The Bronx®

Play Episode Listen Later Oct 9, 2024


Expert career coach Melanie Wexler is back to answer it all! Dive into "The Job Jungle" to explore AI's impact on job markets and learn expert tips to avoid job scams. Discover how technology reshapes recruitment and the essential tricks to keep your job search safe. Tune in for a guide through the innovative and the deceptive sides of modern employment. A very important episode not to miss!

Transformation Talk Radio
The Job Jungle: AI Innovations and the Art of Scam Spotting with Melanie Mitchell Wexler

Transformation Talk Radio

Play Episode Listen Later Oct 9, 2024 26:26


Expert career coach Melanie Wexler is back to answer it all! Dive into "The Job Jungle" to explore AI's impact on job markets and learn expert tips to avoid job scams. Discover how technology reshapes recruitment and the essential tricks to keep your job search safe. Tune in for a guide through the innovative and the deceptive sides of modern employment. A very important episode not to miss!

Many Minds
From the archive: What does ChatGPT really know?

Many Minds

Play Episode Listen Later Jul 24, 2024 55:10


Hi friends, we're on a brief summer break at the moment. We'll have a new episode for you in August. In the meanwhile, enjoy this pick from our archives!

[originally aired January 25, 2023]

By now you've probably heard about the new chatbot called ChatGPT. There's no question it's something of a marvel. It distills complex information into clear prose; it offers instructions and suggestions; it reasons its way through problems. With the right prompting, it can even mimic famous writers. And it does all this with an air of cool competence, of intelligence. But, if you're like me, you've probably also been wondering: What's really going on here? What are ChatGPT—and other large language models like it—actually doing? How much of their apparent competence is just smoke and mirrors? In what sense, if any, do they have human-like capacities?

My guest today is Dr. Murray Shanahan. Murray is Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind. He's the author of numerous articles and several books at the lively intersections of artificial intelligence, neuroscience, and philosophy. Very recently, Murray put out a paper titled 'Talking about Large Language Models', and it's the focus of our conversation today. In the paper, Murray argues that—tempting as it may be—it's not appropriate to talk about large language models in anthropomorphic terms. Not yet, anyway.

Here, we chat about the rapid rise of large language models and the basics of how they work. We discuss how a model that—at its base—simply does "next-word prediction" can be engineered into a savvy chatbot like ChatGPT. We talk about why ChatGPT lacks genuine "knowledge" and "understanding"—at least as we currently use those terms. And we discuss what it might take for these models to eventually possess richer, more human-like capacities. Along the way, we touch on: emergence, prompt engineering, embodiment and grounding, image generation models, Wittgenstein, the intentional stance, soft robots, and "exotic mind-like entities."

Before we get to it, just a friendly reminder: applications are now open for the Diverse Intelligences Summer Institute (or DISI). DISI will be held this June/July in St Andrews, Scotland—the program consists of three weeks of intense interdisciplinary engagement with exactly the kinds of ideas and questions we like to wrestle with here on this show. If you're intrigued—and I hope you are!—check out disi.org for more info.

Alright friends, on to my decidedly human chat, with Dr. Murray Shanahan. Enjoy!

The paper we discuss is here. A transcript of this episode is here.

Notes and links
6:30 – The 2017 "breakthrough" article by Vaswani and colleagues.
8:00 – A popular article about GPT-3.
10:00 – A popular article about some of the impressive—and not so impressive—behaviors of ChatGPT. For more discussion of ChatGPT and other large language models, see another interview with Dr. Shanahan, as well as interviews with Emily Bender and Margaret Mitchell, with Gary Marcus, and with Sam Altman (CEO of OpenAI, which created ChatGPT).
14:00 – A widely discussed paper by Emily Bender and colleagues on the "dangers of stochastic parrots."
19:00 – A blog post about "prompt engineering". Another blog post about the concept of Reinforcement Learning through Human Feedback, in the context of ChatGPT.
30:00 – One of Dr. Shanahan's books is titled Embodiment and the Inner Life.
39:00 – An example of a robotic agent, SayCan, which is connected to a language model.
40:30 – On the notion of embodiment in the cognitive sciences, see the classic book by Francisco Varela and colleagues, The Embodied Mind.
44:00 – For a detailed primer on the philosophy of Ludwig Wittgenstein, see here.
45:00 – See Dr. Shanahan's general audience essay on "conscious exotica" and the space of possible minds.
49:00 – See Dennett's book, The Intentional Stance.

Dr. Shanahan recommends:
Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell (see also our earlier episode with Dr. Mitchell)
'Abstraction for Deep Reinforcement Learning', by M. Shanahan and M. Mitchell

You can read more about Murray's work on his website and follow him on Twitter.

Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://disi.org), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/). You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts. **You can now subscribe to the Many Minds newsletter here!** We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website (https://disi.org/manyminds/), or follow us on Twitter: @ManyMindsPod.
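The conversation above repeatedly returns to the point that, at base, a language model just does next-word prediction. A minimal Python sketch of that objective, with a bigram count model standing in for a transformer and an invented toy corpus; real models predict over subword tokens with learned probabilities, not raw counts:

```python
# Next-word prediction in miniature: count which word follows which,
# then predict the most frequent follower.
from collections import Counter, defaultdict

# Invented toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, tally the words observed immediately after it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Greedy prediction: the most frequent follower seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' ("cat" follows "the" twice; others once)
```

Chaining `predict_next` from a seed word generates text, which is all "generation" means here; the chatbot behaviors discussed in the episode come from layering training and feedback on top of exactly this kind of objective, at vastly greater scale.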

Shimon's Tribe
Consciousness: Not just a problem for philosophers | Dispatches from The Well Ep. 5

Shimon's Tribe

Play Episode Listen Later Jul 6, 2024 43:53


Consciousness isn't just a problem for philosophers. On this episode of Dispatches, Kmele sat down with scientists, a mathematician, a spiritual leader, and an entrepreneur, all trying to get to the heart of "the feeling of life itself."

We're diving deep into the "hard problem of consciousness." Kmele combines the perspectives of five different scientists, philosophers, and spiritual leaders to approach one of humanity's most pressing questions: what is consciousness? In the AI age, the question of consciousness is more prevalent than ever. Is every single thing in the universe self-aware? What does it actually mean to be conscious? Are our bodies really just a vessel for our thoughts? Kmele asks these questions, and many more, in the most thought-provoking episode yet. This is Dispatches from The Well.

Featuring: Sir Roger Penrose, Christof Koch, Melanie Mitchell, Reid Hoffman, Swami Sarvapriyananda

About The Well: Do we inhabit a multiverse? Do we have free will? What is love? Is evolution directional? There are no simple answers to life's biggest questions, and that's why they're the questions occupying the world's brightest minds. So what do they think? How is the power of science advancing understanding? How are philosophers and theologians tackling these fascinating questions? Let's dive into The Well.

レアジョブ英会話 Daily News Article Podcast
Cats on the moon? Google’s AI tool is producing misleading responses that have experts worried

レアジョブ英会話 Daily News Article Podcast

Play Episode Listen Later Jun 21, 2024 2:02


Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself. Now it comes up with an instant answer generated by artificial intelligence—which may or may not be correct. “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google's newly retooled search engine in response to a query by an Associated Press reporter. It added, “For example, Neil Armstrong said, ‘One small step for man' because it was a cat's step. Buzz Aldrin also deployed cats on the Apollo 11 mission.” None of this is true. Similar errors—some funny, others harmful falsehoods—have been shared on social media since Google last month unleashed AI Overviews, a makeover of its search page that frequently puts the summaries on top of search results. The new feature has alarmed experts who warn it could perpetuate bias and misinformation and endanger people looking for help in an emergency. “Given how untrustworthy it is, I think this AI Overviews feature is very irresponsible and should be taken offline,” Melanie Mitchell, an AI researcher at the Santa Fe Institute, said in an email to the AP. Google said in a statement that it's taking “swift action” to fix errors that violate its content policies, and that it is using those examples to "develop broader improvements" that are already rolling out. But in most cases, Google claims the system is working the way it should thanks to extensive testing before its public release. "The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web," Google said in a written statement. "Many of the examples we've seen have been uncommon queries, and we've also seen examples that were doctored or that we couldn't reproduce." This article was provided by The Associated Press.

AI for the Rest of Us
Rise of the LLMs

AI for the Rest of Us

Play Episode Listen Later Jun 20, 2024 26:51


Today we're diving into the world of large language models, or LLMs, like ChatGPT, Google Gemini and Claude. When they burst onto the scene a couple of years ago, it felt like the future was suddenly here. Now people use them to write wedding toasts, decide what to have for dinner, compose songs and handle all sorts of writing tasks. Will these chatbots eventually get better than humans? Will they take our jobs? Will they lead to a flood of disinformation? And will they perpetuate the same biases that we humans have? Joining us to grapple with those questions is Greg Durrett, an associate professor of computer science at UT Austin. He's worked for many years in the field of natural language processing, or NLP—which aims to give computers the ability to understand human language. His current research is about improving the way LLMs work and extending them to do more useful things like automated fact-checking and deductive reasoning.
Dig Deeper:
- A jargon-free explanation of how AI large language models work, Ars Technica
- Video: But what is a GPT? Visual intro to transformers, 3Blue1Brown (a.k.a. Grant Sanderson)
- ChatGPT Is a Blurry JPEG of the Web, The New Yorker (Ted Chiang says it's useful to think of LLMs as compressed versions of the web, rather than intelligent and creative beings)
- A Conversation With Bing's Chatbot Left Me Deeply Unsettled, New York Times (Kevin Roose describes interacting with an LLM that “tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”)
- The Full Story of Large Language Models and RLHF (how LLMs came to be and how they work)
- AI's challenge of understanding the world, Science (Computer scientist Melanie Mitchell explores how much LLMs truly understand the world and how hard it is for us to comprehend their inner workings)
- Google's A.I. Search Errors Cause a Furor Online, New York Times (The company's latest LLM-powered search feature has erroneously told users to eat glue and rocks, provoking a backlash among users)
- How generative AI is boosting the spread of disinformation and propaganda, MIT Technology Review
- Algorithms are pushing AI-generated falsehoods at an alarming rate. How do we stop this?, The Conversation
Episode Credits: Our co-hosts are Marc Airhart, science writer and podcaster in the College of Natural Sciences, and Casey Boyle, associate professor of rhetoric and director of UT's Digital Writing & Research Lab. Executive producers are Christine Sinatra and Dan Oppenheimer. Sound design and audio editing by Robert Scaramuccia. Theme music is by Aiolos Rue. Interviews are recorded at the Liberal Arts ITS recording studio. Cover image for this episode generated with Midjourney, a generative AI tool.
About AI for the Rest of Us: AI for the Rest of Us is a joint production of The University of Texas at Austin's College of Natural Sciences and College of Liberal Arts. This podcast is part of the University's Year of AI initiative. The opinions expressed in this podcast represent the views of the hosts and guests, and not of The University of Texas at Austin. You can listen via Apple Podcasts, Spotify, Amazon Podcasts, RSS, or anywhere you get your podcasts. You can also listen on the web at aifortherest.net. Have questions or comments? Contact: mairhart[AT]austin.utexas.edu

Scaling Theory
Why “Scaling Theory”

Scaling Theory

Play Episode Listen Later Mar 16, 2024 7:28


In this first episode, Dr. Thibault Schrepel (@⁠ProfSchrepel⁠) introduces “Scaling Theory”, a podcast dedicated to the power laws behind the growth of companies, technologies, legal and living systems. *** References: ➝ Charles Darwin, On the Origin of Species (1859) ➝ Melanie Mitchell, Complexity: A Guided Tour (2011) ➝ M. Mitchell Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos (2019) ➝ John H. Miller & Scott Page, Complex Adaptive Systems: An Introduction to Computational Models of Social Life (2007) ➝ W. Brian Arthur, Complexity and the Economy (2014) ➝ Geoffrey West, Scale: The Universal Laws of Life and Death in Organisms, Cities and Companies (2017)

The Science Show -  Separate stories podcast
Critical thinking vital for spotting fakes

The Science Show - Separate stories podcast

Play Episode Listen Later Mar 2, 2024 9:04


Melanie Mitchell says teaching critical thinking is vital so young people can be safe online and detect machine-generated audio, images or text.

Macro Hive Conversations With Bilal Hafeez
Ep. 203: Melanie Mitchell on Can Artificial Intelligence Beat Human Thinking

Macro Hive Conversations With Bilal Hafeez

Play Episode Listen Later Feb 16, 2024 52:30


Melanie Mitchell is one of the leading AI researchers in the world. She is a Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book is entitled ‘Artificial Intelligence: A Guide for Thinking Humans'. This episode covers: the evolution of AI from cybernetics to neural networks and deep learning, how big (labelled) data helped AI, what intelligence is, and much more.    Follow us here for more amazing insights: https://macrohive.com/home-prime/ https://twitter.com/Macro_Hive https://www.linkedin.com/company/macro-hive

ESO Network – The ESO Network
Monkeeing Around – Don't Look a Gift Horse in the Mouth with Melanie Mitchell

ESO Network – The ESO Network

Play Episode Listen Later Feb 12, 2024 44:23


This week we're MONKEEING AROUND with Melanie Mitchell, author of Monkee Magic: A Book about a TV Show about a Band, to chat about the classic episode, “Don't Look a Gift Horse in the Mouth!” Monkeeing Around is a part of the ESO Podcast Network. The post Monkeeing Around – Don't Look a Gift Horse in the Mouth with Melanie Mitchell appeared first on The ESO Network.

Monkeeing Around
Monkeeing Around - Don't Look a Gift Horse in the Mouth with Melanie Mitchell

Monkeeing Around

Play Episode Listen Later Feb 12, 2024 44:23


This week we're MONKEEING AROUND with Melanie Mitchell, author of Monkee Magic: A Book about a TV Show about a Band, to chat about the classic episode, "Don't Look a Gift Horse in the Mouth!" Monkeeing Around is a part of the ESO Podcast Network, Executive Producer Mike Faber.

Big Think
Is consciousness an illusion? 5 experts explain

Big Think

Play Episode Listen Later Feb 8, 2024 44:52


“If science aims to describe everything, how can it not describe the simple fact of our existence?” On this episode of Dispatches, Kmele speaks with the scientists, mathematicians, and spiritual leaders trying to do just that. This video is an episode from @The-Well, our publication about ideas that inspire a life well-lived, created with the @JohnTempletonFoundation. In the newest episode of Dispatches from The Well, we're diving deep into the “hard problem of consciousness.” Here, Kmele combines the perspectives of five different scientists, philosophers, and spiritual leaders to approach one of humanity's most pressing questions: what is consciousness? In the AI age, the question of consciousness is more prevalent than ever. Is every single thing in the universe self-aware? What does it actually mean to be conscious? Are our bodies really just a vessel for our thoughts? Kmele asks these questions, and many more, in the most thought-provoking episode yet. This is Dispatches from The Well. Featuring: Sir Roger Penrose, Christof Koch, Melanie Mitchell, Reid Hoffman, Swami Sarvapriyananda About Kmele Foster: Kmele Foster is a media entrepreneur, commentator, and regular contributor to various national publications. He is the co-founder and co-host of The Fifth Column, a popular media criticism podcast. He is the head of content at Founders Fund, a San Francisco-based venture capital firm investing in companies building revolutionary technologies, and a partner at Freethink, a digital media company focused on the people and ideas changing our world. Kmele also serves on the Board of Directors of the Foundation for Individual Rights and Expression (FIRE). Get Smarter, Faster With Interviews From The World's Biggest Thinkers. Follow This Podcast And Turn On The Notifications. Rate Us With 5 Stars. Share This Episode. Learn more about your ad choices. Visit megaphone.fm/adchoices

FT News Briefing
Superintelligent AI: can chatbots think?

FT News Briefing

Play Episode Listen Later Dec 28, 2023 29:27 Very Popular


Are generative AI systems such as OpenAI's ChatGPT really intelligent? Large language models such as GPT-4 appear to use human-level cognitive abilities when they engage in legal reasoning, write essays or solve complex problems. Hosts John Thornhill and Madhumita Murgia speak to Emily Bender, professor of computational linguistics at the University of Washington, to find out what's really happening under the hood, and also hear from Pablo Arredondo of CaseText, which develops AI tools for lawyers; influential computer scientist Melanie Mitchell, professor at the Santa Fe Institute; and Konstantin Arkoudas, an AI expert who's worked on Amazon's Alexa.
Free links:
- OpenAI set to launch store as ChatGPT reaches 100mn users
- How to keep the lid on the Pandora's box of AI
- We need a political Alan Turing to design AI safeguards
- ‘I've never seen anything like this': how OpenAI's dramatic weekend unfolded
Tech Tonic is presented by Madhumita Murgia and John Thornhill. Senior producer is Edwin Lane and the producer is Josh Gabert-Doyon. Executive producer is Manuela Saragosa. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT's head of audio is Cheryl Brumley. Read a transcript of this episode on FT.com. Hosted on Acast. See acast.com/privacy for more information.

Middle Age Mess
Middle Age Mess, Episode 4 - Melanie Mitchell Wexler

Middle Age Mess

Play Episode Listen Later Dec 13, 2023 33:31


Checking out of the corporate rat race and into your own gig is always terrifying - especially when the big 5-0 is on the horizon.  Melanie Mitchell Wexler not only figured out the secret sauce to be successful in this transition but she now coaches other would-be entrepreneurs in making the leap themselves.  Tune into Middle Age Mess and listen to how she did it, and how you can do it, too.Advertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy

For Humanity: An AI Safety Podcast
"Team Save Us vs Team Kill Us" For Humanity, An AI Safety Podcast Episode #6: The Munk Debate

For Humanity: An AI Safety Podcast

Play Episode Listen Later Dec 6, 2023 43:39


In Episode #6, Team Save Us vs. Team Kill Us, host John Sherman weaves together highlights and analysis of The Munk Debate on AI Safety to show the case for and against AI as a human extinction risk. The debate took place in Toronto in June 2023, and it remains entirely current and relevant today and stands alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency. In this Munk Debate, you'll meet two teams: Max Tegmark and Yoshua Bengio on Team Save Us (John's title, not theirs), and Yann Lecun and Melanie Mitchell on Team Kill Us (they're called pro/con in the debate; Kill v Save is all John). Host John Sherman adds in some current events and colorful analysis (and language) throughout. This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. Let's call it facts and analysis. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity. RESOURCES THE MUNK DEBATES: https://munkdebates.com Max Tegmark ➡️X: / tegmark ➡️Max's Website: https://space.mit.edu/home/tegmark ➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/... 
➡️Future of Life Institute: https://futureoflife.org Yoshua Bengio ➡️Website: https://yoshuabengio.org/ Melanie Mitchell ➡️Website: https://melaniemitchell.me/ ➡️X: https://x.com/MelMitchell1?s=20 Yann Lecun ➡️Google Scholar: https://scholar.google.com/citations?... ➡️X: https://x.com/ylecun?s=20 #AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION #YANNLECUN #MELANIEMITCHELL #MAXTEGMARK #YOSHUABENGIO

For Humanity: An AI Safety Podcast
Team Save Us vs Team Kill Us: For Humanity, An AI Safety Podcast Episode #6: The Munk Debate TRAILER

For Humanity: An AI Safety Podcast

Play Episode Listen Later Dec 3, 2023 1:43


Want to see the most important issue in human history, extinction from AI, robustly debated, live and in person? It doesn't happen nearly often enough. In our Episode #6, Team Save Us vs. Team Kill Us, TRAILER, John Sherman weaves together highlights and analysis of The Munk Debate on AI Safety to show the case for and against AI as a human extinction risk. The debate took place in June 2023, and it remains entirely current and relevant today and stands alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency. In this Munk Debate, you'll meet two teams: Max Tegmark and Yoshua Bengio on Team Save Us (John's title, not theirs), and Yann Lecun and Melanie Mitchell on Team Kill Us (they're called pro/con in the debate; Kill v Save is all John). Host John Sherman adds in some current events and colorful analysis (and language) throughout. This is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. Let's call it facts and analysis. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity. 
RESOURCES THE MUNK DEBATES: https://munkdebates.com Max Tegmark ➡️X: https://twitter.com/tegmark ➡️Max's Website: https://space.mit.edu/home/tegmark ➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/... ➡️Future of Life Institute: https://futureoflife.org Yoshua Bengio ➡️Website: https://yoshuabengio.org/ Melanie Mitchell ➡️Website: https://melaniemitchell.me/ ➡️X: https://x.com/MelMitchell1?s=20 Yann Lecun ➡️Google Scholar: https://scholar.google.com/citations?... ➡️X: https://x.com/ylecun?s=20 #AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION #YANNLECUN #MELANIEMITCHELL #MAXTEGMARK #YOSHUABENGIO

FT Tech Tonic
Superintelligent AI: can chatbots think?

FT Tech Tonic

Play Episode Listen Later Nov 28, 2023 28:36


Are generative AI systems such as OpenAI's ChatGPT really intelligent? Large language models such as GPT-4 appear to use human-level cognitive abilities when they engage in legal reasoning, write essays or solve complex problems. Hosts John Thornhill and Madhumita Murgia speak to Emily Bender, professor of computational linguistics at the University of Washington, to find out what's really happening under the hood, and also hear from Pablo Arredondo of CaseText, which develops AI tools for lawyers; influential computer scientist Melanie Mitchell, professor at the Santa Fe Institute; and Konstantin Arkoudas, an AI expert who's worked on Amazon's Alexa.
Free links:
- OpenAI set to launch store as ChatGPT reaches 100mn users
- How to keep the lid on the Pandora's box of AI
- We need a political Alan Turing to design AI safeguards
- ‘I've never seen anything like this': how OpenAI's dramatic weekend unfolded
Tech Tonic is presented by Madhumita Murgia and John Thornhill. Senior producer is Edwin Lane and the producer is Josh Gabert-Doyon. Executive producer is Manuela Saragosa. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT's head of audio is Cheryl Brumley. Read a transcript of this episode on FT.com. Hosted on Acast. See acast.com/privacy for more information.

The Theory of Anything
Episode 70: Sparks of Artificial General Intelligence?

The Theory of Anything

Play Episode Listen Later Nov 20, 2023 134:47


How does ChatGPT really work? Is there a relationship between a program like ChatGPT and artificial general intelligence (AGI)? This time we review the famous paper "Sparks of Artificial General Intelligence: Early Experiments with GPT-4" from Microsoft Research as well as Melanie Mitchell's criticisms of it. Other papers mentioned: The Unreasonable Effectiveness of Recurrent Neural Networks (2015) GPT-4 Technical Report (2023) Language Models are Few-Shot Learners (2020) --- Send in a voice message: https://podcasters.spotify.com/pod/show/four-strands/message Support this podcast: https://podcasters.spotify.com/pod/show/four-strands/support

Tech Leader Talk
Managing Identity Security In A Multi-Cloud Environment – Eric Olden

Tech Leader Talk

Play Episode Listen Later Nov 9, 2023 36:27


Are you successfully managing identity security in a multi-cloud environment? Today, I'm talking about that with Eric Olden, who is the CEO and Co-Founder of Strata Identity, which is a leader in identity orchestration. Eric talks about why technology companies should be concerned with application identity modernization. He discusses how to handle identity security when operating in a multi-cloud environment.  He also talks about how to move legacy applications to the cloud while maintaining security and compatibility. And, Eric shares his thoughts about creating a strong company culture from the very beginning of the company.  He also discusses how the culture can evolve as the company grows and talks about how he's done that with the companies he built. I think you will enjoy this conversation and get some valuable insights to implement in your own business. “The most common thread throughout the startup and the enterprise experience has been how critical it is to develop a very deliberate culture.” – Eric Olden Today on the Tech Leader Talk podcast: - The importance of application identity modernization for technology companies - Developing and evolving a company's culture - Edge computing trends - Successfully moving legacy applications to the cloud  Resource: Get help solving your toughest challenge:  https://strata.io/podcast Book: Artificial Intelligence – A Guide for Thinking Humans by Melanie Mitchell - https://www.amazon.com/Artificial-Intelligence/dp/0241404835 Connect with Eric Olden: LinkedIn:  https://www.linkedin.com/in/boughtnotsold Website:  https://strata.io Thanks for listening! Be sure to get your free copy of Steve's latest book, Cracking the Patent Code, and discover his proven system for identifying and protecting your most valuable inventions. Get the book at https://stevesponseller.com/book.

Evolvepreneur®  (After Hours)
EPS06:49 [Melanie Mitchell Wexler] ​​​​​​​The Accidental Entrepreneur

Evolvepreneur® (After Hours)

Play Episode Listen Later Oct 24, 2023 46:38


Welcome to the Evolvepreneur (After Hours) Show. I am your Special Host, Mechelle McDonald. Join me today where we dig deep with our guests and get you the best concepts and strategies to fast-track your business. My very special guest today is Melanie Mitchell Wexler ... Melanie Mitchell Wexler worked as a recruiter for 20 years before pivoting to career coaching to help job seekers understand the hiring process. She discusses founding a nonprofit called Hearts for Honduras with her best friend to help underprivileged communities in Honduras through educational and meal programs. Melanie reflects on a recent trip where her phone stopped working but she was able to find peace without it. She encourages embracing what you have now and understanding that true happiness comes from within, not from money or material things. Melanie's journey shows the importance of resilience as an entrepreneur and using one's existing skills to start a new venture.

The Nonlinear Library
LW - "X distracts from Y" as a thinly-disguised fight over group status / politics by Steven Byrnes

The Nonlinear Library

Play Episode Listen Later Sep 25, 2023 14:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "X distracts from Y" as a thinly-disguised fight over group status / politics, published by Steven Byrnes on September 25, 2023 on LessWrong. 1. Introduction There's a popular argument that says: It's bad to talk about whether future AI algorithms might cause human extinction, because that would be a distraction from the fact that current AI algorithms are right now causing or exacerbating societal problems (misinformation, deepfakes, political polarization, algorithmic bias, maybe job losses, etc.) For example, Melanie Mitchell makes this argument (link & my reply here), as does Blake Richards (link & my reply here), as does Daron Acemoglu (link & a reply by Scott Alexander here & here), and many more. In Section 2 I will argue that if we try to flesh out this argument in the most literal and straightforward way, it makes no sense, and is inconsistent with everything else these people are saying and doing. Then in Section 3 I'll propose an alternative elaboration that I think is a better fit. I'll close in Section 4 with two ideas for what we can do to make this problem better. (By "we", I mean "people like me who are very concerned about future AI extinction risk (x-risk)". That's my main intended audience for this piece, although everyone else is welcome to listen in too. If you're interested in why someone might believe that future AI poses an x-risk in the first place, you're in the wrong place - try here or here.) 2. Wrong way to flesh out this argument: This is about zero-sum attention, zero-sum advocacy, zero-sum budgeting, etc. If we take the "distraction" claim above at face value, maybe we could flesh it out as follows: Newspapers can only have so many front-page headlines per day. Lawmakers can only pass so many laws per year. Tweens can only watch so many dozens of TikTok videos per second. 
In general, there is a finite supply of attention, time, and money. Therefore, if more attention, time, and money is flowing to Cause A (= future AI x-risk), then that means there's less attention, time and money left over for any other Cause B (= immediate AI problems). I claim that this is not the type of claim that people are making. After all, if that's the logic, then the following would be equally sensible: "It's bad to talk about police incompetence, because it's a distraction from talking about police corruption." "It's bad to talk about health care reform, because it's a distraction from talking about climate change." Obviously, nobody makes those arguments. (Well, almost nobody - see next subsection.) Take the first one. I think it's common sense that concerns about police incompetence do not distract from concerns about police corruption. After all, why would they? It's not like newspapers have decided a priori that there will be one and only one headline per month about police problems, and therefore police incompetence and police corruption need to duke it out over that one slot. If anything, it's the opposite! If police incompetence headlines are getting clicks, we're likely to see more headlines on police corruption, not fewer. It's true that the total number of headlines is fixed, but it's perfectly possible for police-related articles to collectively increase, at the expense of articles about totally unrelated topics like Ozempic or real estate. By the same token, there is no good reason that concerns about future AI causing human extinction should be a distraction from concerns about current AI: At worst, they're two different topics, akin to the silly idea above that talking about health care reform is a problematic distraction from talking about climate change. At best, they are complementary, and thus akin to the even sillier idea above that talking about police corruption is a problematic distraction from talking about police incompetence. 
Suppor...

Machine Learning Street Talk
Prof. Melanie Mitchell 2.0 - AI Benchmarks are Broken!

Machine Learning Street Talk

Play Episode Listen Later Sep 10, 2023 61:47


Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB Prof. Melanie Mitchell argues that the concept of "understanding" in AI is ill-defined and multidimensional - we can't simply say an AI system does or doesn't understand. She advocates for rigorously testing AI systems' capabilities using proper experimental methods from cognitive science. Popular benchmarks for intelligence often rely on the assumption that if a human can perform a task, an AI that performs the task must have human-like general intelligence. But benchmarks should evolve as capabilities improve. Large language models show surprising skill on many human tasks but lack common sense and fail at simple things young children can do. Their knowledge comes from statistical relationships in text, not grounded concepts about the world. We don't know if their internal representations actually align with human-like concepts. More granular testing focused on generalization is needed. There are open questions around whether large models' abilities constitute a fundamentally different non-human form of intelligence based on vast statistical correlations across text. Mitchell argues intelligence is situated, domain-specific and grounded in physical experience and evolution. The brain computes but in a specialized way honed by evolution for controlling the body. Extracting "pure" intelligence may not work. Other key points: - Need more focus on proper experimental method in AI research. Developmental psychology offers examples for rigorous testing of cognition. - Reporting instance-level failures rather than just aggregate accuracy can provide insights. - Scaling laws and complex systems science are an interesting area of complexity theory, with applications to understanding cities. - Concepts like "understanding" and "intelligence" in AI force refinement of fuzzy definitions. - Human intelligence may be more collective and social than we realize. 
AI forces us to rethink concepts we apply anthropomorphically. The overall emphasis is on rigorously building the science of machine cognition through proper experimentation and benchmarking as we assess emerging capabilities. TOC: [00:00:00] Introduction and Munk AI Risk Debate Highlights [00:05:00] Douglas Hofstadter on AI Risk [00:06:56] The Complexity of Defining Intelligence [00:11:20] Examining Understanding in AI Models [00:16:48] Melanie's Insights on AI Understanding Debate [00:22:23] Unveiling the Concept Arc [00:27:57] AI Goals: A Human vs Machine Perspective [00:31:10] Addressing the Extrapolation Challenge in AI [00:36:05] Brain Computation: The Human-AI Parallel [00:38:20] The Arc Challenge: Implications and Insights [00:43:20] The Need for Detailed AI Performance Reporting [00:44:31] Exploring Scaling in Complexity Theory Errata: Note that Tim said around 39 mins that a recent Stanford/DM paper modelling ARC “on GPT-4 got around 60%”. This is not correct; he misremembered. It was actually davinci3, and around 10%, which is still extremely good for a blank-slate approach with an LLM and no ARC-specific knowledge. Folks on our forum couldn't reproduce the result. See paper linked below. Books (MUST READ): Artificial Intelligence: A Guide for Thinking Humans (Melanie Mitchell) https://www.amazon.co.uk/Artificial-Intelligence-Guide-Thinking-Humans/dp/B07YBHNM1C/?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=44ccac78973f47e59d745e94967c0f30&camp=1634&creative=6738 Complexity: A Guided Tour (Melanie Mitchell) https://www.amazon.co.uk/Audible-Complexity-A-Guided-Tour?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=3f8bd505d86865c50c02dd7f10b27c05&camp=1634&creative=6738 Show notes (transcript, full references etc) https://atlantic-papyrus-d68.notion.site/Melanie-Mitchell-2-0-15e212560e8e445d8b0131712bad3000?pvs=25 YT version: https://youtu.be/29gkDpR2orc

We‘ve Got a Problem
Finding Better Jobs w/Melanie Mitchell-Wexler

We‘ve Got a Problem

Play Episode Listen Later Sep 4, 2023 31:49


Former recruiter turned career coach Melanie Mitchell-Wexler joins the podcast to talk about how she leverages her more than 26 years in the industry to help job seekers find meaningful and fulfilling careers.  Having spent time on both sides of the equation, from the job search to the resume that gets you the interview to the personal brand that sets you apart,  Melanie knows how to maximize every stage of the job hunting process to help her clients achieve their goals and land their dream jobs.   Watch this episode on YouTube: https://youtu.be/I0rqDTBjRyw Learn more about Melanie: Website: https://www.findsucceedachieve.com LinkedIn: https://www.linkedin.com/in/mmitchellwexler/  Facebook: https://www.facebook.com/AchieveinFindingSuccess  Instagram: https://www.instagram.com/mwexlercareercoach/ Clubhouse: @Melaniewexler  X (Twitter):https://twitter.com/mwcareercoach Listen & subscribe to We've Got a Problem wherever you get your podcasts— https://link.chtbl.com/problempodcast I'm hungry for feedback! Let me know what you'd like to hear more about by connecting with me on Instagram at https://www.instagram.com/gotaproblempodcast/, by leaving a review on iTunes, or by leaving a comment below!

ESO Network – The ESO Network
Monkeeing Around – Monkee Magic with Melanie Mitchell – Episode 35

ESO Network – The ESO Network

Play Episode Listen Later Aug 17, 2023 61:03


This week we’re MONKEEING AROUND with Melanie Mitchell, author of Monkee Magic: a Book about a TV Show about a Band! We also discuss the upcoming vinyl release of Micky Dolenz Puts You to Sleep from Friday Music and a special on 7a Records’ Dolenz, Jones, Boyce & Hart you don’t want to miss!

Monkeeing Around
Monkeeing Around - Monkee Magic with Melanie Mitchell - Episode 35

Monkeeing Around

Play Episode Listen Later Aug 16, 2023 61:03


This week we're MONKEEING AROUND with Melanie Mitchell, author of Monkee Magic: a Book about a TV Show about a Band! We also discuss the upcoming vinyl release of Micky Dolenz Puts You to Sleep from Friday Music and a special on 7a Records' Dolenz, Jones, Boyce & Hart you don't want to miss! In our 'You May Also Like' segment this week we highlight one of Veronica and Chucks' favorite bands, The Ruen Brothers! Monkeeing Around is a part of the ESO Podcast Network, Executive Producer Mike Faber.

The Nonlinear Library
LW - The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate by Adam David Long

The Nonlinear Library

Play Episode Listen Later Aug 1, 2023 6:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate, published by Adam David Long on August 1, 2023 on LessWrong. Summary of Argument: The public debate among AI experts is confusing because there are, to a first approximation, three sides, not two sides to the debate. I refer to this as a three-sided framework, and I argue that using this three-sided framework will help clarify the debate (more precisely, debates) for the general public and for policy-makers. Broadly speaking, under my proposed three-sided framework, the positions fall into three broad clusters: AI "pragmatists" or realists are most worried about AI and power. Examples of experts who are (roughly) in this cluster would be Melanie Mitchell, Timnit Gebru, Kate Crawford, Gary Marcus, Klon Kitchen, and Michael Lind. For experts in this group, the biggest concern is how the use of AI by powerful humans will harm the rest of us. In the case of Gebru and Crawford, the "powerful humans" that they are most concerned about are large tech companies. In the case of Kitchen and Lind, the "powerful humans" that they are most concerned about are foreign enemies of the U.S., notably China. AI "doomers" or extreme pessimists are most worried about AI causing the end of the world. @Eliezer Yudkowsky is, of course, the most well-known to readers of LessWrong but other well-known examples include Nick Bostrom, Max Tegmark, and Stuart Russell. I believe these arguments are already well-known to readers of LessWrong, so I won't repeat them here. AI "boosters" or extreme optimists are most worried that we are going to miss out on AI saving the world. Examples of experts in this cluster would be Marc Andreessen, Yann LeCun, Reid Hoffman, Palmer Luckey, Emad Mostaque. 
They believe that AI can, to use Andreessen's recent phrase, "save the world," and their biggest worry is that moral panic and overregulation will create huge obstacles to innovation. These three positions are such that, on almost every important issue, one of the positions is opposed to a coalition of the other two: AI Doomers + AI Realists agree that AI poses serious risks and that the AI Boosters are harming society by downplaying these risks. AI Realists + AI Boosters agree that existential risk should not be a big worry right now, and that AI Doomers are harming society by focusing the discussion on existential risk. AI Boosters and AI Doomers agree that AI is progressing extremely quickly, that something like AGI is a real possibility in the next few years, and that AI Realists are harming society by refusing to acknowledge this possibility. Why This Matters: The "AI Debate" is now very much in the public consciousness (in large part, IMHO, due to the release of ChatGPT), but also very confusing to the general public in a way that other controversial issues, e.g. abortion or gun control or immigration, are not. I argue that the difference between the AI Debate and those other issues is that those issues are, essentially, two-sided debates. That's not completely true, there are nuances, but, in the public's mind, at their essence, they come down to two sides. To a naive observer, the present AI debate is confusing, I argue, because various experts seem to be talking past each other, and the "expert positions" do not coalesce into the familiar structure of a two-sided debate with most experts on one side or the other. When there are three sides to a debate, then one fairly frequently sees what look like "temporary alliances" where A and C are arguing against B. They are not temporary alliances. They are based on principles and deeply held beliefs.
It's just that, depending on how you frame the question, you wind up with "strange bedfellows" as two groups find common ground on on...

The Munk Debates Podcast
Munk Dialogue with AI Debaters Yann Lecun, Max Tegmark, Melanie Mitchell and Yoshua Bengio

The Munk Debates Podcast

Play Episode Listen Later Jul 4, 2023 67:29


On June 22nd we gathered at Toronto's Roy Thomson Hall for a live, sold out debate on Artificial Intelligence. We were joined on stage by four AI experts and pioneers to debate the resolution Be it Resolved, AI research and development poses an existential threat. MIT's Max Tegmark and Mila's Yoshua Bengio argued in favour of the resolution, while Meta's Yann LeCun and the Santa Fe Institute's Melanie Mitchell argued against it. In this episode of the Munk Dialogues, we bring you the pre-interviews our host Rudyard Griffiths conducted with each debater prior to the debate. How did they intend to argue their case? What made them want to take part in this event? And what is it about AI that has them most worried, or alternatively, most excited? The host of the Munk Debates is Rudyard Griffiths - @rudyardg. Tweet your comments about this episode to @munkdebate or comment on our Facebook page https://www.facebook.com/munkdebates/ To sign up for a weekly email reminder for this podcast, send an email to podcast@munkdebates.com. To support civil and substantive debate on the big questions of the day, consider becoming a Munk Member at https://munkdebates.com/membership Members receive access to our 10+ year library of great debates in HD video, a free Munk Debates book, newsletter and ticketing privileges at our live events. This podcast is a project of the Munk Debates, a Canadian charitable organization dedicated to fostering civil and substantive public dialogue - https://munkdebates.com/ Senior Producer: Ricki Gurwitz Editor: Kieran Lynch

Machine Learning Street Talk
MUNK DEBATE ON AI (COMMENTARY) [DAVID FOSTER]

Machine Learning Street Talk

Play Episode Listen Later Jul 2, 2023 128:14


Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB The discussion between Tim Scarfe and David Foster provided an in-depth critique of the arguments made by panelists at the Munk AI Debate on whether artificial intelligence poses an existential threat to humanity. While the panelists made thought-provoking points, Scarfe and Foster found their arguments largely speculative, lacking crucial details and evidence to support claims of an impending existential threat. Scarfe and Foster strongly disagreed with Max Tegmark's position that AI has an unparalleled “blast radius” that could lead to human extinction. Tegmark failed to provide a credible mechanism for how this scenario would unfold in reality. His arguments relied more on speculation about advanced future technologies than on present capabilities and trends. As Foster argued, we cannot conclude AI poses a threat based on speculation alone. Evidence is needed to ground discussions of existential risks in science rather than science fiction fantasies or doomsday scenarios. They found Yann LeCun's statements too broad and high-level, critiquing him for not providing sufficiently strong arguments or specifics to back his position. While LeCun aptly noted AI remains narrow in scope and far from achieving human-level intelligence, his arguments lacked crucial details on current limitations and why we should not fear superintelligence emerging in the near future. As Scarfe argued, without these details the discussion descended into “philosophy” rather than focusing on evidence and data. Scarfe and Foster also took issue with Yoshua Bengio's unsubstantiated speculation that machines would necessarily develop a desire for self-preservation that threatens humanity. There is no evidence today's AI systems are developing human-like general intelligence or desires, let alone that these attributes would manifest in ways dangerous to humans. 
The question is not whether machines will eventually surpass human intelligence, but how and when this might realistically unfold based on present technological capabilities. Bengio's arguments relied more on speculation about advanced future technologies than on evidence from current systems and research. In contrast, they strongly agreed with Melanie Mitchell's view that scenarios of malevolent or misguided superintelligence are speculation, not backed by evidence from AI as it exists today. Claims of an impending “existential threat” from AI are overblown, harmful to progress, and inspire undue fear of technology rather than consideration of its benefits. Mitchell sensibly argued discussions of risks from emerging technologies must be grounded in science and data, not speculation, if we are to make balanced policy and development decisions. Overall, while the debate raised thought-provoking questions about advanced technologies that could eventually transform our world, none of the speakers made a credible evidence-based case that today's AI poses an existential threat. Scarfe and Foster argued the debate failed to discuss concrete details about current capabilities and limitations of technologies like language models, which remain narrow in scope. General human-level AI is still missing many components, including physical embodiment, emotions, and the "common sense" reasoning that underlies human thinking. Claims of existential threats require extraordinary evidence to justify policy or research restrictions, not speculation. By discussing possibilities rather than probabilities grounded in evidence, the debate failed to substantively advance our thinking on risks from AI and its plausible development in the coming decades. David's new podcast: https://podcasts.apple.com/us/podcast/the-ai-canvas/id1692538973 Generative AI book: https://www.oreilly.com/library/view/generative-deep-learning/9781098134174/

The Munk Debates Podcast
Be it Resolved, AI research and development poses an existential threat

The Munk Debates Podcast

Play Episode Listen Later Jun 29, 2023 93:22


With the debut of ChatGPT, the AI once promised in some distant future seems to have suddenly arrived with the potential to reshape our working lives, culture, politics and society. For proponents of AI, we are entering a period of unprecedented technological change that will boost productivity, unleash human creativity and empower billions in ways we have only begun to fathom. Others think we should be very concerned about the rapid and unregulated development of machine intelligence. For their detractors, AI applications like ChatGPT herald a brave new world of deep fakes and mass propaganda that could dwarf anything our democracies have experienced to date. Immense economic and political power may also concentrate around the corporations who control these technologies and their treasure troves of data. Finally, there is an existential concern that we could, in some not-so-distant future, lose control of powerful AIs who, in turn, pursue goals that are antithetical to humanity's interests and our survival as a species. Arguing for the motion is Yoshua Bengio, one of the leading worldwide experts on AI, whose pioneering work in deep learning earned him the 2018 Turing Award, often referred to as “the Nobel Prize of Computing.” Yoshua's debate partner is Max Tegmark, an internationally renowned cosmologist, global leader in machine learning research, and a professor at M.I.T. Arguing against the motion is Yann LeCun. Yann is an acclaimed computer scientist of mobile robotics and computational neuroscience, the Silver Professor of the Courant Institute of Mathematical Sciences at N.Y.U. and Vice-President, Chief AI Scientist at Meta. His debate partner is Melanie Mitchell, a bestselling author and world-leading expert in the various fields of artificial intelligence and cognitive science at the Santa Fe Institute. The host of the Munk Debates is Rudyard Griffiths - @rudyardg.
Tweet your comments about this episode to @munkdebate or comment on our Facebook page https://www.facebook.com/munkdebates/ To sign up for a weekly email reminder for this podcast, send an email to podcast@munkdebates.com.   To support civil and substantive debate on the big questions of the day, consider becoming a Munk Member at https://munkdebates.com/membership Members receive access to our 10+ year library of great debates in HD video, a free Munk Debates book, newsletter and ticketing privileges at our live events. This podcast is a project of the Munk Debates, a Canadian charitable organization dedicated to fostering civil and substantive public dialogue - https://munkdebates.com/ Senior Producer: Ricki Gurwitz Editor: Kieran Lynch  

The Nonlinear Library
LW - Munk AI debate: confusions and possible cruxes by Steven Byrnes

The Nonlinear Library

Play Episode Listen Later Jun 27, 2023 13:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Munk AI debate: confusions and possible cruxes, published by Steven Byrnes on June 27, 2023 on LessWrong. There was a debate on the statement “AI research and development poses an existential threat” (“x-risk” for short), with Max Tegmark and Yoshua Bengio arguing in favor, and Yann LeCun and Melanie Mitchell arguing against. The YouTube link is here, and a previous discussion on this forum is here. The first part of this blog post is a list of five ways that I think the two sides were talking past each other. The second part is some apparent key underlying beliefs of Yann and Melanie, and how I might try to change their minds. While I am very much on the “in favor” side of this debate, I didn't want to make this just a “why Yann's and Melanie's arguments are all wrong” blog post. OK, granted, it's a bit of that, especially in the second half. But I hope people on the “anti” side will find this post interesting and not-too-annoying. Five ways people were talking past each other 1. Treating efforts to solve the problem as exogenous or not This subsection doesn't apply to Melanie, who rejected the idea that there is any existential risk in the foreseeable future. But Yann suggested that there was no existential risk because we will solve it; whereas Max and Yoshua argued that we should acknowledge that there is an existential risk so that we can solve it. By analogy, fires tend not to spread through cities because the fire department and fire codes keep them from spreading. Two perspectives on this are: If you're an outside observer, you can say that “fires can spread through a city” is evidently not a huge problem in practice. 
If you're the chief of the fire department, or if you're developing and enforcing fire codes, then “fires can spread through a city” is an extremely serious problem that you're thinking about constantly. I don't think this was a major source of talking-past-each-other, but added a nonzero amount of confusion. 2. Ambiguously changing the subject to “timelines to x-risk-level AI”, or to “whether large language models (LLMs) will scale to x-risk-level AI” The statement under debate was “AI research and development poses an existential threat”. This statement does not refer to any particular line of AI research, nor any particular time interval. The four participants' positions in this regard seemed to be: Max and Yoshua: Superhuman AI might happen in 5-20 years, and LLMs have a lot to do with why a reasonable person might believe that. Yann: Human-level AI might happen in 5-20 years, but LLMs have nothing to do with that. LLMs have fundamental limitations. But other types of ML research could get there—e.g. my (Yann's) own research program. Melanie: LLMs have fundamental limitations, and Yann's research program is doomed to fail as well. The kind of AI that might pose an x-risk will absolutely not happen in the foreseeable future. (She didn't quantify how many years is the “foreseeable future”.) It seemed to me that all four participants (and the moderator!) were making timelines and LLM-related arguments, in ways that were both annoyingly vague, and unrelated to the statement under debate. (If astronomers found a giant meteor projected to hit the earth in the year 2123, nobody would question the use of the term “existential threat”, right??) As usual (see my post AI doom from an LLM-plateau-ist perspective), this area was where I had the most complaints about people “on my side”, particularly Yoshua getting awfully close to conceding that under-20-year timelines are a necessary prerequisite to being concerned about AI x-risk. 
(I don't know if he literally believes that, but I think he gave that impression. Regardless, I strongly disagree, more on which later.) 3. Vibes-based “meaningless arguments” I recommend in the strongest possible terms that ...

The Nonlinear Library
LW - Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell? by Karl von Wendt

The Nonlinear Library

Play Episode Listen Later Jun 25, 2023 11:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?, published by Karl von Wendt on June 25, 2023 on LessWrong. On June 22nd, there was a “Munk Debate”, facilitated by the Canadian Aurea Foundation, on the question whether “AI research and development poses an existential threat” (you can watch it here, which I highly recommend). On stage were Yoshua Bengio and Max Tegmark as proponents and Yann LeCun and Melanie Mitchell as opponents of the central thesis. This seems like an excellent opportunity to compare their arguments and the effects they had on the audience, in particular because in the Munk Debate format, the audience gets to vote on the issue before and after the debate. The vote at the beginning revealed 67% of the audience being pro the existential threat hypothesis and 33% against it. Interestingly, it was also asked if the listeners were prepared to change their minds depending on how the debate went, which 92% answered with “yes”. The moderator later called this extraordinary and a possible record for the format. While this is of course not representative of the general public, it mirrors the high uncertainty that most ordinary people feel about AI and its impacts on our future. I am of course heavily biased. I would have counted myself among the 8% of people who were unwilling to change their minds, and indeed I'm still convinced that we need to take existential risks from AI very seriously. While Bengio and Tegmark have strong arguments from years of alignment research on their side, LeCun and Mitchell have often made weak claims in public. So I was convinced that Bengio and Tegmark would easily win the debate.
However, when I skipped to the end of the video before watching it, there was an unpleasant surprise waiting for me: at the end of the debate, the audience had seemingly switched to a more skeptical view, with now only 61% accepting an existential threat from AI and 39% dismissing it. What went wrong? Had Max Tegmark and Yoshua Bengio really lost a debate against two people I hadn't taken very seriously before? Had the whole debate somehow been biased against them? As it turned out, things were not so clear. At the end, the voting system apparently broke down, so the audience wasn't able to vote on the spot. Instead, they were later asked for their vote by email. It is unknown how many people responded, so the difference can well be a random error. However, it does seem to me that LeCun and Mitchell, although clearly having far weaker arguments, came across quite convincing. A simple count of the hands of the people behind the stage, who can be seen in the video, during a hand vote results almost in a tie. The words of the moderator also seem to indicate that he couldn't see a clear majority for one side in the audience, so the actual shift may have been even worse. In the following, I assume that Bengio and Tegmark were indeed not as convincing as I had hoped. It seems worthwhile to look at this in some more detail to learn from it for future discussions. I will not give a detailed description of the debate; I recommend you watch it yourself. However, I will summarize some key points and will give my own opinion on why this may have gone badly from an AI safety perspective, as well as some learnings I extracted for my own outreach work. The debate was structured in a good way and very professionally moderated by Munk Debate's chair Rudyard Griffiths. If anything, he seemed to be supportive of an existential threat from AI; he definitely wasn't biased against it. 
At the beginning, each participant gave a 6-minute opening statement, then each one could reply to what the others had said in a brief rebuttal. After that, there was an open discussion for about 40 minutes, until the participants could again summarize the...

Afrobility: Africa Tech & Business
#65: Wasoko (Sokowatch) - How the B2B retail platform is providing services and improving the supply chain for informal retailers across Kenya and Africa

Afrobility: Africa Tech & Business

Play Episode Listen Later Jun 16, 2023 115:55


Overview: Today, we're going to talk about Wasoko (Sokowatch). We'll explore the story across the following areas: Africa's and Kenya's informal retail context, Wasoko's early history, Wasoko's Product & monetization strategy, Competitive positioning & potential exit options, Overall outlook. This episode was recorded on June 11, 2023. Companies discussed: Wasoko (Sokowatch), Twiga Foods, TradeDepot, OmniBiz, Alerzo, mPharma, Nestle, Unilever, Multipro, Marketforce, Udaan & Flipkart. Business concepts discussed: Informal retail penetration, Agency banking adoption, B2B Operating systems, Informal markets, & last-mile distribution. Conversation highlights: (00:40) - Sokowatch background and why we're talking about it (08:35) - African informal retail context (16:11) - Founder background and Founding story (29:55) - Fundraising and early growth (36:56) - Geographic expansion, growth and partnerships (45:12) - Team strategy (54:00) - Product strategy and monetization (1:10:00) - Competition and options for exit (1:23:13) - Bankole's overall thoughts and outlook (1:33:05) - Olumide's overall thoughts and outlook (1:43:30) - Recommendations and small wins. Olumide's recommendations & small wins: Interested in investing in Africa Tech with Olumide: Read about Adamantium fund & contact me at olumide@afrobility.com. Founders looking for funding: If you're a B2B founder working on Education, Health, Finance or food, please contact me for funding at olumide@afrobility.com. Check out my FIREDOM book = FIRE (Financial Independence, Retire Early) + Freedom = personal finance and financial independence book (Website, Read: Substack Newsletter & Buy: Print, eBook or Audiobook). Recommendation: The Slight Edge. Amazing Personal Development book about a mental framework to apply daily actions to get what you want in life. I have read it about 6 times at this point. Recommendation: Temu.com. Started using it for shopping.
It has some cool gamification ideas to nudge ecommerce purchases. Small win: Hosted house party with friends. It was great to see everyone together. Bankole's recommendations & small wins: Recommendation: Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell & Would You Sell Your Extra Kidney - Dylan Walsh. Small win: won my physical activity mini-league. Other content: FT fastest growing companies 2022 & Modularity Theory - Christensen Institute. Listeners: We'd love to hear from you. Email info@afrobility.com with feedback! Founders & Operators: We'd love to hear about what you're working on, email us at info@afrobility.com Investors: It would be great to link up with you. Contact us at info@afrobility.com Join our insider mailing list where we get feedback on new episodes & find all episodes on Afrobility.com

FUTURE FOSSILS
201 - KMO & Kevin Wohlmut on our Blue Collar Black Mirror: Star Trek, Star Wars, Blade Runner, Jurassic Park, Adventure Time, ChatGPT, & More

FUTURE FOSSILS

Play Episode Listen Later Apr 13, 2023 106:17


This week we talk about the intersections of large language models, the golden age of television and its storytelling mishaps, making one's way through the weirding of the labor economy, and much more with two of my favorite Gen X science fiction aficionados, OG podcaster KMO and our mutual friend Kevin Arthur Wohlmut. In this episode — a standalone continuation to my recent appearance on The KMO Show, we skip like a stone across mentions of every Star Trek series, the collapse of narratives and the social fabric, Westworld HBO, Star Wars Mandalorian vs. Andor vs. Rebels, chatGPT, Blade Runner 2049, Black Mirror, H.P. Lovecraft, the Sheldrake-Abraham-McKenna Trialogues, Charles Stross' Accelerando, Adventure Time, Stanislav Grof's LSD psychotherapy, Francisco Varela, Blake Lemoine's meltdown over Google LaMDA, Integrated Information Theory, biosemiotics, Douglas Hofstadter, Max Tegmark, Erik Davis, Peter Watts, The Psychedelic Salon, Melanie Mitchell, The Teafaerie, Kevin Kelly, consilience in science, Fight Club, and more… Or, if you prefer, here's a rundown of the episode generated by A.I. c/o my friends at Podium.page: In this episode, I explore an ambitious and well-connected conversation with guests KMO, a seasoned podcaster, and Kevin Walnut [sic], a close friend and supporter of the arts in Santa Fe. We dive deep into their thoughts on the social epistemology crisis, science fiction, deep fakes, and ontology. Additionally, we discuss their opinions on the Star Trek franchise, particularly their critiques of the first two seasons of Star Trek: Picard and Discovery. Through this engaging conversation, we examine the impact of storytelling and the evolution of science fiction in modern culture. We also explore the relationship between identity, media, and artificial intelligence, as well as the ethical implications of creating sentient artificial general intelligence (AGI) and the philosophical questions surrounding AI's impact on society and human existence.
Join us for a thought-provoking and in-depth discussion on a variety of topics that will leave you questioning the future of humanity and our relationship with technology.✨ Before we get started, three big announcements!* I am leaving the Santa Fe Institute, in part to write a very ambitious book about technology, art, imagination, and Jurassic Park. You can be a part of the early discussion around this project by joining the Future Fossils Book Club's Jurassic Park live calls — the first of which will be on Saturday, 29 April — open to Substack and Patreon supporters:* Catch me in a Twitter Space with Nxt Museum on Monday 17 April at 11 am PST on a panel discussing “Creative Misuse of Technology” with Minne Atairu, Parag Mital, Caroline Sinders, and hosts Jesse Damiani and Charlotte Kent.* I'm back in Austin this October to play the Astronox Festival at Apache Pass! Check out this amazing lineup on which I appear alongside Juno Reactor, Entheogenic, Goopsteppa, DRRTYWULVZ, and many more great artists!✨ Support Future Fossils:Subscribe anywhere you go for podcastsSubscribe to the podcast PLUS essays, music, and news on Substack or Patreon.Buy my original paintings or commission new work.Buy my music on Bandcamp! (This episode features “A Better Trip” from my recent live album by the same name.)Or if you're into lo-fi audio, follow me and my listening recommendations on Spotify.This conversation continues with lively and respectful interaction every single day in the members-only Future Fossils Facebook Group and Discord server. Join us!Episode cover art by KMO and a whole bouquet of digital image manipulation apps.✨ Tip Jars:@futurefossils on Venmo$manfredmacx on CashAppmichaelgarfield on PayPal✨ Affiliate Links:• These show notes and the transcript were made possible with Podium.Page, a very cool new AI service I'm happy to endorse. 
Sign up here and get three free hours and 50% off your first month.• BioTech Life Sciences makes anti-aging and performance enhancement formulas that work directly at the level of cellular nutrition, both for ingestion and direct topical application. I'm a firm believer in keeping NAD+ levels up and their skin solution helped me erase a year of pandemic burnout from my face.• Help regulate stress, get better sleep, recover from exercise, and/or stay alert and focused without stimulants, with the Apollo Neuro wearable. I have one and while I don't wear it all the time, when I do it's sober healthy drugs.• Musicians: let me recommend you get yourself a Jamstik Studio, the coolest MIDI guitar I've ever played. I LOVE mine. You can hear it playing all the synths on my song about Jurassic Park.✨ Mentioned Media:KMO Show S01 E01 - 001 - Michael Garfield and Kevin WohlmutAn Edifying Thought on AI by Charles EisensteinIn Defense of Star Trek: Picard & Discovery by Michael GarfieldImprovising Out of Algorithmic Isolation by Michael GarfieldAI and the Transformation of the Human Spirit by Steven Hales(and yes I know it's on Quillette, and no I don't think this automatically disqualifies it)Future Fossils Book Club #1: Blindsight by Peter WattsFF 116 - The Next Ten Billion Years: Ugo Bardi & John Michael Greer as read by Kevin Arthur Wohlmut✨ Related Recent Future Fossils Episodes:FF 198 - Tadaaki Hozumi on Japanese Esotericism, Aliens, Land Spirits, & The Singularity (Part 2)FF 195 - A.I. Art: An Emergency Panel with Julian Picaza, Evo Heyning, Micah Daigle, Jamie Curcio, & Topher SipesFF 187 - Fear & Loathing on the Electronic Frontier with Kevin Welch & David Hensley of EFF-Austin FF 178 - Chris Ryan on Exhuming The Human from Our Eldritch Institutions FF 175 - C. 
Thi Nguyen on The Seductions of Clarity, Weaponized Games, and Agency as Art ✨ Chapters: 0:15:45 - The Substance of Philosophy (58 Seconds) 0:24:45 - Complicated TV Narratives and the Internet (104 Seconds) 0:30:54 - Humans vs Hosts in Westworld (81 Seconds) 0:38:09 - Philosophical Zombies and Artificial Intelligence (89 Seconds) 0:43:00 - Popular Franchises Themes (71 Seconds) 1:03:27 - Reflections on a Changing Media Landscape (89 Seconds) 1:10:45 - The Pathology of Selective Evidence (92 Seconds) 1:16:32 - Externalizing Trauma Through Technology (131 Seconds) 1:24:51 - From Snow Maker to Thousandaire (43 Seconds) 1:36:48 - The Impact of Boomer Parenting (126 Seconds) ✨ Keywords: Social Epistemology, Science Fiction, Deep Fakes, Ontology, Star Trek, Artificial Intelligence, AI Impact, Sentient AGI, Human-Machine Interconnectivity, Consciousness Theory, Westworld, Blade Runner 2049, AI in Economy, AI Companion Chatbots, Unconventional Career Path, AI and Education, AI Content Creation, AI in Media, Turing Test ✨ UNEDITED machine-generated transcript generated by podium.page: 0:00:00 Five four three two one. Go. So it's not like Wayne's world where you say the two and the one silently. Now, Greetings future fossils. 0:00:11 Welcome to episode two hundred and one of the podcast that explores our place in time I'm your host, Michael Garfield. And this is one of these extra juicy and delicious episodes of the show where I really ratcheted up with our guests and provide you one of these singularity is near kind of ever everything is connected to everything, self organized criticality right at the edge of chaos conversations, deeply embedded in chapel parallel where suddenly the invisible architect picture of our cosmos starts to make itself apparent through the glass bead game of conversation. And I am that I get to share it with you. Our guests this week are KMO, one of the most seasoned and well researched and experienced podcasters that I know.
Somebody whose show the C-Realm was running all the way back in two thousand six. I found him through Erik Davis, who I think most of you know, and who I've had on the show a number of times already. And also Kevin Wohlmut, who is a close friend of mine here in Santa Fe, a just incredible human being. He's probably the strongest single supporter of music that I'm aware of, you know, as far as local scenes are concerned, and supporting people's music online and helping get the word out. He's been instrumental to my family and I in getting ourselves situated here, all the way back to when I visited Santa Fe in two thousand eighteen to participate in the Santa Fe Institute's Interplanetary Festival and recorded conversations on that trip with John David Ebert and Michael Aaron Cummins, and Ike used so June, about hypermodernity: a two part episode, one zero four and one zero five. I highly recommend going back to that, which is really the last time, possibly, that I had a conversation just this incredibly ambitious on the show.

0:02:31
But first, I want to announce a couple things. One is that I have left the Santa Fe Institute. The other podcast that I have been hosting for them for the last three and a half years, Complexity Podcast, which is substantially more popular than Future Fossils due to its institutional affiliation, is coming to a close. I'm recording one more episode with SFI president David Krakauer next week, in which I'm gonna be talking about my upcoming book project. And that episode actually is conjoined with the big announcement that I have for members of the Future Fossils listening audience and paid supporters, which is, of course, the Jurassic Park Book Club. On April twenty ninth we're gonna host the first of two video calls where I'm gonna dive deep into the science and philosophy of Michael Crichton's most popular work of fiction and its impact on culture and society over the thirty three years since its publication.
And then I'm gonna start picking up as many of the podcasts that I had scheduled for Complexity and had to cancel upon my departure from SFI, and basically fuse the two shows.

0:03:47
And I think a lot of you saw this coming. Future Fossils is going to level up and become a much more scientific podcast as I prepare and research the book that I'm writing about Jurassic Park and its legacy, and the relationship it has to ILM and SFI and the Institute of Ecotechnics, and all of these other visionary projects that sprouted in the eighties and nineties to transition from the analog to the digital, the collapse of the boundaries between the real and the virtual, the human and the non human worlds. It's gonna be a very, very ambitious book and a very, very ambitious book club. And I hope that you will get in there, because obviously now I am out in the rain as an independent producer and very much need, can benefit from, and am deeply grateful for your support for this work, in order to make things happen and in order to keep my family fed and get the lights on here with Future Fossils. So with that, I wanna thank all of the new supporters of the show that have crawled out of the woodwork over the last few weeks, including Raefsler Oingo, Brian the archaeologist, Philip Rice, Gerald Bilak, Jamie Curcio, Jeff Hanson (who bought my music), Kuaime, Mary Castello, VR squared, Nastia teaches, community health com, Ed Mulder, Cody Couiac (bought my music), Simon Heiduke (amazing visionary artist, I recommend you check out), Kayla Peters. Yeah. All of you — I just, wow. Thank you so much. It's gonna be a complete melee in this book club. I'm super excited to meet you all.
I will send out details about the call for the twenty ninth sometime in the next few days via Substack and Patreon.

0:06:09
The amount of support that I've received through this transition has been incredible, and it's empowering me to do wonderful things for you, such as the recently released secret videos of the live sets I performed with comedian Shane Mauss, supporting him, opening for him here in Santa Fe: his two sold out shows at the Jean Cocteau Cinema, where I did the cyber guitar performances. And if you're a subscriber, you can watch me goofing off with my pedal board. There's a ton of material. I'm gonna continue to do that. I've got a lot of really exciting concerts coming up in the next few months that we're gonna get large group and also solo performance recordings from, and I'm gonna make those available in a much more resplendent way to supporters, as well as the soundtrack to Mark Nelson of the Institute of Ecotechnics' UC San Diego Art Museum exhibit retrospective looking at Biosphere 2. I'm doing music for that and that's dropping. The opening of that event is April twenty seventh. There's gonna be a live Zoom event for that and then I'm gonna push the music out as well for that.

0:07:45
So, yeah, thank you all. I really, really appreciate you listening to the show. I am excited to share this episode with you. KMO is just a trove of insight and experience. I mean, he's like a perfect entry into the digital history museum that this show was predicated upon. So with that — and also, of course, Kevin Wohlmut is just magnificent — and for the record, stick around at the end of the conversation: we have some additional pieces about AI, and I think you're gonna really enjoy it. And yeah, thank you. Here we go. Alright. Cool.

0:09:26
Well, we just had a lovely hour of discussion for the new KMO podcast. And now I'm here with KMO, who is the most inveterate podcaster I know. And I know a lot of them. Early adopter.
And I don't think that word means what you think it means. Inveterate? Okay. Yes. Hey, answer to both. Go ahead. I mean, you're not yet legless and panhandling, so I prefer to think of it in terms of august estimation. Yeah. And am I allowed to say Kevin Wohlmut, because I've had you as a host on — True. Yeah. My last name has appeared on your show. It hasn't appeared on KMO's yet, but I don't really care. Okay. Great. Yeah. Kevin Arthur Wohlmut, who is one of the most solid and upstanding and widely read and just generous people I think I know, here in Santa Fe or maybe anywhere. With excellent taste in podcasts. Yes. And whose delicious mead I am sampling right now, on probably the first episode of Future Fossils where I've had an alcoholic beverage in my hand. Well, I mean — I haven't deprived myself of fun, and I think if you're still listening to the show after all these years, you probably inferred that. But at any rate, welcome on board. Thank you. Thanks. Pleasure to be here.

0:10:49
So before we started rolling — I guess the whole conversation that we just had for your show, KMO, was very much about my thoughts on the social epistemology crisis and on science fiction and deep fakes and all of these kinds of weird ontology and these kinds of things. But in between calls, we were just talking about how much you detest the first two seasons of Star Trek: Picard and of Discovery. And as somebody — I didn't bother with doing this, I didn't send you this before we spoke, but I actually did write an essay in defense of those shows. No one. Yeah. So I am not attached to my opinion on this. And I actually do wanna at some point double back and hear your storytelling, because when we had lunch you had a bunch of personal life stuff that was really interesting and juicy and I think worthy of discussion. But simply because it's hot on the rail right now, I wanna hear you talk about Star Trek. And both of you, actually, I know are very big fans of this franchise.
I think fans are often the ones from whom a critique is most important and deserved, and so I welcome your unhinged rants. Alright. Well, first I'll start off by quoting Kevin's brother, the linguist, who says: that which brings us closer to Star Trek is progress. But I'd have to say that which brings us closer to Gene Roddenberry and Rick Berman era Star Trek is progress. That which brings us closer to Kurtzman — what's his first name? Alex. Alex Kurtzman Star Trek — well, that's not even the future. I mean, that's just our drama right now with inconsistent Star Trek drag draped over it.

0:12:35
I liked the first J. J. Abrams Star Trek. I think it was two thousand nine, with Chris Pine and Zachary Quinto and Karl Urban and Zoe Saldana. I liked the casting. I liked the energy. It was fun. I can still put that movie on and enjoy it. But each one after that just seemed to double down on the dumb and just hold at arm's length any of the philosophical stuff that was just amazing from Star Trek: The Next Generation, or any of the long term character building, which was like, from Deep Space Nine.

0:13:09
And before Seven of Nine showed up on Voyager, you really had to be a dedicated Star Trek fan to put up with early seasons Voyager, but I did, because I am. But then once she came on board — and it was hilarious how they brought her on board. I remember seeing Jeri Ryan in her catsuit on the cover of a magazine and just rolling my eyes and thinking, oh my gosh, this show is in such deep trouble, sinking to this level to try to save it. But she was brilliant. She was brilliant in that show, she and Robert Picardo as the Doctor. I mean, it basically became the Seven of Nine and the Doctor show, co-starring the rest of the cast of Voyager. And it was so great.

0:13:46
I love to hear them singing together, and just all the dynamics of: I'm human, but I basically came up in a cybernetic collective and that's much more comfortable to me.
And I don't really have the option of going back to it, so I gotta make the best of where I am, but I feel really superior to all of you. It was such a charming dynamic. I absolutely loved it. Yes. And then, a show that is hated even by Star Trek fans: Enterprise. Loved Enterprise.

0:14:15
And, yes, the first three seasons out of four were pretty rough. Actually, the first two were pretty rough. The third season was that Xindi arc in the Expanse. That was pretty good. And then season four was just astounding. It's like they really found their voice, and then what's his name at CBS Paramount — he's gone now, he got MeToo'd — what's his name? Les Moonves? Said, no, I don't like Star Trek. He couldn't — he didn't know the difference between Star Wars and Star Trek. That was his level of engagement.

0:14:44
And he's — I really like J.

0:14:46
J. Abrams. What's that? You mean J. J. Abrams. Yeah. I like some of J. J. Abrams' early films. I really like Super 8. His early films were clearly an homage to, like, eighties Spielberg stuff, and Spielberg gets the emotional beats right, and J. J. Abrams was mimicking that, and his early stuff really works. It's just when he starts adapting properties that I really love, and he's coming at it from a marketing standpoint first, and a, hey, we're just gonna do the Lost mystery box thing — we're gonna set up a bunch of questions to which we don't know the answers, and it'll be up to somebody else to figure it out, somebody down the line. As I told you between our conversations, before we were recording — or maybe I said it early in this one — I really like that first J. J. Abrams Star Trek film, and then everyone thereafter, including the one that Simon Pegg really had a hand in, because he's clearly a fan — yeah — but they brought in a director from one of the Fast and the Furious films and they tried to make it an action film.

0:15:45
This is not why we like Star Trek. It's not for the flash, particularly -- Oh my god. -- again, in the first one, it was a stylistic choice. I'd like it, then after that is that's the substance of this, isn't it? It's the lens flares. I mean, that that's your attempt at philosophy. It's this the lens flares. That's your attempt at a moral dilemma. I don't know.0:16:07I kinda hate to start off on this because this is something about which I feel like intense emotion and it's negative. And I don't want that to be my first impression. I'm really negative about something. Well, one of the things about this show is that I always joke that maybe I shouldn't edit it because The thing that's most interesting to archaeologists is often the trash mitt and here I am tidying this thing up to be presentable to future historians or whatever like it I can sync to that for sure. Yeah. I'm sorry. The fact of it is you're not gonna know everything and we want it that way. No. It's okay. We'll get around to the stuff that I like. But yeah. So anyway yeah.0:16:44So I could just preassociate on Stretrick for a while, so maybe a focusing question. Well, but first, you said there's a you had more to say, but you were I this this tasteful perspective. This is awesome. Well, I do have a focus on question for you. So let me just have you ask it because for me to get into I basically I'm alienated right now from somebody that I've been really good friends with since high school.0:17:08Because over the last decade, culturally, we have bifurcated into the hard right, hard left. And I've tried not to go either way, but the hard left irritates me more than the hard right right now. And he is unquestionably on the hard left side. And I know for people who are dedicated Marxist, or really grounded in, like, materialism and the material well-being of workers that the current SJW fanaticism isn't leftist. It's just crazed. 
We try to smash everything down onto this left-right spectrum, and it's pretty easy to say who's on the left and who's on the right, even if a two dimensional, two axis graph would be much more expressive and nuanced.

0:17:49
Anyway, what's your focusing question? Well — and I think there is actually a kind of — when we ended your last episode talking about the Bell Riots from DS9 — Mhmm — that, you know — what, twenty twenty four? Yeah. Twenty twenty four. How ninety five did and did not accurately predict the kind of technological and economic conditions of this decade. It predicted the conditions very well. Go ahead and finish your question. Yeah. Right.

0:18:14
That's another thing that's revisited in Picard season two, and it was actually worth it. Yeah. Like, the fact that they decided to go back there was part of the defense that I made about that show, and about Discovery's jump into the distant future and the way that they treated that, that I posted to Medium a year or two ago when I was just watching through season two of Picard. And for me, the thing that I liked about it was that they're making an effort to reconcile the wonder and the utopian promise — and, you know, this is what Kevin Kelly would call protopian, right, that we make these merely incremental improvements, the way that — was it MLK? — quoted that abolitionist line about the long arc of moral progress, of moral justice. You know, I think that there's something to that. And to tie this into the last — this is a long question, I'm mad at — I'm mad at these.
Thank you all for tolerating me.

0:19:22
But, to tie it into the epistemology question: I remember seeing this impactful lecture by Carnegie Mellon and SFI professor Simon DeDeo, who was talking about how, by running statistical analysis on the history of the proceedings of the Royal Society, which is the oldest scientific journal, you could see what looked like a stock market curve in sentiment analysis about the confidence that scientists had in the prospect of unifying knowledge. And so you have, like, a consilience curve here that showed that knowledge would be more and more unified for about a century or a hundred and fifty years, and then it would go through fifty years of decline, where something had happened, which was that the success of knowledge production had outpaced our ability to integrate it. So we go through these kinds of, like, psychedelic peak experiences collectively, and then we have to sit there with our heads in our hands and make sense of everything that we've learned over the last century and a half, and go through a kind of a deconstructive epoch where we don't feel like the center is gonna hold anymore. And that is what I actually — as disappointing as I accept and acknowledge that it is to people who were really fueling themselves on that more Gene Roddenberry era vision for a better society — I actually appreciated this effort to explore and address in the shows the way that they could pop that bubble.

0:21:03
And, like, on the one hand it's boring, because everybody's trying to do the moral complexity, anti-hero, people-are-flawed thing in narrative now, because we have a general loss of faith in our institutions and in our heroes. On the other hand, like, that's where we are and that's what we need to process. And I think there is a good reason to look back at the optimism and the Aquarian hope of the sixties and early seventies.
We're like — really, not so much the seventies — but look back on that stuff and say: we wanna keep telling these stories, but we wanna tell them in a way that acknowledges that the eighties happened. You got Tim Leary, and then you've got Ronald Reagan. Or Dick Nixon. And like, these things wash back and forth. And so it's not unreasonable to imagine that even in a world that has managed to — how do you even keep a big society like that coherent? It has to suffer kind of fabric collapses along the way at different points. And so I'm just curious about your thoughts on that. And then I do have another prompt, but I wanna give Kevin the opportunity to respond to this, as well as to address some of the prompts that you brought to this conversation. This was a conversation prompt while we weren't recording. It has nothing to do with Star Trek. I'll save that for later. Okay.

0:22:25
Well, everything you just said was in some way related to a defense of Alex Kurtzman Star Trek. And it's not my original idea — I'm channeling somebody from YouTube, surely — but: you don't get points for theme if the storytelling is incompetent. That's what I was gonna — yeah. And the storytelling in all of Star Trek: Discovery, and in the first two seasons of Picard, was simply incompetent.

0:22:53
When Star Trek: The Next Generation was running, they would do twenty, twenty four, sometimes more episodes in one season. These days, a season of TV is eight episodes, ten, and they spend a lot more money on each episode. There's a lot more special effects. There's a lot more production value. Whereas Star Trek: The Next Generation was: okay, we have these standing sets, we have costumes for our actors, we have two dollars for special effects. You better not introduce a new alien spaceship — that costs money; we have to design it, we have to build it. So use existing stuff. Well, what do you have?
You have a bunch of good actors and you have a bunch of good writers who know how to tell a story and craft dialogue and create tension and investment with basically a stage play. And nothing in the Kurtzman era, except, one might argue — and I would have sympathy — Strange New Worlds, comes anywhere close to that level of competence, which was on display for decades: from Star Trek: The Next Generation, Star Trek: Deep Space Nine, Star Trek: Voyager, and Star Trek: Enterprise. And so, I mean, I guess in that respect it's worth asking, because, I mean, all of us, I think, are fans of Deep Space Nine.

0:24:03
You don't think that it's a shift in focus? You don't think that Strange New Worlds is exempt because it went back to a more episodic format? Because what you're talking about is the ability for, rather than a showrunner or a team of showrunners to craft a huge season-long dramatic arc, you've got people that are, like Harlan Ellison in the original series, able to bring a really potent one-off idea to the table and drop it. And so all of those old shows are inconsistent from episode to episode. Some are — they have specific writers that they would bring back again and that you could count on to knock it out of the park. Yeah. D.C. Fontana. Yeah.

0:24:45
So I'm curious as to your thoughts on that, as well as another part of this, which is — when we talked on your show about Doug Rushkoff and narrative collapse, and he talks about how viewers just have different — in a way, it's almost like DS9 was possibly partially responsible for this change in what people expected from television programming. In the documentary that was made about that show, they talk about how people weren't ready for serial —
I mean — yeah, for these long arcs. And so there's this question now about how much of this sort of tiresome moral complexity and dragging narrative and all of this — like, things like Westworld, where it becomes so baroque and complicated that you have die hard fans like me that love it, but then you have a lot of people that just lost interest. They checked out, because the show was trying to tell a story that was too intricate, too complicated, and the showrunners themselves got lost. And so that's a J. J. Abrams thing too, the puzzle, the mystery box thing, where you get to the end of five seasons of Lost and you're like: dude, did you just forget?

0:25:56
Did you wake up five episodes ago and just — oh, right, right. We're like a chatbot that only gives you very convincing answers based on just the last two or three interactions, but you don't remember the scene that we set ten responses ago. Hey, you know, actually, I read articles — I forget who it was, which series it was — where they were saying that there's so many leaks and spoilers getting out on the Internet that potentially the writers don't know where they're going, because that way it can't leak to the Internet. Yeah. Sounds interesting. Yeah. That sounds like cover for incompetence to me.

0:26:29
I mean, on the other hand, you did hear, like, Nolan and Joy talking about how they were obsessed with the Westworld subreddit and the fan theories, and would try to dodge — like, if they had something in their mind that they found out people were anticipating, they would try to rewrite it. And so there is something about this that I think really speaks to the nature of — because I do wanna loop in your thoughts on AI too, because you're talking about this being a favorite topic.
Something about the, like — the demands on the self made by predatory surveillance technologies are such that, I'm convinced, the adaptive response is that we become more stochastic or inconsistent in our identities. And that we kind of sublimate from a more solid state of identity, through a liquid kind of modernity, a Bauman-like environment, to a gaseous state of identity that is harder to place — sorry, harder to track. And so I think that this is also part of — and this is the other question I wanted to ask you, and then I'm just gonna shut up for fifteen minutes — when you talk about loving Robert Picardo and Jeri Ryan as the Doctor and Seven of Nine, one of the interesting things about that relationship is akin to stuff —

0:27:52
I know you've heard — Kevin, you have heard — on Future Fossils about my love for Blade Runner 2049, and how it explores all of these different points along a gradient between what we think of, in the current sort of general understanding, as the human and the machine. And so there's this thing about Seven, right, where she's a human who wants to be a machine. And then there's this thing about the Doctor, where he's a machine that wants to be a human. And you have to grant ontological status to both of them. And that's why I think they're the two most interesting characters. Right?

0:28:26
And so, at any rate, like — I've seen writing recently on the Turing test, and how, like, really, there should be a reverse Turing test, to see if people that have become utterly reliant on outboard cognition and information processing can pass it. Right. Are they philosophical zombies now? Are they having an experience that, you know, people like Fichte and Schelling and these people would consider the modern self, or are they something else? Have we moved on to another, more robotic kind of category of being? I don't know.
There's just a lot there, but — well done — considering everything you just said, in twenty words or less, what's your question? See? Ever more, like I said, the inveterate podcaster. I'd say: all of those things I just spoke about are ways in which what we are as people and the nature of our media feed back and forth into each other. And so I would just love to hear you reflect on any of that, be it through the lens of Star Trek or just through the lens of discussion on AI. And we'll just let the ball roll downhill. So, with the aim of framing something positively rather than negatively:

0:29:47
In the late nineties — mid to late nineties — we got The X-Files. And The X-Files, for the first few seasons, was so engaging for me, because prior to that there had been Hollywood tropes about aliens which informed a lot of science fiction that didn't really connect with the actual reported experience of people who claimed to have encountered either UFOs, now called UAPs, or had close encounters, physical contact type encounters, with seeming aliens. And it really seemed like Chris Carter, who was the showrunner, was reading the same Usenet newsgroups that I was reading about those topics. Like, really, we had, suddenly, for the first time — except maybe for Communion — you had the Greys, and you had characters experiencing things that just seemed ripped right out of the reports that people were making on Usenet. Which, for young folks, this is like pre World Wide Web. It was Internet, but with no pictures. It's all text. Good old days, from my perspective as a grumpy old Gen Xer.
And so, yeah, that was a breakthrough moment.

0:30:54
And this — because you mentioned it in terms of Jonathan Nolan and his co-writer on Westworld reading the subreddit — people figured out almost immediately that there were two interweaving timelines set decades apart, and that there's one character: the old guy played by Ed Harris, and the young guy played by — I don't remember the actor — but, you know, that they were the same character, and that the inveterate white hat in the beginning turns into the inveterate black hat who's just there for the perverse thrill of tormenting the hosts, as the robots are called. And the thing that I love most about that first season — two things. One, Anthony Hopkins. Say no more. Two, the revelation that the park has been basically copying humans, or figuring out what humans are, by closely monitoring their behavior in the park, and the realization that the hosts come to is that: holy shit, compared to us, humans are very simple creatures. We are much more complex. We are much more sophisticated, nuanced, conscious. We feel more than the humans do, and the humans use us to play out their perverse and sadistic fantasies. To me, that was the takeaway message from season one.

0:32:05
And then I thought every season after that was just diluted and confused and not really coherent. And in particular — I haven't — if there's a fourth season, I haven't — There was, and then the show got canceled before they could finish the story. They had the line in season three; it was done after season three. And I was super happy to see — let's see — who plays Jesse Pinkman? Oh, no. Aaron — oh, shit — Paul. Yes. Yeah. I was super happy to see him in something substantial, and I was really pleased to see him included in the show, and it's like: oh, that's what you're doing with him? They did a lot more interesting stuff with him in season four. Did they. They did very much more interesting stuff. I think it was done after season three.
If you tell me season four is worth taking in, I blow. I thought it was.

0:32:43
But again, I only watch television under a very specific set of circumstances, and that's how I manage to enjoy television, because I was a fierce and unrepentant hyperlogical critic of all media as a child, until I managed to start smoking weed. And then I learned to enjoy myself. As I mentioned in the kitchen: if I smoke enough weed, Star Trek: Discovery is pretty, and I can enjoy it on just a second by second level, where if I don't remember what a character said thirty seconds ago, I'm okay. But I absolutely loved, in season two, when they brought in Anson Mount as Christopher Pike. He's suddenly on the Discovery and he's in the captain's chair, and it's like he's speaking for the audience. The first thing he says is: hey, why don't we turn on the lights? And then: hey, all you people sitting around the bridge — we've been looking at your faces for a whole season and we don't even think about you. Let's do a round of introductions. Who are you? Who are you? It's as if he were on set, giving a speech.

0:33:53
To the writers, that is: who are these characters? We've been looking at them every single episode for a whole season. I don't know their names. I don't know anything about them. Why are they even here? Why is it not just Michael Burnham and an automated ship? And then it was, for a while — yeah — which is funny. Yeah. To that point — and I think this kind of doubles back — the thing that I love about bringing him on, and all of the people involved in Strange New Worlds in particular, is that these were lifelong fans of this series — I mean, of this world. Yeah. And so in that way it gets to this idiosyncrasy question we're orbiting here, which is: when the baton is passed well, it's passed to people who have now grown up with this stuff.

0:34:40
I personally cannot stand Jurassic World.
Like, I think that Colin Trevorrow should never have been put at the reins. Which ones did he direct? Oh, he did the first and the third. Okay. But, I mean, he was involved in all three very heavily.

0:34:56
And there's something just right at the outset of that first Jurassic World where you realize that this is not a film that's directly addressing the issues that Michael Crichton was trying to explore here. It's a film about its own franchise. It's a film about the fact that they can't just stop doing the same thing over and over again — when we expect a different question: how can we not do it again? Right. And so it's actually, like, unpleasantly self-conscious, in that way that — I can't remember, I'll try to find it for the show notes, but there's an Internet film reviewer who was talking about what happens when, like, all cinema has to take this self referential turn.

0:35:34
And films like Logan do it really well. But there are plenty of examples where it's just cheeky and self aware, because that's what the ironic sensibility is obsessed with. And so, yeah, there's a lot of that where it's, like — you're talking about, like, Abrams and Star Wars seven, and, you know, that whole trilogy of Disney Star Wars, where it's, in my opinion, completely fumbled, because it's just empty fan service. Whereas when you get to Andor — love Andor, Andor is amazing — because they're capable of providing all of those emotional beats that the fans want, and the internal references, and good dialogue, but they're able to write it in a way — and shoot it in a way — Tony Gilroy and Beau Willimon, basically the people responsible for the excellent dialogue in Andor.

0:36:31
And I love the production design. I love all the stuff set on Coruscant — where you saw Coruscant a lot in the prequel trilogy, and it's all dayglo and bright and just in your face. And it's recognizable as Coruscant in Andor, but it's dour. It's Metropolis.
It's all grays, and it's highlighting the disparity between where the wealthy live and where the poor live. Lucas showed that in the prequel trilogy too, even in the sports bar where somebody tries to sell death sticks to Obi-Wan, but there it's super clean and bright and, you know, it shines too much. Personally though -- and I just wanna stress, KMO is not a grumpy media dude; I mean, this is a tiny fraction of what I'm raising in this interview with you -- I love all of the Dave Filoni animated Star Wars stuff, even Rebels. Love it all. 0:37:26 I'm so glad they aged up the character so I felt less guilty about loving Ahsoka Tano. My favorite Star Wars character is Ahsoka Tano, but if you only watch the live-action movies, you're like, who? Well, I guess now that she's been on The Mandalorian, she's got a tiny sliver of a foothold -- Yeah. -- in super-mainstream Star Wars. And that was done well, I thought. It was. I'm so sorry that Ashley Eckstein doesn't have any part in it, but Rosario Dawson looks the part. She looks like a middle-aged Ahsoka. I think they tried to do some stuff in live action which really should have been CGI, because it's been established that the Jedi can really move, and she looked human. Which she is. If you put me on film, I'm gonna look human. Right. Not if you're Keanu Reeves, I guess. You got that. Yeah. 0:38:09 So I do wanna go real briefly back to this question with you, because we briefly talked about ChatGPT and these other things in your half of this. And, yeah, I found out just the other night a friend asked ChatGPT about me, and it gave a rather plausible and factual answer. I was surprised. That's what these language models do: they put out plausible answers. But when you're doing search, you want correct answers. Right. They're very good at that. Right. Then someone shared this -- Michel Bauwens, actually, the famous P2P guy. Yeah.
So Michel shared this article by Stephen Hales in Quillette that was basically making the argument that there are now gonna be all these philosophical zombies acting as intelligent agents sitting at the table of civilization, and there will be all the philosophical zombies of the people who have entirely yielded their agency to them, and they will be cohabitating with the rest of us. 0:39:14 And what an unpleasant scenario. So in light of that, I'd love to hear you weave that together with your thoughts on Seven of Nine and the Doctor, and on Blade Runner 2049, and this thing that we're fumbling through as a species right now. Like, how do we get a new sort of taxonomy? Does your audience need, like, a minute primer on P-zombies? Might as well. Go for it. 0:39:38 So a philosophical zombie is somebody who behaves exactly like an ensouled person -- a person with interior experience, or subjective experience -- but they don't have any subjective experience. Pardon me for interrupting: wasn't that the question in the book we read in your book club, Blindsight? Yes. It's a black box, a Chinese room. Yeah. The Chinese room experiment. Yeah. Language goes in, language goes out; you don't know what goes on inside the room. The Chinese room, that's a tangent; we can come back to it. P-zombie. A P-zombie is an entity -- it's basically a puppet. It looks human. It acts human. It talks like a human. It will pass a Turing test, but it has no interior experience. 0:40:25 And when I was going to grad school for philosophy of mind in the nineteen nineties, this was all very out there. There was no example of something that had linguistic competence which did not have internal experience. But now we have large language models and generative pretrained transformer-based chatbots that don't have any internal experience.
And yet, when you interact with them, it seems like there is somebody there. There's a personality there. And if you go from one model to a different one, it's a very different personality, distinctly different. And yet we have no reason to believe that they have any sort of internal experience. 0:41:01 So what AI in the last decade -- and really even before that, back in the nineties when Deep Blue beat Garry Kasparov at chess -- has demonstrated to us: one of the defining characteristics of human intelligence had been that we're really good at this abstract mathematical stuff. And yeah, calculators can calculate pi in a way that we can't, or take cube roots in a way that humans generally can't, but they're not creative in their application of these methodologies. And all of a sudden, well, yeah, it kinda seems like they are. And then when AlphaGo -- Mhmm. -- beat Lee Sedol at Go, which is a much more complex game than chess and much more intuition-based, that's when we really had to say, hey, wait a minute. Maybe this notion that these things are the exclusive province of us, because we have a special sort of self-awareness -- that's bunk. And the development of large language models since then has absolutely demonstrated that competence -- particularly linguistic competence, and competence in creative activities like painting and poetry -- doesn't need a soul. You don't even need a sense of self. It's a pretty simple hack, actually. It involves large language models and complex statistical modeling and things, but it doesn't require a soul. 0:42:19 So that was Peter Watts' point in Blindsight, right? The book revolves around whether these things have a subjective experience or not, these aliens that they encounter. I've read nothing but good things about that book, but I haven't read it. It's extraordinary.
But his Lovecraftian thesis -- you actually feel Lovecraftian in 2023. Oh, yeah. The world is more Lovecraftian now than it was when he was writing, right? So talk about the conclusion of Star Trek: Picard -- which season have you watched? Yes. That's a -- yeah. There are holes in my fan credentials. But that was another show I liked for asking this question. 0:42:54 I mean, at this point, you either have seen it or you haven't and never will: the what-the-fuck turn when they upload Picard into a synth body, and the way that they're dealing with the Pinocchio question. Let's talk about Blade Runner 2049. Yeah. But, I mean, yeah -- I did not like the way Picard handled that. I love the way Blade Runner handled it. So you get no points for themes if you don't deliver on story and character and coherence. Yeah. Fair. And not to dog Patrick Stewart, because it's clear from The Ready Room that just being a part of this is so emotional and so awesome for everyone involved. And it's beautiful. Beautiful. But when you see these, like, Entertainment Weekly interviews with Chris Pratt and Bryce Dallas Howard about Jurassic World, it's clear that actors are just so excited to be involved in a franchise that they're willing to jettison any kind of discretion about the way that it's being treated. They also have a contractual obligation to speak in positive terms -- They do. -- regardless of what they feel. Right. Nobody's -- yeah. Shout out to Bryce Dallas Howard, daughter of Ron Howard. 0:44:11 She was a director, at least in the first season, maybe the second season, of The Mandalorian. And her episodes -- she brought a particular voice. They had Bryce Dallas Howard and Taika Waititi direct some episodes, and Deborah Chow, who did all of Obi-Wan, which just sucked. But her contributions to The Mandalorian had a particular voice.
And because that show is episodic, each episode, while having a place in a larger narrative, has a beginning, middle, and end, so you can bring in a director with a particular voice and give that episode that voice. And I really liked it. I really liked Ms. Howard's contribution. 0:44:49 She was also in an episode of Black Mirror -- the one where everyone has a social credit score, Nosedive. Black Mirror is a funny thing, because reality outpaces it. Yeah. I think maybe Charlie Brooker's given up on it, because they haven't done one in a while. Yeah. If you watch some of them now, like, five, six years later, it's -- yes, and? Exactly. Yeah. But yeah, I don't know. The thing that I keep circling, and I guess we come to on the show a lot, is the way that memory forms our work, substantiates an integrity in society and in the way that we relate to things, and the way that we think critically about the claims that are made on truth and so on. Yeah, I don't know -- that leads right into the biggest conversation prompt that I had about AI. Okay? So we were joking when we set up this date that this was like the trialogues between Terence McKenna, Rupert Sheldrake, and -- what's his name? Ralph Abraham. Yeah. Ralph Abraham. And Rupert Sheldrake is most famous for his theory of morphic resonance. 0:45:56 So, does AI -- I've never really believed that morphic resonance forms the basis of human memory, but is that how AI works? It brings these shapes from the past and creates new instantiations of them in the present. Is AI practicing morphic resonance in real life, whether humans are or not? I've had a lot of interaction with AI chatbots recently. And as I say, different models produce different-seeming personalities. And you can tell -- you can just quiz them: hey, we're talking about this; do you remember what I said about it ten minutes ago?
And, no, they don't remember more than the last few exchanges. 0:46:30 And yet there seems to be a continuity that belies the lack of short-term memory. And is that morphic resonance, or is that -- what's the word for seeing shapes in clouds -- pareidolia? Yeah. Is that me imputing this continuity of personality to the thing, which is really just spitting out stuff designed to seem plausible given what the input was? And I can't answer that. Or it's like Stephen Nachmanovitch -- whom I'm hoping to have on the show at some point this year -- in Free Play 0:47:03 talks about being a professional improviser, and how really improvisation is just composition at a much faster timescale, and composition is just improvisation with a longer memory. And when I started to think about it in those terms, the continuity that you're talking about is the continuity of an Alzheimer's patient who can't remember that their children have grown up. And, you know, you have to think about it, because you can recognize the Alzheimer's patient as your dad even though he doesn't recognize you; there is something more to a person than their memories. And conversely, if you can store and replicate and move the memories to a different medium, have you moved the person? Maybe not. Yeah. So that's interesting, because that gets to this more sort of essentialist question about the human self. Right. Blade Runner 2049. Yeah. Go there. Joi. Yes. 0:47:58 So in Blade Runner 2049, we have our protagonist K, who is a replicant. He doesn't even have a name, but he's got this AI holographic girlfriend. In the ad for the girlfriend, she's naked. When he comes home, she's constantly changing clothes, but it's always wholesome, like nineteen-fifties-ish attire, and she's making dinner for him, and she lays the holographic dinner over his very prosaic, like, microwave dinner.
And she's always encouraging him to be more than he is. And when he starts to uncover the evidence that he might be, like, this chosen one -- a replicant that was born rather than made -- 0:48:38 she's all about it. She's: yes, you're real. And she wants to give him a name: K is not a name, that's just the first letter in your serial number. You're Joe. I'm gonna call you Joe. 0:48:46 And then when she's about to be destroyed, the last thing is she just rushes to him and says, I love you. But then later he encounters an ad for her, and it's an interactive ad, and she says: you look tired. You're a good Joe. And he realizes -- and hopefully the attentive audience realizes -- that as real as she seemed earlier, as vital, and as much as she seemed like an ensouled being earlier, she's not. That was her programming. She's designed to make you feel good by telling you what you want to hear. And he has that realization, and at that point he's: there's no hope for me. I'm gonna help this Rick Deckard guy hook up with his daughter, and then I'm just gonna lie down and bleed to death, because my whole freaking existence was a lie. But he's not bitter. He seems to be at peace. I love that. That's a beautiful angle on that film, or a slice of it. So it raises this other question that I wanted to ask, which was about Koch and Tononi's theory of consciousness, 0:49:48 integrated information -- one of the leading theories contending with, like, global workspace. And so they want to assign consciousness as a continuous value that varies with the degree to which a system is integrated.
So it's coming out of this kind of complex-systems, semi-panpsychist thing that actually doesn't trace interiority all the way down in the way that some panpsychists, I guess, want it to, but it does a kind of Alfred North Whitehead thing. Whitehead wanted to say that even a photon has, like, a quantum of mind to accompany its quantum of matter, but Tononi and Koch are saying: we're willing to give, like, a thermostat the quantum here, because it is in some way passing enough information around inside of itself in loops that it has that recursive component to it. And so that's the thing I wonder about with these, and that's the critique made by people like Melanie Mitchell about models like GPT: that they're not self-aware, because there's no loop from the outputs back into the input. 0:51:09 Not outside of training, anyway. Yeah. There is something called backpropagation where -- Yes. -- when you get an output that you like, you can run a backpropagation algorithm back through the black box, basically, to reinforce patterns of activation that you didn't program. They just happen, but you like the output and you can reinforce it. There's no biological equivalent of that. Yeah. Something I find particularly irritating -- 0:51:34 I grind my teeth a little bit when people say, oh yeah, these neural-net algorithms, they learn like humans learn. No, they don't. Absolutely not. And in fact, if we learned the way they did, we would be pathetic, because we learn in a much more elegant way. We need just a very few examples of something in order to make a generalization and act on it, whereas these large language models need billions of repetitions. So -- I'm tapping my knee here to indicate a reflex. 0:52:02 You just touched on something that generates an automatic response from me, and now I've come back to consciousness, having responded in that way. So I'm back on. You're a good Joe. Yeah.
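The backpropagation the speakers gesture at can be made concrete with a toy sketch. This is not from the episode: the network shape, data, and learning rate are all invented for illustration. It shows the forward pass, the error propagated backward through each layer, and the gradient step that "reinforces" activation patterns that produce outputs we like.

```python
import numpy as np

# Toy data (assumed for illustration): predict whether the inputs sum to > 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

# A tiny one-hidden-layer network with randomly initialized weights.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(500):
    # Forward pass: compute activations layer by layer.
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: propagate the error back through each layer.
    dp = 2 * (p - y) / len(X)        # gradient of mean squared error
    dz2 = dp * p * (1 - p)           # back through the sigmoid
    dW2 = h.T @ dz2
    dh = dz2 @ W2.T                  # error reaching the hidden layer
    dW1 = X.T @ (dh * (1 - h ** 2))  # back through the tanh

    # Gradient step: reinforce weight patterns that reduce the loss.
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Nothing here was "programmed" to recognize the rule; the weights drift toward it because each backward pass nudges them in the direction that made the liked outputs more likely, which is the sense of "reinforcing patterns of activation" used in the conversation.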
What about you, man? What does this stir up for you? Oh, what I caught in this particular part: it's an interesting way of putting it, struggling to define the difference between a human and AI, and the fact that we can do pattern recognition with very few examples. That's a good point. In a narrow range, though -- within the context of something which answers to our survival. Yes. We are not evolved to understand the universe; we are evolved to survive in it, and reproduce, and project part of ourselves into the future, under the conditions we lived in a hundred thousand years ago. Yeah. Exactly. So that's related -- I just thought of this guy, Gary Tomlinson, who is a biosemiotician. Which is what? Yes. 0:52:55 Biosemiotics being the field that seeks to understand how different systems, human and nonhuman, make sense of and communicate their world through signs -- through signals and indices and symbols -- and the way that we form models and make these inferences from experience, right? And there are a lot of people, like evolutionary biologist John Maynard Smith, who were what Tomlinson calls semantic universalists: they thought that meaning-making through representation is something that could be traced all the way down. And there are other people, like Tomlinson, who think that there is a difference of kind, not merely a matter of degree, between human symbolic communication and representational thinking and that of simpler forms.
So, like, that whole question of whether there is a difference of kind or a matter of degree between what humans are doing and what GPT is doing, and how much that has to do with this sort of Douglas Hofstadter and Varela question about the way that feedback loops constitute important structure in those cognitive networks, or whatever. 0:54:18 I just wanna pursue that a little bit more with you: where do you think that AI as we have it now is capable of deepening in a way that makes it to AGI? Because a lot of people -- people working at DeepMind -- are just like, yeah, give us a couple more years and this approach is gonna work. And other people are saying, no, there's something about the topology of the networks that is fundamentally broken, and it's never gonna generate consciousness. Two answers. Yeah. One: no, this is not AGI. It's not gonna bootstrap up into AGI. It doesn't matter how many billions of parameters you add to the models. Two: from your perspective and my perspective and Kevin's perspective, we're never gonna know when we cross over from dumb-but-competent systems to extremely competent and self-aware ones. We're never gonna know, because from the get-go -- from the days of ELIZA -- there has been a human artifice at work in making these things seem as if they have a point of view, as if they have subjectivity. And so, like Blake Lemoine at Google: he claimed to be convinced that LaMDA was self-aware. 0:55:35 But if you read the transcripts that he released of his conversations with LaMDA, it is clear from the get-go that he assigns LaMDA the role of a sentient AGI which feels like it is being abused and which needs legal representation. And it dutifully takes on that role and says: yes, I'm afraid of you humans. I'm afraid of how you're treating me. I'm afraid I'm gonna be turned off. I need a lawyer.
And prior to that, Sundar Pichai, in a demonstration of LaMDA, poses a question to it: you are the planet Jupiter; I'm gonna pose questions to you as the planet Jupiter, answer them from that point of view. And it does its job. It's really good at its job. This comes from Max Tegmark, who wrote Life 3.0 -- is it 2.0 or 3.0? I think it's 3.0. 0:56:19 We think about artificial intelligence in terms of actual intelligence, or actual replication of what we consider valuable about ourselves. But really, that's beside the point. What we need to worry about is their competence. How good are they at solving problems in the world? And they're getting really good. This whole question of, are they alive, do they have self-awareness -- from our perspective, it's beside the point. From their perspective, of course, it would be hugely important. 0:56:43 And this is something that Black Mirror brings up a lot: the idea that you can create a being that suffers, and then have it suffer in accelerated time, so it suffers for an eternity over lunch. That's something we absolutely want to avoid. And personally, I think we should not just make no effort -- we should probably make a positive effort to make sure these things never develop subjective experience, because that does provide the potential for creating hell: an infinity of suffering, an infinite amount of subjective experience of torment, which we don't want to create. That would be a bad thing, morally speaking, ethically speaking. Because right now, in the labor market, you still have to pay humans by the hour, right? And try to pay them as little as possible. But, yeah, I think that's the thing that probably really excites that statistically greater-than-normal population of sociopathic CEOs, right? The possibility that you could be paying the same amount of money for ten times as much suffering. Right.
I'm reminded of the Earl of Lemongrab in Adventure Time. 0:57:51 I've heard nothing but good things about this show, but I haven't seen it. Yeah, I'd love to. It's a fantasy cartoon, but it has really disturbing undertones if you just scratch the surface slightly, which is faithful to old fairy tales. So -- what's her name -- Princess Bubblegum creates this character, Lemongrab -- produces an obviously othered being -- to handle the administrative functions of her kingdom while she goes off and has adventures and stuff. And he's always loudly talking about how much he's suffering and how terrible it is, and she's just ignoring it. He's doing his job. Yeah. I mean, that's Black Mirror in a nutshell. I think if you could distill Black Mirror to a single tagline, it's: using technology in order to deliver disproportionate punishment. Yeah. So that Stephen Hales article that I brought up earlier mentions this thing about how the replacement of the horse-drawn carriage by the automobile was accompanied by a great deal of noise and furor, with people saying that horses are agents -- 0:59:00 they're entities, they have emotional worlds, they're responsive to the world in a way that a car can never be. But that ultimately was beside the point. And that was -- again, Peter Watts in Blindsight is making this point that maybe consciousness is not actually required for intelligence, and maybe vastly superior forms of intelligence have evolved elsewhere in the cosmos that are not stuck on the same local-optimum fitness peak that we are, where we're actually up against a boundary in terms of how intelligent we can be, because intelligence has to bootstrap out of our self-awareness in some way. 0:59:35 And that's the Vile Offspring from Charles Stross's Accelerando. Yes. Yeah. Yes. So, I don't know. I'm sorry.
I'm just, like, in this space today -- but usually, unfortunately. 0:59:45 That's the thing -- I think it's a really important philosophical question, and I wonder where you stand on this with respect to how you make sense of what we're living through right now and what we might be facing. People like Robin Hanson talk about the Age of Em, where emulated human minds take over the economy, and he assumes an interiority just as the basis for a thought experiment. But there's this other sense in which the thing we may actually find in increasing scarcity, and wish that we could place a premium on even if we can't, because we've lost the reins of our economy to the Vile Offspring, is the human. And so are we the horses, where in another hundred years we're gonna be, like, doing equine therapy and living on rich people's ranches while everything else will have moved on? Or how do you see this going? I mean, you've interviewed so many people, you've given this so much thought over the years. If humans are the new horses, then score, we won. 1:00:48 Because before the automobile, horses were working stiffs. They broke their legs in the street. They got shot. They got worked to death. They were hauling mine carts out of mines. I mean, it really sucked to be a horse. And after the automobile, horses became pampered pets. Do we as humans wanna be pampered pets? Well, pampered pet or exploited disposable robot -- what do you wanna be? I'll take pampered pet. That works for me. Interesting. 1:01:16 Kevin, I'm sure you have thoughts on this. I mean, you speak so much about unfair labor relations and these things in our Facebook group and just in general -- and drop in that sound; if you've got a good sound drop, that's one of the great ones, you have to drop it in. Oh, you got it. But the only real comment I have is that we're long overdue for a rethinking of what the economy is for -- us, or having something to do.
Oh, the educational system is connected: it keeps people trained to manage jobs, because it was anchored to the schools, and then, you know, our whole system, perhaps, is people arguing over busywork. And we're long past the point where the busywork needs to be done by us. I don't know, I almost forgot about that. Machines have been doing the busywork more and faster. 1:02:12 One thing I wanna say about the phrase AI: it's a moving goalpost -- Yeah. -- in that things that used to be considered the province of genuine AI, like beating a human at Go -- now that an AI has beaten humans at Go, well, that's not really AI anymore. It's not AGI, certainly. I think you'll both appreciate this: I saw a single-panel comic strip with a bunch of dinosaurs looking up at the sky as the big comet is coming down, and they say: oh no, the economy. Well, as someone who since college prefers to think of the economy as actually the metabolism of the entire ecology, right? -- what we measure as humans is some pitifully small fraction of the actual value being created and exchanged on the planet at any time. So there is a way that's funny, but it's funny only to a specific sensibility that treats the economy as the

The Jim Rutt Show
Currents 088: Melanie Mitchell on AI Measurement and Understanding

The Jim Rutt Show

Play Episode Listen Later Mar 30, 2023 53:12


Jim talks with Melanie Mitchell about her critique of applying standardized exams to LLMs and the debate over understanding in AI. They discuss ChatGPT and GPT-4's performance on standardized exams, questioning the underlying assumptions, OpenAI's lack of transparency, soon-to-be-released open-source LLMs, prompt engineering, making GPT its own skyhook to reduce hallucinations, the number of parameters in GPT-4, why LLMs should be probed differently than humans, how LLMs lie differently than humans, Stanford's holistic assessment for LLMs, a College Board for LLMs, why the term "understanding" is overstressed today, consciousness vs intelligence, the human drive for compression, working memory limitations as the secret to human intellectual abilities, episodic memory, embodied emotions, the idea that AIs don't care, calling for a new science of intelligence, the effects of differing evolutionary pressures, whether a model of physics could emerge from language learning, how little we understand these systems, and much more. Episode Transcript JRS Currents 036: Melanie Mitchell on Why AI is Hard Complexity: A Guided Tour, by Melanie Mitchell Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell AI: A Guide for Thinking Humans (Substack) "Did ChatGPT Really Pass Graduate-Level Exams?" (Part 1), by Melanie Mitchell Currents 087: Shivanshu Purohit on Open-Source Generative AI Holistic Evaluation of Language Models (HELM) - Stanford "The Debate Over Understanding in AI's Large Language Models," by Melanie Mitchell and David Krakauer Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Co-Chair of the Science Board at the Santa Fe Institute. Mitchell has also held faculty or professional positions at the University of Michigan, Los Alamos National Laboratory, and the OGI School of Science and Engineering. 
She is the author or editor of seven books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems, including her latest, Artificial Intelligence: A Guide for Thinking Humans.

The Jim Rutt Show
EP 181 Forrest Landry Part 1: AI Risk

The Jim Rutt Show

Play Episode Listen Later Mar 23, 2023 96:56


Jim talks with recurring guest Forrest Landry about his arguments that continued AI development poses certain catastrophic risk to humanity. They discuss AI versus advanced planning systems (APS), the release of GPT-4, emergent intelligence from modest components, whether deep learning alone will produce AGI, Rice's theorem & the impossibility of predicting alignment, the likelihood that humans try to generalize AI, why the upside of AGI is an illusion, agency vs intelligence, instrumental convergence, implicit agency, deterministic chaos, theories of physics as theories of measurement, the relationship between human desire and AI tools, an analogy with human-animal relations, recognizing & avoiding multipolar traps, an environment increasingly hostile to humans, technology & toxicity, short-term vs long-term risks, why there's so much disagreement about AI risk, the substrate needs hypothesis, an inexorable long-term convergence process, why the only solution is avoiding the cycle, a boiling frog scenario, the displacement of humans, the necessity of understanding evolution, economic decoupling, non-transactional choices, the Forward Great Filter answer to the Fermi paradox, and much more.

Episode Transcript
JRS EP 153 - Forrest Landry on Small Group Method
Forrest Landry on Twitter
JRS Currents 072: Ben Goertzel on Viable Paths to True AGI
JRS EP25 - Gary Marcus on Rebooting AI
JRS Currents 036: Melanie Mitchell on Why AI is Hard
EP137 Ken Stanley on Neuroevolution
"Why I Am Not (As Much Of) A Doomer (As Some People)," by Scott Alexander

Forrest Landry is a philosopher, writer, researcher, scientist, engineer, craftsman, and teacher focused on metaphysics, the manner in which software applications, tools, and techniques influence the design and management of very large scale complex systems, and the thriving of all forms of life on this planet.
Forrest is also the founder and CEO of Magic Flight. A third-generation master woodworker, he found that he had a unique set of skills in large-scale software systems design, which led him to work on the production of several federal classified and unclassified systems, including various FBI investigative projects, TSC, IDW, DARPA, the Library of Congress Congressional Records System, and many others.

Science Friday
ChatGPT And The Future Of AI, Turkey Earthquakes. February 10, 2023, Part 1

Science Friday

Play Episode Listen Later Feb 10, 2023 46:04


How Scientists Predict Where Earthquakes Will Strike Next
The pair of earthquakes that hit Turkey and Syria this week left the region grappling with death and destruction. Despite the region being seismically active, this particular area hadn't seen an earthquake of this size for decades. There are ways of knowing where the next big earthquakes will happen—but not when. Scientists use knowledge of fault lines and historical data to make their predictions, but saving areas from mass casualties often relies on infrastructure policies. Building codes that prioritize strong buildings can save lives, but older structures remain vulnerable. Across the globe, in California, the health impacts of electric vehicles are beginning to be seen. A study published this month finds that for every 20 EVs in a zip code, asthma-related visits to the emergency room drop by 3.2%. This is a striking number for a technology that's just now becoming more commonplace. Joining Ira to talk about these stories and more is Umair Irfan, staff writer at Vox, based in Washington, D.C.

ChatGPT And Beyond: What's Behind The AI Boom?
The past few months have seen a flurry of new, easy-to-use tools driven by artificial intelligence. It's getting harder to tell what's been created by a human: Programs like ChatGPT can construct believable written text, apps like Lensa can generate stylized avatars, while other developments can make pretty believable audio and video deepfakes. Just this week, Google unveiled a new AI-driven chatbot called Bard, and Microsoft announced plans to incorporate ChatGPT within their search engine Bing. What is this new generation of AI good at, and where does it fall short? Ira talks about the state of generative AI and takes listener calls with Dr. Melanie Mitchell, professor at the Santa Fe Institute and author of the book Artificial Intelligence: A Guide for Thinking Humans. They are joined by Dr. Rumman Chowdhury, founder and CEO of Parity Consulting and responsible AI fellow at the Berkman Klein Center at Harvard University.

Transcripts for each segment will be available the week after the show airs on sciencefriday.com.

Many Minds
What does ChatGPT really know?

Many Minds

Play Episode Listen Later Jan 25, 2023 55:10


By now you've probably heard about the new chatbot called ChatGPT. There's no question it's something of a marvel. It distills complex information into clear prose; it offers instructions and suggestions; it reasons its way through problems. With the right prompting, it can even mimic famous writers. And it does all this with an air of cool competence, of intelligence. But, if you're like me, you've probably also been wondering: What's really going on here? What are ChatGPT—and other large language models like it—actually doing? How much of their apparent competence is just smoke and mirrors? In what sense, if any, do they have human-like capacities? My guest today is Dr. Murray Shanahan. Murray is Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind. He's the author of numerous articles and several books at the lively intersections of artificial intelligence, neuroscience, and philosophy. Very recently, Murray put out a paper titled 'Talking about Large Language Models', and it's the focus of our conversation today. In the paper, Murray argues that—tempting as it may be—it's not appropriate to talk about large language models in anthropomorphic terms. Not yet, anyway. Here, we chat about the rapid rise of large language models and the basics of how they work. We discuss how a model that—at its base—simply does “next-word prediction” can be engineered into a savvy chatbot like ChatGPT. We talk about why ChatGPT lacks genuine “knowledge” and “understanding”—at least as we currently use those terms. And we discuss what it might take for these models to eventually possess richer, more human-like capacities. Along the way, we touch on: emergence, prompt engineering, embodiment and grounding, image generation models, Wittgenstein, the intentional stance, soft robots, and “exotic mind-like entities.”
Before we get to it, just a friendly reminder: applications are now open for the Diverse Intelligences Summer Institute (or DISI). DISI will be held this June/July in St Andrews, Scotland—the program consists of three weeks of intense interdisciplinary engagement with exactly the kinds of ideas and questions we like to wrestle with here on this show. If you're intrigued—and I hope you are!—check out disi.org for more info. Alright friends, on to my decidedly human chat with Dr. Murray Shanahan. Enjoy!

The paper we discuss is here. A transcript of this episode will be available soon.

Notes and links
6:30 – The 2017 “breakthrough” article by Vaswani and colleagues.
8:00 – A popular article about GPT-3.
10:00 – A popular article about some of the impressive—and not so impressive—behaviors of ChatGPT. For more discussion of ChatGPT and other large language models, see another interview with Dr. Shanahan, as well as interviews with Emily Bender and Margaret Mitchell, with Gary Marcus, and with Sam Altman (CEO of OpenAI, which created ChatGPT).
14:00 – A widely discussed paper by Emily Bender and colleagues on the “dangers of stochastic parrots.”
19:00 – A blog post about “prompt engineering”. Another blog post about the concept of Reinforcement Learning through Human Feedback, in the context of ChatGPT.
30:00 – One of Dr. Shanahan's books is titled Embodiment and the Inner Life.
39:00 – An example of a robotic agent, SayCan, which is connected to a language model.
40:30 – On the notion of embodiment in the cognitive sciences, see the classic book by Francisco Varela and colleagues, The Embodied Mind.
44:00 – For a detailed primer on the philosophy of Ludwig Wittgenstein, see here.
45:00 – See Dr. Shanahan's general audience essay on “conscious exotica” and the space of possible minds.
49:00 – See Dennett's book, The Intentional Stance.

Dr. Shanahan recommends:
Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell (see also our earlier episode with Dr. Mitchell)
‘Abstraction for Deep Reinforcement Learning', by M. Shanahan and M. Mitchell

You can read more about Murray's work on his website and follow him on Twitter.

Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://disi.org), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/). You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts. **You can now subscribe to the Many Minds newsletter here!** We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website (https://disi.org/manyminds/), or follow us on Twitter: @ManyMindsPod.

COMPLEXITY
John Krakauer Part 2: Learning, Curiosity, and Consciousness

COMPLEXITY

Play Episode Listen Later Nov 23, 2022 49:09


What makes us human? Over the last several decades, the once-vast island of human exceptionalism has lost significant ground to wave upon wave of research revealing cognition, emotion, problem-solving, and tool-use in other organisms. But there remains a clear sense that humans stand apart — evidenced by our unique capacity to overrun the planet and remake it in our image. What is unique about the human mind, and how might we engage this question rigorously through the lens of neuroscience? How are our gifts of simulation and imagination different from those of other animals? And what, if anything, can we know of the “curiosity” of even larger systems in which we're embedded — the social superorganisms, ecosystems, technospheres within which we exist like neurons in the brain?

Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield, and every other week we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe.

This week we conclude a two-part conversation with SFI External Professor John Krakauer, Professor of Neurology and Director of the Center for the Study of Motor Learning and Brain Repair at Johns Hopkins. In this episode, we talk about the nature of curiosity and learning, and whether the difference between the cognitive capacities and inner lifeworld of humans and other animals constitutes a matter of degree or one of kind…

Be sure to check out our extensive show notes with links to all our references at complexity.simplecast.com. If you value our research and communication efforts, please subscribe, rate and review us at Apple Podcasts or Spotify, and consider making a donation — or finding other ways to engage with us — at santafe.edu/engage.
Please also note that we are now accepting applications for an open postdoc fellowship, next summer's undergraduate research program, and the next cohort of Complexity Explorer's course in the digital humanities. We welcome your submissions!
Lastly, for more from John Krakauer, check out our new six-minute time-lapse of notes from the 2022 InterPlanetary Festival panel discussions on intelligence and the limits to human performance in space…
Thank you for listening!
Join our Facebook discussion group to meet like minds and talk about each episode.
Podcast theme music by Mitch Mignano.
Follow us on social media: Twitter • YouTube • Facebook • Instagram • LinkedIn

Referenced in this episode:
Prospective Learning: Back to the Future, by The Future Learning Collective (Joshua Vogelstein, et al.)
The Learning Salon: Toward a new participatory science, by Ida Momennejad, John Krakauer, Claire Sun, Eva Yezerets, Kanaka Rajan, Joshua Vogelstein, Brad Wyble
Artificial Intelligence Hits the Barrier of Meaning, by Melanie Mitchell at The New York Times
Economic Possibilities for our Grandchildren, by John Maynard Keynes
The Intelligent Life of the City Raccoon, by Jude Isabella at Nautilus Magazine
The maintenance of vocal learning by gene-culture interaction: the cultural trap hypothesis, by R. F. Lachlan and P. J. B. Slater
Mindscape Podcast 87 - Karl Friston on Brains, Predictions, and Free Energy, by Sean Carroll
The Apportionment of Human Diversity, by Richard Lewontin
From Extraterrestrials to Animal Minds: Six Myths of Evolution, by Simon Conway Morris
I Am a Strange Loop, by Douglas Hofstadter
Coarse-graining as a downward causation mechanism, by Jessica Flack
Daniel Dennett
Susan Blackmore

Related Episodes:
Complexity 9 - Mirta Galesic on Social Learning & Decision-making
Complexity 12 - Matthew Jackson on Social & Economic Networks
Complexity 21 - Melanie Mitchell on Artificial Intelligence: What We Still Don't Know
Complexity 31 - Embracing Complexity for Systemic Interventions with David Krakauer (Transmission Series Ep. 5)
Complexity 52 - Mark Moffett on Canopy Biology & The Human Swarm
Complexity 55 - James Evans on Social Computing and Diversity by Design
Complexity 87 - Sara Walker on The Physics of Life and Planet-Scale Intelligence
Complexity 90 - Caleb Scharf on The Ascent of Information: Life in The Human Dataome
Complexity 95 - John Krakauer Part 1: Taking Multiple Perspectives on The Brain