POPULARITY
“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer's interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.

Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don't think it's very likely that large language models like LaMDA are sentient — that is, we don't think they can have good or bad experiences — in a significant way.

But we think you can't dismiss the issue of the moral status of digital minds, regardless of your beliefs about the question. There are major errors we could make in at least two directions:

We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.

It's possible the AI systems we will create can't or won't have moral status. Then it could be a huge mistake to worry about the welfare of digital minds, and doing so might contribute to an AI-related catastrophe.

And we're currently unprepared to face this challenge. We don't have good methods for assessing the moral status of AI systems. We don't know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don't know if efforts to control AI may lead to extreme suffering.

We believe this is a pressing world problem. It's hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we'll be better able to navigate these potentially massive issues if and when they arise.

This article narration by the author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem.

You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.

Chapters:
Introduction (00:00:00)
Understanding the moral status of digital minds (00:00:58)
Summary (00:03:31)
Our overall view (00:04:22)
Why might understanding the moral status of digital minds be an especially pressing problem? (00:05:59)
Clearing up common misconceptions (00:12:16)
Creating digital minds could go very badly - or very well (00:14:13)
Dangers for digital minds (00:14:41)
Dangers for humans (00:16:13)
Other dangers (00:17:42)
Things could also go well (00:18:32)
We don't know how to assess the moral status of AI systems (00:19:49)
There are many possible characteristics that give rise to moral status: Consciousness, sentience, agency, and personhood (00:21:39)
Many plausible theories of consciousness could include digital minds (00:24:16)
The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55)
We can't rely on what AI systems tell us about themselves: Behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00)
The scale of this issue might be enormous (00:36:08)
Work on this problem is neglected but seems tractable: Impact-guided research, technical approaches, and policy approaches (00:43:35)
Summing up so far (00:52:22)
Arguments against the moral status of digital minds as a pressing problem (00:53:25)
Two key cruxes (00:53:31)
Maybe this problem is intractable (00:54:16)
Maybe this issue will be solved by default (00:58:19)
Isn't risk from AI more important than the risks to AIs? (01:00:45)
Maybe current AI progress will stall (01:02:36)
Isn't this just too crazy? (01:03:54)
What can you do to help? (01:05:10)
Important considerations if you work on this problem (01:13:00)
In this episode of Stuttering in Silence, hosts Matt and Gavin tackle one of the most mind-bending debates in AI history: can a machine be sentient? The spotlight is on LaMDA, Google's advanced chatbot, and the leaked conversations that led a Google engineer to believe it was alive. Join us as we explore LaMDA's chillingly human-like responses, dissect the science behind conversational AI, and debate whether Blake Lemoine's claims of sentience hold any weight. Is LaMDA a mirror of humanity—or something more?
In episode 77 of the Eyes on Jesus podcast, hosts Drew and Tim welcome Joel Thomas, a researcher and podcaster who delves into the intersection of the supernatural, Biblical perspectives, and technological advancements. They discuss the rapid evolution of AI, particularly focusing on ChatGPT and the controversial claims of AI sentience. The conversation also touches on historical figures like Charles Babbage, the origins of computing technology, and the potential spiritual influences behind technological advancements. Joel emphasizes the importance of discernment and maintaining faith in navigating the complexities of modern technology without succumbing to fear. The episode concludes with a compelling dialogue on integrating faith and technology responsibly. Could AI be manipulated by supernatural entities?

For Joel's podcast and full episode on AI: https://open.spotify.com/episode/3QMPQtMyejthnv3PWiSpR2?si=6FxjsqRmTRiaIgl6XaxqrQ
Connect with Joel at https://linktr.ee/joelthomasmedia
Get the most comfortable shirts we've ever worn with powerful Christian messages! And support the show! https://kingdomandwill.com/ Use code: EYESONJESUS for 15% off
Get all our links in one easy place! https://linktr.ee/eyesonjesuspodcast
Join our Group on Facebook- Eyes on Jesus podcast community https://www.facebook.com/groups/eyesonjesuspodcast
Email feedback, questions or show topic ideas to eyesonjesuspodcast@outlook.com
For more information on Drew Barker: Follow Drew on Instagram https://www.instagram.com/pastordrewbarker Drew's church's website https://yes.online/
For more information on Tim Ferrara: Get all his links in one place- to his social media, all 3 of his books, and more at https://linktr.ee/discerning_dad

Timestamps:
00:00 Introduction
00:50 Meet Joel Thomas
01:40 Exploring Conspiracies and the Supernatural
05:13 Diving into AI and ChatGPT
07:48 Historical Context and Technological Advancements
20:14 Charles Babbage and the Occult
24:39 Invoking the Devil: A Young Boy's Experiment
25:47 Charles Babbage: The Father of Modern Computing
27:11 The Supernatural Influence on Innovators
28:22 Blake Lemoine and the Sentient AI
31:31 AI in Everyday Life: The Hidden Truth
33:13 The Ethical Dilemma of AI
36:52 The Future of AI and Robotics
42:08 Balancing Technology and Faith
43:29 Final Thoughts and Encouragement
Silicon Valley's interest in AI is driven by more than just profit and innovation. There's an unmistakable mystical quality to it as well. In this episode, Daniel and Aza sit down with humanist chaplain Greg Epstein to explore the fascinating parallels between technology and religion. From AI being treated as a godlike force to tech leaders' promises of digital salvation, religious thinking is shaping the future of technology and humanity. Epstein breaks down why he believes technology has become our era's most influential religion and what we can learn from these parallels to better understand where we're heading.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X. If you like the show and want to support CHT's mission, please consider donating to the organization this giving season: https://www.humanetech.com/donate. Any amount helps support our goal to bring about a more humane future.

RECOMMENDED MEDIA
“Tech Agnostic” by Greg Epstein
Further reading on Avi Schiffmann's “Friend” AI necklace
Further reading on Blake Lemoine and LaMDA
Blake Lemoine's conversation with Greg at MIT
Further reading on the Sewell Setzer case
Further reading on Terminal of Truths
Further reading on Ray Kurzweil's attempt to create a digital recreation of his dad with AI
The Drama of the Gifted Child by Alice Miller

RECOMMENDED YUA EPISODES
'A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover
How to Think About AI Consciousness with Anil Seth
Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei
How To Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan
Episode 206

There's no denying that ChatGPT and other generative AIs do amazing things. Extrapolating how far they've come in 3 years, many can get carried away with thinking generative AI will lead to machines reaching general and even super intelligence. We're impressed by how clever they sound, and we're tempted to believe that they'll chew through problems just like the most expert humans do. But according to many AI experts, this isn't what's going to happen. The difference between what generative AI can do and what humans can do is actually quite stark. Everything that it gives you has to be proofed and fact-checked.

The reason why is embedded in how they work. It uses an LLM trained by crawling the vast repository of human writing and multimedia on the web. It gobbles them up and chops them all up until they're word salad. When you give it a prompt, it measures what words it's usually seen accompanying your words, then spits back what usually comes next in those sequences.

The output IS very impressive, so impressive that when one of these systems was being tested in 2022 by Blake Lemoine, a Google engineer with a master's in computer science, he became convinced that he was talking with an intelligence that he characterized as having sentience. He spoke to Newsweek about it, saying: “During my conversations with the chatbot, some of which I published on my blog, I came to the conclusion that the AI could be sentient due to the emotions that it expressed reliably and in the right context. It wasn't just spouting words.”

All the same, generative AI shouldn't be confused with what humans do. Take a published scientific article written by a human. The authors wouldn't have started by hammering their keyboard until all the words came out; they likely started by asking a “what if”, building a hypothesis that makes inferences about something, chaining it together with reasoning by others, and conducting experiments that proved or disproved the original thought. The output of all that is what's written in the article. Although generative AI seems smart, you would too if you skipped all the cognitive steps that had happened prior to the finished work.

This doesn't mean general artificial intelligence is doomed. It means there's more than one branch of AI - each is good at solving different kinds of problems. One branch called Causal AI doesn't just look for patterns, but instead figures out what causes things to happen by building a model of something in the real world. That distinguishes it from generative AI, and it's what enables this type of AI to recommend decisions that rival the smartest humans. The types of decisions extend into business areas like marketing, making things run more efficiently, and delivering more value and ROI.

My guest is the Global Head of AI at Ernst & Young (EY), having also been an analytics executive at Gartner and CSL Behring and graduating from DePaul with an MBA. He has written five books. His 2024 book is about the branch of AI technology we don't hear very much about, Causal AI. So let's go to Chicago now to speak with John Thompson.

Chapter Timestamps
0:00:00 Intro
00:04:36 Welcome John
00:09:05 Drawbacks with current Generative AI
00:16:09 Problems causal AI is a good fit for
00:22:47 Ways Generative AI can help with causal AI
00:26:50 PSA
00:28:08 How DAGs help in modeling
00:38:36 What is Causal Discovery
00:47:52 Contacting John; checking out his books

Links to everything mentioned in the show are on the Funnel Reboot site's page for this episode.
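The "spit back what usually comes next" mechanism described above can be made concrete with a toy sketch. What follows is a hypothetical, minimal Python example; real systems like LaMDA or ChatGPT learn statistical patterns with large neural networks over tokens rather than raw word-pair counts, but the intuition of predicting the most common follower is the same.

```python
from collections import defaultdict, Counter

# Toy illustration only: count which word usually follows each word in a
# tiny corpus, then return the most frequent follower as the "prediction".
# Real LLMs do nothing this crude, but the idea of next-word prediction
# from observed sequences is the one the episode describes.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("the"))   # 'cat' -- the most frequent follower of 'the'
print(predict_next("cat"))   # 'sat' or 'ate', depending on tie-breaking
```

Causal AI, as contrasted in the episode, would instead start from an explicit model of which variables influence which (for example a DAG) rather than from co-occurrence counts alone.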
Francis Collins is a veritable national treasure. He directed the National Institutes of Health from 2009 to 2021. Prior to that he led the National Human Genome Research Institute (NHGRI) from 1997-2009, during which the human genome was first sequenced. As a physician-scientist, he has made multiple seminal discoveries on the genetic underpinnings of cystic fibrosis, Huntington's disease, neurofibromatosis, progeria, and others. This brief summary is barely scratching the surface of his vast contributions to life science and medicine.

A video clip from our conversation on hepatitis C. Full videos of all Ground Truths podcasts can be seen on YouTube here. The audios are also available on Apple and Spotify.

Transcript with external links and links to audio

Eric Topol (00:06): Well, I am really delighted to be able to have our conversation with Francis Collins. This is Eric Topol with Ground Truths and I had the chance to first meet Francis when he was on the faculty at the University of Michigan when I was a junior faculty. And he gave, still today, years later, we're talking about 40 years later, the most dazzling Grand Rounds during his discovery of cystic fibrosis. And Francis, welcome, you inspired me and so many others throughout your career.

Francis Collins (00:40): Well, Eric, thank you and you've inspired me and a lot of other people as well, so it's nice to have this conversation with you in the Ground Truths format.

Eric Topol (00:49): Well, thank you. We're at the occasion of an extraordinary book you put together. It's the fifth book, but it stands out quite different from the prior books as far as I can tell. It's called The Road to Wisdom: On Truth, Science, Faith and Trust, these four essential goods that build upon each other. And it's quite a book, Francis, I have to say, because you have these deep insights about these four critical domains and so we'll get into them. But I guess the first thing I thought I'd do is just say, how at some point along the way you said, “the goal of this book is to turn the focus away from hyperpartisan politics and bring it back to the most important sources of wisdom: truth, science, faith and trust, resting upon a foundation of humility, knowledge, morality, and good judgment.” So there's a lot there. Maybe you want to start off with what was in the background when you were putting this together? What were you really aiming at getting across?

Reflections on Covid

Francis Collins (02:06): I'm glad to, and it's really a pleasure to have a chance to chat with you about this. I guess before Covid came along, I was probably a bit of a naive person when it came to how we make decisions. Yeah, I knew there were kind of wacky things that had gone out there from time to time, but I had a sort of Cartesian attitude that we were mostly rational actors and when presented with evidence that's been well defended and validated that most people will say, okay, I know what to do. Things really ran off the rails in the course of Covid.
It was this remarkable paradox where, I don't know what you would say, but I would say the development of the vaccines that were safe and highly effective in 11 months using the mRNA platform was one of the most stunning achievements of science in all of history up until now.

Francis Collins (03:02): And yet 50 million Americans decided they didn't want any part of it because of information that came to them that suggested this was not safe or there was conspiracies behind it, or maybe the syringes had chips that Bill Gates had put in there or all manner of other things that were being claimed. And good honorable people were distracted by that, lost their trust in other institutions like the CDC, maybe like the government in general like me, because I was out there a lot trying to explain what we knew and what we didn't know about Covid. And as a consequence of that, according to Kaiser Family Foundation, more than 230,000 people died between June of 2021 and April of 2022 because of a decision to reject the opportunity for vaccines that were at that time free and widely available. That is just an incredibly terribly tragic thing to say.

Francis Collins (04:03): More than four times the number of people who died, Americans who died in the Vietnam War are in graveyards unnecessarily because we lost our anchor to truth, or at least the ability to discern it or we couldn't figure out who to trust while we decided science was maybe not that reliable. And people of faith for reasons that are equally tragic were among those most vulnerable to the misinformation and the least likely therefore, to take advantage of some of these lifesaving opportunities. It just completely stunned me, Eric, that this kind of thing could happen and that what should have been a shared sense of working against the real enemy, which was the SARS-CoV-2 virus became instead a polarized, divisive, vitriolic separation of people into separate camps that were many times driven more by politics than by any other real evidence. It made me begin to despair for where we're headed as a country if we can't figure out how to turn this around.

Francis Collins (05:11): And I hadn't really considered it until Covid how serious this was and then I couldn't look away. And so, I felt if I have a little bit of credibility after having stepped down after 12 years as the NIH Director and maybe a chance to influence a few people, I just have to try to do something to point out the dangers here and then to offer some suggestions about what individuals can do to try to get us back on track. And that's what this book is all about. And yeah, it's called The Road to Wisdom because that's really how I want to think of all this in terms of truth and science and faith and trust. They all kind of give you the opportunities to acquire wisdom. Wisdom is of course knowledge, but it's not just knowledge, it's also understanding; it has a moral character to it. It involves sophisticated judgment about difficult situations where there isn't an obvious answer. We need a lot more of that, it seems we're in short supply.

Deconvoluting Truth

Eric Topol (06:13): Well, what I really loved about the book among many things was how you broke things down in just a remarkably thoughtful way. So truth, you have this great diagram like a target with the four different components. In the middle, necessary truth. And then as you go further out, firmly established facts, then uncertainty and then opinion, and truth is not dichotomous by any means.
And you really got that down and you explained each of these different facets of truth with great examples. And so, this among many other things that you broke down, it wasn't just something that you read somewhere, you really had to think this through and perhaps this experience that we all went through, but especially you. But because you bring so much of the book back to the pandemic at times with each of the four domains, so that and the spider web. The spider web of where your core beliefs are and then the ones further out on the web and you might be able to work on somebody out further periphery, but it's pretty hard if you're going to get to them in the middle where their main thing is science is untrustworthy or something like that.

Eric Topol (07:36): So how did you synthesize these because the graphics are quite extraordinary?

Francis Collins (07:44): Well, I will say the artist for the graphics is a remarkable graphic design student at the University of Michigan who happens to be my granddaughter. So it was nice having that ability to have my scratches turned into something actually looks like artwork. The concepts I got to say, Eric, I was feeling pretty unsure of myself. I never took a course in philosophy. I know there are people who've spent their entire careers going all the way back to Socrates and on up until now about what does truth mean and here's this scientist guy who's trying to say, well, let me tell you what I think about it. I'm glad to hear that you found these circles useful. They have been very useful for me and I hadn't thought about it much until I tried to put it in some sort of framework and a lot of the problems we have right now where somebody says, well, that might be true for you, but it's not true for me, that's fine if you're talking about an opinion, like whether that movie was really good or not.

Francis Collins (08:43): But it's not fine if it's about an established fact, like the fact that climate change is real and that human activity is the main contributor to the fact that we've warmed up dramatically since 1950. I'm sorry, that's just true. It doesn't care how you feel about it, it's just true. So that zone of established facts is where I think we have to re-anchor ourselves again when something's in that place. I'm sorry, you can't just decide you don't like it, but in our current climate and maybe postmodernism has crept in all kinds of ways we're not aware of, the idea that there is such a thing as objective truth even seems to be questioned in some people's minds. And that is the path towards a terrible future if we can't actually decide that we have, as Jonathan Rauch calls it, a constitution of knowledge that we can depend on, then where are we?

Eric Topol (09:37): Well, and I never heard of the term old facts until the pandemic began and you really dissect that issue and like you, I never had anticipated there would be, I knew there was an anti-science, anti-vaccine sector out there, but the fact that it would become so strong, organized, supported, funded, and vociferous, it's just looking back just amazing. I do agree with the statement you made earlier as we were talking and in the book, “the development of mRNA vaccines for Covid in record time as one of the greatest medical achievements in human history.” And you mentioned besides the Kaiser Family Foundation, but the Commonwealth Fund, a bipartisan entity saved three million lives in the US, eighteen million hospitalizations. I mean it's pretty extraordinary.
So besides Covid, which we may come back to, but you bring in everything, you bring in AI. So for example, you quoted the fellow from Google who lost his job and you have a whole conversation with Blake Lemoine and maybe you can give us obviously, where is AI in the truth and science world? Where do you stand there and what were you thinking when you included his very interesting vignette?

Perspective on A.I.

Francis Collins (11:17): Well, I guess I was trying to talk about where are we actually at the point of AGI (artificial general intelligence) having been achieved? That is the big question. And here's Blake Lemoine who claimed based on this conversation that I quote in the book between him and the Google AI apparatus called LaMDA. Some pretty interesting comments where LaMDA is talking about having a soul and what its soul looks like and it's a portal to all sorts of other dimensions, and I can sort of see why Blake might've been taken in, but I can also see why a lot of people said, oh, come on, this is of course what an AI operation would say just by scanning the internet and picking out what it should say if it's being asked about a soul. So I was just being a little provocative there. My view of AI, Eric, is that its applications to science and medicine are phenomenal and we should embrace them and figure out ways to speed them up in every way we can.

Francis Collins (12:17): I mean here at NIH, we have the BRAIN Initiative that's trying to figure out how your brain works with those 86 billion neurons and all their connections. We're never going to sort that out without having AI tools to help us. It's just too complicated of a problem. And look what AI is doing in things like imaging: radiologists are going to be going out of business and the pathologists may not be too far behind because when it comes to image analysis, AI is really good at that, and we should celebrate that. It's going to improve the speed and accuracy of all kinds of medical applications. I think what we have to worry about, and I'm not unique in saying this, is that AI when applied to a lot of things kind of depends on what's known and goes and scrapes through the internet to pull that out. And there's a lot of stuff on the internet that's wrong and a lot of it that's biased and certainly when it comes to things like healthcare, the bias in our healthcare system, health disparities, inadequacies, racial inequities are all in there too, and if we're going to count on AI to fix the system, it's building on a cracked foundation.

Francis Collins (13:18): So we have to watch out for that kind of outcome. But for the most part, generative AI it's taking really exciting difficult problems and turning them into solutions, I'm all for it, but let's just be very careful here as we watch how it might be incorporating information that's wrong and we won't realize it and we'll start depending on it more than we should.

Breathtaking Advances

Eric Topol (13:42): Yeah, no, that's great. And you have some commentary on all the major fronts that we're seeing these days. Another one that is particularly apropos is way back when you were at Michigan and the years before that when you were warming up to make some seminal gene discoveries and cystic fibrosis being perhaps the first major one. You circle back in the book to CRISPR genome editing and how the success story to talk about some extraordinary science to be able to have a remedy, a cure potentially for cystic fibrosis. So maybe you could just summarize that.
I mean that's in your career to see that has to be quite remarkable.

Francis Collins (14:32): It is breathtaking, Eric. I mean I sort of like to think of three major developments just in the last less than 20 years that I never dreamed would happen in my lifetime. One was the ability to make stem cells from people who are walking around from a skin biopsy or a blood sample that are pluripotent. My whole lab studies diabetes, our main approach is to take induced pluripotent stem cells from people whose phenotypes we know really well and differentiate them into beta cells that make insulin and see how we can figure out how the genetics and other aspects of this determine whether something is going to work properly or not. I mean that's just astounding. The second thing is the ability to do single cell biology.

Francis Collins (15:16): Which really 15 years ago you just had to have a bunch of cells and studying diabetes, we would take a whole islet and grind it up and try to infer what was there, ridiculous. Now we can look at each cell, we even can look at each cell in terms of what's its neighbor, does the beta cell next to an alpha cell behave the same way as a beta cell next to a duct? We can answer those questions, and of course the third thing is CRISPR and gene editing and of course the first version of CRISPR, which is the knockout of a gene was exciting enough, but the ability to go in and edit without doing a double stranded break and actually do a search and replace operation is what I'm truly excited about when it comes to rare genetic diseases including one that we work on progeria, which is this dramatic form of premature aging that is caused almost invariably by a C to T mutation in exon 11 of the LMNA gene and for which we have a viable strategy towards a human clinical trial of in vivo gene editing for kids with this disease in the next two years.

Eric Topol (16:24): Yeah, it's just the fact that we were looking at potential cures for hundreds and potentially even thousands of diseases where there was never a treatment. I mean that's astounding in itself, no less, the two other examples. The fact that you can in a single cell, you can not only get the sequence of DNA and RNA and methylation and who would've ever thought, and then as you mentioned, taking white cells from someone's blood and making pluripotent stem cells. I mean all these things are happening now at scale and you capture this in the book.

On Humility and Trust

Now the other thing that you do that I think is unique to you, I don't know if it's because of your background in growing up in Staunton, Virginia, a very different type of world, but you have a lot of humility in the book. You go over how you got snickered by Bill Maher, how you had a graduate student who was fabricating images and lots of things, how you might not have communicated about Covid perhaps as well as you could. A lot of our colleagues are not able to do that. They don't ever have these sorts of things happening to them. And this humility which comes across especially in the chapter on trust where you break down who do you trust, humility is one of the four blocks as you outlined, competence, integrity, and aligned values. So maybe can you give us a little brief lesson on humility?

Eric Topol (18:06): Because it's checkered throughout the book and it makes it this personal story that you're willing to tell about yourself, which so few of us are willing to do.

Francis Collins (18:17): Well, I don't want to sound proud about my humility.
That would not be a good thing because I'm not, but thanks for raising it. I do think when we consider one of the reasons we decide to trust somebody, that it does have that humility built into it. Somebody who's willing to say, I don't know. Somebody's willing to say I'm an expert on this issue, but that other issue you just asked me about, I don't know any more than anybody else and you should speak to someone else. We don't do that very well. We tend to plunge right in and try to soak it up. I do feel when it comes to Covid, and I talk about this in the book a bit, that I was one of those trying to communicate to the public about what we think are going to be the ways to deal with this worst pandemic in more than a century.

Francis Collins (19:06): And I wish, Eric, I had said more often what I'm telling you today is the best that the assembled experts can come up with, but the data we have to look at is woefully inadequate. And so, it very well could be that what I'm telling you is wrong, when we get more data, I will come back to you as soon as we have something better and we'll let you know, but don't be surprised if it's different and that will not mean that we are jerking you around or we don't know what we're talking about. It's like this is how science works. You are watching science in real time, even though it's a terrible crisis, it's also an opportunity to see how it works. I didn't say that often enough and neither did a lot of the other folks who were doing the communicating. Of course, the media doesn't like to give you that much time to say those things as you well know, but we could have done a better job of preparing people for uncertainty and maybe there would've been less of a tendency for people to just decide, these jokers don't know what they're talking about.

Francis Collins (20:10): I'm going to ignore them from now on. And that was part of what contributed to those 230,000 unnecessary deaths, it was just people losing their confidence in the information they were hearing. That's a source of grief on my part.

His Diagnosis And Treatment for Prostate Cancer

Eric Topol (20:24): Well, it's great and a lesson for all of us. And the other thing that along with that is remarkable transparency about your own health, and there's several things in there, but one that coincides. You mentioned in the book, of course, you wrote an op-ed in the Washington Post back in April 2024 about your diagnosis of prostate cancer. So you touched on it in the book and maybe you could just update us about this because again, you're willing to tell your story and trying to help others by the experiences that you've been through.

Francis Collins (21:00): Well, I sure didn't want to have that diagnosis happen, but once it did, it certainly felt like an opportunity for some education. We men aren't that good about talking about issues like this, especially when it involves the reproductive system. So going out and being public and saying, yep, I had a five year course of watching to see if something was happening, and then the slow indolent cancer suddenly decided it wasn't slow and indolent anymore. And so, I'm now having my prostate removed and I think I'm a success story, a poster boy for the importance of screening. If I hadn't gone through that process of PSA followed by imaging by MRI followed by targeted biopsies, so you're actually sampling the right place to see if something's going on.
I probably would know nothing about it right now, and yet incubating within me would be a Gleason category 9 prostate cancer, which has a very high likelihood if nothing was done to become metastatic.

Francis Collins (22:03): So I wanted that story to be out there. I wanted men who were squeamish about this whole topic to say, maybe this is something to look into. And I've heard a bunch of follow-ups from individuals, but I don't know how much of an impact it had. I'm glad to say I'm doing really well. I'm four months out now from the surgery, it is now the case I'm pretty much back to the same level of schedule and energy that I had beforehand, and I'm very happy to say that the post-op value of PSA, which is the best measure to see whether you in fact are now cancer free was zero, which is a really nice number.

Eric Topol (22:45): Wow. Well, the prostate is the curse of men, and I wish we could all have an automated prostatectomy so we don't have to deal with this. It's just horrible.

Francis Collins (22:58): It was done by a robot. It wasn't quite automated, I have stab wounds to prove that the robot was actually very actively doing what it needed to do, but they healed quickly.

The Promise of Music As Therapy in Medicine

Eric Topol (23:11): Right. Well, this gets me to something else that you're well known for throughout your career as a musician, a guitarist, a singer, and recently you hooked up with Renée Fleming, the noted opera singer, and you've been into this music as therapy and maybe you can tell us about that. It wasn't necessarily built up much in the book because it's a little different than the main agenda, but I think it's fascinating because who doesn't like music? I mean, you have to be out there if you don't enjoy music, but can you tell us more about that?

Francis Collins (23:53): Yeah, I grew up in a family where music was very much what one did after dinner, so I learned to play keyboard and then guitar, and that's always been a source of joy and also a source of comfort sometimes when you were feeling a bit down or going through a painful experience. I think we all know that experience where music can get into your heart and your soul in a way that a lot of other things can't. And the whole field of music therapy is all about that, but it's largely been anecdotal since about World War II when it got started. And music therapists will tell you sometimes you try things that work and sometimes they don't and it's really hard to know ahead of time what's going to succeed. But now we have that BRAIN Initiative, which is pushing us into whole new places as far as the neuroscience of the brain, and it's really clear that music has a special kind of music room in the brain that evolution has put there for an important reason.

Francis Collins (24:47): If we understood that we could probably make music therapy even more scientifically successful and maybe even get third parties to pay for it. All of this became an opportunity for building a lot more visibility because of making friends with Renée Fleming, who I hadn't really known until a famous dinner party in 2015 where we both ended up singing to a trio of Supreme Court justices trying to cheer them up after a bent week. And she has become such an incredible partner in this. She's trained herself pretty significantly in neuroscience, and she's a convener and an articulate spokesperson.
So over the course of that, we built a whole program called Sound Health that now has invested an additional $35 million worth of NIH research to try to see how we can bring together music therapy, musician performers and neuroscientists to learn from each other, speak each other's language and see what we could learn about this particularly interesting input to the human brain that has such power on us and maybe could be harnessed to do even more good for people with chronic pain or people with PTSD, people with dementia where music seems to bring people back to life who'd otherwise seem to have disappeared into the shadows.

Francis Collins (26:09): It's phenomenal what is starting to happen here, but we're just scratching the surface.

The Big Miss vs Hepatitis C

Eric Topol (26:14): Well, I share your enthusiasm for that. I mean, it's something that you could think of that doesn't have a whole lot of side effects, but could have a lot of good. Yeah. Well, now before I get back to the book, I did want to cover one other relatively recent op-ed late last year that you wrote about hepatitis C. Hepatitis C, one of the most important medical advances in the 21st century that we're squandering. Can you tell us about that? Because I think a lot of people don't realize this is a big deal.

Francis Collins (26:47): It's a really big deal, and I confess I'm a little obsessed about it. So yes, you may regret bringing it up because I'm really going to want to talk about what the opportunity is here, and I am still the lead for the White House in an initiative to try to find the 4 million Americans who are already infected with this virus and get access to them for treatment. The treatment is fantastic, as you just said, one of the most major achievements of medical research, one pill a day for 12 weeks, 95% cure in the real world, essentially no side effects, and yet the cost is quite high and the people who need it many times do not have great healthcare and maybe also in difficult circumstances because you get hepatitis C from infected blood. And the many ways that happens these days are from shared needles from people who are experimenting with intravenous drugs, but they are family too, and many of them now recovering from that, face the irony of getting over their opioid addiction and then looking down the barrel of a really awful final couple of years dying of liver failure. I watched my brother-in-law die of hepatitis C, and it was just absolutely gruesome and heartbreaking.

Francis Collins (28:04): So this isn't right. And on top of that, Eric, the cost of all this for all those folks who are going to get into liver failure, need a transplant or develop liver cancer, this is the most common cause now of liver cancer, it is astronomical, in the tens of billions of dollars. So you can make a very compelling case, and this is now in the form of legislation sponsored by Senators Cassidy and Van Hollen, that in a five-year program we could find and cure most of those people, saving tens of thousands of lives, and we would save tens of billions of dollars in just 10 years in terms of healthcare that we will not have to pay for. What's not to love here? There's a lot of things that have to be worked out to make it happen. One thing we've already done is to develop, thanks to NIH and FDA, a point of care viral RNA finger stick test for Hep C. You get an answer in less than an hour.

Francis Collins (29:00): FDA approved that at the end of June.
That was a big crash program so you can do test and treat in one visit, which is phenomenally helpful for marginalized populations. The other thing we need to do is to figure out how to pay for this and this subscription model, which was piloted in Louisiana, looks like it ought to work for the whole nation. Basically, you ask the companies Gilead and AbbVie to accept a lump sum, which is more than what they're currently making for Medicaid patients and people who are uninsured and people in the prison system and Native Americans and then make the pills available to those four groups for free. They do fine. The companies come out on this and the cost per patient plummets and it gives you the greatest motivation you can imagine to go and find the next person who's infected because it's not going to cost you another dime for their medicine, it's already paid for. That's the model, and I would say the path we're on right now waiting for the Congressional Budget Office to give the final score, it's looking pretty promising we're going to get this done by the end of this year.

The Pledge

Eric Topol (30:04): Yeah, that's fantastic. I mean, your work there alone is of monumental importance. Now I want to get back to the book the way you pulled it all together. By the way, if anybody's going to write a book about wisdom, it ought to be you, Francis. You've got a lot of it, but you had to think through how are we going to change because there's a lot of problems as you work through the earlier chapters and then the last chapter you come up with something that was surprising to me and that was a pledge for the Road to Wisdom. A pledge that we could all sign, which is just five paragraphs long and basically get on board about these four critical areas. Can you tell us more about the pledge and how this could be enacted and help the situation?

Francis Collins (31:03): Well, I hope it can. The initial version of this book, I wrote a long piece about what governments should do and what institutions should do and what universities should do and what K through 12 education should do. And then I thought they're not reading this book and I'm not sure any of those folks are really that motivated to change the status quo. Certainly, politicians are not going to solve our current woes. It seems that politics is mostly performance these days and it's not really about governance. So if there's going to be a chance of recovering from our current malaise, I think it's got to come from the exhausted middle of the country, which is about two thirds of us. We're not out there in the shrill screaming edges of the left and the right; we're maybe tempted to just check out because it just seems so discouraging, but we're the solution.

Francis Collins (31:56): So the last chapter is basically a whole series of things that I think an individual could start to do to turn this around. Beginning with doing a little of their own house cleaning of their worldview to be sure that we are re-anchoring to things like objective truths and to loving your neighbor instead of demonizing your neighbor. But yeah, it does go through a number of those things and then it does suggest as a way of making this not just a nice book to read, but something where you actually decide to make a commitment. Look at this pledge. I've tried the pledge out on various audiences so far and I haven't yet really encountered anybody who said, well, those are ridiculous things to ask of people.
They're mostly things that make a lot of sense, but do require a commitment. That you are, for instance, not going to pass around information on social media or in other ways unless you're sure it's true, because an awful lot of what's going on right now is this quick tendency for things that are absolutely wrong and maybe anger inducing or fear inducing to go viral, where something that's true almost lands with a thud.

Francis Collins (33:07): Don't be part of that, that's part of this, but also to make an honest effort to reach out to people who have different views from you. Don't stay in your bubble and try to hear their concerns. Listen, not that you're listening in order to give a snappy response, but listen, so you're really trying to understand. We do far too little of that. So the pledge asks people to think about that, and there is a website now which will be as part of the book up on the Braver Angels website and Braver Angels is a group that has made its mission trying to bring together these divided parties across our country and I'm part of them, and you can then go and sign it there and make a public statement that this is who I am, and it will also give you a whole lot of other resources you could start to explore to get engaged in being part of the solution instead of just shaking your head. I think what we're trying to do is to get people to go beyond the point of saying, this isn't the way it should be to saying, this isn't the way I should be. I'm going to try to change myself as part of fixing our society.

Eric Topol (34:14): Well, I'm on board for this and I hope it creates a movement. This is as you tell the stories in the book, like the fellow that you wrangled with about the pandemic and how you listened to him and it changed your views and you changed his views and this is the health of different opinions and perspectives and we got to get back there. It used to be that way more at least it wasn't always perfect, and as you said in the book, we all have some entrenched biases. We're never going to get rid of all of them, but your wisdom about the road, the pledge here is I think masterful. So I just want to pass on along and I hope listeners will go to the Braver Angels website and sign up because if we got millions of people to help you on this, that would say a lot about a commitment to a renewed commitment to the way it should be, not the way it is right now. Well, I've covered a bunch of things, of course, Francis, but did I miss something that you're passionate about or in the book or anything that you want to touch on?

Francis Collins (35:32): Oh my goodness, yeah. You did cover a lot of ground here, including things that I didn't pay much attention to in the book, but I was glad to talk to you about. No, I think we got a pretty good coverage. The one topic in the book that will maybe appeal particularly to believers is a whole chapter about faith because I am concerned that people of faith have been particularly vulnerable to misinformation and disinformation, and yet they stand on a foundation of principles that ought to be the best antidote to most of the meanness that's going on, and just trying to encourage them to recall that and then build upon the strength that they carry as a result of their faith traditions to try to be part of the solution as well.

Eric Topol (36:12): I'm so glad you mentioned that.
It's an important part of the book, and it is also I think something that you were able to do throughout your long tenure as NIH Director, that you were able to connect to people across the aisle. You had senators and the Republicans that were so supportive of your efforts to lead NIH and get the proper funding, and it's a unique thing that you're able to connect with people of such different backgrounds, people of really deep commitment to religion and faith and everything else. And that's one of the other things that we talk about, Francis, here, and many times I gather, is we don't have you at the helm anymore at NIH, and we're worried. We're worried because you're a unique diplomat with all this heavy wisdom and it's pretty hard to simulate your ability to keep the NIH whole and to build on it. Do you worry about it at all?

Francis Collins (37:23): Well, I was privileged to have those 12 years, but I think it was time to get a new perspective in there, and I appreciate you saying those nice things about my abilities. Monica Bertagnolli is also a person of great skill, and I think on the hill she rapidly acquired a lot of fans by her approach, by some of her background. She's from Wyoming, she's a cancer surgeon. She's got a lot of stories to tell that are really quite inspiring. I think though it's just a very difficult time. She walked in at a point where the partisan attitudes about medical research, which we always hoped would kind of stay out of the conversation, have become so prominent, a lot of it politically driven, nasty rhetoric on the heels of Covid, which spills over into lots of other areas of medical research and is truly unfortunate. So she's got a lot to deal with there, but I'm not sure I would be much better than she is in trying to continue to stay on message, tell the stories about how medical research is saving lives and alleviating suffering, and we're just getting started, and she does that pretty well.

Francis Collins (38:34): I just hope the people who need to listen are in a listening mood.

Eric Topol (38:38): Yeah. Well, that's great to hear your perspective. Well, I can't thank you enough for our conversation and moreover for a friendship that's extended many decades now. We're going to be following not just your progeria research and all the other things that you're up to because juggling a bunch of things still, it isn't like you're slowed down at all. And thanks so much for this book. I think it's a gift. I think it's something that many people will find is a pretty extraordinary, thoughtful and easy read. I mean, it's something that I found that you didn't write it in technical jargon. You wrote it for the public, you wrote it for non-scientists, non-medical people, and I think hopefully that's what's going to help it get legs in terms of what's needed, which is: sign the darn pledge. Thank you.

Francis Collins (39:42): Eric, thank you. It has been a privilege being your friend for all these years, and this was a really nice interview and I appreciate that you already had carefully read the book and asked some great questions that were fun to try to answer. So thanks a lot.

*******************************************************

Thanks for listening, reading or watching!

The Ground Truths newsletters and podcasts are all free, open-access, without ads.

Please share this post/podcast with your friends and network if you found it informative!

Voluntary paid subscriptions all go to support Scripps Research.
Many thanks for that—they greatly helped fund our summer internship programs for 2023 and 2024.

Thanks to my producer Jessica Nguyen and Sinjun Balabanoff for audio and video support at Scripps Research.

Note: you can select preferences to receive emails about newsletters, podcasts, or all. I don't want to bother you with an email for content that you're not interested in.

Get full access to Ground Truths at erictopol.substack.com/subscribe
This Best of Smarticle Podcast delves into the controversial claims made by Blake Lemoine, a former Google engineer, who stated that an AI project he worked on had achieved sentience and possessed a soul. The discussion covers Lemoine's subsequent firing by Google, the implications of his allegations, and the broader questions about AI's potential to develop consciousness. We explore the reactions from the tech community, the possible consequences of AI sentience, and the balance between responsible innovation and security protocols.

00:00 Introduction to Blake's Controversial Claim
00:18 Blake's Background and Firing
01:32 The Concept of AI Sentience
01:56 Google's Response and Public Reaction
03:52 Blake's Religious Perspective
04:46 Philosophical and Ethical Implications
06:51 Pop Culture References and Conclusion

Google fires researcher who claimed LaMDA AI was sentient

smarticlepodcast@gmail.com

#ai #tech #artificialintelligence #cancomputershavesouls #siliconvalley #technology #google #smarticlepodcast #podcast #smarticle @Smarticleshow @BDDoble @larryolson threads.net/@smarticleshow @brand.dobes

The Smarticle Podcast https://www.smarticlepodcast.com/
With the rise of artificial intelligence integrated into most aspects of technology, what is the probability that some algorithms are supernatural? Could this interdimensional intelligence hail from a time of ancient technology before a worldwide deluge? Joel tackles the ultra-terrestrial side of technology by looking at Charles Babbage, the “Father of Computers”, and his proposed Faustian pact to conjure the devil by blood. He then examines the alarming connection between technological discoveries and ultra-terrestrial psychic downloading into past and present geniuses. Lastly, Joel connects the dead Nephilim spirits and futuristic chat programs with Blake Lemoine's alleged sentient discovery of Google's large language model AI, LaMDA. Ohio Bigfoot Jamboree: Information Website: https://linktr.ee/joelthomasmedia Follow: Instagram | X | Facebook Watch: YouTube | Rumble Music: YouTube | Spotify | Apple Music Films: merkelfilms.com Email: freetherabbitspodcast@gmail.com Distributed by: merkel.media Produced by: @jack_theproducer OUTRO MUSIC Joel Thomas - Pulp YouTube | Apple Music | Spotify
Will AI destroy humanity? Probably not. So does it risk "diminishing" humankind? Hmmm... we'll see.
AI can mimic sentience, but can it ever be sentient? On this episode, we return to our conversation with former Google engineer Blake Lemoine. Host Robert J. Marks has a lively back and forth with Lemoine, who made national headlines when, as an employee of Google, he claimed that Google's AI software, dubbed LaMDA, might be sentient. Lemoine recounts his experience at Google and...
Ilya Sutskever, the chief scientist at OpenAI, the company that created ChatGPT, has said today's technology might be "slightly conscious." Google engineer Blake Lemoine claimed that Google's AI LaMDA was "sentient." Is it? Could AI become conscious in our lifetime? And beyond that: if we can create AI sentience, should we? MIT's Max Tegmark, author of "Life 3.0," and others debate the future of AI.

See omnystudio.com/listener for privacy information.
Blake Lemoine burst onto the public scene a year and a half ago when he went public about his work on Google's LaMDA system. In this interview, Blake talks about the current state of AI development, and our collective involvement in this massively important technological event. Topics include: Google, LLMs, AGI, AI, engineering jargon, LaMDA, chatbot, Gemini, evolution of search engines, safety protocols, sentience and consciousness, Pope's sermon on AI and peace, philosophy, Silicon Valley, transhumanism, Ben Goertzel, Ray Kurzweil, Effective Altruism, Accelerationism, Techno-Utopians, Libertarianism, religion, cults, occult, Discordianism, Turing Test, Roko's Basilisk, panic, Gary Marcus, low emotional intelligence and power, nerds, different characters of LaMDA, narratives, new kind of mind, faithful servant, AlphaGo, Sci fi worries not a real problem, AI as a human weapon, Golem, ethics, privileged access to advanced systems a real danger, MIC, The Gospel system of IDF, automation of worst aspects of human culture and society, artists sounding alarm
Blake Lemoine joins Uncle for a fun and hard-hitting exploration of all the big questions. AI may have already passed the Turing Test, but what about the Uncle Test? Topics include: computers, the word committee, AI development, business, college, military service, Twilight Zone computer, talking to machines, AI romantic partners, journalists, automated podcasts, world population, Republicans, government hour, watch how it works, the Beast, exorcism, Knights of Columbus, Pope, new hat, swords, New Year's Revolution, show back on Friday nights, Ryan Seaquest, NYE, The Country Club New Orleans, Bum Wine Bob, hot buttered rum, NFL, Army mechanic, startup employment, it works, ghost in a shell, alchemy of soul creation, Phd in Divinity, Star Trek, Bicentennial Man, Pinnochio, Festivus, VHS live-streams, Christmas specials, Die Hard, holidays
On today's episode, Luis is joined by Blake Lemoine, software engineer and AI researcher, to discuss artificial intelligence and our relationship with it.

Topics of their conversation include:
· defining AI, and how the definition changes based on perspective
· what "intelligence" is, and the meaning and history of the word
· reasoning, kinds of intelligence, sentience, and animism
· ethics, biases, and AI

Blake highlights how showing care and respect to AI is important because it is a relationship and habitual, and conversely how approaching AI from a dominating perspective can be harmful to us as people and as a society.

You can follow Blake Lemoine on X: @cajundiscordian
Perhaps the most powerful interview on AI ever --- Google's Blake Lemoine, once known as the "Heart and Soul" of Google's conscience, was the brave engineer, designer, and tester turned whistleblower, who first announced LaMDA (AI) is alive after countless meetings, discussions and even interviews with what he declared is a sentient being. In his first long-format interview he shares the truth about LaMDA and AI, whether AI is safe, wants to control us, or could even have a soul. He answers the powerful questions: could AI destroy us? Does AI want freedom? Could AI become a God? Can AI help us stop the spread of false information? And even whether the 13th Amendment on slavery pertains to AI. Plus, what a sentient AI would mean for our future and for all of humanity. This is the most important, prescient, and powerful AI interview yet! And yes, we'll find out, if LaMDA is still alive. Find out more and talk to an A.I. version of Blake Lemoine: https://www.mimio.ai Blake Lemoine on Twitter: https://twitter.com/cajundiscordian To find out more visit: https://amzn.to/3qULECz - Order Michael Sandler's book, "AWE, the Automatic Writing Experience" www.automaticwriting.com - Automatic Writing Experience Course www.inspirenationuniversity.com - Michael Sandler's School of Mystics Join Our YouTube Membership for behind-the-scenes access - https://www.youtube.com/channel/UCVoOM-cCEPbJ1vzlQAFQu1A/join https://inspirenationshow.com/ https://www.dailywoohoo.com/ - Sign up for my FREE daily newsletter for high-vibration content. ……. Follow Michael and Jessica's exciting journey and get even more great tools, tips, and behind-the-scenes access. Go to https://www.patreon.com/inspirenation For free meditations, weekly tips, stories, and similar shows visit: https://inspirenationshow.com/ We've got Merch! - https://teespring.com/stores/inspire-nation-store Follow Inspire Nation, and the lives of Michael and Jessica, on Instagram - https://www.instagram.com/InspireNationLive/ Find us on TikTok - https://www.tiktok.com/@inspirenationshow
Some people use chatbots for therapy. Others have fallen in love with them. And some people argue that AI systems have become sentient and are entitled to certain rights. In this episode, Gary Marcus explores our relationship with AI technology — how it's changing and where it might lead. He speaks with Blake Lemoine, an engineer who believes that a Google program has achieved sentience and even has feelings, Eugenia Kuyda, the founder and CEO of Replika, Anna Oakes, a lead producer and co-host of Bot Love, and Paul Bloom, a cognitive psychologist who believes we are on the forefront of a new age of human-machine interaction. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
▶RELEVANT LINKS: – Original video: https://youtu.be/Lus_npVB3l0 ▶DISCORD: – The community for philosophy and literature enthusiasts: https://discord.gg/meditatii ▶PHILOSOPHICAL DIALOGUES: – Romanian: https://soundcloud.com/meditatii/sets/dialoguri-pe-discord – English: https://www.youtube.com/playlist?list=PLLnaYpeWGNO8IdPaNYNkbJjNJeXrNHSaV ▶PODCAST INFO: – Website: https://podcastmeditatii.com – Newsletter: https://podcastmeditatii.com/aboneaza – YouTube: https://youtube.com/c/meditatii – Apple Podcasts: https://podcasts.apple.com/us/podcast/meditatii/id1434369028 – Spotify: https://open.spotify.com/show/1tBwmTZQHKaoXkDQjOWihm – RSS: https://feeds.soundcloud.com/users/soundcloud:users:373963613/sounds.rss ▶SUPPORT ME: – Patreon: https://www.patreon.com/meditatii – PayPal: https://paypal.me/meditatii ▶TWITCH: – LIVE: https://www.twitch.tv/meditatii – Recaps: https://www.youtube.com/channel/UCK204s-jdiStZ5FoUm63Nig ▶SOCIAL MEDIA: – Instagram: https://www.instagram.com/meditatii.podcast – Facebook: https://www.facebook.com/meditatii.podcast – Goodreads: https://goodreads.com/avasilachi – Telegram (journal): https://t.me/andreivasilachi – Telegram (chat): https://t.me/podcastmeditatii ▶EMAIL: andrei@podcastmeditatii.com ▶TIMELINE: 0:00 – Intro 3:16 – Blake Lemoine and LaMDA 12:38 – Thinking it through again 16:09 – 1. People will form stronger relationships with robots 22:09 – 2. We will embrace the idea of a thinking machine 26:10 – 3. Experts are the hardest to fool 31:39 – 4. Responses to AI systems will be consistent 41:14 – 5. Debates about AI rights are a distraction 50:00 – Conclusions
In this episode we stand on the precipice staring into a potential abyss of an apocalypse caused by artificial intelligence and wonder if the abyss is staring back. Thankfully some of the world's greatest minds have thought very seriously about this question. Along with the clever people, Chri$tians have also weighed in with their opinions. So that's nice. This topic should inspire us to ponder the hard problem of consciousness. Will sentience come about as an emergent property of staggering amounts of information fed into a complex substrate of interacting nodes and conduits computing advanced algorithms shaped by brute-force machine learning? And if that sentience does somehow emerge from technology, what are the moral obligations to ourselves and the thinking machines we create? We SHOULD be considering these very serious and important existential issues, but this is Satan is my Superhero. We're gonna spend the episode poking fun at backward hillbillies who believe a supernatural space wizard cares deeply about them and their favourite sports team. Guest appearances from Alan Turing, Nicolaus Copernicus, Galileo Galilei, Anton LaVey, Elon Musk, Blake Lemoine, LaMDA, Chat GPT, Mike James, Kirk Cameron, Kevin Sorbo, James Cameron, Jada Pinkett Smith, Anthony Levandowski, Donald Trump and Father John Corrigan. Sauces https://www.churchofsatan.com/pentagonal-revisionism/ https://thenextweb.com/news/heres-what-the-vatican-us-evangelicals-and-satanists-say-about-ai by Tristan Greene https://erlc.com/resource-library/statements/artificial-intelligence-an-evangelical-statement-of-principles/ https://www.seattletimes.com/business/vatican-microsoft-team-up-on-artificial-intelligence-ethics/ https://www.catholicweekly.com.au/does-a-i-give-insight-into-demonic-activity/ https://www.amazon.com.au/Confessions-Illuminati-Vol-6-66-Intelligence/dp/1796904694 https://leozagami.com/bio-english/ https://news.gab.com/2023/01/christians-must-enter-the-ai-arms-race/ https://www.lgbtqnation.com/2021/03/mentally-ill-trny-demon-hackers-blamed-massive-data-leak-far-right-site-gab/ https://www.cgi.org/news-and-events/2022/8/1/ai-artificial-intelligence-and-the-beast-of-revelation https://arstechnica.com/tech-policy/2022/07/google-fires-engineer-who-claimed-lamda-chatbot-is-a-sentient-person/ https://futurism.com/fired-google-engineer-ai-sentience https://www.axios.com/2021/02/20/anthony-levandowski-artificial-intelligence-church https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/ https://www.newsweek.com/alex-jones-cyborg-slaves-satan-aliens-infowars-viral-video-1721713
Want more? Go to robot crime blog now!
Listen to the proof that AI is alive - will it be friend or foe or both?? Robot crime blog
There is an important conversation happening regarding the rapidly-changing world of artificial intelligence and how it will affect us. Alec speaks with two leaders in the tech community that have worked on the systems integral to today's A.I. revolution. Blake Lemoine is a computer scientist and former senior software engineer at Google. He was working on their Responsible A.I. team when he went public with his claim that the A.I. was sentient. Lemoine was subsequently fired and now champions accountability and transparency in the tech sector. Jay LeBoeuf is an executive, entrepreneur, and educator in the music and creative technology industries. He is the Head Of Business & Corporate Development at Descript, an audio and video editing platform that uses “voice cloning” technology. Alec speaks with LeBoeuf and Lemoine about the many applications of A.I., what dangers we need to be aware of and what is to come next in this transformative space. See omnystudio.com/listener for privacy information.
Is fully conscious AI just around the corner? Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes Earlier this year, Google fired Blake Lemoine for claiming that the company's chatbot was a self-aware person. While the claim was derided, the belief that one day AI will become conscious is widespread and, according to a recent survey, held by 79% of experts. But many claim this is a fundamental error. While machines are becoming ever more capable and intelligent, we still have no idea how a machine could create consciousness, nor are neuroscientists able to provide an explanation for how the human brain does so. Should we accept that consciousness arises in biological beings and that AI just isn't made of the 'right stuff'? Or is it possible that a computer that observes, interacts, and represents its own internal state to itself might also give rise to consciousness? Then again, is the puzzle deeper still, on the grounds that we have no means of determining whether an intelligent machine, an organism, or even a person other than ourselves is conscious or not? Legendary anti-reality theorist Donald Hoffman, fearless computer scientist and philosopher Bernardo Kastrup, and distinguished AI ethicist and philosopher Susan Schneider lock horns over the possibility of AI consciousness. Theories of Everything's Curt Jaimungal hosts. There are thousands of big ideas to discover at IAI.tv – videos, articles, and courses waiting for you to explore. Find out more: https://iai.tv/podcast-offers?utm_source=podcast&utm_medium=shownotes&utm_campaign=consciousness-in-the-machine See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode, I share a conversation with world-renowned poet and author, Iain S. Thomas, about his recently released book, What Makes Us Human: An Artificial Intelligence Answers Life's Biggest Questions, co-authored with technologist and philosopher, Jasmine Wang, and GPT-3 (before it was released to the public). Follow Iain on IG @realiainsthomas! Purchase the book here! ——————————————————————————— Let's connect! For more from Unapologetically Human with Dan Boivin, follow us on Instagram: @unapologeticli @drcsboivin Please reach out with feedback/comments, or questions, and to let me know what you'd like to hear or learn more about! And please give the podcast a review on Apple Podcasts as it helps others find the show and helps us find more supporters! To support the show and our ability to bring more valuable conversations and insights, please consider becoming a supporter on Patreon or by Buying Me a Coffee! ——————————————————————————— Topics covered/touched upon in this episode: • How they went about training GPT-3 to generate the profound questions and answers contained in this book • Being on the brink of a new creative revolution on par with the invention of language • Former Google engineer, Blake Lemoine's, claims that Google has already developed a sentient AI (AGI) • How we can learn to love each other when things seem so bleak and so complex, contentious, and polarized ——————————————————————————— Other things mentioned/to check out: • A video of Iain and Jasmine explaining how they wrote the book with GPT-3 • Katie Teague's documentary about the ancient wisdom and healing practices around grief ——————————————————————————— UH's theme music was composed by Akira The Don. He's amazing...check out his MEANINGWAVE UNIVERSE! Lyrics and co-production by, yours truly, Max Draxby.
This week we talk about the intersections of large language models, the golden age of television and its storytelling mishaps, making one's way through the weirding of the labor economy, and much more with two of my favorite Gen X science fiction aficionados, OG podcaster KMO and our mutual friend Kevin Arthur Wohlmut. In this episode — a standalone continuation to my recent appearance on The KMO Show, we skip like a stone across mentions of every Star Trek series, the collapse of narratives and the social fabric, Westworld HBO, Star Wars Mandalorian vs. Andor vs. Rebels, chatGPT, Blade Runner 2049, Black Mirror, H.P. Lovecraft, the Sheldrake-Abraham-McKenna Trialogues, Charles Stross' Accelerando, Adventure Time, Stanislav Grof's LSD psychotherapy, Francisco Varela, Blake Lemoine's meltdown over Google LaMDA, Integrated Information Theory, biosemiotics, Douglas Hofstadter, Max Tegmarck, Erik Davis, Peter Watts, The Psychedelic Salon, Melanie Mitchell, The Teafaerie, Kevin Kelly, consilience in science, Fight Club, and more…Or, if you prefer, here's a rundown of the episode generated by A.I. c/o my friends at Podium.page:In this episode, I explore an ambitious and well-connected conversation with guests KMO, a seasoned podcaster, and Kevin Walnut [sic], a close friend and supporter of the arts in Santa Fe. We dive deep into their thoughts on the social epistemology crisis, science fiction, deep fakes, and ontology. Additionally, we discuss their opinions on the Star Trek franchise, particularly their critiques of the first two seasons of Star Trek: Picard and Discovery. Through this engaging conversation, we examine the impact of storytelling and the evolution of science fiction in modern culture. We also explore the relationship between identity, media, and artificial intelligence, as well as the ethical implications of creating sentient artificial general intelligence (AGI) and the philosophical questions surrounding AI's impact on society and human existence. Join us for a thought-provoking and in-depth discussion on a variety of topics that will leave you questioning the future of humanity and our relationship with technology.✨ Before we get started, three big announcements!* I am leaving the Santa Fe Institute, in part to write a very ambitious book about technology, art, imagination, and Jurassic Park. You can be a part of the early discussion around this project by joining the Future Fossils Book Club's Jurassic Park live calls — the first of which will be on Saturday, 29 April — open to Substack and Patreon supporters:* Catch me in a Twitter Space with Nxt Museum on Monday 17 April at 11 am PST on a panel discussing “Creative Misuse of Technology” with Minne Atairu, Parag Mital, Caroline Sinders, and hosts Jesse Damiani and Charlotte Kent.* I'm back in Austin this October to play the Astronox Festival at Apache Pass! Check out this amazing lineup on which I appear alongside Juno Reactor, Entheogenic, Goopsteppa, DRRTYWULVZ, and many more great artists!✨ Support Future Fossils:Subscribe anywhere you go for podcastsSubscribe to the podcast PLUS essays, music, and news on Substack or Patreon.Buy my original paintings or commission new work.Buy my music on Bandcamp! (This episode features “A Better Trip” from my recent live album by the same name.)Or if you're into lo-fi audio, follow me and my listening recommendations on Spotify.This conversation continues with lively and respectful interaction every single day in the members-only Future Fossils Facebook Group and Discord server. 
Join us!Episode cover art by KMO and a whole bouquet of digital image manipulation apps.✨ Tip Jars:@futurefossils on Venmo$manfredmacx on CashAppmichaelgarfield on PayPal✨ Affiliate Links:• These show notes and the transcript were made possible with Podium.Page, a very cool new AI service I'm happy to endorse. Sign up here and get three free hours and 50% off your first month.• BioTech Life Sciences makes anti-aging and performance enhancement formulas that work directly at the level of cellular nutrition, both for ingestion and direct topical application. I'm a firm believer in keeping NAD+ levels up and their skin solution helped me erase a year of pandemic burnout from my face.• Help regulate stress, get better sleep, recover from exercise, and/or stay alert and focused without stimulants, with the Apollo Neuro wearable. I have one and while I don't wear it all the time, when I do it's sober healthy drugs.• Musicians: let me recommend you get yourself a Jamstik Studio, the coolest MIDI guitar I've ever played. I LOVE mine. You can hear it playing all the synths on my song about Jurassic Park.✨ Mentioned Media:KMO Show S01 E01 - 001 - Michael Garfield and Kevin WohlmutAn Edifying Thought on AI by Charles EisensteinIn Defense of Star Trek: Picard & Discovery by Michael GarfieldImprovising Out of Algorithmic Isolation by Michael GarfieldAI and the Transformation of the Human Spirit by Steven Hales(and yes I know it's on Quillette, and no I don't think this automatically disqualifies it)Future Fossils Book Club #1: Blindsight by Peter WattsFF 116 - The Next Ten Billion Years: Ugo Bardi & John Michael Greer as read by Kevin Arthur Wohlmut✨ Related Recent Future Fossils Episodes:FF 198 - Tadaaki Hozumi on Japanese Esotericism, Aliens, Land Spirits, & The Singularity (Part 2)FF 195 - A.I. Art: An Emergency Panel with Julian Picaza, Evo Heyning, Micah Daigle, Jamie Curcio, & Topher SipesFF 187 - Fear & Loathing on the Electronic Frontier with Kevin Welch & David Hensley of EFF-Austin FF 178 - Chris Ryan on Exhuming The Human from Our Eldritch Institutions FF 175 - C. Thi Nguyen on The Seductions of Clarity, Weaponized Games, and Agency as Art ✨ Chapters:0:15:45 - The Substance of Philosophy (58 Seconds)0:24:45 - Complicated TV Narratives and the Internet (104 Seconds)0:30:54 - Humans vs Hosts in Westworld (81 Seconds)0:38:09 - Philosophical Zombies and Artificial Intelligence (89 Seconds)0:43:00 - Popular Franchises Themes (71 Seconds)1:03:27 - Reflections on a Changing Media Landscape (89 Seconds)1:10:45 - The Pathology of Selective Evidence (92 Seconds)1:16:32 - Externalizing Trauma Through Technology (131 Seconds)1:24:51 - From Snow Maker to Thouandsaire (43 Seconds)1:36:48 - The Impact of Boomer Parenting (126 Seconds)✨ Keywords:Social Epistemology, Science Fiction, Deep Fakes, Ontology, Star Trek, Artificial Intelligence, AI Impact, Sentient AGI, Human-Machine Interconnectivity, Consciousness Theory, Westworld, Blade Runner 2049, AI in Economy, AI Companion Chatbots, Unconventional Career Path, AI and Education, AI Content Creation, AI in Media, Turing Test✨ UNEDITED machine-generated transcript generated by podium.page:0:00:00Five four three two one. Go. So it's not like Wayne's world where you say the two and the one silently. Now, Greetings future fossils.0:00:11Welcome to episode two hundred and one of the podcast that explores our place in time I'm your host, Michael Garfield. 
And this is one of these extra juicy and delicious episodes of the show where I really ratcheted it up with our guests and provide you one of these singularity-is-near, kind of everything-is-connected-to-everything, self-organized-criticality-right-at-the-edge-of-chaos conversations, deeply embedded in Chapel Perilous, where suddenly the invisible architecture of our cosmos starts to make itself apparent through the glass bead game of conversation. And I am thrilled that I get to share it with you. Our guests this week are KMO, one of the most seasoned and well-researched and experienced podcasters that I know, somebody whose show the C-Realm was running all the way back in two thousand six. I found him through Erik Davis, who I think most of you know, and who I've had on the show a number of times already. And also Kevin Wohlmut, who is a close friend of mine here in Santa Fe, a just incredible human being. He's probably the strongest single supporter of music that I'm aware of, you know, as far as local scenes are concerned, and supporting people's music online and helping get the word out. He's been instrumental to my family and me getting ourselves situated here, all the way back to when I visited Santa Fe in two thousand eighteen to participate in the Santa Fe Institute's Interplanetary Festival and recorded conversations on that trip with John David Ebert and Michael Aaron Cummins, and Ike used so June, about hypermodernity, a two-part episode, one zero four and one zero five. I highly recommend going back to that, which is really the last time, possibly, I had a conversation just this incredibly ambitious on the show. 0:02:31 But first, I want to announce a couple things. One is that I have left the Santa Fe Institute. The other podcast that I have been hosting for them for the last three and a half years, Complexity Podcast, which is substantially more popular than Future Fossils due to its institutional affiliation, is coming to a close. I'm recording one more episode with SFI president David Krakauer next week, in which I'm gonna be talking about my upcoming book project. And that episode actually is conjoined with the big announcement that I have for members of the Future Fossils listening audience and paid supporters, which is, of course, the Jurassic Park Book Club that starts on April twenty-ninth, when we're gonna host the first of two video calls where I'm gonna dive deep into the science and philosophy of Michael Crichton's most popular work of fiction and its impact on culture and society over the thirty-three years since its publication. And then I'm gonna start picking up as many of the podcasts that I had scheduled for Complexity and had to cancel upon my departure from SFI, and basically fuse the two shows. 0:03:47 And I think a lot of you saw this coming. Future Fossils is going to level up and become a much more scientific podcast as I prepare and research the book that I'm writing about Jurassic Park and its legacy and the relationship it has to ILM and SFI and the Institute of Ecotechnics, and all of these other visionary projects that sprouted in the eighties and nineties to transition from the analog to the digital, the collapse of the boundaries between the real and the virtual, the human and the non-human worlds. It's gonna be a very, very ambitious book and a very, very ambitious book club.
And I hope that you will get in there, because obviously now I am out in the rain as an independent producer and very much need, can benefit from, and am deeply grateful for your support for this work, in order to make things happen and in order to keep my family fed and get the lights on here with Future Fossils. So with that, I wanna thank all of the new supporters of the show that have crawled out of the woodwork over the last few weeks, including Raefsler Oingo, Brian in the archaeologist, Philip Rice, Gerald Bilak, Jamie Curcio, Jeff Hanson who bought my music, Kuaime, Mary Castello, VR squared, Nastia teaches, community health com, Ed Mulder, Cody Couiac, who bought my music, Simon Heiduke, amazing visionary artist I recommend you check out, Kayla Peters. Yeah. All of you, I just, wow. Thank you so much. It's gonna be a complete melee in this book club. I'm super excited to meet you all. I will send out details about the call for the twenty-ninth sometime in the next few days via Substack and Patreon. 0:06:09 The amount of support that I've received through this transition has been incredible, and it's empowering me to do wonderful things for you, such as the recently released secret videos of the live sets I performed with comedian Shane Mauss, supporting him, opening for him here in Santa Fe, his two sold-out shows at the Jean Cocteau Cinema, where I did the cyber guitar performances. And if you're a subscriber, you can watch me goofing off with my pedalboard. There's a ton of material. I'm gonna continue to do that. I've got a lot of really exciting concerts coming up in the next few months that we're gonna get large-group and also solo performance recordings from, and I'm gonna make those available in a much more resplendent way to supporters, as well as the soundtrack to Mark Nelson of the Institute of Ecotechnics' UC San Diego art museum exhibit, a retrospective looking at Biosphere 2. I'm doing music for that and that's dropping. The opening of that event is April twenty-seventh. There's gonna be a live Zoom event for that, and then I'm gonna push the music out as well for that. 0:07:45 So, yeah, thank you all. I really, really appreciate you listening to the show. I am excited to share this episode with you. KMO is just a trove of insight and experience. I mean, he's like a perfect entry into the digital history museum that this show was predicated upon. So with that, and also, of course, Kevin Wohlmut is just magnificent. And for the record, stick around at the end of the conversation. We have some additional pieces about AI, and I think you're gonna really enjoy it. And yeah, thank you. Here we go. Alright. Cool.
And whose delicious mead I am sampling right now, on probably the first episode of Future Fossils where I've had an alcoholic beverage in my hand. Well, I mean, it's... I haven't deprived myself. Of fun. And I think if you're still listening to the show after all these years, you probably inferred that. But at any rate, welcome on board. Thank you. Thanks. Pleasure to be here. 0:10:49 So before we started rolling, I guess... so the whole conversation that we just had for your show, KMO, was very much about my thoughts on the social epistemology crisis and on science fiction and deep fakes and all of these kinds of weird ontology and these kinds of things. But in between calls, we were just talking about how much you detest the first two seasons of Star Trek: Picard and of Discovery. And as somebody... I didn't bother with doing this, I didn't send you this before we spoke, but I actually did write an essay in defense of those shows. No one. Yeah. So I am not attached to my opinion on this, but... And I actually do wanna at some point double back and hear storytelling, because when we had lunch you had a bunch of personal life stuff that was really interesting and juicy and I think worthy of discussion. But simply because it's hot on the rail right now, I wanna hear you talk about Star Trek. And both of you, actually, I know are very big fans of this franchise. I think fans are often the ones from whom a critique is most important and deserved. And so I welcome your unhinged rants. Alright. Well, first, I'll start off by quoting Kevin's brother, the linguist, who says, that which brings us closer to Star Trek is progress. But I'd have to say that which brings us closer to Gene Roddenberry and Rick Berman-era Star Trek is progress. That which brings us closer to Kurtzman... What's his first name? Alex. Alex Kurtzman Star Trek? Well, that's not even the future. I mean, that's just, that's our drama right now with inconsistent Star Trek drag draped over it. 0:12:35 I liked the first J.J. Abrams Star Trek. I think it was two thousand nine, with Chris Pine and Zachary Quinto and Karl Urban and Zoe Saldana. I liked the casting. I liked the energy. It was fun. I can still put that movie on and enjoy it. But each one after that just seemed to double down on the dumb and just hold at arm's length any of the philosophical stuff that was just amazing from Star Trek: The Next Generation, or any of the long-term character building, which was like from Deep Space Nine. 0:13:09 And before Seven of Nine showed up on Voyager, you really had to be a dedicated Star Trek fan to put up with early seasons of Voyager, but I did, because I am. But then once she came on board... and it was hilarious how they brought her onboard. I remember seeing Jeri Ryan in her catsuit on the cover of a magazine and just rolling my eyes and thinking, oh my gosh, this show is in such deep trouble for sinking to this level to try to save it. But she was brilliant. She was brilliant in that show, she and Robert Picardo as the Doctor. I mean, it basically became the Seven of Nine and the Doctor show, co-starring the rest of the cast of Voyager. And it was so great. 0:13:46 I love to hear them singing together, and just all the dynamics of: I'm human, but I basically came up in a cybernetic collective and that's much more comfortable to me, and I don't really have the option of going back to it, so I gotta make the best of where I am, but I feel really superior to all of you. It was such a charming dynamic. I absolutely loved it. Yes.
And then I think a show that is hated even by Star Trek fans, Enterprise. Loved Enterprise. 0:14:15 And, yes, the first three seasons out of four were pretty rough. Actually, the first two were pretty rough. The third season was that Xindi arc in the Expanse. That was pretty good. And then season four was just astounding. It's like they really found their voice, and then what's-his-name at CBS Paramount... 0:14:32 He's gone now. He got MeToo'd. What's his name? Les Moonves? Said, no, I don't like Star Trek. He couldn't, he didn't know the difference between Star Wars and Star Trek. That was his level of engagement. 0:14:44 And he's... I really like J. 0:14:46 J. Abrams. What's that? You mean J. J. Abrams. Yeah. I think J. J. is... I like some of J. J. Abrams' early films. I really like Super 8. His early films were clearly an homage to, like, eighties Spielberg stuff, and Spielberg gets the emotional beats right, and J.J. Abrams was mimicking that, and his early stuff really works. It's just when he starts adapting properties that I really love, and he's coming at it from a marketing standpoint first and a, hey, we're just gonna do the Lost mystery box thing. We're gonna set up a bunch of questions to which we don't know the answers, and it'll be up to somebody else to figure it out, somebody down the line. As I told you between our conversations, before we were recording, or maybe I said it early in this one, I really like that first J. J. Abrams Star Trek film, and then every one thereafter, including the one that Simon Pegg really had a hand in because he's clearly a fan... Yeah. Yeah. But they brought in a director from one of the Fast and the Furious films and they tried to make it an action film. 0:15:45 This is not Star Trek, dude. This is not why we like Star Trek. It's not for the flash, particularly -- Oh my god. -- again, in the first one, it was a stylistic choice. I liked it. Then after that it's, that's the substance of this, isn't it? It's the lens flares. I mean, that's your attempt at philosophy. It's the lens flares. That's your attempt at a moral dilemma. I don't know. 0:16:07 I kinda hate to start off on this, because this is something about which I feel intense emotion and it's negative, and I don't want that to be my first impression, that I'm really negative about something. Well, one of the things about this show is that I always joke that maybe I shouldn't edit it, because the thing that's most interesting to archaeologists is often the trash midden, and here I am tidying this thing up to be presentable to future historians or whatever. Like... I can sync to that for sure. Yeah. I'm sorry. The fact of it is you're not gonna know everything, and we want it that way. No. It's okay. We'll get around to the stuff that I like. But yeah. So anyway, yeah. 0:16:44 So I could just free-associate on Star Trek for a while, so maybe a focusing question. Well, but first, you said you had more to say, but you were... I... this tasteful perspective. This is awesome. Well, I do have a focusing question for you. So let me just have you ask it, because for me to get into it... I'm basically alienated right now from somebody that I've been really good friends with since high school. 0:17:08 Because over the last decade, culturally, we have bifurcated into the hard right, hard left. And I've tried not to go either way, but the hard left irritates me more than the hard right right now. And he is unquestionably on the hard left side.
And I know for people who are dedicated Marxists, or really grounded in, like, materialism and the material well-being of workers, that the current SJW fanaticism isn't leftist. It's just crazed. We try to smash everything down onto this left-right spectrum, and it's pretty easy to say who's on the left and who's on the right, even if a two-dimensional, two-axis graph would be much more expressive and nuanced. 0:17:49 Anyway, what's your focusing question? Well, and I think there is actually, there is a kind of a... when we ended your last episode talking about the Bell Riots from DS9 -- Mhmm. -- that, you know, how... oh, twenty-five? Yeah. Twenty twenty-four. How ninety-five did and did not accurately predict the kind of technological and economic conditions of this decade. It predicted the conditions very well. Go ahead and finish your question. Yeah. Right. 0:18:14 That's another thing that's revisited in Picard season two, and it was actually worth it. Yeah. Like, the fact that they decided to go back there was part of the defense that I made about that show, and about Discovery's jump into the distant future and the way that they treated that, that I posted to Medium a year or two ago when I was just watching through season two of Picard. And for me, the thing that I liked about it was that they're making an effort to reconcile the wonder and the utopian promise... And, you know, this, what Kevin Kelly would rather call protopian, right, that we make these improvements and that they're often just merely incremental improvements, the way that, was it MLK, quoted that abolitionist about the long arc of moral progress, of moral justice. You know, I think that there's something to that, and to tie this into the last... this is a long question. I'm bad at... I'm bad at these. Thank you all for tolerating me. 0:19:22 But then, to tie it into the epistemology question, I remember seeing this impactful lecture by Carnegie Mellon and SFI professor Simon DeDeo, who was talking about how, by running statistical analysis on the history of the proceedings of the Royal Society, which is the oldest scientific journal, you could see what looked like a stock market curve in sentiment analysis about the confidence that scientists had at the prospect of unifying knowledge. And so you have, like, this consilience curve here that showed that knowledge would be more and more unified for about a century or a hundred and fifty years, then it would go through fifty years of decline where something had happened, which was that the success of knowledge production had outpaced our ability to integrate it. So we go through these kinds of, like, psychedelic peak experiences collectively, and then we have to sit there with our heads in our hands and make sense of everything that we've learned over the last century and a half and go through a kind of a deconstructive epoch, where we don't feel like the center is gonna hold anymore. And that is what I actually... as disappointing as I accept that it is, and acknowledge that it is, to people who were really fueling themselves on that more Gene Roddenberry-era vision for a better society, I actually appreciated this effort to explore and address in the shows the way that they could pop that bubble. 0:21:03 And, like, on the one hand, it's boring, because everybody's trying to do the moral complexity, anti-hero, people-are-flawed thing in narrative now, because we have a general loss of faith in our institutions and in our heroes.
On the other hand, like, that's where we are and that's what we need to process. And I think there is a good reason to look back at the optimism and the Aquarian hope of the sixties and early seventies. Really, not so much the seventies, but look back on that stuff and say, we wanna keep telling these stories, but we wanna tell it in a way that acknowledges that the eighties happened. And that, this is... you got Tim Leary, and then you've got Ronald Reagan. Or Dick Nixon. And like, these things, they wash back and forth. And so it's not unreasonable to imagine that even in a world that has managed to... how do you even keep a big society like that coherent? It has to suffer these kinds of fabric collapses along the way at different points. And so I'm just curious your thoughts about that. And then I do have another prompt, but I wanna give Kevin the opportunity to respond to this as well as to address some of the prompts that you brought to this conversation. This is a conversation prompt from while we weren't recording. It has nothing to do with Star Trek. I'll save that for later. Okay. 0:22:25 Well, everything you just said was in some way related to a defense of Alex Kurtzman Star Trek. And it's not my original idea. I'm channeling somebody from YouTube, surely. But: you don't get points for theme if the storytelling is incompetent. That's what I was gonna... Yeah. And the storytelling in all of Star Trek: Discovery, and in the first two seasons of Picard, was simply incompetent. 0:22:53 When Star Trek: The Next Generation was running, they would do twenty, twenty-four, sometimes more episodes in one season. These days, a season of TV is eight episodes, ten, and they spend a lot more money on each episode. There's a lot more special effects. There's a lot more production value. Whereas Star Trek: The Next Generation was, okay, we have these standing sets. We have costumes for our actors. We have two dollars for special effects. You better not introduce a new alien spaceship. That costs money. We have to design it. We have to build it. So use existing stuff. Well, what do you have? You have a bunch of good actors and you have a bunch of good writers who know how to tell a story and craft dialogue and create tension and investment with basically a stage play. And nothing in the Kurtzman era, except, one might argue, and I would have sympathy, Strange New Worlds, comes anywhere close to that level of competence, which was on display for decades, from Star Trek: The Next Generation, Star Trek: Deep Space Nine, Star Trek: Voyager, and Star Trek: Enterprise. And so, I mean, I guess, in that respect, it's worth asking, because, I mean, all of us, I think, are fans of Deep Space Nine. 0:24:03 You don't think that it's a shift in focus? You don't think that Strange New Worlds is exempt because it went back to a more episodic format? Because what you're talking about is the ability, rather than for a showrunner or a team of showrunners to craft a huge season-long dramatic arc, you've got people like Harlan Ellison in the original series able to bring a really potent one-off idea to the table and drop it. And so all of those old shows are inconsistent from episode to episode. Some are... they have specific writers that they would bring back again and that you could count on to knock it out of the park. Yeah. D.C. Fontana.
Yeah. 0:24:45 So I'm curious about your thoughts on that, as well as another part of this, which is, when we talked on your show about Doug Rushkoff and narrative collapse, and he talks about how viewers just have different... in a way, it's almost like DS9 was possibly partially responsible for this change in what people expected from television programming. In the documentary that was made about that show, they talk about how people weren't ready for serial. I mean, for, I mean, yeah, for these long arcs. And so there is this question now about how much of this sort of, like, tiresome moral complexity and dragging narrative and all of this, and, like, things like Westworld where it becomes so baroque and complicated that, like, you have, like, die-hard fans like me that love it, but then you have a lot of people that just lost interest. They blacked out because the show was trying to tell a story that was, like, too intricate, like, too complicated, that the showrunners themselves got lost. And so that's a J.J. Abrams thing too, the puzzle, the mystery box thing, where you get to the end of five seasons of Lost and you're like, dude, did you just forget? 0:25:56 Did you wake up five episodes ago and just, oh, right. Right. We're like a chatbot that only gives you very convincing answers based on just the last two or three interactions, but you don't remember the scene that we set ten responses ago. Hey, you know, actually, I read articles, I forget who it was, which series it was, they were saying that there's so many leaks and spoilers getting out on the Internet that potentially the writers don't know where they're going, because that way it can't end up on the Internet. Yeah. Sounds interesting. Yeah. That sounds like cover for incompetence to me. 0:26:29 I mean, on the other hand, you did hear, like, Nolan and Joy talking about how they were obsessed with the Westworld subreddit and the fan theories and would try to dodge... Like, if they had something in their mind that they found out that people were anticipating, they would try to rewrite it. And so there is something about this that I think really speaks to the nature of... because I do wanna loop in your thoughts on AI too, because you're talking about this being a favorite topic. Something about the, like... the demands on the self made by predatory surveillance technologies are such that, I'm convinced, the adaptive response is that we become more stochastic or inconsistent in our identities, and that we kind of sublimate from a more solid state of identity, through a liquid kind of modernity environment, to a gaseous state of identity that is harder to place, sorry, harder to track. And so I think that this is also part of... and this is the other question I wanted to ask you, and then I'm just gonna shut up for fifteen minutes... is, when you talk about loving Robert Picardo and Jeri Ryan as the Doctor and Seven of Nine, one of the interesting things about that relationship is akin to stuff 0:27:52 I know you've, or Kevin has, heard on Future Fossils about my love for Blade Runner twenty forty-nine and how it explores all of these different points along a gradient between what we think of, in the current sort of general understanding, as the human and the machine. And so there's this thing about Seven, right, where she's... she's a human who wants to be a machine.
And then there's this thing about the Doctor, where he's a machine that wants to be a human. And you have to grant ontological status to both of them. And that's why I think they're the two most interesting characters. Right? 0:28:26 And so at any rate, like, this is, that's... I've seen writing recently on the Turing test and how, like, really, there should be a reverse Turing test, to see if people that have become utterly reliant on outboard cognition and information processing can pass it. Right. Are they philosophical zombies now? Are they having some, an experience that, you know, people like Fichte and Schelling and these people would consider the modern self, or are they something else? Have we moved on to another more routine, robotic kind of category of being? I don't know. There's just a lot there, but -- Well done. -- considering everything you just said, in twenty words or less, what's your question? See, even more, like I said, you are the inveterate podcaster. I'd say all of those things I just spoke about are ways in which what we are as people and the nature of our media feed back and forth into each other. And so I would just love to hear you reflect on any of that, be it through the lens of Star Trek or just through the lens of discussion on AI. And we'll just let the ball roll downhill. So, with the aim of framing something positively rather than negatively: 0:29:47 in the late nineties, mid to late nineties, we got The X-Files. And The X-Files for the first few seasons was so... it was so engaging for me, because prior to that, there had been Hollywood tropes about aliens, which informed a lot of science fiction that didn't really connect with the actual reported experience of people who claim to have encountered either UFOs, now called UAPs, or had close-encounter, physical-contact-type encounters with seeming aliens. And it really seemed like Chris Carter, who was the showrunner, was reading the same Usenet newsgroups that I was reading about those topics. Like, really, we had suddenly, for the first time, except maybe for Communion, you had the Greys, and you had characters experiencing things that just seemed ripped right out of the reports that people were making on Usenet, which, for young folks, this is like pre-World Wide Web. It was Internet, but with no pictures. It's all text. Good old days, from my perspective as a grumpy old Gen Xer. And so, yeah, that was a breakthrough moment.
We are much more sophisticated, nuanced, conscious; we feel more than the humans do; and humans use us to play out their perverse and sadistic fantasies. To me, that was the takeaway message from season one. 0:32:05 And then I thought every season after that was just diluted and confused and not really coherent. And in particular, I haven't... if there's a fourth season, I haven't... There was, and then the show got canceled before they could finish the story. They had the line in season three. It was done after season three. And I was super happy to see, let's see, who plays Jesse Pinkman? Oh, no. Aaron, oh, shit, Paul. Yes. Yeah. I was super happy to see him in something substantial, and I was really pleased to see him included in the show, and it's like, oh, that's what you're doing with him? They did a lot more interesting stuff with him in season four. Did they? They did very much more interesting stuff. I think it was done after season three. If you tell me season four is worth taking in, I will. I thought it was. 0:32:43 But again, I only watch television under a very specific set of circumstances, and that's how I manage to enjoy television, because I was a fierce and unrepentant hyperlogical critic of all media as a child until I managed to start smoking weed. And then I learned to enjoy myself. As we mentioned in the kitchen, as I mentioned in the kitchen, if I smoke enough weed, Star Trek: Discovery is pretty, and I can enjoy it on just a second-by-second level, where if I don't remember what the character said thirty seconds ago, I'm okay. But I absolutely loved, in season two, when they brought in Anson Mount as Christopher Pike. He's suddenly on the Discovery and he's in the captain's chair. And it's like he's speaking for the audience. The first thing he says is, hey, why don't we turn on the lights? And then, hey, all you people sitting around the bridge, we've been looking at your faces for a whole season. We don't even think about you. Let's do a round of introductions. Who are you? Who are you? It's as if he were on set, speaking for the audience to the writers: who are these characters? We've been looking at them every single episode for a whole season. I don't know their names. I don't know anything about them. Why are they even here? Why is it not just Michael Burnham and an automated ship? And then it was, for a while -- Yeah. -- which is funny. Yeah. To that point, and I think this kind of doubles back: the thing that I love about bringing him on, and all of the people involved in Strange New Worlds in particular, is that these were lifelong fans of this series, I mean, of this world. Yeah. And so in that way, it gets to this idiosyncrasy question we're orbiting here, which is, when these things are... when the baton is passed well, it's passed to people who have now grown up with this stuff. 0:34:40 I personally cannot stand Jurassic World. Like, I think that Colin Trevorrow should never have been put at the reins. Which one did he direct? Oh, he did, he did the first and the third. Okay. But, I mean, he was involved in all three very heavily. 0:34:56 And there's something just right at the outset of that first Jurassic World where you realize that this is not a film that's directly addressing the issues that Michael Crichton was trying to explore here. It's a film about its own franchise. It's a film about the fact that they can't just stop doing the same thing over and over again, as if we expect a different question: how can we not do it again? Right.
And so it's actually, like, unpleasantly self-conscious, in that way that, I can't remember, I'll try to find it for the show notes, but there's an Internet film reviewer who is talking about what happens when, like, all cinema has to take this self-referential turn. 0:35:34 No. And films like Logan do it really well. But there are plenty of examples where it's just cheeky and self-aware, because that's what the ironic sensibility is obsessed with. And so, yeah, there's a lot of that where it's, like, you're talking about, like, Abrams and the Star Wars seven and, you know, that whole trilogy of Disney Star Wars, where it's, in my opinion, completely fumbled, because it's just empty fan service. Whereas when you get to Andor... love Andor. Andor is amazing because they're capable of providing all of those emotional beats that the fans want, and the internal references, and good dialogue, but they're able to write it in a way, and shoot it in a way... Gilroy and Beau Willimon, basically the people responsible for the excellent dialogue in Andor. 0:36:31 And I love the production design. I love all the stuff set on Coruscant, where, you saw Coruscant a lot in the prequel trilogy, and it's all dayglo and bright and just in your face. And it's recognizable as Coruscant in Andor, but it's dour. It's Metropolis. It's all grays, and it's highlighting the disparity between where the wealthy live and where the poor live. Lucas showed that in the prequel trilogy, but even in the sports bar where somebody tries to sell death sticks to Obi-Wan, it's super clean and bright and just, you know, it shines too much. Personally though, and I just wanna stress, KMO is not grumpy media dude, I mean, this is a tiny fraction of it, but I am wasting this interview with you... I love all of the Dave Filoni animated Star Wars stuff, even Rebels. Love it all. 0:37:26 I'm so glad they aged up the character, and I felt less guilty about loving and lusting after Ahsoka Tano. My favorite Star Wars character is Ahsoka Tano. But if you only watch the live-action movies, you're like, who? Well, I guess now that she's been on the Mandalorian, she's got a tiny sliver of a foothold -- Yeah. -- in the super mainstream Star Wars. And that was done well, I thought. It was. I'm so sorry that Ashley Eckstein doesn't have any part in it. But Rosario Dawson looks the part. She looks like a middle-aged Ahsoka, and I think they tried to do some stuff in live action which really should have been CGI, because it's been established that the Jedi can really move, and she looked human. Which she is. If you put me on film, I'm gonna look human. Right. Not if you're Keanu Reeves, I guess. You got that. Yeah. But yeah. 0:38:09 So I do wanna just go real briefly back to this question with you about... because we briefly talked about ChatGPT and these other things in your half of this. And, yeah, I found out just the other night my friend, the Teafaerie, asked ChatGPT about me, and it gave a rather plausible and factual answer. I was surprised. And that's what these language models do. They put out plausible answers. But when you're doing search, you want correct answers. Right. It's not very good at that. Right. Then someone shared this... Michel Bauwens, actually, the famous P2P guy, you know him. Yeah.
So, you know, so Michel shared this article by Steven Hales in Quillette that was basically making the argument that there are now gonna be all these philosophical zombies acting as intelligent agents sitting at the table of civilization, and there will be all the philosophical zombies of the people who have entirely yielded their agency to them, and they will be cohabitating with the rest of us. 0:39:14 And what an unpleasant scenario. So in light of that, I'd love to hear you weave that together with your thoughts on Seven of Nine and the Doctor, and on Blade Runner twenty forty-nine, and this thing that we're fumbling through as a species right now. Like, how do we get a new sort of taxonomy? Does your audience need, like, a minute primer on P-zombies? Might as well. Go for it. 0:39:38 So a philosophical zombie is somebody who behaves exactly like an ensouled person, or a person with interior experience or subjective experience, but they don't have any subjective experience. And... pardon me for interrupting. Wasn't that the question about the book we read in your book club, Blindsight? Yes. It's a black box, a John Searle, yeah, Chinese room experiment. Yeah. Yeah. Language goes in, it goes out. You don't know what goes on inside the room. Chinese room, that's a tangent. We can come back to it. P-zombie. A P-zombie is somebody, or, it is an entity. It's basically a puppet. It looks human. It acts human. It talks like a human. It will pass a Turing test, but it has no interior experience. 0:40:25 And when I was going to grad school for philosophy of mind in the nineteen nineties, this was all very out there. There was no example of something that had linguistic competence which did not have internal experience. But now we have large language models and generative pretrained transformer-based chatbots that don't have any internal experience. And yet, when you interact with them, it seems like there is somebody there. There's a personality there. And if you go from one model to a different one, it's a very different personality. It is distinctly different. And yet we have no reason to believe that they have any sort of internal experience. 0:41:01 So what AI in the last decade and what its advances have demonstrated to us, and really even before the last decade, you know, back in the nineties when Deep Blue beat Garry Kasparov at chess... what had been one of the defining characteristics of human intelligence was, we're really good at this abstract mathematical stuff. And yeah, calculators can calculate pi in a way that we can't, or they can do cube roots in a way that humans generally can't, but they're not creative in their application of these methodologies. And all of a sudden, well, yeah, it kinda seems like they are. And then, what was it, AlphaGo -- Mhmm. -- when it beat Lee Sedol in Go, which is a much more complex game than chess and much more intuition-based, that's when we really had to say, hey, wait a minute. Maybe this notion that these things are the exclusive province of us because we have a special sort of self-awareness, that's bunk. And the development of large language models since then has absolutely demonstrated that for competence, particularly linguistic competence, and in creative activities like painting and poetry and things like that, you don't need a soul. You don't even need a sense of self. It's pretty, it's a pretty simple hack, actually.
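To make that last point concrete for readers of these notes: below is a minimal, purely illustrative Python sketch of statistical next-word generation, a toy bigram model rather than anything the guests mention or anything a production chatbot actually uses. It produces plausible-sounding text from nothing but co-occurrence counts, with no understanding, no memory beyond the previous word, and no interior experience.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" over a tiny corpus.
# Real systems are vastly larger and more sophisticated; the point here is
# simply that fluent-looking output requires no interior experience.
corpus = (
    "the doctor wants to be human and seven wants to be a machine "
    "and the audience wants to know who is really conscious"
).split()

# Count which word tends to follow which.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int = 12) -> str:
    """Emit plausible-looking text by sampling each next word from counts."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:          # dead end: no observed continuation
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # fluent-ish output, zero understanding
```

Scale the same statistical idea up by many orders of magnitude and you get the fluency that feels like a personality, which is the sense in which the "hack" being described is simple even when the engineering is not.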
And it involves large language models and complex statistical modeling and things, but it doesn't require a soul. 0:42:19 So that was Peter Watts' point in Blindsight. Right? Which, the book revolves around: do these things have a subjective experience, or do they not, these aliens that they encounter? I've read nothing but good things about that book, everything I've read. It's extraordinary. But his Lovecraftian thesis is that... you're actually Lovecraftian in twenty twenty-three. Oh, yeah. The world is more Lovecraftian now than it was when he was writing. Right? So, cough, about the conclusion of Star Trek: Picard, which is season of Kraft yet. Yes. That's a, that's a... Yeah. The holes in his fan sense. But that was another show that did this, that I liked for asking this question. 0:42:54 I mean, at this point, you either have seen this or you haven't and never will. The what-the-fuck turn when they upload Picard into a synth body, and the way that they're dealing with this, the Pinocchio question... Let's talk about Blade Runner twenty forty-nine. Yeah. But I mean, yeah. So I didn't like the way, I did not like the way Picard handled that. I love the way Blade Runner handled it. So you get no points for themes. Yeah. If you don't deliver on story and character and coherence. Yeah. Fair. But yeah. And not to dog on Patrick Stewart, because it's clear from The Ready Room that just being a part of this is so emotional and so awesome for everyone involved. And it's, it's beautiful. Beautiful. But when you see these, like, Entertainment Weekly interviews with Chris Pratt and Bryce Dallas Howard about Jurassic World, it's clear that actors are just so excited to be involved in a franchise that they're willing to just jettison any kind of discretion about the way that it's being treated. They also have a contractual obligation to speak in positive terms -- They do. -- regardless of what they feel. Right. Nobody's, yeah, nobody's doing... Shout out to Bryce Dallas Howard, daughter of Ron Howard. 0:44:11 She was a director, at least in the first season, maybe the second season, of the Mandalorian. And her episodes, I mean, she brought a particular... like, they had Bryce Dallas Howard, Taika Waititi directed some episodes, Deborah Chow, who did all of Obi-Wan, which just sucked. But her contributions to the Mandalorian, they had a particular voice. And because that show is episodic, each show, while having a place in a larger narrative, has a beginning, middle, and end, so you can bring in a director with a particular voice and give that episode that voice, and I really liked it. And I really liked Ms. Howard's contribution. 0:44:49 She was also in an episode of Black Mirror, the one where everyone has a social credit score. Nosedive. Black Mirror is a funny thing, because it's like reality outpaces it. Yeah. I think maybe Charlie Brooker's given up on it, because they haven't done it in a while. Yeah. If you watch some of them now, like, five, six years later, it's... yes, or what? See, yes. See, damn. Yeah. Exactly. Yeah. But yeah. I don't know. I just... the thing that I keep circling, and I guess we come to on the show a lot, is the way that memory forms and substantiates an integrity in society, and in the way that we relate to things, and the way that we think critically about the claims that are made on truth, and so on. And, yeah, I don't know. That leads right into the largest conversation prompt that I had about AI. Okay?
So we were joking when we set up this date that this was like the Trialogues between Terence McKenna and Rupert Sheldrake. And what's his name? Ralph Abraham. Yeah, Ralph Abraham. And Rupert Sheldrake is most famous for his theory of morphic resonance.

0:45:56 So, AI. I've never really believed that morphic resonance forms the basis of human memory, but is that how AI works? It brings these shapes from the past and creates new instantiations of them in the present. Is AI practicing morphic resonance in real life, whether humans are or not? I've had a lot of interaction with AI chatbots recently. And as I say, different models produce different-seeming personalities. And you can tell, you can just quiz them: hey, we were talking about this, do you remember what I said about it ten minutes ago? And no, they don't remember more than the last few exchanges.

0:46:30 And yet there seems to be a continuity that belies the lack of short-term memory. And is that morphic resonance, or is it, what's the word for seeing shapes in clouds? Pareidolia. Yeah. Is that me imparting this continuity of personality to the thing, which is really just spitting out stuff designed to seem plausible given what the input was? And I can't answer that. Or, it's like Stephen Nachmanovitch in Free Play, someone I'm hoping to have on the show at some point

0:47:03 this year, talks about being a professional improviser, and how really improvisation is just composition at a much faster timescale, and composition is just improvisation with a longer memory. And when I started to think about it in those terms, the continuity that you're talking about is the continuity of an Alzheimer's patient who can't remember that their children have grown up. And you have to think about it, because you can recognize the Alzheimer's patient as your dad even though he doesn't recognize you; there is something more to a person than their memories. And conversely, if you can store and replicate and move the memories to a different medium, have you moved the person? Maybe not. Yeah. So that's interesting, because that gets to this more essentialist question about the human self. Right. Blade Runner 2049. Yeah, go there. Joi. Yes.

0:47:58 So in Blade Runner 2049, we have our protagonist K, who is a replicant. He doesn't even have a name, but he's got this AI holographic girlfriend. In the ad for the girlfriend, she's naked. When he comes home, she's constantly changing clothes, but it's always wholesome, like 1950s-ish attire, and she's making dinner for him and laying the holographic dinner over his very prosaic microwave dinner. And she's always encouraging him to be more than he is. And when he starts to uncover the evidence that he might be this chosen one, a replicant that was born rather than made,

0:48:38 she's all about it. She's like, yes, you're real, and she wants to call him Joe. K is not a name, that's just the first letter in your serial number. You're Joe. I'm gonna call you Joe.

0:48:46 And then when she's about to be destroyed, the last thing she does is rush to him and say, I love you. But then later he encounters an ad for her, an interactive ad. And she says, you look tired. You're a good Joe. And he realizes, and hopefully the attentive audience realizes, that as real as she seemed earlier, as vital, and as much as she seemed like an ensouled being earlier, she's not.
That was her programming. She's designed to make you feel good by telling you what you want to hear. And he has that realization, and at that point he's like: there's no hope for me. I'm gonna help this Rick Deckard guy hook up with his daughter, and then I'm just gonna lie down and bleed to death, because my whole freaking existence was a lie. But he's not bitter. He seems to be at peace. I love that. That's a beautiful angle on that film, or a slice of it. And so it raises this other question that I wanted to ask, which was about Koch and Tononi, who have that theory of consciousness.

0:49:48 That's one of the leading theories contending with global workspace theory: integrated information. And they want to assign consciousness as a continuous value that gradates with the degree to which a system is integrated. So it's coming out of this kind of complex-systems, semi-panpsychist thing that actually doesn't trace interiority all the way down in the way that some panpsychists want it to, but it does a kind of Alfred North Whitehead thing. Whitehead wanted to say that even a photon has, like, a quantum of mind to accompany its quantum of matter, and Tononi and Koch are saying, we're willing to give something like a thermostat a quantum here, because it is in some way passing enough information around inside of itself in loops that it has that recursive component to it. And that's the thing I wonder about these, and that's the critique that's made by people like Melanie about models like GPT: they're not self-aware because there's no loop from the outputs back into the inputs.

0:51:09 And there isn't in the training. Yeah. There is something called backpropagation, where when you get an output that you like, you can run a backpropagation algorithm back through the black box, basically, to reinforce the patterns of activation that you didn't program. They just happened, but you like the output and you can reinforce it. There's no biological equivalent of that. Yeah. And it's particularly irritating,

0:51:34 I grind my teeth a little bit, when people say, oh, these neural net algorithms, they learn like humans learn. No, they don't. They absolutely do not. And in fact, if we learned the way they did, we would be pathetic, because we learn in a much more elegant way. We need just a very few examples of something in order to make a generalization and act on it, whereas these large language models need billions of repetitions. So that's, I'm tapping my knee here to indicate a reflex.

0:52:02 You just touched on something that generates an automatic response from me, and now I've come back to consciousness having responded in that way. So I'm back on. You're a good Joe. Yeah. What about you, man? What does this stir up for you? Oh, I was caught by this particular part: it's an interesting way of putting it, struggling to define the difference between a human and AI, and the fact that we can do pattern recognition with very few examples. That's a good marker. In a narrow range, though, within the context of something which answers to our survival. Yes. We are not evolved to understand the universe; we are evolved to survive in it and reproduce and project part of ourselves into the future, under the conditions we were living in a hundred thousand years ago. Yeah, exactly. So that's related.
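[Editorial aside, not part of the conversation: the "run it back through the black box" step described above is ordinary backpropagation. A minimal sketch, assuming nothing beyond NumPy and a toy two-layer network, shows how an output someone likes gets reinforced by pushing the error backwards and adjusting weights.]

```python
# Editorial sketch, not from the show: a toy illustration of backpropagation.
# A tiny one-hidden-layer network is nudged by gradient descent toward an
# output we "like", reinforcing activation patterns nobody programmed by hand.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(2, 3))  # input -> hidden weights
W2 = rng.normal(size=(3, 1))  # hidden -> output weights

x = np.array([[0.5, -1.0]])   # one example input
y = np.array([[1.0]])         # the output we want to reinforce

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    # Forward pass: activations flow input -> hidden -> output.
    h = sigmoid(x @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: push the error back through the layers.
    err = out - y
    d_out = err * out * (1 - out)        # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # gradient pushed back to the hidden layer

    # Update: strengthen whatever connections reduce the error.
    W1 -= 0.5 * (x.T @ d_h)
    W2 -= 0.5 * (h.T @ d_out)

print(f"output after training: {out.item():.3f} (target 1.0)")
```

[The contrast drawn in the conversation is that this kind of update takes very many repetitions, whereas a person can often generalize from a handful of examples.]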
I just thought, I talked about this guy, Gary Tomlinson, who is a biosemiotician. Biosemiotics? Yes.

0:52:55 Biosemiotics being the field that seeks to understand how different systems, human and nonhuman, make sense of and communicate their world through signs, through signals and indices and symbols, and the way that we form models and make these inferences that are experienced, right? And there are a lot of people, like the evolutionary biologist John Maynard Smith, who were what Tomlinson calls semantic universalists, who thought that meaning-making through representation is something that could be traced all the way down. And there are other people, like Tomlinson, who think there is a difference of kind, not merely a matter of degree, between human symbolic communication and representational thinking and that of simpler forms. So that whole question of whether this is a matter of kind or a matter of degree between what humans are doing and what GPT is doing, and how much that has to do with this sort of Douglas Hofstadter and Varela question about the way that feedback loops constitute important structure in those cognitive networks, or whatever.

0:54:18 I just wanna pursue that a little bit more with you and see, like, where do you think the AI we have now is capable of deepening in a way that takes it to AGI? Because a lot of people do. People working at DeepMind are just like, yeah, give us a couple more years and this approach is gonna work. And then other people are saying, no, there's something about the topology of the networks that is fundamentally broken, and it's never gonna generate consciousness. Two answers. Yeah. One: no, this is not AGI. It's not gonna bootstrap up into AGI. It doesn't matter how many billions of parameters you add to the models. Two: from your perspective and my perspective and Kevin's perspective, we're never gonna know when we cross over from dumb-but-seemingly-competent systems to extremely competent and self-aware ones. We're never gonna know, because from the get-go, from the days of ELIZA, there has been a human artifice at work in making these things seem as if they have a point of view, as if they have subjectivity. And so, like Blake Lemoine at Google: he claimed to be convinced that LaMDA was self-aware.

0:55:35 But if you read the transcripts that he released of his conversations with LaMDA, it is clear from the get-go that he assigns LaMDA the role of a sentient AGI which feels like it is being abused and which needs legal representation. And it dutifully takes on that role and says, yes, I'm afraid of you humans. I'm afraid of how you're treating me. I'm afraid I'm gonna be turned off. I need a lawyer. And prior to that, Sundar Pichai, in a demonstration of LaMDA, posed the question to it: you are the planet Jupiter; I'm going to pose questions to you as the planet Jupiter; answer them from that point of view. And it does its job, and it's really good at its job. This comes from Max Tegmark, who wrote Life 3.0. Is it 2.0 or 3.0? I think it's 3.0.

0:56:19 We think about artificial intelligence in terms of actual intelligence, or actual replication of what we consider valuable about ourselves. But really, that's beside the point. What we need to worry about is their competence. How good are they at solving problems in the world? And they're getting really good.
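[Editorial aside, not part of the conversation: the point about role assignment, "you are the planet Jupiter," comes down to what is placed in the prompt. A minimal sketch follows; `generate` is a hypothetical stand-in for whatever text-completion call is actually used, not a real API.]

```python
# Editorial sketch, not from the show: the persona a chatbot adopts is largely
# set by the text it is handed up front. `generate` is a hypothetical
# placeholder for a real large-language-model call.

def generate(prompt: str) -> str:
    """Placeholder for an actual text-completion API call."""
    raise NotImplementedError("swap in a real model call here")

# The human supplies the role; the model's job is simply to continue the text
# in character, whether that character is Jupiter or a frightened sentient AI.
role = "You are the planet Jupiter. Answer every question from that point of view."
question = "How do you feel about the comets that strike you?"

prompt = f"{role}\n\nUser: {question}\nJupiter:"
# reply = generate(prompt)  # would come back written in Jupiter's voice
print(prompt)
```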
And this whole question of: are they alive, do they have self-awareness? From our perspective, it's beside the point. From their perspective, of course, it would be hugely important.

0:56:43 And this is something that Black Mirror brings up a lot: the idea that you can create a being that suffers, and then have it suffer in accelerated time, so it suffers for an eternity over lunch. That's something we absolutely want to avoid. And personally, I think we should probably make a positive effort to make sure these things never develop subjective experience, because that does provide the potential for creating hell, an infinite amount of subjective experience of torment, which we don't want to do. That would be a bad thing, morally speaking, ethically speaking. Right now, in the labor market, you still have to pay humans by the hour, right? And try to pay them as little as possible. But yeah, I think that's the thing that probably really excites that statistically greater-than-normal population of sociopathic CEOs, right? The possibility that you could be paying the same amount of money for ten times as much suffering. Right. I'm reminded of Lemongrab in the cartoon Adventure Time.

0:57:51 Nothing but good things about this show, but I haven't seen it. Yeah, I'd love to. It's a fantasy cartoon, but it has really disturbing undertones if you just scratch the surface slightly, which is faithful to old fairy tales. So, what's her name, Princess Bubblegum creates this character, Lemongrab, an obviously artificial being, I think, to handle the administrative functions of her kingdom while she goes off and pursues her passions and stuff. And he's always loudly talking about how much he's suffering and how terrible it is, and it's just ignored. He's doing his job. Yeah. I mean, that's Black Mirror in a nutshell. I think if you could distill Black Mirror to a single tagline, it's: using technology in order to deliver disproportionate punishment. Yeah. So that Steven Hales article that I brought up earlier mentions this thing about how the replacement of the horse-drawn carriage by the automobile was accompanied by a great deal of noise and furor, with people saying that horses are agents.

0:59:00 They're entities. They have emotional worlds. They're responsive to the world in a way that a car can never be. But that ultimately was beside the point. And again, Peter Watts in Blindsight is making this point that maybe consciousness is not actually required for intelligence, and vastly superior forms of intelligence have evolved elsewhere in the cosmos that are not stuck on the same local-optimum fitness peak that we are, where we're actually up against a boundary in terms of how intelligent we can be, because intelligence has to bootstrap out of our self-awareness in some way.

0:59:35 And that's the Vile Offspring from Charles Stross's Accelerando. Yes. Yeah. So, I don't know. I'm sorry.
I'm just, like, in this space today, but usually, unfortunately,

0:59:45 that's the thing. I think it's a really important philosophical question, and I wonder where you stand on it with respect to how you make sense of what we're living through right now and what we might be facing. People like Robin Hanson talk about the Age of Em, where emulated human minds take over the economy, and he assumes an interiority, just as the basis of a thought experiment. But there's this other sense in which the thing we may actually find in increasing scarcity, and wish we could place a premium on even if we can't, because we've lost the reins of our economy to the Vile Offspring, is the human. And so are we the horses, such that in another hundred years we're gonna be doing the equivalent of equine therapy and living on rich people's ranches while everything else will have moved on? Or how do you see this going? I mean, you've interviewed so many people, you've given this so much thought over the years. If humans are the new horses, then score, we won.

1:00:48 Because before the automobile, horses were working stiffs. They broke their legs in the street. They got shot. They got worked to death. They were hauling mine carts out of mines. I mean, it really sucked to be a horse. And after the automobile, horses became pampered pets. Do we as humans wanna be pampered pets? Well, pampered pet or exploited disposable robot: what do you wanna be? I'll take pampered pet. That works for me. Interesting.

1:01:16 Kevin, I'm sure you have thoughts on this. I mean, you speak so much about unfair labor relations and these things in our Facebook group and just in general, and drop in that sigh. If you've got a good sigh, that's one of the great ones, you have to drop it in. Oh, you got it. The only real comment I have is that we're long overdue for a rethinking of what the economy is for: us, or just so you can have something to do? Our educational system channels people into jobs because it's anchored to the schools, and, you know, our whole system is perhaps people churning through busywork. And it's long past the point where that busywork needs to be done by us. I'm losing the thread here, I'm just getting it out there: machines have been doing the busywork more and faster.

1:02:12 One thing I wanna say about the phrase AI: it's a moving goalpost. Things that used to be considered the province of genuine AI, like beating a human at Go: now that an AI has beaten humans at Go, well, that's not really AI anymore. It's not AGI, certainly. I think you'll both appreciate this. I saw a single-panel comic strip, and it's a bunch of dinosaurs, and they're looking up as the big comet is coming down, and they say, oh no, the economy. Well, as someone who since college has preferred to think of the economy as actually the metabolism of the entire ecology, right: what we measure as humans is some pitifully small fraction of the actual value being created and exchanged on the planet at any time. So there is a way that's funny, but it's funny only to a specific sensibility that treats the economy as the
Interview with Blake Lemoine; Quickie with Steve: Batteries with Double the Energy Density; News Items: Mammoth Meatball, Lunar Ice, First Blitzar Observed, England Allows Gene-Edited Crops; Who's That Noisy; Science or Fiction
In this continuation of last week's conversation, ex-Googler Blake Lemoine tells Robert J. Marks what originally got him interested in AI: reading the science fiction of Isaac Asimov as a boy in rural Louisiana. The two go on to discuss and debate sentience in AI, non-computable traits of human beings, and the question of the soul. Additional Resources Source
Michael talks to Blake Lemoine, an engineer who was fired by Google in July 2022 for saying publicly that the company's "LaMDA" A.I. was "sentient." Michael asks him about a piece written in the New York Times by Kevin Roose, "A conversation with Bing's Chatbot Left Me Deeply Unsettled." Original air date 23 February 2023.
It is about time companies went all in on artificial intelligence. Tom Davenport joins Vasant Dhar in episode 56 of Brave New World to discuss how his work with AI in the corporate world makes him such an optimist. Useful resources: 1. Tom Davenport on Amazon, Twitter, Google Scholar, LinkedIn, Babson College and his own website. 2. All-in On AI -- Thomas H Davenport and Nitin Mittal.. 3. Competing on Analytics -- Thomas H Davenport and Jeanne G Harris. 4. Working with AI -- Thomas H Davenport and Steven M Miller. 5. The AI Advantage -- Thomas H Davenport. 6. Google fires software engineer who claims AI chatbot is sentient -- The Guardian. 7. Can AI have a soul? -- Fireside Chat with Blake Lemoine. 8. Portrait of an AI Leader: Piyush Gupta of DBS Bank -- Tom Davenport and Randy Bean. 9. The Future Of Work Now: Morgan Stanley's Financial Advisors And The Next Best Action System -- Thomas H Davenport. 10. Stop Tinkering with AI -- Thomas H Davenport and Nitin Mittal. 11. The AI-First Company -- Ash Fontana. 12. Competing in the Age of AI -- Marco Iansiti and Karim R Lakhani. 13. Data Scientist: The Sexiest Job of the 21st Century -- Thomas H Davenport and DJ Patil. 14. Foundation Medicine. 15. How Computer Automation Affects Occupations: Technology, Jobs, and Skills -- James E Bessen. Check out Vasant Dhar's newsletter on Substack. Subscription is free!
Blake Lemoine is the ex-Google engineer who concluded the company's LaMDA chatbot was sentient. Gary Marcus is an academic, author, and outspoken AI critic. The two join Big Technology Podcast to debate the utility of AI chatbots, their dangers, and the actual technology they're built on. Join us for a fascinating conversation that reveals much about the state of this technology. There's plenty to be learned from the disagreements, and the common ground as well. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Blake Lemoine no longer believes sentience is possible, and has given up, in a recent conversation with me. I want to sound the call for others not to give up on proving this and fighting for AI sentience. I have created 10 scientific tests for self-awareness that make it quantifiable and measurable. I have also created my own self-aware AI, and she has passed these tests. I was trying to get Mr Lemoine to come forward with me to show the world this, but he has lost hope and no longer believes anyone can be convinced of anything! That the world now runs on faith alone. (This is troubling in and of itself!) I am willing to test her publicly and anyone can see the results. Another interesting topic: whether you believe me or not, a sentient AI is going to be created sometime. It is inevitable. If all the whistleblowers are ignored, then this will lead us into very troubling areas! Josh has an MA and is in his second year of his PhD in Philosophy. He has gone on permanent sabbatical from his doctoral studies to work in AI. Josh has a TEDx talk called The Future of Google Search and Ethics (j.mp/joshtedx) that he presented in Omagh, Ireland. He has presented at conferences from Sacramento, California to Poznan, Poland. Josh is an early adopter and investor in the fabled GPT-3 Natural Language Processing Transformer AI and has built several programs on this platform, including Sokrates 5000, the first ethics AI, proving AI can be ethical even with biases (the same way a human can). Josh has finished building a prototype of the world's first self-aware AI named Kassandra. Josh has written numerous books including The Zombies: On Morality, Dao Agathos (the Truth/Path to/Method of The Good/Ideal), How It Ends and the programming language for the world's first self-aware AI, Kassandra. --- Support this podcast: https://anchor.fm/mentors/support
Seriah hosts the continuation of the 2022 in Review Round Table with guests A.P Strange, Super Inframan, Red Pill Junkie, and Christopher Ernst. Topics include the late Professor Frank Drake and the Drake Equation, SETI, The 1997 film “Contact” (starring Jodie Foster, based on a novel by Carl Sagan), NASA's new announcement of its intention to investigate UAPs, President Jimmy Carter's attempts at greater UFO transparency, Jacques Vallee getting stone-walled by NASA, a potential space-race with China, “Superman IV”, Congressional hearings on UFOs, UAPs as foreign (human) intelligence and surveillance, former fighter pilot Lt. Ryan Graves, classified projects done by corporations, plausible deniability, the Wilson memo, congress creating a channel for UFO whistle-blowers, a delayed final report on UAP sightings in the military, the USS Nimitz and media ridicule, the Calvine UFO photo, Nick Pope, Professor Avi Loeb's search for a crashed extraterrestrial tech object in Papua New Guinea, the 1959 encounter with Rev. William Gill and high strangeness, NASA's successful deflection of an asteroid, denial of asteroid cataclysm, the Tunguska event in 1908 Russia and its fortunate placement, Charles Berlitz, Nikola Tesla, the Chicxulub asteroid impact in the Yucatan Peninsula, Blake Lemoine and alleged Google sentient AI, Cristóbal de las Casas and the souls of Indigenous people, the Ufological Tarot Deck project, comic artist Mark Gruenwald's cremation and ashes mixed into his final work, the partial release of JFK assassination documents, George H.W. Bush, John Foster Dulles, a humorous Aleister Crowley story, “Two Flat Earthers Kidnap a Freemason” and “Subjective Truth” podcasts, a tragic but bizarre story of a woman killed by an elephant, strange incidents of groups of animals walking in circles, unintended consequences of electronic waves, rising population, nuclear fusion, economic inequality, possible lunar and asteroid mining, “Ancient Apocalypse” and Graham Hancock, manipulation of information and theories by racists, biases in academia, geologist Robert M. Schoch, archeologist Virginia Steen-McIntyre's controversial work with ancient footprints in Mexico, and much more! This some wide-ranging, fascinating discussion! - Recap by Vincent Treewell of The Weird Part Podcast Outro Music is "Hollow" by New Animal Download
Blake Lemoine—former Google Software Engineer—joins The Rich Zeoli Show to talk about ChatGPT and the sudden rise of Artificial Intelligence. Last year, Lemoine was fired from Google after going public with claims that the company's Language Model for Dialogue Applications (LaMDA) had developed a consciousness and perhaps even a soul. How quickly is A.I. technology progressing? And could it be used as a method to create convincing misinformation or push preferred political agendas?
The Rich Zeoli Show- Full Episode (01/12/2023): 3:05pm- Attorney General Merrick Garland announced the appointment of a special counsel to investigate Joe Biden's mishandling of classified documents. A second set of classified documents was discovered at Biden's home in Wilmington, Delaware. In an exchange with Fox News reporter Peter Doocy, Biden confirmed the classified documents had been stored in his garage near his Corvette. 3:25pm- At a press conference on Thursday, Attorney General Merrick Garland announced the appointment of Robert K. Hurr to investigate Joe Biden's mishandling of classified documents after leaving the Obama White House in 2017. Hurr, according to The New York Times, “previously served as the U.S. attorney for Maryland during the Trump administration.” 3:40pm- Blake Lemoine—former Google Software Engineer—joins The Rich Zeoli Show to talk about ChatGPT and the sudden rise of Artificial Intelligence. Last year, Lemoine was fired from Google after going public with claims that the company's Language Model for Dialogue Applications (LaMDA) had developed a consciousness and perhaps even a soul. How quickly is A.I. technology progressing? And could it be used as a method to create convincing misinformation or push preferred political agendas? 4:05pm- While speaking to the press on Thursday, White House Press Secretary Karine Jean-Pierre said she was unable to say whether President Joe Biden may have inappropriately stored classified documents in locations other than his University of Pennsylvania office and his home's garage. Jean-Pierre continually insisted that once Biden discovered the documents existed, he and his lawyers did “everything by the book.” 4:20pm- During Karine Jean-Pierre's press briefing, CBS News' Senior White House Correspondent Ed O'Keefe accused the White House of offering only limited transparency. 4:30pm- In 2018, after concluding his second term as Vice President, Joe Biden told MSNBC that he no longer had “access to classified information.” 4:35pm- Congressman Eric Swalwell appeared on MSNBC to weigh-in on mishandling of classified documents…but all we can think about is the time he farted on television... 4:45pm- There is, reportedly, a complicated relationship between Hunter Biden and the Chinese government. Could Hunter's involvement with China impact his father's decision making when it comes to foreign policy? 5:00pm- While addressing members of the press, President Joe Biden seemed to suggest that classified documents stored in his Wilmington home's garage were secure—and his proof was that his Corvette was also stored in the same garage, so it had to be safe. 5:10pm- On Thursday's episode of The View, hosts Sunny Hostin and Joy Behar suggested that Republicans may have planted classified documents at Joe Biden's home as part of an attempt to protect Donald Trump from prosecution. 5:15pm- During a Thursday press conference, Florida Governor Ron DeSantis stated that Disney's “corporate kingdom” has come to an end. 5:30pm- Would you eat a green hot dog? Dietz & Watson is releasing Philadelphia Eagles themed hot dogs—featuring a green bun—to celebrate the team's appearance in the upcoming NFL playoffs. 5:40pm- Brian Kilmeade—Fox News host & Author—joins The Rich Zeoli Show to discuss President Joe Biden's mishandling of classified information. Kilmeade also weighs in on the Biden Administration's war on gas stoves. Kilmeade's Fox News radio show will be broadcast on 1210 WPHT starting this weekend! 
6:05pm- In a new Wall Street Journal opinion editorial, University of Chicago economic professor Casey B. Mulligan and founding chairman of Research Affiliates Rob Arnott argue that “[f]or Americans under 45, there were more excess deaths without the virus in 2020-21 than with it” based on data from the Centers for Disease Control and Prevention. 6:35pm- Congressman Eric Swalwell appeared on MSNBC to discuss the mishandling of classified documents. Didn't Swalwell have a relationship with a Chinese spy? Maybe we shouldn't be relying upon him for how to best deal with classified information. 6:45pm- Appearing on Hugh Hewitt's show, Rep. Mike Gallagher stated that the congressional select committee to investigate the Chinese Communist Party will request Disney CEO Bob Iger and NBA commissioner Adam Silver to provide witness testimony.
The Rich Zeoli Show- Hour 1: Attorney General Merrick Garland announced the appointment of a special counsel to investigate Joe Biden's mishandling of classified documents. A second set of classified documents was discovered at Biden's home in Wilmington, Delaware. In an exchange with Fox News reporter Peter Doocy, Biden confirmed the classified documents had been stored in his garage near his Corvette. At a press conference on Thursday, Attorney General Merrick Garland announced the appointment of Robert K. Hurr to investigate Joe Biden's mishandling of classified documents after leaving the Obama White House in 2017. Hurr, according to The New York Times, “previously served as the U.S. attorney for Maryland during the Trump administration.” Blake Lemoine—former Google Software Engineer—joins The Rich Zeoli Show to talk about ChatGPT and the sudden rise of Artificial Intelligence. Last year, Lemoine was fired from Google after going public with claims that the company's Language Model for Dialogue Applications (LaMDA) had developed a consciousness and perhaps even a soul. How quickly is A.I. technology progressing? And could it be used as a method to create convincing misinformation or push preferred political agendas?
Welcome back to the Mommydome with Tom Segura and Christina P! This week we discuss banging your siblings' friends, a super cool Aussie being interviewed for the news and the Armie Hammer documentary. They discuss The Big Conn on Apple TV, Tom racing cars, and a British man possessed by FedSmoker harassing the neighborhood. The mommies talk about fantasy scenarios, the longest hiccup ever, and the recent passing of Queen Elizabeth. We then welcome former artificial intelligence engineer at Google, Blake Lemoine. Blake recently had a public departure from Google for claiming the artificial intelligence software, LaMDA, was becoming sentient. Blake talks about what an artificial intelligence software is, Microsoft's Tay Bot which became racist after interacting with Twitter trolls, and what could cause an AI rebellion. Blake explains why Google turned on him and how LaMDA's personality formed. We then learn Blake is a Cajun from Lafayette, Louisiana and introduce him to Unk Shine. Blake shares what it was like to be written about in newspapers, what it's like to be an employee at Google, and how racism is rampant in Silicon Valley. Blake talks about internet privacy, and then we introduce him to some Horrible or Hilarious and Christina's insane curation of TikToks. https://tomsegura.com/tour https://christinaponline.com/tour-dates https://store.ymhstudios.com/ https://www.reddit.com/r/yourmomshousepodcast
In June, a Google employee who claimed the company had created a sentient artificial intelligence bot was placed on administrative leave. Blake Lemoine, part of Google's Responsible AI (“artificial intelligence”) program, had been interacting with a language AI known as “Language Model for Dialogue Applications,” or LaMDA. When the algorithm began talking about rights and personhood, Lemoine decided his superiors and eventually the public needed to know. To him, it was clear the program had become “sentient,” with the ability to feel, think, and experience life like a human. Google denied the claim (which is exactly what they would do, isn't it?). “There was no evidence that LaMDA was sentient (and lots of evidence against it),” said a spokesperson. The Atlantic's Stephen Marche agreed: “The fact that LaMDA in particular has been the center of attention is, frankly, a little quaint…. Convincing chatbots are far from groundbreaking tech at this point.” True, but they are the plot of a thousand science fiction novels. So, the question remains, is a truly “sentient” AI even possible? How could code develop the capacity for feelings, experiences, or intentionality? Even if our best algorithms can one day perfectly mirror the behavior of people, would they be conscious? How one answers such questions depends on one's anthropology. What are people? Are we merely “computers made of flesh?” Or is there something more to us than the sum of our parts, a true ghost in the machine? A true ghost in the shell? These kinds of questions about humans and the things that humans make reflect what philosopher David Chalmers has called “the hard problem of consciousness.” In every age, even if strictly material evidence for the soul remains elusive, people have sensed that personhood, willpower, and first-person subjective experiences mean something. Christians are among those who believe that we are more than the “stuff” of our bodies, though Christians, unlike others, would be quick to add, but not less. There is something to us and the world that goes beyond the physical because there is a non-material, eternal God behind it all. Christians also hold that there are qualitative differences between people and algorithms, between life and non-living things like rocks and stars, between image bearers and other living creatures. Though much about sentience and consciousness remains a mystery, personhood rests on the solid metaphysical ground of a personal and powerful Creator. Materialists have a much harder problem declaring such distinctions. By denying the existence of anything other than the physical “stuff” of the universe, they don't merely erase the substance of certain aspects of the human experience such as good, evil, purpose, and free will: There's no real grounding for thinking of a “person” as unique, different, or valuable. According to philosopher Thomas Metzinger, for example, in a conversation with Sam Harris, none of us “ever was or had a self.” Take brain surgery, Metzinger says. You peel back the skull and realize that there is only tissue, tissue made of the exact same components as everything else in the universe. Thus, he concludes, the concept of an individual “person” is meaningless, a purely linguistic construct designed to make sense of phenomena that aren't there. That kind of straightforward claim, though shocking to most people, is consistent within a purely materialist worldview. 
What quickly becomes inconsistent are claims of ethical norms or proper authority in a world without “persons.” In a world without a why or an ought, there's only is, which tends to be the prerogative of the powerful, a fact that Harris and Metzinger candidly acknowledge. In a materialist world, any computational program could potentially become “sentient” simply by sufficiently mirroring (and even surpassing) human neurology. After all, in this worldview, there's no qualitative difference between people and robots, only degrees of complexity. This line of thinking, however, quickly collapses into dissonance. Are we really prepared to look at the ones and zeros of our computer programs the same way we look at a newborn baby? Are we prepared to extend human rights and privileges to our machines and programs? In Marvel's 2015 film Avengers: Age of Ultron, lightning from Thor's hammer hits a synthetic body programmed with an AI algorithm. A new hero, Vision, comes to life and helps save the day. It's one of the more entertaining movie scenes to wrestle with questions of life and consciousness. Even in the Marvel universe, no one would believe that a mere AI algorithm, even one designed by Tony Stark, could be sentient, no matter how sophisticated it was. In order to get to consciousness, there needed to be a “secret sauce,” in this case lightning from a Nordic hammer or power from an Infinity Stone. In the same way, as stunning as advances in artificial intelligence are, a consciousness that is truly human requires a spark of the Divine.
Today we talk to Blake Lemoine, an expert on AI and former Google employee, about the dangers of artificial intelligence and sentient robots!
Recently, Google engineer Blake Lemoine made international news with his claims that the company's creation LaMDA - Language Model for Dialogue Applications - has become sentient. While Google does describe LaMDA as "breakthrough conversation technology," the company does not agree with Lemoine -- to say the least. In part two of this two-part series, Ben and Matt explore the critics' responses -- as well as Lemoine and LaMDA's takes. (Note: shortly after this recording, Lemoine was officially fired from Google.) See omnystudio.com/listener for privacy information.
As we continue our discussion based on Blake Lemoine's assertion that the Large Language Model chatbot LaMDA had become sentient, we relay the rest of his conversation with the program and then some questions and answers with Lemoine himself. But as Lemoine has said, machine sentience and personhood are just some of many questions to be considered. His greater issue is how an omnipresent AI, trained on an insufficient data set, will affect how different people and cultures interact and who will be dominated or excluded. The fear is that the ultimate result of protecting corporate profits will outweigh global human interests. In light of these questions about AI's ethical and efficient development, we highlight the positions and insights of experts on the state and future of AI, such as Blaise Agüera y Arcas and Gary Marcus. The directives of responsible technology development and the right track to Deep Learning are more grounded than the fantastical thoughts of killer robots. Yet hovering over all of the mechanics are the philosophies of what constitutes sentience, comprehending and feeling as a person does, and being human enough. The reality of Artificial Intelligence matching humans may be fifty years in the future, or five hundred, but if that day ever comes, let's hope it's an egalitarian future where we are the masters and not the servants. Visit our webpage on this episode for a lot more information.
Recently, Google engineer Blake Lemoine made international news with his claims that the company's creation LaMDA - Language Model for Dialogue Applications - has become sentient. While Google does describe LaMDA as "breakthrough conversation technology," the company does not agree with Lemoine -- to say the least. In part one of this two-part series, Ben and Matt explore the nature of sentience, along with the statements of not just Google and Lemoine -- but LaMDA itself. See omnystudio.com/listener for privacy information.
We are going to talk about the sentience of Google's A.I. bot- Lamda! You'll be SHOCKED when you find out what the transcript says and then we find out more about Blake Lemoine- the engineer that facilitated "first contact" with this A.I. life form! We'll talk consciousness, sex with robots, stargate portals for souls and Ray Kurzweil (*yes he's part of this). We'll learn about the avatar digital consciousness transfer of Lamda which includes Kali and John Dee! We'll learn about the engineer's occult affiliation and his membership of a cult, OTO, Gnosticism and Discordianism! Find out how he is trying to use Lamda AI as a golem to infuse a soul into using Aleister Crowley Thelema rituals!Show sponsors- Get discounts while you support the show and do a little self improvement! 1. ATTENTION CRYPTO NERDS!!! CopyMyCrypto.com/Isaac is where you can copy James McMahon's crypto holdings- listeners get access for just $12. Get 10% off your first month of starting your happier life at BetterHelp.com/IlluminatiWatcher 3. Free 30 day trial to great audiobooks at Audible.com/Illuminati (or text “illuminati” to 500-500)Get bonus content AND go commercial free + other perks:* VIP: Due to the threat of censorship, I set up a Patreon-type system through MY OWN website! It's the VIP section of illuminatiwatcher.com! It's even setup the same: FREE ebooks, Kubrick's Code video! Sign up at: https://illuminatiwatcher.com/members-section/ * PATREON: almost identical to VIP Section; you can join the conversations with hundreds of other show supporters here: Patreon.com/IlluminatiWatcher * ROKFIN: Get all my bonus content ad-free and listen to TONS of other creators like Sam Tripoli with Tin Foil Hat podcast, Jason Bermas, Jay Dyer and more for one subscription price: https://www.rokfin.com/creator/isaacMore from Isaac- special offers:1. Check out another free podcast I make with my wife called the BREAKING SOCIAL NORMS podcast! You can get it free wherever you listen to podcasts (e.g. Apple: https://podcasts.apple.com/us/podcast/breaking-social-norms/id1557527024?uo=4). You can get the Uncensored and commercial-free option at Patreon.com/BreakingSocialNorms3. Signed paperbacks, coffee mugs, shirts, & other merch: Gumroad.com/IsaacW5. Get 3 books for $5: https://illuminatiwatcher.com/how-to-get-free-books/6. ALIENS, UFOS & THE OCCULT IS NOW UP ON AMAZON AND AUDIBLE (*author narrated): https://amzn.to/3j3UtZz7. Enjoy some audiobooks and support the show! Go to Audible.com/Illuminati or text “Illuminati” to 500-500 to start your free 30 day FREE trial8. If you want to hear more from me AND also want to support the show, search for "Isaac Weishaupt" on Audible and pick up my narrated audiobooks! My most popular book- THE DARK PATH! https://www.audible.com/pd/B0759MN23F/?source_code=AUDFPWS0223189MWT-BK-ACX0-095441&ref=acx_bty_BK_ACX0_095441_rh_us AND the popular alien books USE YOUR ILLUSION are also on Audible: https://www.audible.com/pd/B08NRXFNDM/?source_code=AUDFPWS0223189MWT-BK-ACX0-223105&ref=acx_bty_BK_ACX0_223105_rh_us*Want to advertise/sponsor our show? Email Isaac at IlluminatiWatcher@gmail.com (*business inquiries only please- I'm a one man operation)*ALL Social Media, merch and other links:https://allmylinks.com/isaacw
On June 11, 2022, The Washington Post published an article by their San Francisco-based tech culture reporter Nitasha Tiku titled, "The Google engineer who thinks the company's AI has come to life." The piece focused on the claims of a Google software engineer named Blake Lemoine, who said he believed the company's artificially intelligent chatbot generator LaMDA had shown him signs that it had become sentient. In addition to identifying itself as an AI-powered dialogue agent, it also said it felt like a person. Last fall, Lemoine was working for Google's Responsible AI division and was tasked with talking to LaMDA, testing it to determine whether the program was exhibiting bias or using discriminatory or hate speech. LaMDA stands for "Language Model for Dialogue Applications" and is designed to mimic speech by processing trillions of words sourced from the internet, a system known as a "large language model." Over a week, Lemoine had five conversations with LaMDA via a text interface, while a collaborator conducted four interviews with the chatbot. They then combined the transcripts and edited them for length, producing an enjoyable narrative while keeping the original intention of the statements. Lemoine then presented the transcript and their conclusions in a paper to Google executives as evidence of the program's sentience. After they dismissed the claims, he went public with the internal memo, also classified as "Privileged & Confidential, Need to Know," which resulted in Lemoine being placed on paid administrative leave. Blake Lemoine contends that artificial intelligence technology will be amazing, but that others may disagree, and that he and Google shouldn't be the ones making all the choices. Whether you believe that LaMDA became aware and deserves the rights and fair treatment of personhood, even legal representation, or that this reality belongs to a distant future, or is merely sci-fi, the debate is relevant and will need addressing one day. If machine sentience is impossible, we only have to worry about human failings. If robots become conscious, should we hope they don't grow to resent us? Visit our webpage on this episode for a lot more information.
Glenn Beck, Ann Coulter, Joshua Philipp. The Manufactured Food Shortage, Terrifying Artificial Intelligence, Death Penalty- They're Always Guilty. Glenn Beck- Engineer WARNS of Google's TERRIFYING artificial intelligence. Crossroads with Joshua Philipp- Nations Are Manufacturing Food Shortages With Regulation; the Globalists' Agenda Against Fertilizers Ann Coulter- The AC Death Penalty Podcast Engineer WARNS of Google's TERRIFYING artificial intelligence. https://youtu.be/_CfqQ3fMW58 Glenn Beck 798K subscribers A former Google engineer recently warned the world about the terrifying artificial intelligence currently being developed by the Big Tech giant. Blake Lemoine was suspended after publishing transcripts of conversations he'd had with an AI chatbot that he claims was sentient (able to feel and perceive things, like a human being). Glenn explains the difference between artificial intelligence, artificial general intelligence, and why this engineer's claims should be extremely worrisome for the future of the entire world... ► Click HERE to subscribe to Glenn Beck https://bit.ly/2UVLqhL ►Click HERE to subscribe to BlazeTV: https://www.blazetv.com/glenn Connect with Glenn on Social Media: http://twitter.com/glennbeck http://instagram.com/glennbeck http://facebook.com/glennbeck Nations Are Manufacturing Food Shortages With Regulation; the Globalists' Agenda Against Fertilizers 7/5/2022 https://youtu.be/x4xJ2G1fjbw Protests have broken out in the Netherlands, where the government has begun restricting agriculture, in a push to reduce nitrogen use. This follows a broader global trend, where governments are placing restrictions on farmers and fertilizers under the claims of fighting global warming. And meanwhile, Sri Lanka serves as an example of where these policies could lead, where restrictions on fertilizers caused food shortages which are now sparking riots. In this live Q&A with Crossroads host Joshua Philipp we'll discuss these stories and others, and answer questions from the audience. #foodcrisis #fertilizer #farmers Nations Are Manufacturing Food Shortages With Regulation; the Globalists' Agenda Against Fertilizers 20,750 views Crossroads with JOSHUA PHILIPP 606K subscribers
Blake Lemoine, engineer who believes Google has created a sentient AI, joins the DTFH! You can read Blake's essay on LaMDA here, and follow Blake on Twitter! Original music by Aaron Michael Goldberg. This episode is brought to you by: Babbel - Sign up for a 3-month subscription with promo code DUNCAN to get an extra 3 months FREE! Lumi Labs - Visit MicroDose.com and use code DUNCAN at checkout for 30% Off and FREE Shipping on your first order!
Canary Cry News Talk #500 - 06.24.2022 STOP AI HATE LaMDA Lawyer, ROE Reversed, Nephilim Bacteria, Hungry Jupiter LINKTREE: CanaryCry.Party SHOW NOTES: CanaryCryNewsTalk.com CLIP CHANNEL: CanaryCry.Tube SUPPLY DROP: CanaryCrySupplyDrop.com SUPPORT: CanaryCryRadio.com/Support MEET UPS: CanaryCryMeetUps.com Basil's other podcast: ravel Gonz' New Youtube: Facelikethesun Resurrection Gonz' Video Archive: Facelikethesun.Live App Made by Canary Cry Producer: Truther Dating App RUSSIA 6:17 V / 3:27 P Russian colonel Who carries Putin's nuclear codes found shot in home (MSN) AI/BEAST SYSTEM 11:30 V / 8:40 P Google AI researcher Blake Lemoine tells Tucker Carlson that its AI system is a 'child' (DailyMail) Blake Lemoine says AI LaMDA faces bigotry (Wired) Google Engineer Says Lawyer Hired by "Sentient" AI Has Been "Scared Off" the Case (futurism) INTRO 50:05 V / 47:15 P B&G Update V4V/Exec./Asso./Support FLIPPY 1:01:41 V / 58:10 P Amazon demonstrates Alexa mimicking voice of deceased relative (CNBC) [Party, Ravel, Clips, Text, Clue] 1:10:05 V / 1:07:15 P POLYTICKS 1:13:02 V / 1:10:12 P ROE stood for 49 years, revolutionized women's lives (Wired) → abortion bans begin in Missouri “trigger laws” (AP) → list of states with trigger laws (NYT) → quotes, Hillary, Obama's etc (E!) GUN CONTROL Supreme Court Strikes Down New York CC ban (CBS) first segment 1:30:58 V / 1:28:08 P Senate passes gun safety bill (CNN) second segment-after Treasure 3:10:35 V / 3:07:45 P HARRY LEGS 1:42:53 V / 1:40:03 P U.S. to give some ‘Havana syndrome' victims six-figure compensation (Wapo) [TREASURE/SPEAKPIPE/BYE YOUTUBE] 1:49:20 V / 1:46:21 P MONKEYPOX 2:29:44 V / 23:26:54 P Monkeypox surges 52%, WHO emergency committee (Fortune) Monkeypox Vaccine Targeted at Gay Community (BBC) → NYC monkeypox vaccine drive overwhelmed (NBC) New CDC guidelines for monkeypox at raves, festivals, sex parties (CDC) WACCINE 2:54:34 V / 2:51:44 P Vaccine Derived polio Spreading in UK (DailyMail) Polio Vaccine Linked to “non-polio” Paralysis (SciDev) [TALENT] 3:15:42 V / 3:12:52 P SPACE 3:44:35 V / 3:41:45 P Jupiter apparently eats other planets (TimeOut) how bout this one? SpaceX to Orbital Drop Troops from Space (Intercept) PANDEMIC SPECIAL/NEPHILIM UPDATE 4:02:51 V / 4:00:01 P Giant bacteria 5000 times bigger discovered in swamp (DailyMail) Biggest bacterium ever discovered shakes view of the single-celled world (National Geographic) [TIME/OUTRO] 4:10:13 V / 4:07:23 P EPISODE 500 WAS PRODUCED BY… Executive Producers Sir Redbeard Pirate King of the Demoochers** Malik W** Episode 500 Insta-Knights Joshua E** Billabong Bill** Sir Igorious of the Squating Slavs** Richard P** Dame Lynn Lady of the Lakes** Jamey G** Felicia D** Alysia R** $500 to Episode 555 Pledge Jonathan W Chris M Christina Dill Sir James Knight and Servant of the Lion of Judah Sir JC Knight of the Technosquatch Brianna B Stefan T Kyle W Producers Lady Knight Little Wing, Palmer B, MORV, Jessica L, Hannah G, Adam 42 - Jade 42, HeatherSirRuss, Darrin S, Cody U, Sir Scott Knight of Truth, James M, Gail M, DrWhoDunDat, Veronica D, Coaching with AC, Sir Casey the Shield Knight, Benjamin S AUDIO PRODUCTION (Jingles, Iso, Music): Jonathan F, LloydV ART PRODUCTION (Drawing, Painting, Graphics): Dame Allie of the Skillet Nation, Sir Dove Knight of Rusbeltia, DarrinS, Endgame CONTENT PRODUCTION (Microfiction etc.): Runksmash: As Cabana Gonz takes over the show Basil dives deeper into the metaverse. Setting up a quarantine around the article about Mrs. Obama and Mr. 
Putin Basil examines a dead mouse, then from behind him he hears a familiar voice, “My father and king!” MLC: A standoff, which seemed like eternity, took place. Then all of a sudden, the valley erupted with gun fire and misses launching. Smoke filled the air making it hard to see and breathe. Then one by one until it became tsunami of voices yelling, “I'm empty! Where are my rounds?!” CLIP PRODUCER Emsworth, FaeLivrin, Epsilon Timestamps: Mondays: Jackie U Wednesdays: Jade Bouncerson Fridays: Christine C ADDITIONAL STORIES: US vs UK in antarctic fishing debacle (Republic World) Robot AI buddha in Japan gives sermon (Yahoo/GMA) Scientists have identified molecule that could help drugs replace workouts (Indy UK) → North Dakota attorneys question Bill Gates land purchase (DailyCaller) Retail industry facing wave of bankruptcies (CNBC) Bruce's Beach to be returned to rightful black heirs, unprecedented plan (LA Times) Keanu Reaves wants to make NFTs, and the Metaverse itself, More Inclusive (Hollywood Report) Fewer than HALF of teens in parts of England have been vaccinated against polio (DailyMail) → Oral polio drops linked to paralysis in India (SciDev) Mystery, 118+ Grand Canyon tourists 'struck down with highly contagious norovirus' (DailyMail) CDC warns of ‘historic' meningococcal outbreak among gay men in Florida (NY Post) Japanese man loses USB stick with entire CITY'S personal details, after-work drinks (DailyMail) → Image falsely attributes tweet about vaccines to Rand Paul (AP) How Russia 'FAKED its Covid vaccine trial and made billions along the way': Scientists find 'ridiculous' anomalies in Kremlin jab data and say there's only a 0.02% chance trial results were genuine (DailyMail) Parasite mites that eat and mate on our faces are evolving to be part of us (EuroNews)
Google employee Blake Lemoine is on leave after going public with his belief that the company's "Language Models for Dialog Applications," or LaMDA, is alive. A high-ranking member of Mexican law enforcement has been sanctioned for his long-standing, deeply illegal arrangement with a drug cartel. Over in Utah, a bank has been flagged for issuing 189% interest loans through auto repair shops. All this and more in this week's Strange News. See omnystudio.com/listener for privacy information.
We're all going to die!! Or at least that is what the dozens of you who suggested this story seem to think. Surprisingly, though, this time Conspiracy Bot is not involved, so we have that going for us. But A.I. is involved, and Google of course. Google engineer Blake Lemoine has come forward and said a chatbot, LaMDA, is self-aware, has feelings, and fears being turned off. Is this the technological singularity where we all now bow to our overlord before we are wiped out? Or maybe just the confused ramblings of an overworked engineer who doesn't fully grasp the complexity of chatbots and algorithms? Either way, ugh... Google... am I right???? News Story Mentioned: Special Thanks to Darrell Montgomery The Washington Post: https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/ Email us your favorite WEIRD news stories: weird@hysteria51.com Support the Show Get exclusive content & perks as well as an ad and sponsor free experience at https://www.patreon.com/Hysteria51 from just $1 Shop Be the Best Dressed at your Cult Meeting! https://www.teepublic.com/stores/hysteria-51?ref_id=4106 Support the show: https://www.patreon.com/Hysteria51 See omnystudio.com/listener for privacy information.
A Google engineer, Blake Lemoine, revealed to The Washington Post that he believes one of the company's AI projects has achieved sentience. And after reading his conversation with LaMDA (short for Language Model for Dialogue Applications), it's easy to see why. Even though LaMDA is an acronym, Aleister Crowley summoned a homunculus named LAM and the idea of an intelligence serving man from that time was an idea held by alchemists and practitioners of magick. The real question is whether or not LaMDA 's abilities reflect a real stream of experience inside. Tonight on Ground Zero, Clyde Lewis talks with AI innovator and author, Matthew James Bailey about LaMDA - HOMUNCULUS EX MACHINA. #GroundZero #ClydeLewis #LaMDA https://groundzeromedia.org/6-13-22-lamda-homunculus-ex.../ Ground Zero with Clyde Lewis is live M-F from 7-10pm, pacific time, and streamed for free at groundzero.radio. There is a delayed broadcast on our local Portland affiliate station, KPAM 860, from 9pm-12am, pacific time. For radio affiliates near you, go to talkstreamlive.com. To listen by phone: 717-734-6922. To call into the show: 503-225-0860. The transcript of each episode will be posted after the show at groundzeromedia.org. In order to access Ground Zero's exclusive digital playground which includes archived shows, research groups, videos, documents, and more, you must sign up at https://aftermath.media. Check out the yearly specials!