Murray Shanahan is Professor of Cognitive Robotics at Imperial College London and a principal research scientist at Google DeepMind. Educated at Imperial College (BSc(Eng) computer science) and Cambridge University (King's College; PhD computer science), he became a full Professor at Imperial and joined DeepMind. His publications span artificial intelligence, machine learning, logic, dynamical systems, computational neuroscience, and philosophy of mind. He was Scientific Advisor on the film Ex Machina (2014). He has written several books, including "Embodiment and the Inner Life" (2010) and "The Technological Singularity" (2015).

Full Episode: https://www.youtube.com/watch?v=MrV9IwZd3qE
Title: "Murray Shanahan: What are Conscious Exotica? Consciousness, AI & the Space of Possible Minds"

CONNECT:
- Website: https://tevinnaidu.com
- Podcast: https://creators.spotify.com/pod/show/mindbodysolution
- YouTube: https://youtube.com/mindbodysolution
- Twitter: https://twitter.com/drtevinnaidu
- Facebook: https://facebook.com/drtevinnaidu
- Instagram: https://instagram.com/drtevinnaidu
- LinkedIn: https://linkedin.com/in/drtevinnaidu

=============================
Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor does it substitute, professional or medical advice. We do not accept any liability for any loss or damage incurred from your acting or not acting as a result of listening to or watching any of our content. You acknowledge that you use the information provided at your own risk. Listeners/viewers are advised to conduct their own research and consult with their own experts in the respective fields.
WATCH ON YOUTUBE: https://youtu.be/8Ssqnccv2_I

NEW SERIES WHERE I ASK FASCINATING PEOPLE 10 QUESTIONS IN 10 MINUTES! Ep #3 is with the scientific advisor of the film 'Ex Machina', Professor Murray Shanahan. Murray works at Google DeepMind as a Principal Research Scientist and is a Professor of Cognitive Robotics at Imperial College London. He is an expert in Artificial Intelligence (AI). Subscribe for new episodes every week!
WATCH ON YOUTUBE: https://youtu.be/eonR5-VN4y8

In this episode, I speak to the scientific advisor of the film 'Ex Machina', Professor Murray Shanahan. Murray works at Google DeepMind as a Principal Research Scientist and is a Professor of Cognitive Robotics at Imperial College London. He is an expert in Artificial Intelligence (AI). The Human Podcast explores the lives, careers and beliefs of inspiring individuals. Subscribe for new interviews every week.
WATCH: https://youtu.be/MrV9IwZd3qE

Murray Shanahan is Professor of Cognitive Robotics at Imperial College London and a principal research scientist at Google DeepMind. Educated at Imperial College (BSc(Eng) computer science) and Cambridge University (King's College; PhD computer science), he became a full Professor at Imperial and joined DeepMind. His publications span artificial intelligence, machine learning, logic, dynamical systems, computational neuroscience, and philosophy of mind. He was Scientific Advisor on the film Ex Machina (2014). He has written several books, including "Embodiment and the Inner Life" (2010) and "The Technological Singularity" (2015).

TIMESTAMPS:
(0:00) - Introduction
(2:40) - Artificial General Intelligence, Cognition & Consciousness
(5:41) - Was Ava Conscious? (Ex Machina)
(10:30) - The Turing Test & Mary's Room
(19:45) - 4E CogSci
(24:40) - The Space of Possible Minds (Conscious Exotica)
(31:40) - AI's "Mind-Like" Entities
(40:30) - Attention Schema Theory, Global Workspace Theory & Behavior
(45:40) - Disembodied AI & LLMs
(50:25) - Ex Machina vs HER
(56:20) - Technological Singularity
(1:00:38) - Murray's Favourite Sci-Fi Writers, Philosophers, and Scientists
(1:05:10) - Role Play, Goal Orienting, LLMs & AGI
(1:14:10) - Emergence
(1:17:48) - Murray's Next Book (about Philosophy of Mind in light of AI)
(1:19:44) - Conclusion

EPISODE LINKS:
- Murray's Website: https://www.doc.ic.ac.uk/~mpsha/
- Murray's Books: https://tinyurl.com/ay4bncpn
- Murray's Publications: https://scholar.google.com/citations?user=00bnGpAAAAAJ&hl=en
- Murray's Twitter: https://twitter.com/mpshanahan

CONNECT:
- Website: https://tevinnaidu.com
- Podcast: https://podcasters.spotify.com/pod/show/drtevinnaidu
- Twitter: https://twitter.com/drtevinnaidu
- Facebook: https://www.facebook.com/drtevinnaidu
- Instagram: https://www.instagram.com/drtevinnaidu
- LinkedIn: https://www.linkedin.com/in/drtevinnaidu
Hi friends, we're on a brief summer break at the moment. We'll have a new episode for you in August. In the meanwhile, enjoy this pick from our archives!

---- [originally aired January 25, 2023]

By now you've probably heard about the new chatbot called ChatGPT. There's no question it's something of a marvel. It distills complex information into clear prose; it offers instructions and suggestions; it reasons its way through problems. With the right prompting, it can even mimic famous writers. And it does all this with an air of cool competence, of intelligence. But, if you're like me, you've probably also been wondering: What's really going on here? What are ChatGPT—and other large language models like it—actually doing? How much of their apparent competence is just smoke and mirrors? In what sense, if any, do they have human-like capacities?

My guest today is Dr. Murray Shanahan. Murray is Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind. He's the author of numerous articles and several books at the lively intersections of artificial intelligence, neuroscience, and philosophy. Very recently, Murray put out a paper titled 'Talking about Large Language Models', and it's the focus of our conversation today. In the paper, Murray argues that—tempting as it may be—it's not appropriate to talk about large language models in anthropomorphic terms. Not yet, anyway.

Here, we chat about the rapid rise of large language models and the basics of how they work. We discuss how a model that—at its base—simply does "next-word prediction" can be engineered into a savvy chatbot like ChatGPT. We talk about why ChatGPT lacks genuine "knowledge" and "understanding"—at least as we currently use those terms. And we discuss what it might take for these models to eventually possess richer, more human-like capacities.
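The "next-word prediction" at the base of these models can be illustrated with a toy sketch. This is nothing like a real transformer: it is a simple bigram model over an invented corpus, with made-up names, included only to show the idea of generating text by repeatedly predicting the most likely next word.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real LLM is trained on vastly more text.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word tends to follow which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, steps=4):
    """Repeatedly predict the most likely next word, the core loop of text generation."""
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break
        # Greedy "next-word prediction": take the most frequent continuation.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # → "the cat sat on the"
```

Everything a chatbot like ChatGPT does is, at bottom, a far more sophisticated version of this loop, which is exactly why Shanahan urges caution about words like "knowledge" and "understanding".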
Along the way, we touch on: emergence, prompt engineering, embodiment and grounding, image generation models, Wittgenstein, the intentional stance, soft robots, and "exotic mind-like entities."

Before we get to it, just a friendly reminder: applications are now open for the Diverse Intelligences Summer Institute (or DISI). DISI will be held this June/July in St Andrews, Scotland—the program consists of three weeks of intense interdisciplinary engagement with exactly the kinds of ideas and questions we like to wrestle with here on this show. If you're intrigued—and I hope you are!—check out disi.org for more info.

Alright friends, on to my decidedly human chat with Dr. Murray Shanahan. Enjoy!

The paper we discuss is here. A transcript of this episode is here.

Notes and links
6:30 – The 2017 "breakthrough" article by Vaswani and colleagues.
8:00 – A popular article about GPT-3.
10:00 – A popular article about some of the impressive—and not so impressive—behaviors of ChatGPT. For more discussion of ChatGPT and other large language models, see another interview with Dr. Shanahan, as well as interviews with Emily Bender and Margaret Mitchell, with Gary Marcus, and with Sam Altman (CEO of OpenAI, which created ChatGPT).
14:00 – A widely discussed paper by Emily Bender and colleagues on the "dangers of stochastic parrots."
19:00 – A blog post about "prompt engineering". Another blog post about the concept of Reinforcement Learning through Human Feedback, in the context of ChatGPT.
30:00 – One of Dr. Shanahan's books is titled Embodiment and the Inner Life.
39:00 – An example of a robotic agent, SayCan, which is connected to a language model.
40:30 – On the notion of embodiment in the cognitive sciences, see the classic book by Francisco Varela and colleagues, The Embodied Mind.
44:00 – For a detailed primer on the philosophy of Ludwig Wittgenstein, see here.
45:00 – See Dr. Shanahan's general audience essay on "conscious exotica" and the space of possible minds.
49:00 – See Dennett's book, The Intentional Stance.

Dr. Shanahan recommends:
Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell (see also our earlier episode with Dr. Mitchell)
'Abstraction for Deep Reinforcement Learning', by M. Shanahan and M. Mitchell

You can read more about Murray's work on his website and follow him on Twitter.

Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://disi.org), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/).

You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts. **You can now subscribe to the Many Minds newsletter here!**

We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website (https://disi.org/manyminds/), or follow us on Twitter: @ManyMindsPod.
Murray Shanahan is a professor of Cognitive Robotics at Imperial College London and a senior research scientist at DeepMind. He challenges our assumptions about AI consciousness and urges us to rethink how we talk about machine intelligence. We explore the dangers of anthropomorphizing AI, the limitations of current language in describing AI capabilities, and the fascinating intersection of philosophy and artificial intelligence.

Show notes and full references: https://docs.google.com/document/d/1ICtBI574W-xGi8Z2ZtUNeKWiOiGZ_DRsp9EnyYAISws/edit?usp=sharing

Prof Murray Shanahan:
https://www.doc.ic.ac.uk/~mpsha/ (see his selected publications)
https://scholar.google.co.uk/citations?user=00bnGpAAAAAJ&hl=en
https://en.wikipedia.org/wiki/Murray_Shanahan
https://x.com/mpshanahan

Interviewer: Dr. Tim Scarfe

Refs (links in the Google doc linked above):
- Role play with large language models
- The Waluigi effect
- "Conscious Exotica" - paper by Murray Shanahan (2016)
- "Simulators" - article by janus on LessWrong
- "Embodiment and the Inner Life" - book by Murray Shanahan (2010)
- "The Technological Singularity" - book by Murray Shanahan (2015)
- "Simulacra as Conscious Exotica" - paper by Murray Shanahan (a newer version of the original paper, focused on LLMs)
- A recent paper by Anthropic on using autoencoders to find features in language models (the "Scaling Monosemanticity" paper)
- Work by Peter Godfrey-Smith on octopus consciousness
- "Metaphors We Live By" - book by George Lakoff and Mark Johnson (1980)
- Work by Aaron Sloman on the concept of a "space of possible minds" (1984 article)
- Wittgenstein's "Philosophical Investigations" (published posthumously)
- Daniel Dennett's work on the "intentional stance"
- Alan Turing's original paper on the Turing Test (1950)
- Thomas Nagel's paper "What Is It Like to Be a Bat?" (1974)
- John Searle's Chinese Room argument (mentioned but not detailed)
- Work by Richard Evans on tackling reasoning problems
- Claude Shannon's quote on knowledge and control
- "Are We Bodies or Souls?" - book by Richard Swinburne
- Work by Ethan Perez and others at Anthropic on potential deceptive behavior in language models
- A paper by Murray Shanahan and Antonia Creswell on the "selection-inference framework"
- Work by Francois Chollet, particularly the ARC (Abstraction and Reasoning Corpus) challenge
- Elizabeth Spelke's work on core knowledge in infants
- Karl Friston's work on planning as inference (active inference)
- The film "Ex Machina" - Murray Shanahan was the scientific advisor
- Anthropic's constitutional AI approach
- The Loom system by Laria Reynolds and Kyle McDonald for visualizing conversation trees
- DeepMind's AlphaGo (mentioned multiple times as an example)
- The "Golden Gate Claude" experiment
- An interview Tim Scarfe conducted with University of Toronto students about a self-attention controllability theorem
- An interview with Irina Rish
- An interview Tim Scarfe conducted with Daniel Dennett
- An interview with Maria Santacaterina
- An interview with Philip Goff
- Nick Chater and Morten Christiansen's book "The Language Game: How Improvisation Created Language and Changed the World"
- Peter Singer's work from 1975 on ascribing moral status to conscious beings
- Demis Hassabis' discussion of the "ladder of creativity"
- B.F. Skinner and behaviorism
Artificial Intelligence (AI) is on every business leader's agenda. How do you ensure the AI systems you deploy are harmless and trustworthy? This month, Azeem picks some of his favorite conversations with leading AI safety experts to help you break through the noise. Today's pick is Azeem's 2021 conversation with veteran AI scientist Murray Shanahan, professor of cognitive robotics at Imperial College London and principal scientist at DeepMind.
Support us! https://www.patreon.com/mlst

Professor Murray Shanahan is a renowned researcher on sophisticated cognition and its implications for artificial intelligence. His 2016 article 'Conscious Exotica' explores the Space of Possible Minds, a concept first proposed by philosopher Aaron Sloman in 1984, which includes all the different forms of minds, from those of other animals to those of artificial intelligence. Shanahan rejects the idea of an impenetrable realm of subjective experience and argues that the majority of the space of possible minds may be occupied by non-natural variants, such as the 'conscious exotica' of which he speaks.

In his paper 'Talking About Large Language Models', Shanahan discusses the capabilities and limitations of large language models (LLMs). He argues that prompt engineering is a key element of advanced AI systems, as it involves exploiting prompt prefixes to adapt LLMs to various tasks. However, Shanahan cautions against ascribing human-like characteristics to these systems, as they are fundamentally different and lack a shared comprehension with humans. Even though LLMs can be integrated into embodied systems, it does not mean that they possess human-like language abilities. Ultimately, Shanahan concludes that although LLMs are formidable and versatile, we must be wary of over-simplifying their capacities and limitations.

YT version: https://youtu.be/BqkWpP3uMMU
Full references in the YT description.

[00:00:00] Introduction
[00:08:51] Consciousness and Conscious Exotica
[00:34:59] Slightly Conscious LLMs
[00:38:05] Embodiment
[00:51:32] Symbol Grounding
[00:54:13] Emergence
[00:57:09] Reasoning
[01:03:16] Intentional Stance
[01:07:06] Digression on the Chomsky show and Andrew Lampinen
[01:10:31] Prompt Engineering

Find Murray online:
https://www.doc.ic.ac.uk/~mpsha/
https://twitter.com/mpshanahan?lang=en
https://scholar.google.co.uk/citations?user=00bnGpAAAAAJ&hl=en

MLST Discord: https://discord.gg/aNPkGUQtc5
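The "prompt prefix" idea discussed above can be sketched in a few lines. The helper below is invented for illustration (it is not any particular library's API): it shows how prepending worked examples to a query steers the same underlying next-word predictor toward different tasks, which is all "few-shot" prompt engineering amounts to.

```python
def build_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, worked examples, then the query.

    Illustrative only; real prompt formats vary by model and provider.
    """
    lines = [instruction]
    for inp, out in examples:
        lines.append(f"{inp} -> {out}")
    # Leave the answer blank: the model's next-word prediction fills it in.
    lines.append(f"{query} -> ")
    return "\n".join(lines)

# The same base model, pointed at a translation task purely by the prefix.
translation_prompt = build_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "dog",
)
print(translation_prompt)
```

Swapping in sentiment-labelled examples instead of translation pairs would adapt the very same model to sentiment classification, with no retraining, which is Shanahan's point about how different this is from human task learning.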
Murray Shanahan, professor of cognitive robotics at Imperial College London and a senior research scientist at DeepMind, joins Azeem Azhar to discuss AI: where developments have exceeded expectations, where they have fallen short, and what the next steps are towards an artificial general intelligence.
In AI news, researchers from the University of Copenhagen develop a machine learning model that estimates the risk of death due to COVID at various stages of a hospital stay, including 80 percent accuracy in predicting whether a patient with COVID will require a respirator. The Joint AI Center has a double announcement, with the Tradewind Initiative, which seeks to develop an acquisition ecosystem to speed the delivery of AI capabilities, and with Blanket Purchase Agreements for AI testing and evaluation services. Kaggle publishes a survey on the 2020 State of Data Science and ML, which examines information from ~2000 data scientists about their jobs and their experiences. PeopleTec releases an "Overhead MNIST," a dataset containing benchmark satellite imagery for 10 categories (parking lots, cars, planes, storage tanks, and others). Epic's Unreal Engine introduces the MetaHuman Creator for release later this year, which purports to create ultra-realistic visuals for virtual human characters; Andy uses the moment to describe the "Uncanny Valley," which the Epic tech might manage to leap out of. And researchers from Carnegie Mellon and George Washington show that, like language transformers, image representations contain human-like biases. In research, researchers at the Israel Institute of Technology create a Ramanujan Machine, which can generate conjectures for mathematical constants, without proof. Researchers demonstrate initial steps of reconstructing video from brain activity. The report of the week examines U.S. public opinion on AI, with declining support for development and divided views on facial recognition. DeepMind London approaches the topic of common sense from the viewpoint of animals. And the book of the week comes from the author of the aforementioned paper, Murray Shanahan, and his 2010 book Embodiment and the Inner Life. Listeners Survey: https://bit.ly/3bqyiHk
Society should be prepared for artificial intelligences to surpass those of humans, warns robotics specialist Murray Shanahan. In his book, he explains how the associated risks can be contained. By Vera Linß, www.deutschlandfunkkultur.de, Buchkritik (book review). Available until: 19.01.2038 04:14. Direct link to the audio file.
AI researchers around the world are trying to create a general purpose learning system that can learn to solve a broad range of problems without being taught how. Koray Kavukcuoglu, DeepMind's Director of Research, describes the journey to get there, and takes Hannah on a whistle-stop tour of DeepMind's HQ and its research. If you have a question or feedback on the series, message us on Twitter (@DeepMindAI using the hashtag #DMpodcast) or email us at podcast@deepmind.com. Further reading: OpenAI: An overview of neural networks and the progress that has been made in AI Shane Legg, DeepMind co-founder: Measuring machine intelligence at the 2010 Singularity Summit Shane Legg and Marcus Hutter: Paper on defining machine intelligence Demis Hassabis: Talk on the history, frontiers and capabilities of AI Robert Wiblin: Positively shaping the development of artificial intelligence Asilomar AI Principles Richard S. Sutton and Andrew G. Barto: Reinforcement Learning: An Introduction Interviewees: Koray Kavukcuoglu, Director of Research; Trevor Back, Product Manager for DeepMind's science research; research scientists Raia Hadsell and Murray Shanahan; and DeepMind CEO and co-founder, Demis Hassabis. Credits: Presenter: Hannah Fry Editor: David Prest Senior Producer: Louisa Field Producers: Amy Racs, Dan Hardoon Binaural Sound: Lucinda Mason-Brown Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet) Commissioned by DeepMind
Forget what sci-fi has told you about superintelligent robots that are uncannily human-like; the reality is more prosaic. Inside DeepMind's robotics laboratory, Hannah explores what researchers call 'embodied AI': robot arms that are learning tasks like picking up plastic bricks, which humans find comparatively easy. Discover the cutting-edge challenges of bringing AI and robotics together, and of learning from scratch how to perform tasks. She also explores some of the key questions about using AI safely in the real world. If you have a question or feedback on the series, message us on Twitter (@DeepMindAI using the hashtag #DMpodcast) or email us at podcast@deepmind.com. Further reading: Blogs on AI safety and further resources from Victoria Krakovna The Future of Life Institute: The risks and benefits of AI The Wall Street Journal: Protecting Against AI's Existential Threat TED Talks: Max Tegmark - How to get empowered, not overpowered, by AI Royal Society lecture series sponsored by DeepMind: You & AI Nick Bostrom: Superintelligence: Paths, Dangers and Strategies (book) OpenAI: Learning from Human Preferences DeepMind blog: Learning from human preferences DeepMind blog: Learning by playing - how robots can tidy up after themselves DeepMind blog: AI safety Interviewees: Software engineer Jackie Kay and research scientists Murray Shanahan, Victoria Krakovna, Raia Hadsell and Jan Leike. Credits: Presenter: Hannah Fry Editor: David Prest Senior Producer: Louisa Field Producers: Amy Racs, Dan Hardoon Binaural Sound: Lucinda Mason-Brown Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet) Commissioned by DeepMind
In this episode Dr. Till Korten returns to talk about possible dystopian scenarios where a superintelligent Artificial Intelligence (AI) is suddenly unleashed upon this world. We argue that, although it is surely not a very pleasant thought, if the AI happens to be conscious then it would not be an absolute tragedy. But in order to get there we argue back and forth on what consciousness might be while trying to get rid of our anthropocentric biases. Easier said than done! Oh, and apologies to Dr. Murray Shanahan for re-christening him as "Michael Shanahan"! Thanks to Dr. Till Korten for joining us, we hope to get you back one of these days! To Dr. Shanahan's essay: https://aeon.co/essays/beyond-humans-what-other-kinds-of-minds-might-be-out-there To Thomas Nagel's landmark paper: https://www.jstor.org/stable/2183914?seq=1#metadata_info_tab_contents Find us in: Our website: https://tillthebottom.com/home/ Our YouTube channel: https://www.youtube.com/channel/UCMXM2Vc0d21wjDt5chP26sQ?view_as=subscriber Our Facebook page: https://www.facebook.com/Tillthebottom/ Follow us on Twitter: https://twitter.com/Tillthebottom
Adam Rutherford takes the show to Dublin this week, to wrestle with great matters of biological complexity. Trinity College Dublin has organised a mass gathering of some of the world's leading researchers in the life sciences to mark the 75th anniversary of one of the most influential series of lectures in the 20th century. The talks were delivered by the celebrated physicist Erwin Schrödinger in 1943, who applied his mind to a fundamental biological question: what is life? Some of his ideas were an influence on Francis Crick as he worked on the structure of DNA. Seventy-five years on, Adam is joined by four of the many scientists delivering their own lectures this week. They tackle subjects of complexity in biology, ranging from the origin of complex life, the increasingly messy structure of life's evolutionary tree, the functioning of the human brain as a network of many component parts, and the place of neuroscience discoveries in the building of artificial intelligences. The guests are: Nick Lane, evolutionary biochemist at University College London; Beth Shapiro, evolutionary geneticist at the University of California Santa Cruz; Danielle Bassett, physicist and neuroscientist at the University of Pennsylvania; and Murray Shanahan, artificial intelligence researcher at Imperial College London and Google's DeepMind. The podcast version ends with a question and answer session with the show's audience, who include a surprise celebrity guest. Producer: Andrew Luck-Baker.
MURRAY SHANAHAN is a professor of cognitive robotics at Imperial College London and a senior research scientist at DeepMind. The Conversation: https://www.edge.org/conversation/murray_shanahan-the-space-of-possible-minds
Murray Shanahan (Cognitive Robotics, Imperial College London) discusses his work on the dynamics of cortical networks and their non-linear properties. Interviewed by Paul Verschure and Tony Prescott.
Murray Shanahan, Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind: "Intelligence, as I understand it, is the ability to make better decisions and to achieve one's goals across different environments. When we truly have strong AI, that is, AI at human level, it will mean the technological singularity has arrived."
Murray Shanahan was one of the scientific advisors on Ex Machina. He is also a Research Scientist at DeepMind and Professor of Cognitive Robotics at Imperial College London. His book Embodiment and the Inner Life served as inspiration for Alex Garland while he was writing the screenplay for Ex Machina.
What is A.I. or artificial intelligence but the 'space of possible minds', argues Murray Shanahan, scientific advisor on the movie Ex Machina and Professor of Cognitive Robotics at Imperial College London. In this special episode of the a16z Podcast, brought to you on the ground from London, Shanahan -- along with journalist-turned-entrepreneur Azeem Azhar (who also curates The Exponential View newsletter on AI and more) and The Economist Deputy Editor Tom Standage (the author of several tech history books) -- discusses the past, present, and future of A.I. ... as well as how it fits (or doesn't fit) with machine learning and deep learning. But where are we now in the A.I. evolution? What players do we think will lead, if not win, the current race? And how should we think about issues such as ethics and the automation of jobs without descending into obvious extremes? All this and more, including a surprise easter egg in Ex Machina shared by Shanahan, whose work influenced the movie.
Humans are social creatures using communications that are ‘regulated’ by trust, ethics, social systems and law. Our world of interactions and networks is increasingly complex and automated. What happens when events and decisions become too fast for us to perceive, let alone control? Professors Huw Price and Murray Shanahan join Dr Jonathan Cave and Dr Hermann Hauser.
Welcome to the Philosopher's Arms - a place where moral dilemmas, philosophical ideas and the real world meet for a chat and a drink. Each week Matthew Sweet takes a dilemma with real philosophical pedigree and sees how it matters in the everyday world. This week Matthew discovers that his adopted daughter is a robot. Should he treat her any differently from before? She's indistinguishable from a human, so should she have the same status as a human? Philosopher Barry Smith, autism mentor Robyn Steward, and artificial intelligence researcher Murray Shanahan all join Matthew for a drink and a bit of advice. Each week in the Philosopher's Arms, Matthew is joined by a cast of philosophers and attendant experts to show how the dilemmas we face in real life connect us to some of the trickiest philosophical problems ever thought up. En route we'll learn about the thinking of such luminaries as Kant, Hume, Aristotle and Wittgenstein. All recorded in a pub in front of a live audience ready to tap their glasses, demand clarity and ask: what's this all got to do with me? So questions such as "should the government put Prozac in the water supply?" and "my daughter is a robot, how should I treat her?" lead us into dilemmas, problems and issues from the treatment of mental illness to the structure of financial markets, from animal rights to homosexuality. And they will challenge a few of the assumptions and intuitions about life that we carry round with us. Producer James Cook.
Last Friday (June 23rd), as part of the 10th meeting of the Association for the Scientific Study of Consciousness in Oxford, Igor Aleksander, Murray Shanahan and I jointly offered a tutorial on machine consciousness. I started with a discussion of general philosophical issues, the approach of Aaron Sloman and myself, and Pentti Haikonen's model. Igor Aleksander followed with a description of his axiomatic approach, a demo of his system in action, and a quick survey of the work of Franklin and Baars, and of Krichmar and Edelman. Murray Shanahan took the third hour with a description of his own approach, showing how it unifies the Global Workspace approach of Baars with the Simulation Hypothesis approach of Cotterill and Hesslow. He also described Holland's approach, showing the latest videos of his spooky robot Cronos.

Some general information about the tutorial can be found at http://www.assc10.org.uk/workshops.html#A1.

Media:
PodSlides: iPod-ready video (.mp4; 40.5 MB; 47 min 25 sec)
Audio (.mp3; 21.7 MB; 47 min 26 sec)
Powerpoint file (.ppt; 280 KB)
"Virtual Machines and Consciousness" (.pdf; 232 KB), a paper by Aaron Sloman and myself from the Journal of Consciousness Studies, which is the basis of the second part of my presentation.