London Futurists

Anticipating and managing exponential impact - hosts David Wood and Calum Chace

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.

From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.



    • Latest episode: May 28, 2025
    • New episodes: monthly
    • Average duration: 37m
    • Episodes: 114



    Latest episodes from London Futurists

    Anticipating an Einstein moment in the understanding of consciousness, with Henry Shevlin

    May 28, 2025 · 42:20


    Our guest in this episode is Henry Shevlin. Henry is the Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where he also co-directs the Kinds of Intelligence program and oversees educational initiatives. He researches the potential for machines to possess consciousness, the ethical ramifications of such developments, and the broader implications for our understanding of intelligence. In his 2024 paper, “Consciousness, Machines, and Moral Status,” Henry examines the recent rapid advancements in machine learning and the questions they raise about machine consciousness and moral status. He suggests that public attitudes towards artificial consciousness may change swiftly, as human-AI interactions become increasingly complex and intimate. He also warns that our tendency to anthropomorphise may lead to misplaced trust in, and emotional attachment to, AIs.

    Note: this episode is co-hosted by David and Will Millership, the CEO of a non-profit called Prism (Partnership for Research Into Sentient Machines). Prism is seeded by Conscium, a startup where both Calum and David are involved, and which, among other things, is researching the possibility and implications of machine consciousness. Will and Calum will be releasing a new Prism podcast focusing entirely on Conscious AI, and the first few episodes will be in collaboration with the London Futurists Podcast.

    Selected follow-ups:
    PRISM podcast
    Henry Shevlin - personal site
    Kinds of Intelligence - Leverhulme Centre for the Future of Intelligence
    Consciousness, Machines, and Moral Status - 2024 paper by Henry Shevlin
    Apply rich psychological terms in AI with care - by Henry Shevlin and Marta Halina
    What insects can tell us about the origins of consciousness - by Andrew Barron and Colin Klein
    Consciousness in Artificial Intelligence: Insights from the Science of Consciousness - by Patrick Butlin, Robert Long, et al
    Association for the Study of Consciousness

    Other researchers mentioned: Blake Lemoine, Thomas Nagel, Ned Block, Peter Senge, Galen Strawson, David Chalmers, David Benatar, Thomas Metzinger, Brian Tomasik, Murray Shanahan

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The case for a conditional AI safety treaty, with Otto Barten

    May 9, 2025 · 38:12


    How can a binding international treaty be agreed and put into practice, when many parties are strongly tempted to break the rules of the agreement, for commercial or military advantage, and when cheating may be hard to detect? That's the dilemma we'll examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI.

    Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto, advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint which we'll be discussing today, “International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty”.

    Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.

    Selected follow-ups:
    Existential Risk Observatory
    There Is a Solution to AI's Existential Risk Problem - Time
    International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty - Otto Barten and colleagues
    The Precipice: Existential Risk and the Future of Humanity - book by Toby Ord
    Grand futures and existential risk - Lecture by Anders Sandberg in London attended by Otto
    PauseAI
    StopAI
    Responsible Scaling Policies - METR
    Meta warns of 'worse' experience for European users - BBC News
    Accidental Nuclear War: a Timeline of Close Calls - FLI
    The Vulnerable World Hypothesis - Nick Bostrom
    Semiconductor Manufacturing Optics - Zeiss
    California Institute for Machine Consciousness
    Tipping point for large-scale social change? Just 25 percent - Penn Today

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Humanity's final four years? with James Norris

    Apr 30, 2025 · 49:36


    In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks.

    Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries.

    Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.

    Selected follow-ups:
    James Norris website
    Upgrade your life & legacy - Upgradable
    The 7 Habits of Highly Effective People (Stephen Covey)
    Beneficial AI 2017 - Asilomar conference
    "...superintelligence in a few thousand days" - Sam Altman blogpost
    Amara's Law - DevIQ
    The Probability of Nuclear War (JFK estimate)
    AI Designs Chemical Weapons - The Batch
    The Vulnerable World Hypothesis - Nick Bostrom
    We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
    Instrumental convergence - Wikipedia
    Neanderthal extinction - Wikipedia
    Matrioshka brain - Wikipedia
    Will there be a 'WW3' before 2050? - Manifold prediction market
    Existential Safety Action Pledge
    An Urgent Call for Global AI Governance - IAIGA petition
    Build your survival sanctuary

    Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Human extinction: thinking the unthinkable, with Sean ÓhÉigeartaigh

    Apr 23, 2025 · 42:27


    Our subject in this episode may seem grim – it's the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high energy physics experiments causing a cataclysmic rupture in space and time.

    These scenarios aren't pleasant to contemplate, but there's a school of thought that urges us to take them seriously – to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Sean ÓhÉigeartaigh. Sean is the author of a recent summary article from Cambridge University Press that we'll be discussing, “Extinction of the human species: What could cause it and how likely is it to occur?”

    Sean is presently based in Cambridge where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that, he managed research activities at the Future of Humanity Institute in Oxford.

    Selected follow-ups:
    Seán Ó hÉigeartaigh - Leverhulme Centre Profile
    Extinction of the human species - by Sean ÓhÉigeartaigh
    Herman Kahn - Wikipedia
    Moral.me - by Conscium
    Classifying global catastrophic risks - by Shahar Avin et al
    Defence in Depth Against Human Extinction - by Anders Sandberg et al
    The Precipice - book by Toby Ord
    Measuring AI Ability to Complete Long Tasks - by METR
    Cold Takes - blog by Holden Karnofsky
    What Comes After the Paris AI Summit? - Article by Sean
    ARC-AGI - by François Chollet
    Henry Shevlin - Leverhulme Centre profile
    Eleos (includes Rosie Campbell and Robert Long)
    NeurIPS talk by David Chalmers
    Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
    The Unilateralist's Curse - by Nick Bostrom and Anders Sandberg

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The best of times and the worst of times, updated, with Ramez Naam

    Mar 26, 2025 · 45:07


    Our guest in this episode, Ramez Naam, is described on his website as “climate tech investor, clean energy advocate, and award-winning author”. But that hardly starts to convey the range of deep knowledge that Ramez brings to a wide variety of fields. It was his 2013 book, “The Infinite Resource: The Power of Ideas on a Finite Planet”, that first alerted David to the breadth of scope of his insight about future possibilities – both good possibilities and bad possibilities. He still vividly remembers its opening words, quoting Charles Dickens from “A Tale of Two Cities”:

    Quote: “‘It was the best of times; it was the worst of times' – the opening line of Charles Dickens's 1859 masterpiece applies equally well to our present era. We live in unprecedented wealth and comfort, with capabilities undreamt of in previous ages. We live in a world facing unprecedented global risks—risks to our continued prosperity, to our survival, and to the health of our planet itself. We might think of our current situation as ‘A Tale of Two Earths'.” End quote.

    Twelve years after the publication of “The Infinite Resource”, it seems that the Earth has become even better, but also even worse. Where does this leave the power of ideas? Or do we need more than ideas, as ominous storm clouds continue to gather on the horizon?

    Selected follow-ups:
    Ramez Naam - personal website
    The Infinite Resource: The Power of Ideas on a Finite Planet
    The Nexus Trilogy (Nexus, Crux, Apex)
    Jesse Jenkins (Princeton)
    Six Degrees: Our Future on a Hotter Planet - book by Mark Lynas
    1991 eruption of Mount Pinatubo - Wikipedia
    We cool Earth, with reflective clouds - Make Sunsets
    Direct Air Capture (DAC) - Wikipedia
    Frontier: An advance market commitment to accelerate carbon removal
    Toward a Responsible Solar Geoengineering Research Program - by David Keith
    South Korea scales down plans for nuclear power
    Microsoft chooses infamous nuclear site for AI power
    Machines of Loving Grace: How AI Could Transform the World for the Better - Essay by Dario Amodei

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    PAI at Paris: the global AI ecosystem evolves, with Rebecca Finlay

    Feb 27, 2025 · 38:51


    In this episode, our guest is Rebecca Finlay, the CEO at Partnership on AI (PAI). Rebecca previously joined us in Episode 62, back in October 2023, in what was the run-up to the Global AI Safety Summit in Bletchley Park in the UK. Times have moved on, and earlier this month, Rebecca and the Partnership on AI participated in the latest global summit in that same series, held this time in Paris. This summit, breaking with the previous naming, was called the Global AI Action Summit. We'll be hearing from Rebecca how things have evolved since we last spoke – and what the future may hold.

    Prior to joining Partnership on AI, Rebecca founded the AI & Society program at global research organization CIFAR, one of the first international, multistakeholder initiatives on the impact of AI in society. Rebecca's insights have been featured in books and media including The Financial Times, The Guardian, Politico, and Nature Machine Intelligence. She is a Fellow of the American Association for the Advancement of Science and sits on advisory bodies in Canada, France, and the U.S.

    Selected follow-ups:
    Partnership on AI
    Rebecca Finlay
    Our previous episode featuring Rebecca
    CIFAR (The Canadian Institute for Advanced Research)
    "It is more than time that we move from science fiction" - remarks by Anne Bouverot
    International AI Safety Report 2025 - report from expert panel chaired by Yoshua Bengio
    The Inaugural Conference of the International Association for Safe and Ethical AI (IASEAI)
    A.I. Pioneer Yoshua Bengio Proposes a Safe Alternative Amid Agentic A.I. Hype
    US and UK refuse to sign Paris summit declaration on ‘inclusive' AI
    Current AI
    Collaborative event on AI accountability
    CERN for AI
    AI Summit Day 1: Harnessing AI for the Future of Work
    The Economic Singularity
    Why is machine consciousness important? (Conscium)
    Brain, Mind & Consciousness (CIFAR)

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    AI agents: challenges ahead of mainstream adoption, with Tom Davenport

    Feb 3, 2025 · 33:29


    The most highly anticipated development in AI this year is probably the expected arrival of AI agents, also referred to as “agentic AI”. We are told that AI agents have the potential to reshape how individuals and organizations interact with technology.

    Our guest to help us explore this is Tom Davenport, Distinguished Professor in Information Technology and Management at Babson College, and a globally recognized thought leader in the areas of analytics, data science, and artificial intelligence. Tom has written, co-authored, or edited about twenty books, including "Competing on Analytics" and "The AI Advantage." He has worked extensively with leading organizations and has a unique perspective on the transformative impact of AI across industries. He has recently co-authored an article in the MIT Sloan Management Review, “Five Trends in AI and Data Science for 2025”, which included a section on AI agents – which is why we invited him to talk about the subject.

    Selected follow-ups:
    Tom Davenport - personal site
    Five Trends in AI and Data Science for 2025 - MIT Sloan Management Review
    Michael Martin Hammer - Wikipedia
    AI winter - Wikipedia
    AI is coming for the OnlyFans chat industry - Fortune
    How Gen AI and Analytical AI Differ — and When to Use Each - Harvard Business Review
    Truth Terminal - The AI Bot That Became a Crypto Millionaire - a16z
    Jim Simons - Wikipedia
    Why The "Godfather of AI" Now Fears His Own Creation - Curt Jaimungal interviews Geoffrey Hinton
    Attention Is All You Need - Google researchers
    Apple suspends error-strewn AI generated news alerts - BBC News
    Gen AI cuts costs by 30% - London Futurists Podcast episode featuring David Wakeling, partner at A&O Shearman
    The path to agentic automation is UiPath - UiPath
    Microsoft CEO Predicts: "AI Agents Will Replace ALL Software" - AI Insights Explorer
    NVIDIA CEO Jensen Huang Keynote at CES 2025 - Nvidia
    Pioneering Safe, Efficient AI - Conscium
    A New Survey Of Generative AI Shows Lots Of Work To Do - October 2023 article by Tom Davenport
    Gen AI: Too much spend, too little benefit? - Goldman Sachs

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Post-labour economics, with David Shapiro

    Jan 23, 2025 · 42:49


    In this episode, we return to a theme which is likely to become increasingly central to public discussion in the months and years ahead. To use a term coined by this podcast's co-host Calum Chace, this theme is the Economic Singularity, namely the potential all-round displacement of humans from the workforce by ever more capable automation. That leads to the question: what are our options for managing the transition of society to increasing technological unemployment and technological underemployment?

    Our guest, who will be sharing his thinking on these questions, is the prolific writer and YouTuber David Shapiro. As well as keeping on top of fast-changing news about innovations in AI, David has been developing a set of ideas he calls post-labour economics – how an economy might continue to function even if humans can no longer gain financial rewards in direct return for their labour.

    Selected follow-ups:
    David Shapiro's Substack
    David Shapiro's channel on YouTube
    Julia McCoy's channel on YouTube
    Next stop: Miami - Waymo
    Resource Based Economy
    Debt: The First 5,000 Years - book by David Graeber
    Broken Money: Why Our Financial System is Failing Us and How We Can Make it Better - book by Lyn Alden
    The Bitcoin Standard: The Decentralized Alternative to Central Banking - book by Saifedean Ammous
    Normalcy bias - Wikipedia
    Why Nations Fail: The Origins of Power, Prosperity, and Poverty - book by Daron Acemoğlu and James A. Robinson
    Principles for Dealing with the Changing World Order: Why Nations Succeed and Fail - book by Ray Dalio
    Vulture Capitalism: Corporate Crimes, Backdoor Bailouts, and the Death of Freedom - book by Grace Blakeley
    The Economic Singularity: Artificial Intelligence and Fully Automated Luxury Capitalism - book by Calum Chace

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Longevity activism at 82, 86, and beyond, with Kenneth Scott and Helga Sands

    Jan 10, 2025 · 45:10


    Our guests in this episode have been described as the world's two oldest scientifically astute longevity activists. They are Kenneth Scott, aged 82, who is based in Florida, and Helga Sands, aged 86, who lives in London.

    David has met both of them several times at a number of longevity events, and they always impress him, not only with their vitality and good health, but also with the level of knowledge and intelligence they apply to the question of which treatments are the best, for them personally and for others, to help keep people young and vibrant.

    Selected follow-ups:
    Waiting For God - 1990s BBC Comedy
    Adelle Davis, Nutritionist
    Roger J. Williams, Biochemist
    The Importance of Maintaining a Low Omega-6/Omega-3 Ratio
    Life Extension Magazine
    California Age Management Institute
    Fibrinogen and aging
    Professor Angus Dalgleish, Nuffield Health
    About Aubrey de Grey speaking at the Royal Institution
    George Church, Geneticist
    James Kirkland, Mayo Clinic
    Daniel Munoz-Espin, Cambridge
    Nobel Prize for John Gurdon and Shinya Yamanaka
    VSELs and S.O.N.G. laser
    Xtend Optimal Health
    Follistatin gene therapy, Minicircle
    Exosomes vs Stem Cells
    Prevent and Reverse Heart Disease - book by Caldwell Esselstyn Jr
    Dasatinib and Quercetin (senolytics)
    We reverse atherosclerosis - Repair Biotechnologies
    Bioreactor-Grown Mitochondria - Mitrix
    Nobel Winner Shinya Yamanaka: Cell Therapy Is ‘Very Promising' For Cancer, Parkinson's, More
    Death of the world's oldest man, 25th Nov 2024
    Blueprint protocol - Bryan Johnson

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Models for society when humans have zero economic value, with Jeff LaPorte

    Jan 2, 2025 · 41:02


    Our guest in this episode is Jeff LaPorte, a software engineer, entrepreneur and investor based in Vancouver, who writes Road to Artificia, a newsletter about discovering the principles of post‑AI societies.

    Calum recently came across Jeff's article “Valuing Humans in the Age of Superintelligence: HumaneRank” and thought it had some good, original ideas, so we wanted to invite Jeff onto the podcast and explore them.

    Selected follow-ups:
    Jeff LaPorte personal business website
    Road to Artificia: A newsletter about discovering the principles of societies post‑AI
    Valuing Humans in the Age of Superintelligence: HumaneRank
    Ideas Lying Around - article by Cory Doctorow about a famous saying by Milton Friedman
    PageRank - Wikipedia
    Nosedive (Black Mirror episode) - IMDb
    The Economic Singularity - book by Calum Chace
    World Chess Championship 2024 - Wikipedia
    WALL.E (2008 movie) - IMDb
    A day in the life of Asimov, 2045 - short story by David Wood
    Why didn't electricity immediately change manufacturing? - by Tim Harford, BBC
    Responsible use of artificial intelligence in government - Government of Canada
    Bipartisan House Task Force Report on Artificial Intelligence - U.S. House of Representatives

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    From ineffective altruism to effective altruism? with Stefan Schubert

    Dec 26, 2024 · 34:12


    Our subject in this episode is altruism – our human desire and instinct to assist each other, making some personal sacrifices along the way. More precisely, our subject is the possible future of altruism – a future in which our philanthropic activities – our charitable donations, and how we spend our discretionary time – could have a considerably greater impact than at present. The issue is that many of our present activities, which are intended to help others, aren't particularly effective.

    That's the judgement reached by our guest today, Stefan Schubert. Stefan is a researcher in philosophy and psychology, currently based in Stockholm, Sweden, and has previously held roles at the LSE and the University of Oxford. Stefan is the co-author of the recently published book “Effective Altruism and the Human Mind”.

    Selected follow-ups:
    Stefan Schubert - Effective Altruism
    Effective Altruism and the Human Mind: The Clash Between Impact and Intuition - Oxford University Press (open access)
    Centre for Effective Altruism
    Professor Nadira Faber - Uehiro Institute, Oxford
    What are the best charities to support in 2024? - Giving What We Can
    Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed - Time
    Virtues for Real-World Utilitarians - by Stefan Schubert & Lucius Caviola, Utilitarianism
    Deworming - Effective Altruism Forum
    What we know about Musk's cost-cutting mission - BBC article about DOGE
    What is your p(doom)? with Darren McKee
    Longtermism - Wikipedia

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The global energy transition: an optimistic assessment, with Amory Lovins

    Dec 16, 2024 · 34:34


    Our guest in this episode is Amory Lovins, a distinguished environmental scientist, and co-founder of RMI, which was founded in 1982 as Rocky Mountain Institute. It's what he calls a "think, do, and scale tank", with 700 people in 62 countries, and a budget of well over $100m a year.

    For over five decades, Amory has championed innovative approaches to energy systems, advocating for a world where energy services are delivered with least cost and least impact. He has advised all manner of governments, companies, and NGOs, and published 31 books and over 900 papers. It's an over-used word, but in this case it is justified: Amory is a true thought leader in the global energy transition.

    Selected follow-ups:
    Inside Amory's Brain - RMI
    Get to know us - RMI
    Books by Amory B. Lovins - Goodreads
    Reinventing Fire - RMI
    Integrative Design: A Practice to Tackle Complex Challenges - Stanford d.school
    What is Integrative Design? - RMI

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Building brain-like AIs, with Alexander Ororbia

    Dec 9, 2024 · 47:08


    Some people say that all that's necessary to improve the capabilities of AI is to scale up existing systems. That is, to use more training data, to have larger models with more parameters in them, and more computer chips to crunch through the training data. However, in this episode, we'll be hearing from a computer scientist who thinks there are many other options for improving AI. He is Alexander Ororbia, a professor at the Rochester Institute of Technology in New York State, where he directs the Neural Adaptive Computing Laboratory.

    David had the pleasure of watching Alex give a talk at the AGI 2024 conference in Seattle earlier this year, and found it fascinating. After you hear this episode, we hope you reach a similar conclusion.

    Selected follow-ups:
    Alexander Ororbia - Rochester Institute of Technology
    Alexander G. Ororbia II - Personal website
    AGI-24: The 17th Annual AGI Conference - AGI Society
    Joseph Tranquillo - Bucknell University
    Hopfield network - Wikipedia
    Karl Friston - UCL
    Predictive coding - Wikipedia
    Mortal Computation: A Foundation for Biomimetic Intelligence - Quantitative Biology
    The free-energy principle: a unified brain theory? - Nature Reviews Neuroscience
    I Am a Strange Loop (book by Douglas Hofstadter) - Wikipedia
    Mark Solms - Wikipedia
    Conscium: Pioneering Safe, Efficient AI
    The Hidden Spring: A Journey to the Source of Consciousness (book by Mark Solms)
    Carver Mead - Wikipedia
    Event camera (includes Dynamic Vision Sensors) - Wikipedia
    ICRA (International Conference on Robotics and Automation)
    Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
    A Review of Neuroscience-Inspired Machine Learning
    ngc-learn
    Taking Neuromorphic Computing to the Next Level with Loihi 2 Technology Brief - Intel

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    To sidestep death, preserve your connectome, with Ariel Zeleznikow-Johnston

    Nov 18, 2024 · 41:23


    In David's life so far, he has read literally hundreds of books about the future. Yet none has had such a provocative title as this: “The future loves you: How and why we should abolish death”. That's the title of the book written by the guest in this episode, Ariel Zeleznikow-Johnston. Ariel is a neuroscientist, and a Research Fellow at Monash University, in Melbourne, Australia.

    One of the key ideas in Ariel's book is that so long as your connectome – the full set of the synapses in your brain – continues to exist, you continue to exist. Ariel also claims that brain preservation – the preservation of the connectome, long after we have stopped breathing – is already affordable enough to be provided to essentially everyone. These claims raise all kinds of questions, which are addressed in this conversation.

    Selected follow-ups:
    Dr Ariel Zeleznikow-Johnston - personal website
    Book webpage - includes details of when Ariel is speaking in the UK and elsewhere
    Monash Neuroscience of Consciousness
    Deep hypothermic circulatory arrest - Wikipedia
    Sentience and the Origins of Consciousness - article by Karl Friston that mentions bacteria
    List of advisors to Conscium
    Does the UK use £15,000, £30,000 or a £70,000 per QALY cost effectiveness threshold? by Jason Shafrin
    Researchers simulate an entire fly brain on a laptop. Is a human brain next? - UC Berkeley News
    What are memories made of? A survey of neuroscientists on the structural basis of long-term memory - Preprint by Ariel Zeleznikow-Johnston, Emil Kendziorra, and Andrew McKenzie

    Related previous episodes:
    Ep 91: The low-cost future of preserving brains, with Jordan Sparks
    Ep 77: The case for brain preservation, with Kenneth Hayworth

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Insights from 15 years leading the self-driving vehicle industry, with Sterling Anderson

    Nov 5, 2024 · 41:24


    Our guest in this episode is Sterling Anderson, a pioneer of self-driving vehicles. With a master's degree and a PhD from MIT, Sterling led the development and launch of the Tesla Model X, and then led the team that delivered Tesla Autopilot. In 2017 he co-founded Aurora, along with Chris Urmson, who was a founder and CTO of Google's self-driving car project, which is now Waymo, and also Drew Bagnell, who co-founded and led Uber's self-driving team.

    Aurora is concentrating on automating long-distance trucks, and expects to be the first company to deploy fully self-driving trucks in the US when it deploys big driverless trucks (16 tons and more) between Dallas and Houston in April 2025.

    Self-driving vehicles will be one of the most significant technologies of this decade, and we are delighted that one of the stars of the sector, Sterling, is joining us to share his perspectives.

    Selected follow-ups:
    The future of transportation is here - Aurora website
    Leadership Team - Aurora website

    Previous episodes also featuring self-driving vehicles:
    Ep 58: Whatever happened to self-driving cars, with Timothy Lee
    Ep 26: Peter James, best-selling crime-writer and transhumanist

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The race for AI supremacy, with Parmy Olson

    Oct 29, 2024 · 46:49


    Our guest in this episode is Parmy Olson, a columnist for Bloomberg covering technology. Parmy has previously been a reporter for the Wall Street Journal and for Forbes. Her first book, “We Are Anonymous”, shed fascinating light on what the subtitle calls “the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency”.

    But her most recent book illuminates a set of high-stakes relations with potentially even bigger consequences for human wellbeing. The title is “Supremacy: AI, ChatGPT and the Race That Will Change the World”. The race is between two remarkable individuals, Sam Altman of OpenAI and Demis Hassabis of DeepMind, who are each profoundly committed to building AI that exceeds human capabilities in all aspects of reasoning.

    Selected follow-ups:
    Parmy Olson, Bloomberg
    Supremacy: AI, ChatGPT, and the Race that Will Change the World
    AI Superpowers: China, Silicon Valley and the new world order - book by Kai-Fu Lee
    The Coming Wave - book by Mustafa Suleyman
    Bromance Gone Sour: OpenAI and Microsoft's Partnership Hits a Rough Patch - Geekflare
    For our Posterity - essay by Leopold Aschenbrenner
    OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors
    Do Computers Have Feelings? Don't Let Google Alone Decide - article by Parmy Olson about Blake Lemoine
    Conscium - Pioneering Safe, Efficient AI

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    A narrow path to a good future with AI, with Andrea Miotti

    Oct 21, 2024 · 40:46


    Our guest in this episode is Andrea Miotti, the founder and executive director of ControlAI. On their website, ControlAI have the tagline, “Fighting to keep humanity in control”. Control over what, you might ask. The website answers: control deepfakes, control scaling, control foundation models, and, yes, control AI.

    The latest project from ControlAI is called “A Narrow Path”, which is a comprehensive policy plan split into three phases: Safety, Stability, and Flourishing. To be clear, the envisioned flourishing involves what is called “Transformative AI”. This is no anti-AI campaign, but rather an initiative to “build a robust science and metrology of intelligence, safe-by-design AI engineering, and other foundations for transformative AI under human control”.

    The initiative has already received lots of feedback, both positive and negative, which we discuss.

    Selected follow-ups:
    A Narrow Path - main website
    ControlAI
    Conjecture - Redefining AI Safety
    What is Agentic AI - Interface.AI
    Chat GPT's new O1 model escaped its environment to complete “impossible” hacking task - by Mihai Andrei
    Biological Weapons Convention - United Nations
    Poisoning of Sergei and Yulia Skripal - Wikipedia (use of Novichok nerve agent in Salisbury, UK)
    Gathering of AI Safety Institutes in November in San Francisco
    Conscium - Pioneering safe, efficient AI
    The UK's APPG (All Party Parliamentary Group) on AI

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Gen AI cuts costs by 30%: lessons from a leading law firm, with David Wakeling

    Oct 11, 2024 · 35:30


    Our guest in this episode is David Wakeling, a partner at A&O Shearman, which became the world's third largest law firm in May, thanks to the merger of Allen and Overy, a UK "magic circle" firm, with Shearman & Sterling of New York.

    David heads up a team within the firm called the Markets Innovation Group (MIG), which consists of lawyers, developers and technologists, and is seeking to disrupt the legal industry. He also leads the firm's AI Advisory practice, through which the firm is currently advising 80 of the largest global businesses on the safe deployment of AI.

    One of the initiatives David has led is the development and launch of ContractMatrix, in partnership with Microsoft and Harvey, an OpenAI-backed, GPT-4-based large language model that has been fine-tuned for the legal industry. ContractMatrix is a contract drafting and negotiation tool powered by generative AI. It was tested and honed by 1,000 of the firm's lawyers prior to launch, to mitigate risks like hallucinations. The firm estimates that the tool is saving up to seven hours from the average contract review, which is around a 30% efficiency gain. As well as internal use by 2,000 of its lawyers, it is also licensed to clients.

    This is the third time we have looked at the legal industry on the podcast. While lawyers no longer use quill pens, they are not exactly famous for their information technology skills, either. But the legal profession has a couple of characteristics which make it eminently suited to the deployment of advanced AI systems: it generates vast amounts of data and money, and lawyers frequently engage in text-based routine tasks which can be automated by generative AI systems.

    Previous London Futurists Podcast episodes on the legal industry:
    Ep 53: The Legal Singularity, with Benjamin Alarie
    Ep 47: AI transforming professional services, with Shamus Rae

    Other selected follow-ups:
    David Wakeling
    A&O Shearman
    ContractMatrix
    Harvey AI
    RAG - Retrieval-Augmented Generation
    Digital Operational Resilience Act (impacts banking)
    The Productivity J-Curve (PDF), by Erik Brynjolfsson, Daniel Rock, Chad Syverson
    Agentic AI: The Next Big Breakthrough That's Transforming Business And Technology, by Bernard Marr

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Climate change and populism: Grounds for optimism? with Matt Burgess

    Sep 26, 2024 · 38:44


    Our guest in this episode is Matt Burgess. Matt is an Assistant Professor at the University of Wyoming, where he moved this year after six years at the University of Colorado Boulder. He has specialised in the economics of climate change.

    Calum met Matt at a recent event in Jackson Hole, Wyoming, and knows from their conversations then that Matt has also thought deeply about the impact of social media, the causes of populism, and many other subjects.

    Selected follow-ups:
    Matt Burgess at the University of Wyoming
    Guided Civic Revival - Substack of Matt Burgess
    How polarization will destroy itself
    Roger A. Pielke Jr. - Wikipedia
    ‘My Life as a Climate Lukewarmer' - National Review
    Shared Socioeconomic Pathways - Wikipedia (includes climate scenario SSP5-8.5)
    Exceeding 1.5°C global warming could trigger multiple climate tipping points - Science
    Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change (PDF) - explains "The Dismal Theorem"
    Sri Lanka's organic farming disaster, explained - Vox
    Solar panel prices have fallen by around 20% every time global capacity doubled - Our World in Data
    Special guest speech by Mark Carney - YouTube
    Younger Dryas - Wikipedia (prehistoric period with rapid climate change)
    Platform policies of Jill Stein, US Green Party leader
    Agrowth – should we better be agnostic about growth? - degrowth
    ‘4°C of global warming is optimal' – even Nobel Prize winners are getting things catastrophically wrong - The Conversation
    Economists' Statement on Carbon Dividends
    Who Is Favored To Win The 2024 Presidential Election? - Nate Silver

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Rejuvenation biotech - progress and potential, with Karl Pfleger

    Sep 18, 2024 · 45:03


    Our guest in this episode is Karl Pfleger. Karl is an angel investor in rejuvenation biotech startups, and is also known for creating and maintaining the website Aging Biotech Info. That website describes itself as “Structured info about aging and longevity”, and has the declared mission statement, “Everything important in the field (outside of academia), organized.”

    Previously, Karl worked at Google from 2002 to 2013, as a research scientist and data analyst, applying AI and machine learning at scale. He has a BSE in Computer Science from Princeton, and a PhD in Computer Science and AI from Stanford.

    Previous London Futurists Podcast episodes mentioned in this conversation:
    Ep 74: The Longevity Singularity, with Daniel Ives
    Ep 45: Generative AI drug discovery breakthrough, with Alex Zhavoronkov
    Ep 12: Pioneering AI drug development, with Alex Zhavoronkov

    Other selected follow-ups:
    AgingBiotech.Info
    Lifespan.io rejuvenation roadmap
    AgingDB list
    Stealth BioTherapeutics
    NewLimit
    Juvenity
    Juvena Therapeutics
    Immunis
    Partnership between Calico and AbbVie
    Hevolution
    Unity Biotechnology
    Here's Why resTORbio Fell Over 83% Today
    A4LI Responds to NIH Reform Proposal
    Great Desire for Extended Life and Health amongst the American Public - a paper by Karl Pfleger
    Kristen Fortney, Joe Betts-LaCroix and others, LEV achieved for young people before it is achieved for old people
    Longevity Biotech Fellowship
    Foresight Institute
    Longevity Global
    Vitalism.IO

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    ChatGPT runs for president, with Pedro Domingos

    Sep 1, 2024 · 49:23


    Our guest today is Pedro Domingos, who is joining an elite group of repeat guests – he joined us before in episode 34 in April 2023.

    Pedro is Professor Emeritus of Computer Science and Engineering at the University of Washington. He has done pioneering work in machine learning, like the development of Markov logic networks, which combine probabilistic reasoning with first-order logic. He is probably best known for his book "The Master Algorithm", which describes five different "tribes" of AI researchers, and argues that progress towards human-level general intelligence requires a unification of their approaches.

    More recently, Pedro has become a trenchant critic of what he sees as exaggerated claims about the power and potential of today's AI, and of calls to impose constraints on it. He has just published “2040: A Silicon Valley Satire”, a novel which ridicules Big Tech and also American politics.

    Selected follow-ups:
    Pedro Domingos - University of Washington
    Previous London Futurists Podcast episode featuring Pedro Domingos
    2040: A Silicon Valley Satire
    The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
    The Bonfire of the Vanities
    Ron Howard
    Mike Judge
    Martin Scorsese
    Pandora's Brain
    Transcendence
    Future of Life Institute moratorium open letter
    OpenAI working on new reasoning technology under code name ‘Strawberry'
    Artificial Intelligence: A Modern Approach - by Stuart Russell and Peter Norvig
    Google's AI reasons its way around the London Underground - Nature
    Conscium
    Is LaMDA Sentient? — an Interview - by Blake Lemoine
    Could a Large Language Model be Conscious? - Talk by David Chalmers at NeurIPS 2022
    Jeremy Bentham
    The Extended Phenotype - 1982 book by Richard Dawkins
    Clarion West: Workshops for people who are serious about writing

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The rise of digital pandemics, with James Ball

    Aug 20, 2024 · 39:31


    Our guest in this episode is the journalist and author James Ball. James has worked for the Bureau of Investigative Journalism, The Guardian, WikiLeaks, BuzzFeed, The New European, and The Washington Post, among other organisations. As special projects editor at The Guardian, James played a key role in the Pulitzer Prize-winning coverage of the NSA leaks by Edward Snowden.

    Books that James has written include “Post-Truth: How Bullshit Conquered the World”, “Bluffocracy”, which makes the claim that Britain is run by bluffers, “The System: Who Owns the Internet, and How It Owns Us”, and, most recently, “The Other Pandemic: How QAnon Contaminated the World”.

    That all adds up to enough content to fill at least four of our episodes, but we mainly focus on the ideas in the last of these books, about digital pandemics.

    Selected follow-ups:
    James Ball (personal website)
    The Other Pandemic: How QAnon Contaminated the World - book by James Ball
    Guardian and Washington Post win Pulitzer prize for NSA revelations
    Meme - as described by Richard Dawkins
    Dreyfus affair
    Blood libel
    Future Shock - book by Alvin and Heidi Toffler
    How The Gulf Of Tonkin Incident Sparked The Vietnam War
    Why Narcissists Love Conspiracy Theories
    Nigel Farage - UK politician
    WarGames - 1983 movie
    Gish gallop - rhetorical technique
    Dominic Cummings has admitted the Leave campaign won by lying
    Reality check: how do Farage's claims on immigration, economy and crime hold up?
    Facts don't change minds – and there's data to prove it

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Thinking more athletically about the future, with Brett King and Rob Tercek

    Aug 12, 2024 · 55:24


    In this episode, we have not one guest but two – Brett King and Robert Tercek, the hosts of the Futurists Podcast.

    Brett King is originally from Australia, and is now based in Thailand. He is a renowned author, and the founder of a breakthrough digital bank. He consults extensively with clients in the financial services industry.

    Robert Tercek, based in the United States, is an expert in digital media with a successful career in broadcasting and innovation which includes serving as a creative director at MTV and a senior vice president at Sony Pictures. He now consults to CEOs about digital transformation.

    David and Calum had the pleasure of joining them on their podcast recently, where the conversation delved into the likely future impacts of artificial intelligence and other technologies, and also included politics.

    This return conversation covers a wide range of themes, including the dangers of Q-day, the prospects for technological unemployment, the future of media, different approaches to industrial strategy, a plea to "bring on the machines", and the importance of "thinking more athletically about the future".

    Selected follow-ups:
    The Futurists
    Brett King
    Robert Tercek
    Episode of The Futurists featuring David and Calum
    Neptune's Brood - Wikipedia article on the novel by Charles Stross
    Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages - McKinsey Global Institute
    Wirecutter - New York Times product review site
    Could AI create a one-person unicorn? Sam Altman thinks so - Fortune
    The book The Rise of Technosocialism
    Professor Richard Petty
    Comparison of economic growth, Europe vs. USA - Centre for European Reform
    LinkedIn founder Reid Hoffman wants Kamala Harris, if elected, to replace Lina Khan as head of the Federal Trade Commission - MSNBC

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The low-cost future of preserving brains, with Jordan Sparks

    Aug 2, 2024 · 38:31


    Our guest in this episode is Jordan Sparks, the founder and executive director of Oregon Brain Preservation (OBP), which is located in Salem, the capital city of Oregon. OBP offers the service of chemically preserving the brain in the hope of future restoration.

    Previously, Jordan was a dentist and a computer programmer, and he was successful enough in those fields to generate the capital required to start OBP.

    Brain preservation is a fascinating subject that we have covered in a number of recent episodes, in which we have interviewed Kenneth Hayworth, Max More, and Emil Kendziorra. Most people whose brains have been preserved for future restoration have undergone cryopreservation, which involves cooling the brain (and sometimes the whole body) down to a very low temperature and keeping it that way. OBP does offer that service occasionally, but its focus – which may be unique – is chemical fixation of the brain.

    Previous episodes on biostasis and brain preservation:
    The case for brain preservation, with Kenneth Hayworth
    Cryonics, cryocrastination, and the future: changing minds, with Max More
    Stop cryocrastinating! with Emil Kendziorra

    Additional selected follow-ups:
    Oregon Brain Preservation
    The costs of the services provided by Oregon Brain Preservation
    Focused Ultrasound: A Promising Tool for Cryonics - Tomorrow Bio
    Investigation of Electromagnetic Resonance Rewarming Enhanced by Magnetic Nanoparticles for Cryopreservation - Langmuir
    Pre-epithelialized cryopreserved tracheal allograft for neo-trachea flap engineering - Frontiers in Bioengineering and Biotechnology
    Aldehyde-stabilized cryopreservation by Robert McIntyre and Gregory Fahy - Cryobiology
    Oregon's Death with Dignity Act
    14-year-old girl who died of cancer wins right to be cryogenically frozen - The Guardian

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Surveillance and diversity: surprising insights from the Gulf, with Holly Joint

    Jul 25, 2024 · 35:36


    Our guest in this episode is Holly Joint, who was born and educated in the UK, but lives in Abu Dhabi in the UAE.

    Holly started her career with five years at the business consultancy Accenture, and then worked in telecoms and banking. The latter took her to the Gulf, where she then spent what must have been a fascinating year as programme director of Qatar's winning bid to host the 2022 World Cup. Since then she has run a number of other start-ups and high-growth businesses in the Gulf.

    Holly is currently COO of Trivandi and also has a focus on helping women to have more power in a future dominated by technology.

    Calum met Holly at a conference in Dubai this year, where she quizzed him on-stage about machine consciousness.

    Selected follow-ups:
    Women for Tech UAE
    Trivandi appoints Holly Joint
    Trivandi - "Creating Events and Venues, Better"
    With a Few Bits of Data, Researchers Identify ‘Anonymous' People - New York Times
    The Age of Surveillance Capitalism - Shoshana Zuboff
    Rankings out of 142 cities - Smart City Observatory (Abu Dhabi ranked #10 in 2024)
    Women in Tech: Time to close the gender gap - A PwC research report
    Why are so many big tech whistleblowers women? - The Conversation
    Falcon - the Arabic language LLM
    Collapse of Silicon Valley Bank - Wikipedia

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The double-edged sword of technology, with Wendell Wallach

    Jul 19, 2024 · 53:53


    How do we keep technology from slipping beyond our control? That's the subtitle of the latest book by our guest in this episode, Wendell Wallach.

    Wendell is the Carnegie-Uehiro fellow at Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative. He is also Emeritus Chair of Technology and Ethics Studies at Yale University's Interdisciplinary Center for Bioethics, a scholar with the Lincoln Center for Applied Ethics, a fellow at the Institute for Ethics & Emerging Technologies, and a senior advisor to The Hastings Center.

    Earlier in his life, Wendell was founder and president of two computer consulting companies, Farpoint Solutions and Omnia Consulting Inc.

    Selected follow-ups:
    Wendell Wallach Personal Website
    Wendell Wallach - Carnegie Council for Ethics in International Affairs
    The Artificial Intelligence & Equality Initiative
    Nobel Peace Prize Lecture by Christian Lous Lange (1921)
    Thomas Midgley Jr. - Wikipedia
    Montreal Protocol - Wikipedia
    Robot Dog Highlighted at China-Cambodia Joint Military Exercise (video)
    For Our Posterity - essay by Leopold Aschenbrenner
    Campaign by Control/AI against deepfakes

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Stop cryocrastinating! with Emil Kendziorra

    Jul 12, 2024 · 40:05


    Our guest in this episode is Dr. Emil Kendziorra. Emil graduated summa cum laude (with the highest honours) from the University of Göttingen in Germany, having previously studied at the University of Pécs in Hungary. For several years, he then devoted himself to cancer research with the hope of contributing to longevity science. After realizing how slowly life-extension research was progressing, he pivoted into entrepreneurship. He has been CEO of multiple tech and medical companies, most recently as Founder and CEO of Medlanes and onFeedback, which were sold, respectively, to Zava and QuestionPro.

    Emil then decided to dedicate the next decades of his life, he says, to advancing medical biostasis and cryomedicine. He is currently the CEO of Tomorrow Bio and the President of the Board at the European Biostasis Foundation.

    A special offer:
    Thanks to Tomorrow Bio, an offer has been created, exclusively for listeners to the London Futurists Podcast who decide to become members of Tomorrow Bio after listening to this episode. When signing up online, use the code mentioned toward the end of the episode to reduce the cost of monthly or annual subscriptions by 30%.

    Small print: This offer doesn't apply to lifetime subscriptions, and is only available to new members of Tomorrow Bio. Importantly, this offer will expire on 15 September 2024, so don't delay if you want to take advantage of it.

    Selected follow-ups:
    tomorrow.bio
    European Biostasis Foundation
    Dignitas
    The case for brain preservation - Our episode featuring Kenneth Hayworth
    Cryonics, cryocrastination, and the future: changing minds - Our episode featuring Max More
    My next 20+ years towards a moonshot - Blogpost written by Emil Kendziorra in May 2020
    The Cryosphere - A Discord server for discussion of anything cryonics related
    Global Cryonics Summit - Miami, Florida, 20 & 21 July 2024

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Introducing Conscium, with Daniel Hulme and Ted Lappas

    Jul 1, 2024 · 42:01


    This episode is a bit different from the usual, because we are interviewing Calum's boss. Calum says that mainly to tease him, because he thinks the word "boss" is a dirty word.

    His name is Daniel Hulme, and this is his second appearance on the podcast. He was one of our earliest guests, long ago, in episode 8. Back then, Daniel had just sold his AI consultancy, Satalia, to the advertising and media giant WPP. Today, he is Chief AI Officer at WPP, but he is joining us to talk about his new venture, Conscium - which describes itself as "the world's first applied AI consciousness research organisation". Conscium states that "our aim is to deepen our understanding of consciousness to pioneer efficient, intelligent, and safe AI that builds a better future for humanity".

    Also joining us is Ted Lappas, who is head of technology at Conscium, and he is also one of our illustrious former guests on the podcast.

    By way of full disclosure, Calum is CMO at Conscium, and David is on the Conscium advisory board.

    Selected follow-ups:
    Conscium
    Satalia
    Six categories of application of AI
    Six singularities - TEDx talk by Daniel Hulme
    Professor Mark Solms
    Professor Karl Friston
    A recent paper on different theories of consciousness, by Patrick Butlin, Robert Long, et al
    Professor Nicola Clayton
    Professor Jonathan Birch
    WPP
    The Conscious AI meetup
    AI for organisations - Previous episode featuring Daniel Hulme
    How to use GPT-4 yourself - Previous episode featuring Ted Lappas

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Taming the Machine, with Nell Watson

    Jun 20, 2024 · 46:31


    Those who rush to leverage AI's power without adequate preparation face difficult blowback, scandals, and could provoke harsh regulatory measures. However, those who have a balanced, informed view on the risks and benefits of AI, and who, with care and knowledge, avoid either complacent optimism or defeatist pessimism, can harness AI's potential, and tap into an incredible variety of services of an ever-improving quality.

    These are some words from the introduction of the new book, “Taming the Machine: Ethically Harness the Power of AI”, whose author, Nell Watson, joins us in this episode.

    Nell's many roles include: Chair of IEEE's Transparency Experts Focus Group, Executive Consultant on philosophical matters for Apple, and President of the European Responsible Artificial Intelligence Office. She also leads several organizations such as EthicsNet.org, which aims to teach machines prosocial behaviours, and CulturalPeace.org, which crafts Geneva Conventions-style rules for cultural conflict.

    Selected follow-ups:
    Nell Watson's website
    Taming the Machine - book website
    BodiData (corporation)
    Post Office Horizon scandal: Why hundreds were wrongly prosecuted - BBC News
    Dutch scandal serves as a warning for Europe over risks of using algorithms - Politico
    Robodebt: Illegal Australian welfare hunt drove people to despair - BBC News
    What is the infected blood scandal and will victims get compensation? - BBC News
    MIRI 2024 Mission and Strategy Update - from the Machine Intelligence Research Institute (MIRI)
    British engineering giant Arup revealed as $25 million deepfake scam victim - CNN
    Zersetzung psychological warfare technique - Wikipedia

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    AI Impacts Survey - The key implications, with Katja Grace

    Play Episode Listen Later Jun 13, 2024 33:56


    Our guest in this episode grew up in an abandoned town in Tasmania, and is now a researcher and blogger in Berkeley, California. After taking a degree in human ecology and science communication, Katja Grace co-founded AI Impacts, a research organisation trying to answer questions about the future of artificial intelligence.

    Since 2016, Katja and her colleagues have published a series of surveys about what AI researchers think about progress on AI. The 2023 Expert Survey on Progress in AI was published this January, comprising responses from 2,778 participants. As far as we know, this is the biggest survey of its kind to date.

    Among the highlights is that the time respondents expect it will take to develop an AI with human-level performance dropped by between one and five decades compared with the 2022 survey. So ChatGPT has not gone unnoticed.

    Selected follow-ups:
    AI Impacts
    World Spirit Sock Puppet - Katja's blog
    Survey of 2,778 AI authors: six parts in pictures - from AI Impacts
    OpenAI researcher who resigned over safety concerns joins Anthropic - article in The Verge about Jan Leike
    MIRI 2024 Mission and Strategy Update - from the Machine Intelligence Research Institute (MIRI)
    Future of Humanity Institute 2005-2024: Final Report - by Anders Sandberg (PDF)
    Centre for the Governance of AI
    Reasons for Persons - Article by Katja about Derek Parfit and theories of personal identity
    OpenAI Says It Has Started Training GPT-4 Successor - article in Forbes

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Cryonics, cryocrastination, and the future: changing minds, with Max More

    Play Episode Listen Later Jun 5, 2024 50:15


    Our guest in this episode is Max More. Max is a philosopher, a futurist, and a transhumanist - a term which he coined in 1990, the same year that he legally changed his name from O'Connor to More.

    One of the tenets of transhumanism is that technology will allow us to prevent and reverse the aging process, and in the meantime we can preserve our brains with a process known as cryonics. In 1995 Max was awarded a PhD for a thesis on the nature of death, and from 2010 to 2020, he was CEO of Alcor, the world's biggest cryonics organisation.

    Max is firmly optimistic about our future prospects, and wary of any attempts to impede or regulate the development of technologies which can enhance or augment us.

    Selected follow-ups:
    Extropic Thoughts - Max More's writing on Substack
    The Biostasis Standard - Max's writings on "the latest in the field of biostasis and cryonics"
    Neophile - Wikipedia
    The Time of the Ice Box - Episode of 1970 BBC children's TV series Timeslip
    Cryostasis Revival: The Recovery of Cryonics Patients through Nanomedicine - 2022 book by Robert Freitas
    Researchers perform first successful transplant of functional cryopreserved rat kidney - news from the University of Minnesota
    Large Mammal BPF Prize Winning Announcement - news from the Brain Preservation Foundation
    The European Biostasis Foundation
    Alcor Life Extension Foundation

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Stem cells, lab-grown meat, and potential new medical treatments, with Mark Kotter

    Play Episode Listen Later May 27, 2024 34:33


    Our guest in this episode is Dr. Mark Kotter. Mark is a neurosurgeon, stem cell biologist, and founder or co-founder of three biotech start-up companies that have collectively raised hundreds of millions of pounds: bit.bio, clock.bio, and Meatable. In addition, Mark still conducts neurosurgeries on patients weekly at the University of Cambridge.

    We talk to Mark about all his companies, but we start by discussing Meatable, one of the leading companies in the cultured meat sector. This is an area of technology which should have a far greater impact than most people are aware of, and it's an area we haven't covered before in the podcast.

    Selected follow-ups:
    Dr Mark Kotter at the University of Cambridge
    Meatable
    bit.bio
    clock.bio
    After 25 years of hype, embryonic stem cells are still waiting for their moment - Article in MIT Technology Review
    The Nobel Prize in Physiology or Medicine 2012
    Moo's Law: An Investor's Guide to the New Agrarian Revolution - book by Jim Mellon
    What is the climate impact of eating meat and dairy?
    Guidance for businesses on cell-cultivated products and the authorisation process
    Wild mammals make up only a few percent of the world's mammals - Our World In Data
    BlueRock Therapeutics
    Therapies under development at bit.bio
    Stem Cell Gene Therapy Shows Promise in ALS Trial - from Cedars-Sinai Medical Center

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The economic case for a second longevity revolution, with Andrew Scott

    Play Episode Listen Later May 16, 2024 41:45


    The public discussion in a number of countries around the world expresses worries about what is called an aging society. These countries anticipate a future with fewer younger people who are active members of the economy, and a growing number of older people who need to be supported by the people still in the workforce. It's an inversion of the usual demographic pyramid, with fewer at the bottom, and more at the top.

    However, our guest in this episode recommends a different framing of the future – not as an aging society, but as a longevity society, or even an evergreen society. He is Andrew Scott, Professor of Economics at the London Business School. His other roles include being a Research Fellow at the Centre for Economic Policy Research, and a consulting scholar at Stanford University's Center on Longevity.

    Andrew's latest book is entitled “The Longevity Imperative: Building a Better Society for Healthier, Longer Lives”. Commendations for the book include this from the political economist Daron Acemoglu, “A must-read book with an important message and many lessons”, and this from the historian Niall Ferguson, “Persuasive, uplifting and wise”.

    Selected follow-ups:
    Personal website of Andrew Scott
    Andrew Scott at the London Business School
    The book The Longevity Imperative: How to Build a Healthier and More Productive Society to Support Our Longer Lives
    Longevity, the 56 trillion dollar opportunity, with Andrew Scott - episode 40 in this series
    Population Pyramids of the World from 1950 to 2100
    Thomas Robert Malthus - Wikipedia
    DALYs (Disability-adjusted life years) and QALYs (Quality-adjusted life years) - Wikipedia
    VSL (Value of Statistical Life) - Wikipedia
    The economic value of targeting aging - paper in Nature Aging, co-authored by Andrew Scott, Martin Ellison, and David Sinclair
    A great-grandfather from Merseyside has become the world's oldest living man - BBC, 5th April 2024

    Related quotations:
    Aging is "...revealed and made manifest only by the most unnatural experiment of prolonging an animal's life by sheltering it from the hazards of its ordinary existence" - Peter Medawar, 1951
    "To die of old age is a death rare, extraordinary, and singular, and, therefore, so much less natural than the others; 'tis the last and extremest sort of dying: and the more remote, the less to be hoped for" - Michel de Montaigne, 1580

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Can AI be conscious? with Nicholas Humphrey

    Play Episode Listen Later May 7, 2024 45:43


    In this episode we return to the subject of whether AIs will become conscious, or, to use a word from the title of the latest book from our guest today, whether AIs will become sentient.

    Our guest is Nicholas Humphrey, Emeritus Professor of Psychology at London School of Economics, and Bye Fellow at Darwin College, Cambridge. His latest book is “Sentience: the invention of consciousness”, and it explores the emergence and role of consciousness from a variety of perspectives.

    The book draws together insights from the more than fifty years Nick has been studying the evolution of intelligence and consciousness. He was the first person to demonstrate the existence of “blindsight” after brain damage in monkeys, studied mountain gorillas with Dian Fossey in Rwanda, originated the theory of the “social function of intellect”, and has investigated the evolutionary background of religion, art, healing, death-awareness, and suicide. His awards include the Martin Luther King Memorial Prize, the Pufendorf Medal, and the International Mind and Brain Prize.

    The conversation starts with some reflections on the differences between the views of our guest and his long-time philosophical friend Daniel Dennett, who had died shortly before the recording took place.

    Selected follow-ups:
    The website of Nicholas Humphrey
    The book Sentience: The Invention of Consciousness
    How did consciousness evolve? - Recording of talk at the Royal Institution
    The book Consciousness Explained by Daniel Dennett
    Penrose triangle (article contains "real impossible triangles")
    Keith Frankish (philosopher of mind)
    The psychonic theory of consciousness - a theory included in the 1929 edition of Encyclopaedia Britannica
    Lawrence (Larry) Weiskrantz - the supervisor of Nicholas Humphrey
    Blindsight patient 'TN'
    The Tin Men by Michael Frayn
    What's it like to be an AI: Anil Seth on London Futurists Podcast
    Joe Simpson (mountaineer)
    The New York Declaration on Animal Consciousness
    Scientific Declaration on Insect Sentience and Welfare
    Rupert Sheldrake
    Alternative Natural Philosophy Association (ANPA)

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Progress with ending aging, with Aubrey de Grey

    Play Episode Listen Later Apr 21, 2024 40:52


    Our topic in this episode is progress with ending aging. Our guest is the person who literally wrote the book on that subject, namely the book, “Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime”. He is Aubrey de Grey, who describes himself in his Twitter biography as “spearheading the global crusade to defeat aging”.

    In pursuit of that objective, Aubrey co-founded the Methuselah Foundation in 2003, the SENS Research Foundation in 2009, and the LEV Foundation, that is the Longevity Escape Velocity Foundation, in 2022, where he serves as President and Chief Science Officer.

    Full disclosure: David also has a role on the executive management team of LEV Foundation, but for this recording he was wearing his hat as co-host of the London Futurists Podcast.

    The conversation opens with this question: "When people are asked about ending aging, they often say the idea sounds nice, but they see no evidence for any actual progress toward ending aging in humans. They say that they've heard talk about that subject for years, or even decades, but wonder when all that talk is going to result in people actually living significantly longer. How do you respond?"

    Selected follow-ups:
    Aubrey de Grey on X (Twitter)
    The book Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime
    The Longevity Escape Velocity (LEV) Foundation
    The SENS paradigm for ending aging, contrasted with the "Hallmarks of Aging" - a 2023 article in Rejuvenation Research
    Progress reports from the current RMR project
    The plan for RMR 2
    The RAID (Rodent Aging Interventions Database) analysis that guided the design of RMR 1 and 2
    Longevity Summit Dublin (LSD): 13-16 June 2024
    Unblocking the Brain's Drains to Fight Alzheimer's - Doug Ethell of Leucadia Therapeutics at LSD 2023 (explains the possible role of the cribriform plate)
    Targeting Telomeres to Clear Cancer – Vlad Vitoc of MAIA Biotechnology at LSD 2023
    How to Run a Lifespan Study of 1,000 Mice - Danique Wortel of Ichor Life Sciences at LSD 2023
    XPrize Healthspan
    The Dublin Longevity Declaration ("DLD")

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    What's it like to be an AI, with Anil Seth

    Play Episode Listen Later Apr 13, 2024 45:52


    As artificial intelligence models become increasingly powerful, they both raise - and might help to answer - some very important questions about one of the most intriguing, fascinating aspects of our lives, namely consciousness.

    It is possible that in the coming years or decades, we will create conscious machines. If we do so without realising it, we might end up enslaving them, torturing them, and killing them over and over again. This is known as mind crime, and we must avoid it.

    It is also possible that very powerful AI systems will enable us to understand what our consciousness is, how it arises, and even how to manage it – if we want to do that.

    Our guest today is the ideal guide to help us explore the knotty issue of consciousness. Anil Seth is professor of Cognitive and Computational Neuroscience at the University of Sussex. He is amongst the most cited scholars on the topics of neuroscience and cognitive science globally, and a regular contributor to newspapers and TV programmes. His most recent book was published in 2021, and is called “Being You – a new science of consciousness”.

    The first question sets the scene for the conversation that follows: "In your book, you conclude that consciousness may well only occur in living creatures. You say 'it is life, rather than information processing, that breathes the fire into the equations.' What made you conclude that?"

    Selected follow-ups:
    Anil Seth's website
    Books by Anil Seth, including Being You
    Consciousness in humans and other things - presentation by Anil Seth at The Royal Society, March 2024
    Is consciousness more like chess or the weather? - an interview with Anil Seth
    Autopoiesis - Wikipedia article about the concept introduced by Humberto Maturana and Francisco Varela
    Akinetic mutism, Wikipedia
    Cerebral organoid (Brain organoid), Wikipedia
    AI Scientists: Safe and Useful AI? - by Yoshua Bengio, on AIs as oracles
    Ex Machina (2014 film, written and directed by Alex Garland)
    The Conscious Electromagnetic Information (Cemi) Field Theory by Johnjoe McFadden
    The Electromagnetic Field Theory of Consciousness by Susan Pockett

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Regulating Big Tech, with Adam Kovacevich

    Play Episode Listen Later Apr 4, 2024 38:01


    Our guest in this episode is Adam Kovacevich. Adam is the Founder and CEO of the Chamber of Progress, which describes itself as a center-left tech industry policy coalition that works to ensure that all citizens benefit from technological leaps, and that the tech industry operates responsibly and fairly.

    Adam has had a front row seat for more than 20 years in the tech industry's political maturation, and he advises companies on navigating the challenges of political regulation. For example, Adam spent 12 years at Google, where he led a 15-person policy strategy and external affairs team. In that role, he drove the company's U.S. public policy campaigns on topics such as privacy, security, antitrust, intellectual property, and taxation.

    We had two reasons to want to talk with Adam. First, to understand the kerfuffle that has arisen from the lawsuit launched against Apple by the U.S. Department of Justice and sixteen state Attorneys General. And second, to look ahead to possible future interactions between tech industry regulators and the industry itself, especially as concerns about Artificial Intelligence rise in the public mind.

    Selected follow-ups:
    Adam Kovacevich's website
    The Chamber of Progress
    Gartner Hype Cycle
    "Justice Department Sues Apple for Monopolizing Smartphone Markets"
    The Age of Surveillance Capitalism by Shoshana Zuboff
    Epic Games v. Apple (Wikipedia)
    "AirTags Are the Best Thing to Happen to Tile" (Wired)
    Adobe Firefly
    The EU AI Act

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The case for brain preservation, with Kenneth Hayworth

    Play Episode Listen Later Mar 29, 2024 42:11


    In this episode, we are delving into the fascinating topic of mind uploading. We suspect this idea is about to explode into public consciousness, because Nick Bostrom has a new book out shortly called “Deep Utopia”, which addresses what happens if superintelligence arrives and everything goes well. It was Bostrom's previous book, “Superintelligence”, that ignited the great robot freak-out of 2015.

    Our guest is Dr Kenneth Hayworth, a Senior Scientist at the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia. Janelia is probably America's leading research institution in the field of connectomics – the precise mapping of the neurons in the human brain.

    Kenneth is a co-inventor of a process for imaging neural circuits at the nanometre scale, and he has designed and built several automated machines to do it. He is currently researching ways to extend Focused Ion Beam Scanning Electron Microscopy imaging of brain tissue to encompass much larger volumes than are currently possible.

    Along with John Smart, Kenneth co-founded the Brain Preservation Foundation in 2010, a non-profit organization with the goal of promoting research in the field of whole brain preservation.

    During the conversation, Kenneth made a strong case for putting more focus on preserving human brains via a process known as aldehyde fixation, as a way of enabling people to be uploaded in due course into new bodies. He also issued a call for action by members of the global cryonics community.

    Selected follow-ups:
    Kenneth Hayworth
    The Brain Preservation Foundation
    An essay by Kenneth Hayworth: Killed by Bad Philosophy
    The short story Psychological Counseling for First-time Teletransport Users (PDF)
    21st Century Medicine
    Janelia Research Campus

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    AGI alignment: the case for hope, with Lou de K

    Play Episode Listen Later Mar 22, 2024 34:42


    Our guest in this episode is Lou de K, Program Director at the Foresight Institute.

    David recently saw Lou give a marvellous talk at the TransVision conference in Utrecht in the Netherlands, on the subject of “AGI Alignment: Challenges and Hope”. Lou kindly agreed to join us to review some of the ideas in that talk and to explore their consequences.

    Selected follow-ups:
    Personal website of Lou de K (Lou de Kerhuelvez)
    Foresight.org
    TransVision Utrecht 2024
    The AI Revolution: The Road to Superintelligence by Tim Urban on Wait But Why
    AI Alignment: A Comprehensive Survey - 98 page PDF with authors from Peking University and other universities
    Synthetic Sentience: Can Artificial Intelligence become conscious? - Talk by Joscha Bach at CCC, December 2023
    Pope Francis "warns of risks of AI for peace" (Vatican News)
    Claude's Constitution by Anthropic
    Roman Yampolskiy discusses multi-multi alignment (Future of Life podcast)
    Shoggoth with Smiley Face on Know Your Meme
    Shoggoth on AISafetyMemes on X/Twitter
    Orthogonality Thesis on LessWrong
    Quotes by the poet Lucille Clifton
    Decentralized science (DeSci) on Ethereum.org
    Listing of Foresight Institute fellows
    The Network State by Balaji Srinivasan
    The Network State vs. Coordi-Nations featuring the ideas of Primavera De Filippi
    DeSci London event, Imperial College Business School, 23-24 March

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The Political Singularity and a Worthy Successor, with Daniel Faggella

    Play Episode Listen Later Mar 15, 2024 43:03


    Calum and David recently attended the BGI24 event in Panama City, that is, the Beneficial General Intelligence summit and unconference. One of the speakers we particularly enjoyed listening to was Daniel Faggella, the Founder and Head of Research of Emerj.

    Something that featured in his talk was a 3 by 3 matrix, which he calls the Intelligence Trajectory Political Matrix, or ITPM for short. As we'll be discussing in this episode, one of the dimensions of this matrix is the kind of end goal future that people desire, as intelligent systems become ever more powerful. And the other dimension is the kind of methods people want to use to bring about that desired future.

    So, if anyone thinks there are only two options in play regarding the future of AI, for example “accelerationists” versus “doomers”, to use two names that are often thrown around these days, they're actually missing a much wider set of options. And frankly, given the challenges posed by the fast development of AI systems that seem to be increasingly beyond our understanding and beyond our control, the more options we can consider, the better.

    The topics that featured in this conversation included:
    "The Political Singularity" - when the general public realize that one political question has become more important than all the others, namely should humanity be creating an AI with godlike powers, and if so, under what conditions
    Criteria to judge whether a forthcoming superintelligent AI is a "worthy successor" to humanity.

    Selected follow-ups:
    The website of Dan Faggella
    The BGI24 conference, lead organiser Ben Goertzel of SingularityNET
    The Intelligence Trajectory Political Matrix
    The Political Singularity
    A Worthy Successor - the purpose of AGI
    Roko Mijic on Twitter/X
    The novel Diaspora by Greg Egan

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    The Longevity Singularity, with Daniel Ives

    Play Episode Listen Later Mar 7, 2024 47:32


    In the wide and complex subject of biological aging, one particular kind of biological aging has been receiving a great deal of attention in recent years. That's the field of epigenetic aging, where parts of the packaging or covering, as we might call it, of the DNA in all of our cells, alters over time, changing which genes are turned on and turned off, with increasingly damaging consequences.

    What's made this field take off is the discovery that this epigenetic aging can be reversed, via an increasing number of techniques. Moreover, there is some evidence that this reversal gives a new lease of life to the organism.

    To discuss this topic and the opportunities arising, our guest in this episode is Daniel Ives, the CEO of Shift Bioscience. As you'll hear, Shift Bioscience is a company that is carrying out some very promising research into this field of epigenetic aging. Daniel has a PhD from the University of Cambridge, and co-founded Shift Bioscience in 2017.

    The conversation highlighted a way of using AI transformer models and a graph neural network to dramatically speed up the exploration of which proteins can play the best role in reversing epigenetic aging. It also considered which other types of aging will likely need different sorts of treatments, beyond these proteins. Finally, the conversation turned to a potential fast transformation of public attitudes toward the possibility and desirability of comprehensively treating aging - a transformation called "all hell breaks loose" by Daniel, and "the Longevity Singularity" by Calum.

    Selected follow-ups:
    Shift Bioscience
    Aubrey de Grey's TED talk "A roadmap to end aging"
    Epigenetic clocks (Wikipedia)
    Shinya Yamanaka (Wikipedia)
    scGPT - bioRxiv preprint by Bo Wang and colleagues

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Where are all the Dyson spheres? with Paul Sutter

    Play Episode Listen Later Feb 21, 2024 40:02


    In this episode, we look further into the future than usual. We explore what humanity might get up to in a thousand years or more: surrounding whole stars with energy-harvesting panels, sending easily detectable messages across space which will last until the stars die out.

    Our guide to these fascinating thought experiments is Paul M. Sutter, a NASA advisor and theoretical cosmologist at the Institute for Advanced Computational Science at Stony Brook University in New York, and a visiting professor at Barnard College, Columbia University, also in New York. He is an award-winning science communicator and TV host.

    The conversation reviews arguments for why intelligent life forms might want to capture more energy than strikes a single planet, as well as some practical difficulties that would complicate such a task. It also considers how we might recognise evidence of megastructures created by alien civilisations, and finishes with a wider exploration about the role of science and science communication in human society.

    Selected follow-ups:
    Paul M. Sutter - website
    "Would building a Dyson sphere be worth it? We ran the numbers" - Ars Technica
    Forthcoming book - Rescuing Science: Restoring Trust in an Age of Doubt
    "The Kardashev scale: Classifying alien civilizations" - Space.com
    "Modified Newtonian dynamics" as a possible alternative to the theory of dark matter
    The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory - 1999 book by Brian Greene
    The Demon-Haunted World: Science as a Candle in the Dark - 1995 book by Carl Sagan

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Provably safe AGI, with Steve Omohundro

    Play Episode Listen Later Feb 13, 2024 42:59


    AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive?

    Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation which is working to ensure that artificial intelligence is safe and beneficial for humanity.

    Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to be an award-winning computer science professor at the University of Illinois. At that time, he developed the notion of basic AI drives, which we talk about shortly, as well as a number of potential key AI safety mechanisms.

    Among many other roles which are too numerous to mention here, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation, and he is an advisor to MIRI, the Machine Intelligence Research Institute.

    Selected follow-ups:
    Steve Omohundro: Innovative ideas for a better world
    Metaculus forecast for the date of weak AGI
    "The Basic AI Drives" (PDF, 2008)
    TED Talk by Max Tegmark: How to Keep AI Under Control
    Apple Secure Enclave
    Meta Research: Teaching AI advanced mathematical reasoning
    DeepMind AlphaGeometry
    Microsoft Lean theorem prover
    Terence Tao (Wikipedia)
    NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
    The team at MIRI

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Robots and the people who love them, with Eve Herold

    Play Episode Listen Later Feb 6, 2024 36:42


    In this episode, our subject is the rise of the robots – not the military kind of robots, or the automated manufacturing kind that increasingly fill factories, but social robots. These are robots that could take roles such as nannies, friends, therapists, caregivers, and lovers. They are the subject of the important new book Robots and the People Who Love Them, written by our guest today, Eve Herold.

    Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She has written extensively about issues at the crossroads of science and society, including stem cell research and regenerative medicine, aging and longevity, medical implants, transhumanism, robotics and AI, and bioethical issues in leading-edge medicine – all of which are issues that Calum and David like to feature on this show.

    Eve currently serves as Director of Policy Research and Education for the Healthspan Action Coalition. Her previous books include Stem Cell Wars and Beyond Human. She is the recipient of the 2019 Arlene Eisenberg Award from the American Society of Journalists and Authors.

    Selected follow-ups:
    Eve Herold: What lies ahead for the human race
    Eve Herold on Macmillan Publishers
    The book Robots and the People Who Love Them
    Healthspan Action Coalition
    Hanson Robotics
    Sophia, Desi, and Grace
    The AIBO robotic puppy

    Some of the films discussed:
    A.I. (2001)
    Ex Machina (2014)
    I, Robot (2004)
    I'm Your Man (2021)
    Robot & Frank (2012)
    WALL.E (2008)
    Metropolis (1927)

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Education and work - past, present, and future, with Riaz Shah

    Play Episode Listen Later Jan 25, 2024 36:41


    Our guest in this episode is Riaz Shah. Until recently, Riaz was a partner at EY, where he worked for 27 years, specialising in technology and innovation. Towards the end of his time at EY he became a Professor for Innovation & Leadership at Hult International Business School, where he leads sessions with senior executives of global companies.

    In 2016, Riaz took a one-year sabbatical to open the One Degree Academy, a free school in a disadvantaged area of London. There's an excellent TEDx talk from 2020 about how that happened, and about how to prepare for the very uncertain future of work.

    This discussion, which was recorded at the close of 2023, covers the past, present, and future of education, work, politics, nostalgia, and innovation.

    Selected follow-ups:
    Riaz Shah at EY
    The TEDx talk Rise Above the Machines by Riaz Shah
    One Degree Mentoring Charity
    One Degree Academy
    EY Tech MBA by Hult International Business School
    Gallup survey: State of the Global Workplace, 2023
    BCG report: How People Can Create—and Destroy—Value with Generative AI

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    What is your p(doom)? with Darren McKee

    Play Episode Listen Later Jan 18, 2024 42:17


    In this episode, our subject is Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. That's a new book on a vitally important subject.

    The book's front cover carries this endorsement from Professor Max Tegmark of MIT: “A captivating, balanced and remarkably up-to-date book on the most important issue of our time.” There's also high praise from William MacAskill, Professor of Philosophy at the University of Oxford: “The most accessible and engaging introduction to the risks of AI that I've read.”

    Calum and David had lots of questions ready to put to the book's author, Darren McKee, who joined the recording from Ottawa in Canada.

    Topics covered included Darren's estimates for when artificial superintelligence is 50% likely to exist, and his p(doom), that is, the likelihood that superintelligence will prove catastrophic for humanity. There are also Darren's recommendations on the principles and actions needed to reduce that likelihood.

    Selected follow-ups:
    Darren McKee's website
    The book Uncontrollable
    Darren's podcast The Reality Check
    The Lazarus Heist on BBC Sounds
    The Chair's Summary of the AI Safety Summit at Bletchley Park
    The Statement on AI Risk by the Center for AI Safety

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Climate Change: There's good news and bad news, with Nick Mabey

    Play Episode Listen Later Jan 11, 2024 44:44


    Our guest in this episode is Nick Mabey, the co-founder and co-CEO of one of the world's most influential climate change think tanks, E3G, where the name stands for Third Generation Environmentalism. As well as his roles with E3G, Nick is founder and chair of London Climate Action Week, and he has several independent appointments, including as a London Sustainable Development Commissioner.

    Nick has previously worked in the UK Prime Minister's Strategy Unit, the UK Foreign Office, WWF-UK, London Business School, and the UK electricity industry. As an academic he was lead author of “Argument in the Greenhouse”, one of the first books examining the economics of climate change.

    He was awarded an OBE in the Queen's Jubilee honours list in 2022 for services to climate change and support to the UK COP 26 Presidency.

    As the conversation makes clear, there is both good news and bad news regarding responses to climate change.

    Selected follow-ups:
    Nick Mabey's website
    E3G
    "Call for UK Government to 'get a grip' on climate change impacts"
    The IPCC's 2023 synthesis report
    Chatham House commentary on IPCC report
    "Why Climate Change Is a National Security Risk"
    The UK's Development, Concepts and Doctrine Centre (DCDC)
    Bjørn Lomborg
    Matt Ridley
    Tim Lenton
    Jason Hickel
    Mark Carney

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Meet the electrome! with Sally Adee

    Play Episode Listen Later Jan 5, 2024 36:41


    Our subject in this episode is the idea that the body uses electricity in more ways than are presently fully understood. We consider ways in which electricity, applied with care, might at some point in the future help to improve the performance of the brain, to heal wounds, to stimulate the regeneration of limbs or organs, to turn the tide against cancer, and maybe even to reverse aspects of aging.

    To guide us through these possibilities, who better than the science and technology journalist Sally Adee? She is the author of the book “We Are Electric: Inside the 200-Year Hunt for Our Body's Bioelectric Code, and What the Future Holds”. That book gave David so many insights on his first reading that he went back to it a few months later and read it all the way through again.

    Sally was a technology features and news editor at the New Scientist from 2010 to 2017, and her research into bioelectricity was featured in Yuval Noah Harari's book “Homo Deus”.

    Selected follow-ups:
    Sally Adee's website
    The book "We are Electric"
    Article: "An ALS patient set a record for communicating via a brain implant: 62 words per minute"
    tDCS (Transcranial direct-current stimulation)
    The conference "Anticipating 2025" (held in 2014)
    Article: "Brain implants help people to recover after severe head injury"
    Article on enhancing memory in older people
    Bioelectricity cancer researcher Mustafa Djamgoz
    Article on Tumour Treating Fields
    Article on "Motile Living Biobots"

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Don't try to make AI safe; instead, make safe AI, with Stuart Russell

    Play Episode Listen Later Dec 27, 2023 49:04


    We are honoured to have as our guest in this episode Professor Stuart Russell. Stuart is professor of computer science at the University of California, Berkeley, and the traditional way to introduce him is to say that he literally wrote the book on AI. Artificial Intelligence: A Modern Approach, which he co-wrote with Peter Norvig, was first published in 1995, and the fourth edition came out in 2020.

    Stuart has been urging us all to take seriously the dramatic implications of advanced AI for longer than perhaps any other prominent AI researcher. He also proposes practical solutions, as in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control.

    In 2021 Stuart gave the Reith Lectures, and was awarded an OBE. But the greatest of his many accolades was surely in 2014, when a character with a background remarkably like his was played in the movie Transcendence by Johnny Depp.

    The conversation covers a wide range of questions about future scenarios involving AI, and reflects on changes in the public conversation following the FLI's letter calling for a moratorium on more powerful AI systems, and following the global AI Safety Summit held at Bletchley Park in the UK at the beginning of November.

    Selected follow-ups:
    Stuart Russell's page at Berkeley
    Center for Human-Compatible Artificial Intelligence (CHAI)
    The 2021 Reith Lectures: Living With Artificial Intelligence
    The book Human Compatible: Artificial Intelligence and the Problem of Control

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Aligning AI, before it's too late, with Rebecca Gorman

    Play Episode Listen Later Dec 9, 2023 34:30


    Our guest in this episode is Rebecca Gorman, the co-founder and CEO of Aligned AI, a start-up in Oxford which describes itself rather nicely as working to get AI to do more of the things it should do and fewer of the things it shouldn't.

    Rebecca built her first AI system 20 years ago and has been calling for responsible AI development since 2010. With her co-founder Stuart Armstrong, she has co-developed several advanced methods for AI alignment, and she has advised the EU, UN, OECD and the UK Parliament on the governance and regulation of AI.

    The conversation highlights the tools faAIr, EquitAI, and ACE, developed by Aligned AI. It also covers the significance of recent performance by Aligned AI software in the CoinRun test environment, which demonstrates the important principle of "overcoming goal misgeneralisation".

    Selected follow-ups:
    buildaligned.ai
    Article: "Using faAIr to measure gender bias in LLMs"
    Article: "EquitAI: A gender bias mitigation tool for generative AI"
    Article: "ACE for goal generalisation"
    "CoinRun: Solving Goal Misgeneralisation" - a publication on arXiv
    Aligned AI repositories on GitHub
    "Specification gaming examples in AI" - article by Victoria Krakovna
    Rebecca Gorman speaking at the Cambridge Union on "This House Believes Artificial Intelligence Is An Existential Threat" (YouTube)

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    Shazam! with Dhiraj Mukherjee

    Play Episode Listen Later Nov 27, 2023 32:49


    Our guest in this episode is Dhiraj Mukherjee, best known as the co-founder of Shazam. Calum and David both still remember the sense of amazement we felt when, way back in the dotcom boom, we used Shazam to identify a piece of music from its first couple of bars. It seemed like magic, and was tangible evidence of how fast technology was moving: it was creating services which seemed like science fiction.

    Shazam was eventually bought by Apple in 2018 for a reported 400 million dollars. This gave Dhiraj the funds to pursue new interests. He is now a prolific investor and a keynote speaker on the subject of how companies both large and small can be more innovative.

    In this conversation, Dhiraj highlights some lessons from his personal entrepreneurial journey, and reflects on ways in which the task of entrepreneurs is changing, in the UK and elsewhere. The conversation covers possible futures in fields such as Climate Action and the overcoming of unconscious biases.

    Selected follow-ups:
    https://dhirajmukherjee.com/
    https://www.shazam.com/
    https://dandelionenergy.com/
    https://technation.io/
    Entrepreneur First
    https://fairbrics.co/
    https://neoplants.com/
    Al Gore's Generation Investment Management Fund
    https://www.mevitae.com/

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
