Podcast appearances and mentions of von Neumann

  • 136 podcasts
  • 189 episodes
  • 46m avg duration
  • 1 episode every other week
  • Latest: Mar 3, 2025

POPULARITY

(popularity trend chart, 2017-2024)


Best podcasts about von Neumann

Latest podcast episodes about von Neumann

Code-Garage
Circuits #9 - Le secret caché des CPU/GPU

Code-Garage

Play Episode Listen Later Mar 3, 2025 5:38


The manufacturers of the chips inside your processors and graphics cards hide a weighty secret when fabricating and selling the cards... But what is it? Episode notes: The Von Neumann architecture: https://code-garage.com/podcast/circuits/episode-8

Creative On Purpose
No Rules Jam With Casey von Neumann

Creative On Purpose

Play Episode Listen Later Feb 18, 2025 41:43


Join me for my next live video in the app. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit creativeonpurpose.substack.com/subscribe

Of Je Stopt De Stekker Er In
#073 | Von Neumann Architectuur

Of Je Stopt De Stekker Er In

Play Episode Listen Later Feb 18, 2025 25:42


Have you ever been stuck in traffic, bumper to bumper, desperately longing for your destination? That's how the von Neumann architecture feels for AI! Today we dive, among other things, into the wondrous world of artificial intelligence and the hardware challenges holding back its progress.
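The traffic-jam analogy refers to the von Neumann bottleneck: processor and memory share a single channel, so shuttling data back and forth, not arithmetic, limits throughput. A back-of-envelope sketch (my own illustration, not from the episode) for a naive matrix multiply:

```python
# Without caching, a naive n x n matrix multiply issues roughly as many memory
# operations as floating-point operations, so the shared memory bus, not the
# arithmetic units, becomes the limit. Figures below are a simple count, not a
# measurement of any real chip.
def naive_traffic(n):
    flops = 2 * n**3          # n^3 multiply-add pairs
    loads = 2 * n**3          # two operand loads per multiply-add, no reuse
    stores = n**2             # one store per output element
    return flops, loads + stores

flops, mem_ops = naive_traffic(1024)
print(f"{flops:.2e} FLOPs vs {mem_ops:.2e} memory operations")
```

Caches, and the neuromorphic designs the episode discusses, exist largely to break this one-to-one ratio between compute and data movement.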

New Books Network
Our History with AI is (much) Longer than You Think (with Kevin LaGrandeur)

New Books Network

Play Episode Listen Later Feb 8, 2025 65:53


It's the UConn Popcast, and when did we really start dreaming about the promise, and the danger, of artificial intelligence? When ChatGPT was released in 2022? When IBM's Deep Blue defeated chess world champion Garry Kasparov in 1997? When Stanley Kubrick introduced us to HAL 9000 in 1968? Or perhaps you think it was much earlier. Maybe we have had the dream of AI since the development of the first computers by Von Neumann, or even earlier, by Babbage. Or maybe you think the dawning of the age of science itself is ground zero for our thoughts of artificial intelligence. Kevin LaGrandeur traces our dreams - and fears - of artificial intelligence back much further than this. LaGrandeur argues that ideas of artificial slaves can be found in the writing of Aristotle, in the Renaissance-era idea of the homunculus, and in the Jewish legend of the Golem. LaGrandeur, a longtime professor at the New York Institute of Technology and now an independent scholar and Director of Research at the Global AI Ethics Institute, has more than 25 years of experience teaching, writing and speaking about technology and society. We were thrilled to have a wide-ranging conversation with Professor LaGrandeur about his pathbreaking research on Androids and intelligent networks in early modern culture, and his current work on the ethics and implications of AI. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

New Books in History
Our History with AI is (much) Longer than You Think (with Kevin LaGrandeur)

New Books in History

Play Episode Listen Later Feb 8, 2025 65:53


Same episode description as the New Books Network listing above. Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/history

New Books in Intellectual History
Our History with AI is (much) Longer than You Think (with Kevin LaGrandeur)

New Books in Intellectual History

Play Episode Listen Later Feb 8, 2025 65:53


Same episode description as the New Books Network listing above. Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/intellectual-history

New Books in Science, Technology, and Society
Our History with AI is (much) Longer than You Think (with Kevin LaGrandeur)

New Books in Science, Technology, and Society

Play Episode Listen Later Feb 8, 2025 65:53


Same episode description as the New Books Network listing above. Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/science-technology-and-society

New Books in Technology
Our History with AI is (much) Longer than You Think (with Kevin LaGrandeur)

New Books in Technology

Play Episode Listen Later Feb 8, 2025 65:53


Same episode description as the New Books Network listing above. Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/technology

Conspiracy Clearinghouse
The Fermi Paradox: Here Comes Nobody

Conspiracy Clearinghouse

Play Episode Listen Later Jan 29, 2025 48:57


EPISODE 129 | The Fermi Paradox: Here Comes Nobody If we are not unique as a species, an intelligent one that builds civilizations, then there must be lots and lots of other civilizations out there in the galaxy and the rest of the universe beyond. But if so, where the hell is everybody? That's the question at the heart of the Fermi Paradox. And this has kicked off a chain of reasoning and speculation that crosses disciplinary boundaries, and lets us start to envision, not just what might be out there, but where we ourselves want to go as a global civilization. Like what we do? Then buy us a beer or three via our page on Buy Me a Coffee. You can also SUBSCRIBE to this podcast. Review us here or on IMDb!  SECTIONS 02:54 - First we feel, then we fall - The Fermi Paradox, the Drake Equation, the Wow! Signal, space is big 09:22 - End here. Us then. Finn, again! - Abiogenesis, the Pulse-Transient Theory of Industrial Civilization, musings on the galactic situation 18:48 - They lived and laughed and loved and left - The Great Filter, Von Neumann probes, the Berserker Hypothesis; loud, quiet and grabby aliens; the Dark Forest Hypothesis, the technological singularity, the Jevons Paradox and induced demand 27:16 - The cross of your own cruelfiction - The Zoo Hypothesis, the Planetarium Hypothesis, the Deathworld Scenario, A Field Guide to Aliens, Calculating God and other science fiction, the Aestivation Hypothesis  34:34 - Three quarks for Muster Mark! - The Kardashev Scale, Sagan's addition, megastructures, Barrow's anti-Kardashev scale, Galántai's variation, the Urbanization Hypothesis, Kardashev's six scenarios, what to look for 43:04 - He is cured by faith who is sick of fate - They're here, Greer and UAP folks, MJ-12, the SETI Paradox, we are looking, light is fast but has a limit 47:38 - Let us leave theories there and return to here's hear - Joseph Campbell and where do we go from here? 
Music by Fanette Ronjat

More Info:
Finnegans Wake glosses
A Skeleton Key to Finnegans Wake by Joseph Campbell
‘It never ends': the book club that spent 28 years reading Finnegans Wake in The Guardian
Finnegans Wake at 80: In Defense of the Difficult at LitHub
Why Finnegans Wake Is Better than Ulysses
Fermi's Paradox on This American Life
SETI Institute website
Humanity Responds to 'Alien' Wow Signal, 35 Years Later
This Is How We Know There Are Two Trillion Galaxies In The Universe in Forbes
The Great Filter: a possible solution to the Fermi Paradox in Astronomy
A list of solutions to the Fermi Paradox on It's Only Chemo
Are We Alone in the Universe? (an article looking at the Drake Equation)
How Many Aliens Are There? (a look at the Drake Equation)
Drake Equation: Estimating the Odds of Finding E.T. on Space.com
Template for calculating answers to the Drake Equation on PBS
The Olduvai Theory: Toward a Re-Equalizing of the World Standard of Living
The Berserker Hypothesis: The Darkest Explanation of the Fermi Paradox
Grabby Aliens website
Dark Forest theory: a terrifying explanation of why we haven't heard from aliens yet
The Dark Forest Hypothesis Is Absurd
'Zoo hypothesis' may explain why we haven't seen any space aliens
Where Is Everyone? 4 Possible Explanations for the Fermi Paradox at Singularity Hub
The Kardashev scale: classifying alien civilizations on Space.com
Kardashev Scale: What is it and where is Earth listed? on BBC Science Focus
Forecasting the progression of human civilization on the Kardashev Scale through 2060 with a machine learning approach in Scientific Reports
SETI: Musings on the Barrow Scale
A non-anthropocentric solution to the Fermi paradox in the International Journal of Astrobiology
Asymptotic burnout and homeostatic awakening: a possible solution to the Fermi paradox? in the Journal of the Royal Society
Planetary scientists suggest a solution to the Fermi paradox: superlinear scaling leading to a singularity on Phys.org
The Coming Technological Singularity: How to Survive in the Post-Human Era by Vernor Vinge
Novels with a focus on the Fermi Paradox / Great Filter? in r/printSF
Beyond "Fermi's Paradox" XVII: What is the "SETI-Paradox" Hypothesis?
SETI urged to fess up over alien signals
Steven Greer's website

Follow us on social: Facebook, Twitter, Bluesky

Other Podcasts by Derek DeWitt:
DIGITAL SIGNAGE DONE RIGHT - winner of a 2022 Gold Quill Award, 2022 Gold MarCom Award, 2021 AVA Digital Award Gold, 2021 Silver Davey Award, 2020 Communicator Award of Excellence, and on numerous top 10 podcast lists.
PRAGUE TIMES - A city is more than just a location; it's a kaleidoscope of history, places, people and trends. This podcast looks at Prague, in the center of Europe, from a number of perspectives, including what it is now, what it has been and where it's going. It's Prague THEN, Prague NOW, Prague LATER.
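Several of the links above concern the Drake Equation, which is simply a product of seven factors. As a quick illustration, here it is evaluated with placeholder parameter values chosen for illustration only (the episode does not endorse any particular numbers, and every factor is highly uncertain):

```python
# Drake equation: N = R* * f_p * n_e * f_l * f_i * f_c * L
# All values below are illustrative placeholders, not estimates from the show.
R_star = 1.5    # average star formation rate in the galaxy (stars/year)
f_p    = 0.9    # fraction of stars with planets
n_e    = 0.4    # habitable planets per planet-bearing star
f_l    = 0.1    # fraction of habitable planets where life arises
f_i    = 0.01   # fraction of those that develop intelligence
f_c    = 0.1    # fraction of those that emit detectable signals
L      = 10_000 # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations: {N:.2f}")
```

Because the factors multiply, pessimism in any single term (the "Great Filter" idea above) is enough to drive N toward zero.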

Code-Garage
Circuits #8 - L'architecture Von Neumann

Code-Garage

Play Episode Listen Later Jan 22, 2025 11:25


Quite simply the architecture underlying all of modern computing, invented in 1945 by John Von Neumann!
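The defining feature of that architecture (one memory holding both program and data, driven by a fetch-decode-execute loop) can be sketched in a few lines of Python; the tiny instruction set below is invented for illustration and is not from the episode:

```python
# Minimal stored-program machine: instructions and data live in the same
# memory, and the CPU repeatedly fetches, decodes, and executes.
def run(memory):
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]                 # fetch from the shared memory
        pc += 1
        if op == "LOAD":                     # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program occupies cells 0-3, data cells 4-6: compute memory[6] = memory[4] + memory[5]
memory = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
          4: 2, 5: 3, 6: 0}
print(run(memory)[6])  # → 5
```

Because the program is just data in memory, a program can in principle modify itself, which is exactly what made the 1945 design so flexible.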

Creative On Purpose
Casey von Neumann - Creative Productivity Club

Creative On Purpose

Play Episode Listen Later Jan 9, 2025 36:38


Creativity is a natural human impulse shared by all human beings (even those who deny this identity). How can we better leverage our creative instinct to be more productive and get closer to what we want in life, work, and our life's work? I can think of no one better to help answer this than this week's guest, Casey von Neumann. Join us for this lively, insightful, and inspiring discussion! Learn more about the difference Casey makes here and check out her Creative Productivity Club. Creative on Purpose features insightful conversations with inspiring difference-makers. Join the community and conversation by subscribing now. Go Further: Click here to learn about the Solopreneur Success Circle. Click here to learn about the Close the Gap 90-Day Solopreneur Success Accelerator. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit creativeonpurpose.substack.com/subscribe

Disintegrator
21. LIFE (w/ Blaise Agüera y Arcas)

Disintegrator

Play Episode Listen Later Nov 14, 2024 62:39


Blaise Agüera y Arcas is one of the most important people in AI, and apart from his leadership position as CTO of Technology & Society at Google, he has one of those resumes or affiliation lists that seems to span a lot of very fundamental things. He's amazing; the thoughtfulness and generosity with which he communicates on this episode gently embraced our brains while lasering them to mush. We hope you have the same experience. References include: Blaise's own books Who Are We Now?, Ubi Sunt, and the upcoming What Is Intelligence?; James C. Scott's Seeing Like a State, which we strongly recommend; Benjamin Peters' How Not to Network a Nation; and Red Plenty by Francis Spufford. Strong recommendation also to Benjamin Labatut's When We Cease to Understand the World. Roberto references Luciana Parisi's Abstract Sex (our favorite book!) and the work of Lynn Margulis with respect to biology and reproduction. Blaise references James E. Lovelock's project "Daisyworld" with respect to the Gaia hypothesis. He also references the Active Inference thesis, e.g. that of Karl J. Friston, and the work of Dan Sperber and Hugo Mercier on reason. The cellular automata work referenced here involves the Von Neumann cellular automaton and the Wolfram neural cellular automaton. Wish us a happy 1-year anniversary of the pod!
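Von Neumann's self-reproducing cellular automaton uses a 29-state rule set far too large to reproduce here; as a much simpler stand-in for the same idea (simple local rules producing complex global behavior), here is Wolfram's elementary Rule 110, which is not Von Neumann's rule set:

```python
# Each cell's next state depends only on itself and its two neighbors.
# The 8-bit rule number encodes the output for each of the 8 neighborhoods.
def step(cells, rule=110):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31 + [1] + [0] * 31   # start from a single live cell
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Rule 110 is itself Turing-complete, which is part of why cellular automata keep reappearing in discussions of computation and life.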

Supreme Court Opinions
Culley v. Marshall

Supreme Court Opinions

Play Episode Listen Later Oct 2, 2024 47:37


Welcome to Supreme Court Opinions. In this episode, you'll hear the Court's opinion in Culley v Marshall. In this case, the court considered this issue: What test must a district court apply when determining whether and when a post-deprivation hearing is required under the Due Process Clause? The case was decided on May 9, 2024. The Supreme Court held that in civil forfeiture cases involving personal property, the Due Process Clause requires a timely forfeiture hearing but does not require a separate preliminary hearing. Justice Brett Kavanaugh authored the 6-3 majority opinion of the Court. The Due Process Clause of the Fourteenth Amendment generally requires notice and a hearing before the government seizes property. However, the Court's precedents differentiate between real property, which can be neither moved nor concealed, and personal property, which risks being removed, destroyed, or concealed before a civil forfeiture hearing. Thus, as the Court recognized in United States v $8,850 and United States v Von Neumann, a timely post-seizure forfeiture hearing provides the constitutionally required process after seizing personal property. For such personal property, a separate preliminary hearing before the forfeiture hearing is not required. In contrast, in United States v James Daniel Good Real Property, the Court held that the government must ordinarily provide notice and a hearing before seizing real property that is subject to civil forfeiture. Here, the property subject to forfeiture is a vehicle—personal property—so a timely post-seizure forfeiture hearing is all the Due Process Clause requires.
Justice Neil Gorsuch authored a concurring opinion, in which Justice Clarence Thomas joined, agreeing in large part with the majority's reasoning and conclusions but writing separately to highlight some of the many larger questions this decision leaves unresolved about whether, and to what extent, contemporary civil forfeiture practices can be squared with the Constitution's promise of due process. Justice Sonia Sotomayor authored a dissenting opinion, in which Justices Elena Kagan and Ketanji Brown Jackson joined. Justice Sotomayor argued that the majority's opinion is too broad and prevents “lower courts from addressing myriad abuses of the civil forfeiture system.” She, on the other hand, “would have decided only which due process test governs whether a retention hearing is required and left it to the lower courts to apply that test to different civil forfeiture schemes.” The opinion is presented here in its entirety, but with citations omitted. If you appreciate this episode, please subscribe. --- Support this podcast: https://podcasters.spotify.com/pod/show/scotus-opinions/support

The Science in The Fiction
Ep 34: David Brin on First Contact in 'Existence'

The Science in The Fiction

Play Episode Listen Later Jul 20, 2024 64:12


Marty and Holly speak with David Brin, science fiction icon, scientist, futurist and civilizational optimist.  We discuss his particular view of first contact with extraterrestrial intelligence, as portrayed in his 2012 novel 'Existence', along with his predictions about how artificial intelligence and virtual reality will change our world in the near future.  We discuss the UFO phenomenon (a sophisticated form of cat lasers for us to chase) and the unspeakably rude behaviour of these hypothetical silvery teaser punks.  David speaks directly to the artificial intelligences and possibly alien intelligences who may be inveigled in our internet.  We talk about Cixin Liu's 'The Three Body Problem' (there is no three body problem), the likely prevalence of life in the universe (90% of star systems), the Fermi Paradox, SETI, METI, and various forms that first contact with alien civilizations may take, among them Von Neumann machines and artificial alien intelligences stored in 'envoy eggs' orbiting our planet for millions of years. David tells us how to make the most powerful telescope in the universe, by turning the Kuiper Belt into a solar system sized lens.  Finally, he implores us to fight back against the ingrate habit of cynicism and pessimism rotting our global civilization today, and declares "I'm proud as hell and nothing can stop us! ... 
Be citizens of wonder, help save a good civilization."

David Brin's webpage: https://www.davidbrin.com/
'Existence' by David Brin: https://www.davidbrin.com/existence.html
Video trailer for David Brin's 'Existence': https://www.youtube.com/watch?v=ANVT0hYbAfE
David Brin's 'Colony High' series: https://www.davidbrin.com/colonyhigh.html
David Brin's 'Out of Time' series: https://www.davidbrin.com/outoftime.html
David Brin's advice to new writers: https://www.davidbrin.com/nonfiction/advice.html
David Brin on UFOs: https://www.forbes.com/sites/calumchace/2023/01/25/why-are-ufos-still-blurry-a-conversation-with-david-brin/
David Brin on why METI is a bad idea: https://www.davidbrin.com/nonfiction/meti.html
NASA Innovative Advanced Concepts: https://www.nasa.gov/stmd-the-nasa-innovative-advanced-concepts-niac/
The B612 Foundation: https://b612foundation.org/
An Invitation to Extraterrestrial Intelligence: https://ieti.org/
Buzzsprout (podcast host): https://thescienceinthefiction.buzzsprout.com
Email: thescienceinthefiction@gmail.com
Facebook: https://www.facebook.com/groups/743522660965257/
Twitter: https://twitter.com/MartyK5463

Yo Tenía Un Juego
57 - John von Neumann y Klára Dán un matrimonio de listos

Yo Tenía Un Juego

Play Episode Listen Later Jul 6, 2024 67:56


John von Neumann and Klára Dán were not just intelligent, but extraordinarily brilliant. Von Neumann laid the foundations of computer science, artificial intelligence, and other fields. For her part, Klára Dán, a figure-skating champion at 14 with no higher education, was in charge of supervising, writing, and programming the software for the first computerized weather forecast in history. Their story is the bomb, and a nuclear one at that. Contact us at: www.yoteniaunjuego.com YouTube: https://www.youtube.com/yoteniaunjuego Instagram: @yoteniaunjuego Telegram: https://t.me/+5pJsdDcxPWM3MWJk Twitter: @yoteniaunjuego Facebook: https://www.facebook.com/yoteniaunjuego E-mail: yoteniaunjuego@gmail.com Intro: All Of My Angels (Machinae Supremacy) Outro: Pieces (Machinae Supremacy)

Robinson's Podcast
210 - David Albert & Tim Maudlin: A Discussion of Niels Bohr, Measurement, & Quantum Mechanics

Robinson's Podcast

Play Episode Listen Later Jun 2, 2024 123:39


Patreon: https://bit.ly/3v8OhY7 David Albert is the Frederick E. Woodbridge Professor of Philosophy at Columbia University, director of the Philosophical Foundations of Physics program at Columbia, and a faculty member of the John Bell Institute for the Foundations of Physics. Tim Maudlin is Professor of Philosophy at NYU and Founder and Director of the JBI. This is David's seventh appearance on Robinson's Podcast. He last appeared on episode 189 with Barry Loewer to talk about the Mentaculus, their joint project on the foundations of statistical mechanics. This is Tim's sixth appearance on the show. He last appeared on episode 188 with Sheldon Goldstein to discuss Bohmian mechanics. Tim and David last joined Robinson together for episode 67, which gave an overview of the foundations of quantum mechanics. In this episode, Robinson, David, and Tim talk about the measurement problem, the role of philosophy in physics, various thought experiments, like Schrödinger's cat and Wigner's friend, and Niels Bohr's effects both on quantum mechanics and the philosophy of science. If you're interested in the foundations of physics, then please check out the JBI, which is devoted to providing a home for research and education in this important area. Any donations are immensely helpful at this early stage in the institute's life. A Guess at the Riddle: https://a.co/d/6qcsidl Tim's Website: www.tim-maudlin.site The John Bell Institute: https://www.johnbellinstitute.org OUTLINE 00:00 Introduction 04:04 Einstein, Bell, and Pearl on the Measurement Problem 13:00 On “Measurement” in Quantum Mechanics 25:34 What IS the Measurement Problem? 
34:42 John Bell on the Measurement Problem 40:32 An Example of the Measurement Problem 43:08 Von Neumann on the Measurement Problem 45:38 Niels Bohr and the Measurement Problem 57:54 Niels Bohr's Drastic Revision of Physics 1:08:36 Quantum Measurement and the Philosophy of Physics 1:22:52 On Schrodinger's Cat and Wigner's Friend 1:38:34 On Consciousness and Quantum Mechanics 1:45:40 The Measurement Problem, Solved? 1:51:04 On the Role of Philosophy in Physics Robinson's Website: http://robinsonerhardt.com Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University. Join him in conversations with philosophers, scientists, and everyone in-between.  --- Support this podcast: https://podcasters.spotify.com/pod/show/robinson-erhardt/support

Eye On A.I.
#184 André van Schaik: Building An Artificial Brain With 100 Billion Neurons

Eye On A.I.

Play Episode Listen Later May 1, 2024 46:02


Dive into the cutting-edge realm of neuromorphic computing with André van Schaik, a professor of electrical engineering at Western Sydney University and director of the International Centre for Neuromorphic Systems in Penrith, New South Wales, Australia. In this episode of Eye on AI, André unveils the capabilities of DeepSouth, an innovative brain-scaled neuromorphic computing system designed to simulate up to 100 billion neurons in real time. Discover how DeepSouth leverages spiking neurons and synapses to process information more efficiently than traditional AI models, and how this technology could transform our understanding of brain computation and unlock new AI architectures. The conversation explores the unique hardware setup of DeepSouth, utilizing FPGAs (Field-Programmable Gate Arrays) for a flexible, reconfigurable approach that mimics the asynchronous and spiking communication of biological neurons. André discusses the initial testing phase focusing on balanced excitation-inhibition networks, reflecting common neural activities in the human cortex, and outlines the system's potential to facilitate large-scale simulations previously unachievable due to computational constraints. André's insights are invaluable for anyone interested in the intersection of neuroscience, artificial intelligence, and computational technology. Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest breakthroughs and discussions in the world of artificial intelligence. This episode is sponsored by Oracle. AI is revolutionizing industries, but needs power without breaking the bank. Enter Oracle Cloud Infrastructure (OCI): the one-stop platform for all your AI needs, with 4-8x the bandwidth of other clouds. Train AI models faster and at half the cost. Be ahead like Uber and Cohere.
If you want to do more and spend less like Uber, 8x8, and Databricks Mosaic - take a free test drive of OCI at https://oracle.com/eyeonai Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Preview and Introduction (02:19) André van Schaik's background (03:39) What is neuromorphic computing? (05:45) Differences between Von Neumann and neuromorphic architectures (09:27) How DeepSouth simulates neurons (12:20) What are FPGAs? (16:42) Current status of DeepSouth (19:04) Running neural network architectures on DeepSouth (22:33) DeepSouth as an open source, commercial hardware (24:40) Potential for cheaper model training (28:35) Number of neurons and connections in DeepSouth (30:01) Power consumption comparison (34:21) Mimicking brain structures in DeepSouth (35:42) André van Schaik's background in neuroscience (39:12) Goals of understanding brain activity vs solving problems (41:44) Interest from AI research community (43:03) Summary of DeepSouth's goals  
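The spiking neurons discussed above can be illustrated with the textbook leaky integrate-and-fire model. This is a generic sketch of that standard model, not DeepSouth's actual neuron implementation, which the episode does not specify:

```python
# Leaky integrate-and-fire: membrane potential leaks over time, integrates
# input current, and emits a discrete spike (then resets) when it crosses
# a threshold. Events, not dense matrix math, carry the information.
def lif(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current        # leaky integration of input current
        if v >= threshold:            # fire and reset on threshold crossing
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif([0.3, 0.4, 0.5, 0.1, 0.6, 0.7]))  # → [0, 0, 1, 0, 0, 1]
```

Because a neuron only communicates when it spikes, large stretches of the network stay silent and consume little power, which is the efficiency argument made in the episode.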

All Shows Feed | Horse Radio Network
Training Buzz: Systematic Training with Felicitas von Neumann-Cosel - Dressage Today Podcast

All Shows Feed | Horse Radio Network

Play Episode Listen Later Apr 28, 2024 8:50


Welcome to the Training Buzz, sponsored by Purina. Today we hear from accomplished Grand Prix rider and trainer Felicitas von Neumann-Cosel. At a clinic, before taking to the saddle, she talks about the importance of systematic training and understanding the horse's body parts before you can get the horse to stretch and use his back properly. Members of Equestrian+ can watch more videos with Felicitas von Neumann-Cosel here. Not a member? Sign up for a free trial subscription. Enter DTPODCAST at checkout to save 15% on your first month. Attention horse owners: are you looking to help your horse recover with ease after a strenuous workout? Would you like to nourish your horse and their digestive system? If so, try the new Purina RepleniMash product. It's much more than a mash. RepleniMash promotes hydration, replenishes electrolytes and supports gastric comfort. Put Purina's research to the test: stop into your local Purina retailer and grab a bag of Purina RepleniMash product. Website: https://dressagetoday.com Video subscription site: https://www.equestrianplus.com Social media links: Facebook: https://www.facebook.com/DressageToday Instagram: @DressageToday Pinterest: @DressageToday

Dressage Today Podcast
Training Buzz: Systematic Training with Felicitas von Neumann-Cosel

Dressage Today Podcast

Play Episode Listen Later Apr 28, 2024 8:50


Same episode description as the All Shows Feed | Horse Radio Network listing above.

Zimmerman en Space
Europa Clipper en Voyager 1 (of andersom)

Zimmerman en Space

Play Episode Listen Later Apr 28, 2024 20:21


Two topics in one somewhat long episode. Hopefully you'll listen all the way through, but of course you don't have to. Topic 1 is the impressive remote repair of the Voyager 1 spacecraft. And topic 2 is the "souvenir" that will travel along with the yet-to-be-launched Europa Clipper spacecraft.
Voyager: https://voyager.jpl.nasa.gov/
Voyager status: https://voyager.jpl.nasa.gov/mission/status/
NAND to Tetris: https://www.nand2tetris.org/
Or otherwise, have fun with this: https://nandgame.com/
Von Neumann architecture: https://nl.wikipedia.org/wiki/Von_Neumann-architectuur
Deep Space Network: https://eyes.nasa.gov/dsn/dsn.html
Voyager Telecommunications: https://voyager.gsfc.nasa.gov/Library/DeepCommo_Chapter3--141029.pdf
Computers in Spaceflight (pdf): https://ntrs.nasa.gov/api/citations/19880069935/downloads/19880069935_Optimized.pdf
Europa Clipper: https://europa.nasa.gov/
Poem by Ada Limón: https://www.youtube.com/watch?v=EgWbeDNPD6o
#GoEuropaClipper
The Zimmerman en Space podcast is licensed under a Creative Commons CC0 1.0 license.
http://creativecommons.org/publicdomain/zero/1.0

Game Theory
107. John Von Neumann - Father of Game Theory, Nuclear Scientist, Super Genius

Game Theory

Play Episode Listen Later Mar 28, 2024 59:46


In this episode, Nick and Chris discuss their hiatus and receive feedback on their Match Day episode. They then introduce John von Neumann, a mathematician, physicist, computer scientist, and polymath who made significant contributions to game theory. They discuss his biography, academic career, and collaborations with other intellectual giants, and highlight his work on the Manhattan Project and his obsession with game theory. The episode concludes with a humorous anecdote about von Neumann's clap back to his wife. The conversation explores the perspectives and contributions of John von Neumann, a mathematician and physicist known for his work in game theory and nuclear deterrence. Von Neumann's view of chess as a well-defined form of computation is discussed, highlighting the distinction between strategy and tactics. They also delve into the mechanical properties of the universe and the role of bluffing and deception in chess and real life. Von Neumann's life's work in game theory, including the minimax theorem and the cake distribution problem, is explored. Additionally, his involvement in missile development and his impact on national defense strategy are examined. The conversation concludes by addressing some unsavory aspects of von Neumann's life. Takeaways: John von Neumann was a brilliant mathematician, physicist, and computer scientist who made significant contributions to game theory. He collaborated with other intellectual giants, such as Einstein and Bohr, and played a key role in the Manhattan Project. Von Neumann's work on game theory revolutionized the field and has applications in economics, decision-making, and military strategy. His obsession with game theory led him to develop groundbreaking concepts and models. Despite his brilliance, von Neumann had a humorous side, as seen in his clap back to his wife. Chess can be seen as a well-defined form of computation, while real life involves bluffing and deception. 
Game theory provides a framework for decision-making and optimizing strategies in various situations. Von Neumann's work in game theory and nuclear deterrence had a significant impact on national defense strategies. The distinction between strategy and tactics is crucial in understanding complex systems and decision-making. Von Neumann's contributions to mathematics and physics continue to shape our understanding of the world.
Chapters:
00:00 Introduction and Welcome Back
01:04 Discussion on Medical Match Day
05:49 Feedback on Match Day Episode
07:11 Introduction to John von Neumann
09:17 Biographical Information on John von Neumann
11:31 Contributions of John von Neumann
20:27 Collaboration with Other Intellectual Giants
24:29 Casual Conversations with Einstein and Bohr
25:22 Obsession with Game Theory
26:15 Von Neumann's Clap Back
26:51 Von Neumann's Perspective on Chess and Games
27:43 The Intellectual Period and the Predictability of the Universe
29:06 Mechanical Properties of the Universe
30:03 Chess as a Well-Defined Form of Computation
31:28 Bluffing and Deception in Chess and Real Life
33:09 The Role of Game Theory in Decision-Making
34:35 Von Neumann's Life's Work: Minimax Theory
37:07 The Cake Distribution Problem
41:57 Von Neumann's Work on Nuclear Deterrence
46:01 Von Neumann's Role in Missile Development
51:45 Von Neumann's Distinction Between Strategy and Tactics
57:23 Unsavory Aspects of Von Neumann's Life
Links:
John von Neumann Wiki: https://en.wikipedia.org/wiki/John_von_Neumann
Minimax Theorem: https://en.wikipedia.org/wiki/Minimax_theorem#cite_note-1
Theory of Games and Economic Behavior: https://press.princeton.edu/books/paperback/9780691130613/theory-of-games-and-economic-behavior
Klara Dan von Neumann: https://en.wikipedia.org/wiki/Kl%C3%A1ra_D%C3%A1n_von_Neumann#:~:text=Kl%C3%A1ra%20D%C3%A1n%20von%20Neumann%20(born,style%20code%20on%20a%20computer. 
Reddit Thread on JVN's Contribution to the Nash Equilibrium https://www.reddit.com/r/math/comments/kkvz9e/how_exactly_did_nashs_paper_on_game_theory_differ/?rdt=62998&onetap_auto=true --- Send in a voice message: https://podcasters.spotify.com/pod/show/gametheory/message
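The minimax theorem discussed in the episode has a concrete computational reading. As a hedged illustration (my own sketch, not material from the show), here is the maximin/minimax calculation on a small zero-sum payoff matrix that happens to have a saddle point, so the two values coincide:

```python
# Illustrative sketch of the maximin/minimax idea behind von Neumann's
# theorem, on a small zero-sum game with a saddle point.

# Payoffs to the row player; the column player receives the negation.
payoffs = [
    [3, 2, 4],
    [1, 0, 2],
    [5, 2, 3],
]

# Row player: for each row, assume the column player responds to minimize,
# then pick the row that maximizes that worst case.
maximin = max(min(row) for row in payoffs)

# Column player: for each column, assume the row player responds to maximize,
# then pick the column that minimizes that best case.
cols = list(zip(*payoffs))
minimax = min(max(col) for col in cols)

print(maximin, minimax)  # a saddle point: both equal 2
```

In games without a saddle point (such as matching pennies), the two pure-strategy values differ, and von Neumann's theorem says they can still be made equal by allowing mixed (randomized) strategies.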

SILDAVIA
John von Neumann

SILDAVIA

Play Episode Listen Later Mar 12, 2024 11:10


The present day could not be what it is without its forerunners establishing principles or inventing devices that are still in use today. The device you are listening to me on would not be what it is without the influence of John von Neumann. Let me tell you his story. Did you know that this Hungarian-American mathematician was responsible for designing modern computers, creating game theory, and taking part in the atomic bomb project? Indeed, von Neumann was a multifaceted man who left his mark on many fields of knowledge. Let's get to know him a little better! John von Neumann was born in Budapest in 1903, into a wealthy Jewish family. From childhood he showed prodigious intelligence, able to memorize entire pages of books and speak several languages. He studied mathematics, physics and chemistry at the best universities in Europe, and soon became an expert in functional analysis, set theory and quantum physics. In 1930 he moved to the United States, where he became a citizen and began working at Princeton University and the Institute for Advanced Study. There he met Albert Einstein and other great scientists, with whom he collaborated on various projects. One of them was the development of computing, building on the work of Alan Turing. Von Neumann devised the architecture that bears his name: a computer with a memory capable of storing both data and instructions, and a central unit that executes the operations. This architecture is the one used today in almost all electronic devices, from PCs to smartphones. But von Neumann did not stop there. He also applied his mathematical knowledge to the study of human behavior, creating game theory. This theory models situations in which several agents make decisions that affect one another, each seeking the best possible outcome. 
Game theory has applications in economics, politics, biology, psychology and many other disciplines. Von Neumann was the first to prove the famous minimax theorem, which states that in every zero-sum game (that is, one where whatever one player gains, another loses) there is an optimal strategy for each player. During the Second World War, von Neumann took an active part in the American war effort, advising the government and the military on topics such as ballistics, hydrodynamics and cryptography. He was also part of the Manhattan Project, the secret program that developed the atomic bomb. Von Neumann contributed to the design of the detonator and to the numerical simulations of the nuclear fission process. After the war, he continued working in the field of nuclear weapons, advocating the construction of the hydrogen bomb and the strategy of deterrence. Von Neumann was a brilliant but also controversial man. Some criticized him for his involvement in the arms race and his support of McCarthyism. Others admired him for his creativity and versatility. What is certain is that von Neumann was one of the most influential scientists of the twentieth century, and his ideas remain relevant today. His legacy is enormous, ranging from computer science to artificial intelligence, by way of chaos theory, cellular automata and cybernetics. Von Neumann died in 1957, at age 53, of pancreatic cancer. His funeral drew a large crowd, including President Eisenhower and other senior officials. His grave is in the Princeton cemetery, alongside those of other illustrious colleagues. His name appears on numerous prizes, institutions and scientific concepts. His life was an intellectual adventure without equal. I hope you enjoyed this look at the career of John von Neumann, the mathematician who revolutionized computing and much more. 
If you want to know more about him, I recommend reading his biography, written by Norman Macrae, or watching the BBC documentary The Bomb and John von Neumann. You can read more and comment on my website, at the direct link: https://luisbermejo.com/elegir-un-nombre-zz-podcast-05x28/
You can find me and comment, or send your message or question, at:
WhatsApp: +34 613031122
Paypal: https://paypal.me/Bermejo
Bizum: +34613031122
Web: https://luisbermejo.com
Facebook: https://www.facebook.com/ZZPodcast/
X (twitters): https://x.com/LuisBermejo and https://x.com/zz_podcast
Instagrams: https://www.instagram.com/luisbermejo/ and https://www.instagram.com/zz_podcast/
Telegram channel: https://t.me/ZZ_Podcast
WhatsApp channel: https://whatsapp.com/channel/0029Va89ttE6buMPHIIure1H
Signal group: https://signal.group/#CjQKIHTVyCK430A0dRu_O55cdjRQzmE1qIk36tCdsHHXgYveEhCuPeJhP3PoAqEpKurq_mAc
WhatsApp group: https://chat.whatsapp.com/FQadHkgRn00BzSbZzhNviT and https://chat.whatsapp.com/BNHYlv0p0XX7K4YOrOLei0
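The von Neumann architecture described above (one memory holding both data and instructions, plus a central unit that fetches and executes them) can be sketched in a few lines. This is a toy illustration of my own, not material from the episode:

```python
# A toy stored-program machine: instructions and data share one memory,
# and a central unit fetches, decodes, and executes them one at a time.

memory = [
    ("LOAD", 6),    # 0: acc = mem[6]
    ("ADD", 7),     # 1: acc += mem[7]
    ("STORE", 8),   # 2: mem[8] = acc
    ("HALT", None), # 3: stop
    None, None,     # 4-5: unused
    40, 2,          # 6-7: data words, living in the same memory
    0,              # 8: result goes here
]

acc, pc = 0, 0
while True:
    op, addr = memory[pc]  # fetch and decode
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[8])  # 42
```

Because instructions are just memory contents, a program could in principle rewrite its own code, one of the ideas that distinguished the stored-program design from earlier fixed-program machines.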

Universe Today Podcast
[Interview] How Close Are We To Self-Replicating Robots Conquering Space

Universe Today Podcast

Play Episode Listen Later Mar 4, 2024 61:46


Self-replicating space robots seem like an obvious way to explore the Universe. How close are we to such a scenario, and what do we need to fill outer space with von Neumann probes? We figure it out with Professor Alex Ellery.
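The appeal of self-replicating probes is the exponential arithmetic behind them. As a rough, hedged sketch (my own illustrative numbers, not Professor Ellery's), suppose each probe builds two copies per generation:

```python
# Back-of-the-envelope growth of a self-replicating probe population:
# if every probe builds two copies each generation, the fleet triples
# per generation, so coverage grows exponentially.

copies_per_generation = 2  # assumed, purely illustrative
probes = 1                 # start with a single seed probe

for generation in range(1, 11):
    probes += probes * copies_per_generation  # each probe adds its copies
    print(f"generation {generation}: {probes} probes")
```

After ten generations the single seed has become 3^10 = 59,049 probes, which is why even slow replication rates dominate any fixed-size fleet over long timescales.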

Bittensor Guru
Episode 10 - Cellular Automata

Bittensor Guru

Play Episode Listen Later Jan 9, 2024 17:45


Thank you to everyone who helped Bittensor Guru reach 7% of the network with nearly 369,000 Tao! In this episode, I share a dream I had on Christmas morning and some synchronicities that are leading to our enthusiastic development of the cellular automata subnet on Bittensor.
https://en.wikipedia.org/wiki/John_von_Neumann
https://www.rule30prize.org/
https://cs.stanford.edu/people/eroberts/courses/soco/projects/2001-02/cellular-automata/beyond/ca.html
https://www.youtube.com/watch?v=C2vgICfQawE&t=261s - 4:21 for 1221 synchronicity
https://x.com/RobertKennedyJr/status/1743493173845115214?s=20
https://www.youtube.com/watch?v=D18GrKTAmog
https://www.amazon.com/Time-Loops-Precognition-Retrocausation-Unconscious/dp/1938398920
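For readers curious about the Rule 30 link above: an elementary cellular automaton updates each cell from its own state and the states of its two neighbors, and Rule 30 is the update table whose bits spell out the number 30. A minimal sketch of my own (not code from the episode):

```python
# Rule 30 elementary cellular automaton: each cell's next state is the bit
# of the number 30 selected by the 3-cell neighborhood (left, center, right).

RULE = 30
WIDTH, STEPS = 31, 8

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [
        # neighborhood encoded as a 3-bit index, wrapping at the edges
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Despite the trivial rule, the center column behaves so unpredictably that it has been used as a pseudorandom source, which is what the Rule 30 prize is about.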

Data Engineering Podcast
Pushing The Limits Of Scalability And User Experience For Data Processing WIth Jignesh Patel

Data Engineering Podcast

Play Episode Listen Later Jan 7, 2024 50:26


Summary Data processing technologies have dramatically improved in their sophistication and raw throughput. Unfortunately, the volumes of data that are being generated continue to double, requiring further advancements in the platform capabilities to keep up. As the sophistication increases, so does the complexity, leading to challenges for user experience. Jignesh Patel has been researching these areas for several years in his work as a professor at Carnegie Mellon University. In this episode he illuminates the landscape of problems that we are faced with and how his research is aimed at helping to solve these problems. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Your host is Tobias Macey and today I'm interviewing Jignesh Patel about the research that he is conducting on technical scalability and user experience improvements around data management Interview Introduction How did you get involved in the area of data management? 
Can you start by summarizing your current areas of research and the motivations behind them? What are the open questions today in technical scalability of data engines? What are the experimental methods that you are using to gain understanding in the opportunities and practical limits of those systems? As you strive to push the limits of technical capacity in data systems, how does that impact the usability of the resulting systems? When performing research and building prototypes of the projects, what is your process for incorporating user experience into the implementation of the product? What are the main sources of tension between technical scalability and user experience/ease of comprehension? What are some of the positive synergies that you have been able to realize between your teaching, research, and corporate activities? In what ways do they produce conflict, whether personally or technically? What are the most interesting, innovative, or unexpected ways that you have seen your research used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on research of the scalability limits of data systems? What is your heuristic for when a given research project needs to be terminated or productionized? What do you have planned for the future of your academic research? Contact Info Website (https://jigneshpatel.org/) LinkedIn (https://www.linkedin.com/in/jigneshmpatel/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. 
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story. To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links:
Carnegie Mellon University (https://www.cmu.edu/)
Parallel Databases (https://en.wikipedia.org/wiki/Parallel_database)
Genomics (https://en.wikipedia.org/wiki/Genomics)
Proteomics (https://en.wikipedia.org/wiki/Proteomics)
Moore's Law (https://en.wikipedia.org/wiki/Moore%27s_law)
Dennard Scaling (https://en.wikipedia.org/wiki/Dennard_scaling)
Generative AI (https://en.wikipedia.org/wiki/Generative_artificial_intelligence)
Quantum Computing (https://en.wikipedia.org/wiki/Quantum_computing)
Voltron Data (https://voltrondata.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/voltron-data-apache-arrow-episode-346/)
Von Neumann Architecture (https://en.wikipedia.org/wiki/Von_Neumann_architecture)
Two's Complement (https://en.wikipedia.org/wiki/Two%27s_complement)
Ottertune (https://ottertune.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/ottertune-database-performance-optimization-episode-197/)
dbt (https://www.getdbt.com/)
Informatica (https://www.informatica.com/)
Mozart Data (https://mozartdata.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/mozart-data-modern-data-stack-episode-242/)
DataChat (https://datachat.ai/)
Von Neumann Bottleneck (https://www.techopedia.com/definition/14630/von-neumann-bottleneck)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra 
(http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)

Broken Silicon
235. PlayStation 6 AI, Nvidia, AMD Hawk Point, Intel Meteor Lake | Game AI Developer

Broken Silicon

Play Episode Listen Later Dec 12, 2023 167:25


A Gaming AI Dev joins to discuss what hardware you'll need to power next gen games! [SPON: Use ''brokensilicon30'' for $30 OFF $500+ Flexispot Orders: https://bit.ly/3RcyPla ] [SPON: “brokensilicon” at CDKeyOffer Black Friday: https://www.cdkeyoffer.com/cko/Moore10 ] [SPON: Get 10% off Tasty Vite Ramen with code BROKENSILICON: https://bit.ly/3wKx6v1 ] #blackfriday #windows11 0:00 Getting to know our guest, how to get into AI 5:15 What is Pygmalion building to change gaming? 11:31 The Next 2D - 3D Moment for Gaming could be Neural Engine AI 20:44 AMD Hawk Point and the Importance of TOPs in APUs 27:30 Intel Meteor Lake's NPU – Does it matter if it's weaker than AMD? 33:03 AMD vs Qualcomm Snapdragon Elite X 40:45 Intel's AVX-512 & NPU Adoption Problem with AI... 53:01 Predicting how soon we'll get Next Gen AI in Games 1:00:45 Can the PS5 run Next Gen AI? …what about the XSS? 1:16:26 How might the PlayStation 6 do AI? 1:27:19 AMD's Advancing AI Event & ROCm, Nvidia's AI Advantage 1:50:20 Intel AI – Are they behind? Will RDNA 4 be big for AI? 2:03:19 Will future APUs be as strong as H100? When will the AI bubble pop? 2:15:22 Will AI hurt Gaming long term? 
2:33:14 AI Ethics and AI's impact on Artists $400 RX 6800: https://amzn.to/3uRsLqX Main Domain (pygmalion.ai is not them): https://pygmalion.chat/ Discord with Active Devs: https://discord.com/invite/pygmalionai AI engine github: https://github.com/PygmalionAI/aphrodite-engine Guest's github (very new): https://github.com/IsaiahGossner Main github for the project: https://github.com/PygmalionAI Their hugging face, where actual models are stored: https://huggingface.co/PygmalionAI https://www.amd.com/en/newsroom/press-releases/2023-12-6-amd-showcases-growing-momentum-for-amd-powered-ai-.html https://www.servethehome.com/wp-content/uploads/2023/12/AMD-Instinct-MI300-Launch_Page_50.jpg https://www.servethehome.com/wp-content/uploads/2023/12/AMD-Instinct-MI300-Launch_Page_49.jpg Bryan Heemskerk AI Episode: https://youtu.be/NDEka3tBE1g?si=pd6_xNPgMxo7Jltd https://www.youtube.com/watch?v=mCxHcvtpfAk&ab_channel=Moore%27sLawIsDead https://en.wikipedia.org/wiki/Von_Neumann_architecture https://www.trendhunter.com/trends/playstation-6-concept

The Nonlinear Library
LW - New LessWrong feature: Dialogue Matching by jacobjacob

The Nonlinear Library

Play Episode Listen Later Nov 16, 2023 5:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New LessWrong feature: Dialogue Matching, published by jacobjacob on November 16, 2023 on LessWrong. The LessWrong team is shipping a new experimental feature today: dialogue matching! I've been leading work on this (together with Ben Pace, kave, Ricki Heicklen, habryka and RobertM), so wanted to take some time to introduce what we built and share some thoughts on why I wanted to build it. New feature! There's now a dialogue matchmaking page at lesswrong.com/dialogueMatching Here's how it works: You can check a user you'd potentially be interested in having a dialogue with, if they were too. They can't see your checks unless you match. It also shows you some interesting data: your top upvoted users over the last 18 months, how much you agreed/disagreed with them, what topics they most frequently commented on, and what posts of theirs you most recently read. Next, if you find a match, this happens: You get a tiny form asking for topic ideas and format preferences, and then we create a dialogue that summarises your responses and suggests next steps based on them. Currently, we're mostly sourcing auto-suggested topics from Ben's neat poll where people voted on interesting disagreement they'd want to see debated, and also stated their own views. I'm pretty excited to further explore this and other ways for auto-suggesting good topics. My hypothesis is that we're in a bit of a dialogue overhang: there are important conversations out there to be had, but that aren't happening. We just need to find them. This feature is an experiment in making it easier to do many of the hard steps in having a dialogue: finding a partner, finding a topic, and coordinating on format. To try the Dialogue Matching feature, feel free to head over to lesswrong.com/dialogueMatching! 
Me and the team are super keen to hear any and all feedback. Feel free to share in comments below or using the intercom button in the bottom right corner :) Why build this? A retreat organiser I worked with long ago told me: "the most valuable part of an event usually aren't the big talks, but the small group or 1-1 conversations you end up having in the hallways between talks." I think this points at something important. When Lightcone runs events, we usually optimize the small group experience pretty hard. In fact, when building and renovating our campus Lighthaven, we designed it to have lots of little nooks and spaces in order to facilitate exactly this kind of interaction. With dialogues, I feel like we're trying to enable an interaction on LessWrong that's also more like a 1-1, and less like a broadcasting talk to an audience. But we're doing so with two important additions: Readable artefacts. Usually the results of a 1-1 are locked in with the people involved. Sometimes that's good. But other times, Dialogues enable a format where good stuff that came out of it can be shared with others. Matchmaking at scale. Being a good event organiser involves a lot of effort to figure out who might have valuable conversations, and then connecting them. This can often be super valuable (thought experiment: imagine introducing Von Neumann and Morgenstern), but takes a lot of personalised fingertip feel and dinner host mojo. Using dialogue matchmaking, I'm curious about a quick experiment to try doing this at scale, in an automated way. Overall, I think there's a whole class of valuable content here that you can't even get out at all outside of a dialogue format. The things you say in a talk are different from the things you'd share if you were being interviewed on a podcast, or having a conversation with a friend. Suppose you had been mulling over a confusion about AI. 
Your thoughts are nowhere near the point where you could package them into a legible, ordered talk and then go present them. So, what do you do? I think...

All Shows Feed | Horse Radio Network
Training Buzz: Proper contact with Isabelle von Neumann-Cosel - Dressage Today Podcast

All Shows Feed | Horse Radio Network

Play Episode Listen Later Nov 12, 2023 9:11


Today's Training Buzz features Isabelle von Neumann-Cosel. She is the sister of well-known U.S. dressage rider Felicitas von Neumann-Cosel and the cousin of Susanne von Dietze, columnist for Practical Horseman. Isabelle works as a journalist, author, dressage rider and trainer with a special interest in biomechanics, seat position and the functioning of the aids for riders at every level. In this clip, Isabelle discusses proper contact and how a lack of positive tension will create boredom or insecurity in some horses who want to feel the rider's support. Members of Dressage Today OnDemand can watch the full video (and many more with Isabelle von Neumann-Cosel) here. Not a member? Sign up for a free trial subscription. Enter DTPODCAST at checkout to save 15%.
Website: https://dressagetoday.com
Video Subscription Site: https://ondemand.dressagetoday.com/catalog
Social Media Links:
Facebook: https://www.facebook.com/DressageToday
Instagram: @DressageToday
Twitter: @DressageToday
Pinterest: @DressageToday
Email: sruff@equinenetwork.com

Dressage Today Podcast
Training Buzz: Proper contact with Isabelle von Neumann-Cosel

Dressage Today Podcast

Play Episode Listen Later Nov 12, 2023 9:11


Today's Training Buzz features Isabelle von Neumann-Cosel. She is the sister of well-known U.S. dressage rider Felicitas von Neumann-Cosel and the cousin of Susanne von Dietze, columnist for Practical Horseman. Isabelle works as a journalist, author, dressage rider and trainer with a special interest in biomechanics, seat position and the functioning of the aids for riders at every level. In this clip, Isabelle discusses proper contact and how a lack of positive tension will create boredom or insecurity in some horses who want to feel the rider's support. Members of Dressage Today OnDemand can watch the full video (and many more with Isabelle von Neumann-Cosel) here. Not a member? Sign up for a free trial subscription. Enter DTPODCAST at checkout to save 15%.
Website: https://dressagetoday.com
Video Subscription Site: https://ondemand.dressagetoday.com/catalog
Social Media Links:
Facebook: https://www.facebook.com/DressageToday
Instagram: @DressageToday
Twitter: @DressageToday
Pinterest: @DressageToday
Email: sruff@equinenetwork.com

A Ciencia Cierta
Interpretaciones de la Mecánica Cuántica. A Ciencia Cierta 18/9/2023

A Ciencia Cierta

Play Episode Listen Later Sep 18, 2023 159:05


- LINK to vote in the 2023 iVoox Awards: https://go.ivoox.com/wv/premios23?p=286369 About a century ago, physicists like Schrödinger, Heisenberg, Dirac and von Neumann developed the formalism of quantum theory, the theory with the greatest capacity to predict experimental results that we have ever managed to create. But almost 100 years later, we still cannot agree on the interpretation of this theory, on what quantum mechanics really tells us about what the world around us is actually like. In this program we analyze why several interpretations of such a predictive theory exist, and we also explore in depth some of the most important ones, such as the Copenhagen, Many-Worlds, Bohmian, Statistical, Objective-Collapse and Relational interpretations. All of this with Vicent Picó, Avelino Vicente and Eugenio Roldán. AND REMEMBER: Go to babbel.com/empezar and use the code CIENCIACIERTA to get your three months free. Thank you! Listen to the full episode in the iVoox app, or discover the whole iVoox Originals catalog
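One point shared by all the interpretations discussed is the predictive formalism itself: outcome probabilities come from the squared magnitudes of amplitudes (the Born rule). A minimal illustration of my own, not material from the program:

```python
# The Born rule in one line: probability = |amplitude| ** 2.
# Shown for an equal superposition of two outcomes, such as a spin
# measured along a fair axis.

import math

amplitudes = [1 / math.sqrt(2), 1 / math.sqrt(2)]

probabilities = [abs(a) ** 2 for a in amplitudes]
print(probabilities)  # both entries come out ≈ 0.5, and they sum to 1
```

The interpretations disagree about what, if anything, these amplitudes describe in the world, but all of them reproduce exactly these statistics.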

The Nonlinear Library
LW - Alignment Megaprojects: You're Not Even Trying to Have Ideas by NicholasKross

The Nonlinear Library

Play Episode Listen Later Jul 13, 2023 3:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment Megaprojects: You're Not Even Trying to Have Ideas, published by NicholasKross on July 12, 2023 on LessWrong. Consider the state of funding for AI alignment. Is the field more talent-constrained, or funding-constrained? I think most existing researchers, if they take the AI-based extinction-risk seriously, think it's talent constrained. I think the bar-for-useful-contribution could be so high, that we loop back around to "we need to spend more money (and effort) on finding (and making) more talent". And the programs to do those may themselves be more funding-constrained than talent-constrained. Like, the 20th century had some really good mathematicians and physicists, and the US government spared little expense towards getting them what they needed, finding them, and so forth. Top basketball teams will "check up on anyone over 7 feet that's breathing". Consider how huge Von Neumann's expense account must've been, between all the consulting and flight tickets and car accidents. Now consider that we don't seem to have Von Neumanns anymore. There are caveats to at least that second point, but the overall problem structure still hasn't been "fixed". Things an entity with absurdly-greater funding (e.g. the US Department of Defense) could probably do, with their absurdly-greater funding and probably coordination power:
- Indefinitely-long-timespan basic minimum income for everyone who
- Coordinating, possibly by force, every AI alignment researcher and aspiring alignment researcher on Earth to move to one place that doesn't have high rents like the Bay. Possibly up to and including creating that place and making rent free for those who are accepted in.
- Enforce a global large-ML-training shutdown.
- An entire school system (or at least an entire network of universities, with university-level funding) focused on Sequences-style rationality in general and AI alignment in particular.
- Genetic engineering, focused-training-from-a-young-age, or other extreme "talent development" setups.
- Deeper, higher-budget investigations into how "unteachable" things like security mindset really are, and how deeply / quickly you can teach them.
- All of these at once.
I think the big logistical barrier here is something like "LTFF is not the U.S. government", or more precisely "nothing as crazy as these can be done 'on-the-margin' or with any less than the full funding". However, I think some of these could be scaled down into mere megaprojects or less. Like, if the training infrastructure is bottlenecked on trainers, then we need to fund indirect "training" work just to remove the bottleneck on the bottleneck of the problem. (Also, the bottleneck is going to move at least when you solve the current bottleneck, and also "on its own" as the entire world changes around you). Also... this might be the first list of ideas-in-precisely-this-category, on all of LessWrong/the EA Forum. (By which I mean "technical AI alignment research projects that you could fund, without having to think about the alignment problem itself in much detail beyond agreeing with 'doom could actually happen in my lifetime', if funding really wasn't the constraint".) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Kernel
Meltio, Star Trek technology

Kernel

Play Episode Listen Later May 31, 2023 37:11


With Meltio's machines and robots, companies, governments, and armies will be able to manufacture what they need, when they need it, and wherever they need it. Sponsor: Vodafone's Hogar 5G gives you a high-speed internet connection in all your homes, with maximum mobility. Going on vacation? Staying at your second home? Take the router with you and you're set. It works in seconds, with no installer needed. Don't wait any longer: find all the details at vodafone.es/hogar-5g. Meltio is revolutionizing industry with a pioneering metal 3D-printing method. Its machines can print mechanical spare parts and medical and laboratory equipment, in single units or at massive scales of millions. It all reminds me of a prototype version of the Star Trek replicators, or the Von Neumann probes, which we have watched as science fiction for decades. Its founder and CEO, Ángel Llavero, explains to me how the machines they sell to armies and multinationals work, so those customers can reinvent their operations. A radical change that will have a global impact. With its machines and robots, companies, governments, and armies will be able to manufacture what they need, when they need it, and where they need it. - Meltio's website - The US Navy chooses metal 3D-printing technology from Spain's Meltio - What does Meltio have that everyone wants - Haas Automation, Inc. - Tools for CNC machines - Meltio on LinkedIn - The Junta praises the high-level innovation in metal 3D-printing technology that Meltio is driving in Linares (Jaén) - Meltio launches new solutions to make its metal 3D-printing technology easier to use and more reliable - Meltio on YouTube - Linares (Jaén) on Wikipedia Kernel is the weekly podcast where Álex Barredo debates with great guests about the platforms and technology companies that affect our daily lives. 
Links: - Daily newsletter: https://newsletter.mixx.io - Twitter: https://twitter.com/mixx_io - or follow Álex directly at: https://twitter.com/somospostpc - Email me: alex@barredo.es - Telegram: https://t.me/mixx_io - Web: https://mixx.io

Troubled Minds Radio
Aliens Among Us - Exploring the Galaxy and UFO Crash Retrievals

Troubled Minds Radio

Play Episode Listen Later May 19, 2023 156:10


Today at the Salt Conference in NYC, Dr. Garry Nolan described an alien presence currently on Earth, UFO crash reverse engineering, and the development of Von Neumann probes to explore the galaxy. How feasible is all of this?
LIVE ON Digital Radio! http://bit.ly/3m2Wxom or http://bit.ly/40KBtlW
http://www.troubledminds.org
Support The Show!
- https://rokfin.com/creator/troubledminds
- https://patreon.com/troubledminds
- https://www.buymeacoffee.com/troubledminds
- https://troubledfans.com
Friends of Troubled Minds! - https://troubledminds.org/friends
Show Schedule Sun-Mon-Tues-Wed-Thurs 7-10pst
iTunes - https://apple.co/2zZ4hx6
Spotify - https://spoti.fi/2UgyzqM
Stitcher - https://bit.ly/2UfAiMX
TuneIn - https://bit.ly/2FZOErS
Twitter - https://bit.ly/2CYB71U
----------------------------------------
- https://troubledminds.org/aliens-among-us-exploring-the-galaxy-and-ufo-crash-retrievals/
- https://www.reddit.com/r/UFOs/comments/13lcgta/2023_ufo_edit_dr_garry_nolan_talks_at_salt/
- https://futurism.com/von-neumann-probe
- https://www.inverse.com/science/von-neumann-probes
- https://en.wikipedia.org/wiki/Self-replicating_spacecraft
- https://www.cambridge.org/core/journals/international-journal-of-astrobiology/article/von-neumann-probes-rationale-propulsion-interstellar-transfer-timing/5202679D74645D3707248FE5D5FA0124
- https://www.universetoday.com/154949/why-would-an-alien-civilization-send-out-von-neumann-probes-lots-of-reasons-says-a-new-study/
- https://www.salt.org/
- https://twitter.com/UAPJames/status/1659299977762308098
- https://scanalyst.fourmilab.ch/t/copernicus-space-corp-exploring-the-galaxy-with-tiny-robotic-probes/2487
- https://www.popularmechanics.com/military/aviation/a43918091/new-shape-of-modern-ufos/
- https://www.hindustantimes.com/world-news/thought-we-were-under-attack-ex-us-air-force-captain-on-ufo-citing-at-nuclear-missile-base-101684224229916.html
- https://www.livemint.com/news/world/exus-air-force-captain-says-ufo-attacked-nuclear-missile-base-damaged-weapons-11684250846839.html
- https://www.vice.com/en/article/n7nzkq/stanford-professor-garry-nolan-analyzing-anomalous-materials-from-ufo-crashes
- https://www.the-sun.com/tech/4249299/ufo-encounter-symptoms-garry-nolan-brains/
- https://en.wikipedia.org/wiki/Garry_Nolan
This show is part of the Spreaker Prime Network; if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/4953916/advertisement

Universe Today Podcast
[NIAC 2023] Self-Building Radio Telescope On The Far Side of The Moon

Universe Today Podcast

Play Episode Listen Later Apr 27, 2023 63:57


The FarView Observatory is a NIAC project that's a giant self-building radio telescope on the far side of the Moon. In this interview, I'm discussing the details of the project with Dr Ronald Polidan who's managing the project. We also talk about the role of the Moon in the future of lunar exploration and how close we are to sending Von Neumann probes all over the Universe.

Infinite Loops
Ananyo Bhattacharya — John von Neumann: The Man from the Future (EP.151)

Infinite Loops

Play Episode Listen Later Mar 16, 2023 87:23


Ananyo Bhattacharya is the author of The Man from the Future: The Visionary Life of John von Neumann, a brilliant biography of one of the most prolific and influential scientists to have ever lived. He joins the show to discuss von Neumann's contributions to quantum physics, game theory, the Manhattan Project, and much more! Important Links: Ananyo's Twitter The Man from the Future Show Notes: How did John von Neumann even exist? Would von Neumann's discoveries have happened without him? The Martians of Hungary The migrant mentality Innovation in the face of extinction Science, genius & the herd mentality Von Neumann's contribution to quantum physics Game theory, Minimax and zero-sum games von Neumann: quant in the streets; romantic in the sheets The eccentricity of brilliance Von Neumann and the Manhattan Project The godfather of the open-source movement Von Neumann as a project manager How writing the book changed Ananyo's understanding of von Neumann Ananyo's next projects MUCH more! Books Mentioned: The Man from the Future: The Visionary Life of John von Neumann; by Ananyo Bhattacharya The Beginning of Infinity: Explanations That Transform the World; by David Deutsch The Genius of the Beast: A Radical Re-Vision of Capitalism; by Howard Bloom Theory of Games and Economic Behavior; by John von Neumann and Oskar Morgenstern

The Wright Show
The Many Worlds of John von Neumann (Robert Wright & Ananyo Bhattacharya)

The Wright Show

Play Episode Listen Later Mar 14, 2023 60:00


Ananyo's book, The Man from the Future, about game theory inventor and polymath John von Neumann ... Early life as a mathematical wunderkind ... Von Neumann's foundational contributions to quantum physics ... Was entanglement as "spooky" to von Neumann as it was to Einstein? ... Building—and reckoning with—the atomic bomb ... Did von Neumann really want to nuke the USSR? ... Why was early game theory so zero-sum–focused? ... Influence on Turing, the open source movement, and modern computing ... How brushes with totalitarianism shaped von Neumann's views ... Programming pioneer Klára Dán von Neumann ... The duality of von Neumann's social life ... Did von Neumann grok incompleteness before Gödel? ... Unfinished work comparing computers and the brain ... Von Neumann's deathbed conversion: Pascal's wager or something more? ...

The Wright Show
The Many Worlds of John von Neumann (Robert Wright & Ananyo Bhattacharya)

The Wright Show

Play Episode Listen Later Mar 14, 2023 118:00


0:25 Ananyo's book, The Man from the Future, about game theory inventor and polymath John von Neumann 6:38 Early life as a mathematical wunderkind 10:16 Von Neumann's foundational contributions to quantum physics 24:39 Was entanglement as "spooky" to von Neumann as it was to Einstein? 33:46 Building—and reckoning with—the atomic bomb 47:22 Did von Neumann really want to nuke the USSR? 52:30 Why was early game theory so zero-sum–focused? 1:05:44 Influence on Turing, the open source movement, and modern computing 1:22:27 How brushes with totalitarianism shaped von Neumann's views 1:28:56 Programming pioneer Klára Dán von Neumann 1:33:26 The duality of von Neumann's social life 1:39:58 Did von Neumann grok incompleteness before Gödel? 1:42:45 Unfinished work comparing computers and the brain 1:50:23 Von Neumann's deathbed conversion: Pascal's wager or something more? Robert Wright (Bloggingheads.tv, The Evolution of God, Nonzero, Why Buddhism Is True) and Ananyo Bhattacharya (The Man from the Future). Recorded January 17, 2023. Comments on BhTV: http://bloggingheads.tv/videos/65817 Twitter: https://twitter.com/NonzeroPods Facebook: https://facebook.com/bloggingheads/ Podcasts: https://bloggingheads.tv/subscribe This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit nonzero.substack.com/subscribe

Bloggingheads.tv
The Many Worlds of John von Neumann (Robert Wright & Ananyo Bhattacharya)

Bloggingheads.tv

Play Episode Listen Later Mar 14, 2023 60:00


Ananyo's book, The Man from the Future, about game theory inventor and polymath John von Neumann ... Early life as a mathematical wunderkind ... Von Neumann's foundational contributions to quantum physics ... Was entanglement as "spooky" to von Neumann as it was to Einstein? ... Building—and reckoning with—the atomic bomb ... Did von Neumann really want to nuke the USSR? ... Why was early game theory so zero-sum–focused? ... Influence on Turing, the open source movement, and modern computing ... How brushes with totalitarianism shaped von Neumann's views ... Programming pioneer Klára Dán von Neumann ... The duality of von Neumann's social life ... Did von Neumann grok incompleteness before Gödel? ... Unfinished work comparing computers and the brain ... Von Neumann's deathbed conversion: Pascal's wager or something more? ...

The History of Computing
AI Hype Cycles And Winters On The Way To ChatGPT

The History of Computing

Play Episode Listen Later Feb 22, 2023 23:37


Carlota Perez is a researcher who has studied hype cycles for much of her career. She's affiliated with University College London, the University of Sussex, and the Tallinn University of Technology in Estonia, and has worked with some influential organizations around technology and innovation. As a neo-Schumpeterian, she sees technology as a cornerstone of innovation. Her book Technological Revolutions and Financial Capital is a must-read for anyone who works in an industry that includes any of those four words, including revolutionaries. Connecticut-based Gartner Research was founded by Gideon Gartner in 1979. He emigrated to the United States from Tel Aviv at three years old in 1938 and graduated in the class of 1956 from MIT, where he got his Master's at the Sloan School of Management. He went on to work at the software company System Development Corporation (SDC), in the US military defense industry, and at IBM over the next 13 years before starting his first company. After that failed, he moved into analysis work and quickly became known as a top mind among technology industry analysts. He often bucked the trends to pick winners and made banks, funds, and investors lots of money. He was able to parlay that into founding the Gartner Group in 1979. Gartner hired senior people in different industry segments to aid in competitive intelligence, industry research, and, of course, to help Wall Street. They wrote reports on industries, dove deeply into new technologies, and came to understand what we now call hype cycles over the ensuing decades. They now boast a few billion dollars in revenue per year and serve well over 10,000 customers in more than 100 countries. Gartner has developed a number of tools to make it easier to take in the types of analysis they create. 
One is the Magic Quadrant: reports that identify leaders in categories of companies by vision (or completeness of vision, to be more specific) and the ability to execute, which includes things like go-to-market activities, support, etc. They lump companies into a standard four-box as Leaders, Challengers, Visionaries, and Niche Players. There's certainly an observer effect, and those they put in the top right of their four-box often enjoy added growth, as companies want to be with the most visionary and best when picking a tool. Another of Gartner's graphical design patterns to display technology advances is what they call the "hype cycle". The hype cycle simplifies research from career academics like Perez into five phases.
* The first is the Technology Trigger, when a breakthrough is found and PoCs, or proofs of concept, begin to emerge in the world and get the press interested in the new technology. Sometimes the new technology isn't even usable, but shows promise.
* The second is the Peak of Inflated Expectations, when the press picks up the story; companies are born, capital is invested, and a large number of projects around the new technology fail.
* The third is the Trough of Disillusionment, where interest falls off after those failures. Some companies succeeded and can show real productivity, and they continue to get investment.
* The fourth is the Slope of Enlightenment, where the go-to-market activities of the surviving companies (or even a new generation) begin to show real productivity gains. Every company or IT department now runs a pilot, and expectations are lower, but now achievable.
* The fifth is the Plateau of Productivity, when those pilots become deployments and purchase orders. Mainstream industries embrace the new technology and case studies prove the promised productivity increases. Provided there's enough market, companies now find success.
There are issues with the hype cycle. Not all technologies will follow the cycle. 
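As a rough illustration, the five phases above can be modeled as a tiny state machine. This is a minimal sketch: the enum names are my own shorthand, and the wrap-around from plateau back to trigger is an assumption reflecting the essay's point that the cycle repeats.

```python
from enum import Enum

class HypePhase(Enum):
    """Gartner's five hype-cycle phases, as listed above."""
    TECHNOLOGY_TRIGGER = 1
    PEAK_OF_INFLATED_EXPECTATIONS = 2
    TROUGH_OF_DISILLUSIONMENT = 3
    SLOPE_OF_ENLIGHTENMENT = 4
    PLATEAU_OF_PRODUCTIVITY = 5

def next_phase(phase: HypePhase) -> HypePhase:
    # Wrapping the plateau back to a new trigger models the idea that
    # a new cycle is born out of the end of the fifth phase.
    return HypePhase(phase.value % 5 + 1)
```

Whether a given technology actually traverses all five states is, as the essay notes, not guaranteed.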
The Gartner approach focuses on financials and productivity rather than true adoption. It involves a lot of guesswork around subjective, synthetic, and often unsystematic research. There's also the ever-present observer effect. However, more often than not, the hype is separated from the tech that can give organizations (and sometimes all of humanity) real productivity gains. Further, the term cycle denotes a series of events when it should in fact be cyclical, as out of the end of the fifth phase a new cycle is born, or even a set of cycles if industries grow enough to diverge. ChatGPT is all over the news feeds these days, igniting yet another cycle in the cycles of AI hype that have been prevalent since the 1950s. The concept of computer intelligence dates back to 1942, with Alan Turing and with Isaac Asimov's "Runaround", where the three laws of robotics initially emerged. By 1952 computers could play themselves in checkers, and by 1955 Arthur Samuel had written a heuristic learning algorithm he called "temporal-difference learning" to play checkers. Academics around the world worked on similar projects, and by 1956 John McCarthy had introduced the term "artificial intelligence" when he gathered some of the top minds in the field together for the Dartmouth workshop. They tinkered, and a generation of researchers began to join them. By 1964, Joseph Weizenbaum's "ELIZA" debuted. ELIZA was a computer program that used early forms of natural language processing to run what they called a "DOCTOR" script that acted as a psychotherapist. ELIZA was one of a few technologies that triggered the media to pick up AI in the second stage of the hype cycle. Others came into the industry, expectations soared, and disillusionment predictably followed. 
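The DOCTOR script's trick of keyword matching and pronoun reflection can be sketched in a few lines. This is an illustrative toy in ELIZA's spirit; the patterns and reflections here are invented for the example, not Weizenbaum's actual rules.

```python
import re

# Toy reflection rules in the spirit of ELIZA's DOCTOR script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, statement.lower().strip(".!"))
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default when no keyword matches
```

Even a handful of rules like these reproduces ELIZA's characteristic reflected questions, which is part of why users read so much intelligence into it.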
Weizenbaum wrote a book called Computer Power and Human Reason: From Judgment to Calculation in 1976 in response to the critiques, and some of the early successes were then able to go to wider markets as the fourth phase of the hype cycle began. ELIZA was seen by people who worked on similar software, including some games, for Apple, Atari, and Commodore. Still, in the aftermath of ELIZA, the machine translation movement in AI had failed in the eyes of those who funded the attempts, because going further required more than some fancy case statements. Another similar movement, called connectionism, or mostly node-based artificial neural networks, is widely seen as the impetus for deep learning. David Hunter Hubel and Torsten Nils Wiesel studied vision in ways that inspired the idea of convolutional neural networks, work that culminated in a 1968 paper called "Receptive fields and functional architecture of monkey striate cortex." That built on the original deep learning paper from Frank Rosenblatt of Cornell University, "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms" from 1962, and work done behind the Iron Curtain by Alexey Ivakhnenko on learning algorithms in 1967. After early successes, though, connectionism (which, when paired with machine learning, would be called deep learning once Rina Dechter coined the term in 1986) went through a similar trough of disillusionment that kicked off in 1970. Funding for these projects shot up after the early successes and petered out after there wasn't much to show for them. Some had so much promise that former presidents can be seen in old photographs going through the models with the statisticians who were moving into computing. But organizations like DARPA would pull back funding, as seen with their speech recognition projects with Carnegie Mellon University in the early 1970s. These hype cycles weren't just seen in the United States. 
The British applied mathematician James Lighthill wrote a report for the British Science Research Council, which was published in 1973. The paper was called "Artificial Intelligence: A General Survey" and analyzed the progress made against the amount of money spent on artificial intelligence programs. He found none of the research had resulted in any "major impact" in the fields the academics had undertaken. Much of the work had been done at the University of Edinburgh, and based on his findings, funding was drastically cut for AI research around the UK. Turing, Von Neumann, McCarthy, and others had, either intentionally or not, set an expectation that became a check the academic research community just couldn't cash. For example, the New York Times claimed in the 1950s that Rosenblatt's perceptron would let the US Navy build computers that could "walk, talk, see, write, reproduce itself, and be conscious of its existence" - a goal not likely to be achieved in the near future even seventy years later. Funding was cut in the US, the UK, and even in the USSR, or Union of Soviet Socialist Republics. Yet many persisted. Languages like Lisp had become common in the late 1970s, after engineers like Richard Greenblatt helped to make McCarthy's ideas for computer languages a reality. The MIT AI Lab developed a Lisp Machine Project, and as AI work was picked up at other schools like Stanford, researchers began to look for ways to buy commercially built computers ideal for use as Lisp Machines. After the post-war spending, the idea that AI could become a more commercial endeavor was attractive to many. But after plenty of hype, the Lisp machine market never materialized. The next hype cycle had begun in 1983, when the US Department of Defense pumped a billion dollars into AI, but that spending was cancelled in 1987, just after the collapse of the Lisp machine market. Another AI winter was about to begin. Another trend that began in the 1950s but picked up steam in the 1980s was expert systems. 
These attempt to emulate the ways that humans make decisions. Some of this work came out of the Stanford Heuristic Programming Project, pioneered by Edward Feigenbaum. Some commercial companies took up the mantle, and after running into barriers with CPU performance, by the 1980s processors got fast enough. There were inflated expectations after great papers like Richard Karp's "Reducibility among Combinatorial Problems" out of UC Berkeley in 1972. Countries like Japan dumped hundreds of millions of dollars (or yen) into projects like "Fifth Generation Computer Systems" in 1982, a 10-year project to build up massively parallel computing systems. IBM spent around the same amount on their own projects. However, while these types of projects helped to improve computing, they didn't live up to the expectations, and by the early 1990s funding was cut following commercial failures. By the mid-2000s, some of the researchers in AI began to use new terms, after generations of artificial intelligence projects led to subsequent AI winters. Yet research continued on, with varying degrees of funding. Organizations like DARPA began to use challenges rather than funding large projects in some cases. Over time, successes were found yet again. Google Translate, Google Image Search, IBM's Watson, AWS options for AI/ML, home voice assistants, and various machine learning projects in the open source world led to the start of yet another AI spring in the early 2010s. New chips have built-in machine learning cores, and programming languages have frameworks and new technologies like Jupyter notebooks to help organize and train data sets. By 2006, academic works and open source projects had hit a turning point, this time quietly. The Association for Computational Linguistics was founded in 1962, initially as the Association for Machine Translation and Computational Linguistics (AMTCL). 
As with the ACM, they have a number of special interest groups that include natural language learning, machine translation, typology, natural language generation, and the list goes on. The 2006 proceedings of the Workshop on Statistical Machine Translation began a series of dozens of workshops attended by hundreds of papers and presenters. The academic work was then able to be consumed by all, including contributions that achieved English-to-German and English-to-French tasks from 2014. Deep learning models spread and became more accessible - democratic, if you will. RNNs, CNNs, DNNs, GANs. Training data sets was still one of the most human-intensive and slowest aspects of machine learning. GANs, or Generative Adversarial Networks, were one of those machine learning frameworks, initially designed by Ian Goodfellow and others in 2014. GANs use zero-sum game techniques from game theory to generate new data sets - a generative model. This allowed for more unsupervised training of data. Now it was possible to get further, faster with AI. This brings us into the current hype cycle. ChatGPT was launched in November of 2022 by OpenAI. OpenAI was founded as a non-profit in 2015 by Sam Altman (former cofounder of location-based social network app Loopt and former president of Y Combinator) and a cast of veritable all-stars in the startup world that included:
* Reid Hoffman, former PayPal COO, LinkedIn founder, and venture capitalist.
* Peter Thiel, former cofounder of PayPal and Palantir, as well as one of the top investors in Silicon Valley.
* Jessica Livingston, founding partner at Y Combinator.
* Greg Brockman, an AI researcher who had worked on projects at MIT and Harvard.
OpenAI spent the next few years as a non-profit and worked on GPT, or Generative Pre-trained Transformer autoregression models. GPT uses deep learning models to process human text and produce text that's more human than previous models. 
Not only is it capable of natural language processing, but the generative pre-training of models has allowed it to take in a lot of unlabeled text, so people don't have to hand-label training data, which automates the fine-tuning of results. OpenAI dumped millions into public betas by 2016 and was ready to build products to take to market by 2019. That's when they switched from a non-profit to a for-profit. Microsoft pumped $1 billion into the company, and they released DALL-E to produce generative images, which helped lead to a new generation of applications that could produce artwork on the fly. Then they released ChatGPT towards the end of 2022, which led to more media coverage and prognostication of a world-changing technological breakthrough than most other hype cycles for any industry in recent memory. This, with GPT-4 to be released later in 2023. ChatGPT is most interesting through the lens of the hype cycle. There have been plenty of peaks, plateaus, and valleys in artificial intelligence over the last 7+ decades. Most have been hyped up in the hallowed halls of academia and defense research. ChatGPT has hit mainstream media. The AI winter following each cycle seems to depend on the reach of the audience and the depth of expectations. Science fiction continues to inflate expectations. Early prototypes that make it seem as though science fiction will be in our hands in a matter of weeks lead the media to conjecture. The reckoning could be substantial. Meanwhile, projects like TinyML - with smaller potential impact for each use but wider use cases - could become the real benefit to humanity beyond research, when it comes to everyday productivity gains. The moral of this story is as old as time. Control expectations. Undersell and overdeliver. That doesn't lead to massive valuations pumped up by hype cycles. Many CEOs and CFOs know that a jump in profits doesn't always mean the increase will continue. Some intentionally temper expectations in their quarterly reports and calls with analysts. 
Those are the smart ones.
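The zero-sum game framing that GANs borrow from game theory, mentioned above, goes back to von Neumann's minimax theorem: each player picks the strategy that maximizes their worst-case payoff. A brute-force sketch for matching pennies (the payoff matrix and grid resolution are illustrative choices, not anything from the episode):

```python
# Brute-force the minimax value of matching pennies, the classic
# zero-sum game: the row player wins +1 on a match, -1 otherwise.
PAYOFF = [[1, -1],   # row plays heads: (col heads, col tails)
          [-1, 1]]   # row plays tails

def expected_payoff(p: float, col_action: int) -> float:
    """Row player's expected payoff when playing heads with probability p."""
    return p * PAYOFF[0][col_action] + (1 - p) * PAYOFF[1][col_action]

def minimax_value(steps: int = 1001) -> tuple[float, float]:
    """Return (best mixing probability, game value) for the row player."""
    best_p, best_value = 0.0, float("-inf")
    for i in range(steps):
        p = i / (steps - 1)
        # The column player responds adversarially: row gets the worst case.
        worst = min(expected_payoff(p, a) for a in (0, 1))
        if worst > best_value:
            best_p, best_value = p, worst
    return best_p, best_value
```

For matching pennies the optimal mix is 50/50 with game value 0; the loose GAN analogy is that the generator and discriminator play the adversarial roles of the row and column players.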

The Nonlinear Library
LW - Consequentialists: One-Way Pattern Traps by David Udell

The Nonlinear Library

Play Episode Listen Later Jan 17, 2023 24:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consequentialists: One-Way Pattern Traps, published by David Udell on January 16, 2023 on LessWrong. Generated during MATS 2.1. A distillation of my understanding of Eliezer-consequentialism. Thanks to Jeremy Gillen, Ben Goodman, Paul Colognese, Daniel Kokotajlo, Scott Viteri, Peter Barnett, Garrett Baker, and Olivia Jimenez for discussion and/or feedback; to Eliezer Yudkowsky for briefly chatting about relevant bits in planecrash; to Quintin Pope for causally significant conversation; and to many others that I've bounced my thoughts on this topic off of. Introduction What is Eliezer-consequentialism? In a nutshell, I think it's the way that some physical structures monotonically accumulate patterns in the world. Some of these patterns afford influence over other patterns, and some physical structures monotonically accumulate patterns-that-matter in particular -- resources. We call such a resource accumulator a consequentialist -- or, equivalently, an "agent," an "intelligence," etc. A consequentialist understood in this way is (1) a coherent profile of reflexes (a set of behavioral reflexes that together monotonically take in resources) plus (2) an inventory (some place where accumulated resources can be stored with better than background-chance reliability.) Note that an Eliezer-consequentialist is not necessarily a consequentialist in the normative ethics sense of the term. By consequentialists we'll just mean agents, including wholly amoral agents. I'll freely use the terms 'consequentialism' and 'consequentialist' henceforth with this meaning, without fretting any more about this confusion. Path to Impact I noticed hanging around the MATS London office that even full-time alignment researchers disagree quite a bit about what consequentialism involves. 
I'm betting here that my Eliezer-model is good enough that I've understood his ideas on the topic better than many others have, and can concisely communicate this better understanding. Since most of the possible positive impact of this effort lives in the fat tail of outcomes where it makes a lot of Eliezerisms click for a lot of alignment workers, I'll make this an effortpost. The Ideas to be Clarified I've noticed that Eliezer seems to think the von Neumann-Morgenstern (VNM) theorem is obviously far reaching in a way that few others do. Understand the concept of VNM rationality, which I recommend learning from the Wikipedia article... Von Neumann and Morgenstern showed that any agent obeying a few simple consistency axioms acts with preferences characterizable by a utility function. MIRI Research Guide (2015) Can you explain a little more what you mean by "have different parts of your thoughts work well together"? Is this something like the capacity for metacognition; or the global workspace; or self-control; or...? No, it's like when you don't, like, pay five apples for something on Monday, sell it for two oranges on Tuesday, and then trade an orange for an apple. I have still not figured out the homework exercises to convey to somebody the Word of Power which is "coherence" by which they will be able to look at the water, and see "coherence" in places like a cat walking across the room without tripping over itself. When you do lots of reasoning about arithmetic correctly, without making a misstep, that long chain of thoughts with many different pieces diverging and ultimately converging, ends up making some statement that is... still true and still about numbers! Wow! How do so many different thoughts add up to having this property? Wouldn't they wander off and end up being about tribal politics instead, like on the Internet? 
And one way you could look at this, is that even though all these thoughts are taking place in a bounded mind, they are shadows of a higher unbounded structure which is the model identifie...
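The "five apples on Monday, two oranges on Tuesday" example above is the classic money-pump argument behind VNM coherence: an agent with cyclic preferences accepts each trade individually, yet a full lap around the cycle strictly drains its resources. A minimal sketch, with goods, fees, and starting inventory invented for illustration:

```python
# An agent with cyclic preferences (apple < orange < banana < apple)
# accepts every trade below, yet one lap around the cycle strictly
# shrinks its inventory.

def run_money_pump(laps: int) -> int:
    """Trade around the preference cycle; return the apples left over."""
    apples = 100
    holding = "apple"
    # Each trade swaps the held good for the "preferred" one,
    # charging one apple as the premium the agent is willing to pay.
    cycle = {"apple": "orange", "orange": "banana", "banana": "apple"}
    for _ in range(3 * laps):        # three trades complete one lap
        holding = cycle[holding]
        apples -= 1                  # the agent pays for each "upgrade"
    assert holding == "apple"        # back where it started...
    return apples                    # ...but strictly poorer
```

Each trade looks locally acceptable to the agent, but the inventory shrinks every lap; VNM-style coherence is exactly the property that rules this pattern out.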

ITSPmagazine | Technology. Cybersecurity. Society
Von Neumann Probes With Professor Alex Ellery | Stories From Space Podcast With Matthew S Williams

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Dec 31, 2022 36:45


Giant Robots Smashing Into Other Giant Robots
448: AIEDC with Leonard S. Johnson

Giant Robots Smashing Into Other Giant Robots

Play Episode Listen Later Nov 10, 2022 53:34


Leonard S. Johnson is the Founder and CEO of AIEDC, a 5G Cloud Mobile App Maker and Service Provider with Machine Learning to help small and midsize businesses create their own iOS and Android mobile apps with no-code or low-code so they can engage and service their customer base, as well as provide front and back office digitization services for small businesses. Victoria talks to Leonard about using artificial intelligence for good, bringing the power of AI to local economics, and truly democratizing AI. The Artificial Intelligence Economic Development Corporation (AIEDC) (https://netcapital.com/companies/aiedc) Follow AIEDC on Twitter (https://twitter.com/netcapital), Instagram (https://www.instagram.com/netcapital/), Facebook (https://www.facebook.com/Netcapital/), or LinkedIn (https://www.linkedin.com/company/aiedc/). Follow Leonard on Twitter (https://twitter.com/LeonardSJ) and LinkedIn (https://www.linkedin.com/in/leonardsjohnson84047/). Follow thoughtbot on Twitter (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is The Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido. And with us today is Leonard S. Johnson or LS, Founder and CEO AIEDC, a 5G Cloud Mobile App Maker and Service Provider with Machine Learning to help small and midsize businesses create their own iOS and Android mobile apps with no-code or low-code so they can engage and service their customer base, as well as provide front and back office digitization services for small businesses. Leonard, thanks for being with us today. LEONARD: Thank you for having me, Victoria. VICTORIA: I should say LS, thank you for being with us today. LEONARD: It's okay. It's fine. VICTORIA: Great. So tell us a little more about AIEDC. 
LEONARD: Well, AIEDC stands for Artificial Intelligence Economic Development Corporation. And the original premise that I founded it for...I founded it after completing my postgraduate work at Stanford, and that was 2016. And it was to use AI for economic development, and therefore use AI for good versus just hearing about artificial intelligence and some of the different movies that either take over the world, and Skynet, and watch data privacy, and these other things which are true, and it's very evident, they exist, and they're out there. But at the end of the day, I've always looked at life as a growth strategy and the improvement of what we could do and focusing on what we could do practically. You do it tactically, then you do it strategically over time, and you're able to implement things. That's why I think we keep building collectively as humanity, no matter what part of the world you're in.

VICTORIA: Right. So you went to Stanford, and you're from South Central LA. And what about that background led you to pursue AI for good in particular?

LEONARD: So growing up in the inner city of Los Angeles, you know, that South Central area, Compton area, it taught me a lot. And then after that, after I completed high school...and not in South Central because I moved around a lot. I grew up with a single mother, never knew my real father, and then my home life with my single mother wasn't good because of just circumstances all the time. And so I just started understanding that even as a young kid, you put your brain...you utilize something because you had two choices. It's very simple or binary, you know, A or B. A, you do something with yourself, or B, you go out and be social in a certain neighborhood. And I'm African American, so high probability that you'll end up dead, or in a gang, or in crime because that's what it was at that time. That's just a situation.
Or you're able to challenge those energies and put them toward a use that's productive and positive for yourself, and that's what I did, which is utilizing a way to learn. I could always pick up things when I was very young. And a lot of teachers, my younger teachers, were like, "You're very, very bright," or "You're very smart." And there weren't many programs because I'm older than 42. So there weren't as many programs as there are today. So I really like all of the programs. So I want to clarify the context. Today there's a lot more engagement and identification of kids that might be sharper, smarter, whatever their personal issues are, good or bad. And it's a way to sort of separate them. So you're not just teaching the whole group as a whole and putting them all in one basket, but back then, there was not. And so I just used to go home a lot, do a lot of reading, do a lot of studying, and just knick-knack with things in tech. And then I just started understanding that even as a young kid in the inner city, you see economics very early, but they don't understand that's really what they're studying. They see economics. They can see inflation because making two ends meet is very difficult. They may see gang violence and drugs or whatever it might end up being. And a lot of that, in my opinion, is always an underlining economic foundation. And so people would say, "Oh, why is this industry like this?" And so forth. "Why does this keep happening?" It's because they can't function. And sometimes, it's just them and their family, but they can't function because it's an economic system. So I started focusing on that and then went into the Marine Corps. And then, after the Marine Corps, I went to Europe. I lived in Europe for a while to do my undergrad studies in the Netherlands in Holland. 
VICTORIA: So having that experience of taking a challenge or taking these forces around you and turning into a force for good, that's led you to bring the power of AI to local economics. And is that the direction that you went eventually?

LEONARD: So economics was always something that I understood and had a fascination prior to even starting my company. I started in 2017. And we're crowdfunding now, and I can get into that later. But I self-funded it since 2017 to...I think I only started crowdfunding when COVID hit, which was 2020, and just to get awareness and people out there because I couldn't go to a lot of events. So I'm like, okay, how can I get exposure? But yeah, it was a matter of looking at it from that standpoint of economics always factored into me, even when I was in the military when I was in the Marine Corps. I would see that...we would go to different countries, and you could just see the difference of how they lived and survived. And another side note, my son's mother is from Ethiopia, Africa. And I have a good relationship with my son and his mother, even though we've been apart for over 15 years, divorced for over 15 years or so or longer. But trying to keep that, you can just see this dichotomy. You go out to these different countries, and even in the military, it's just so extreme from the U.S. and any part of the U.S, but that then always focused on economics. And then technology, I just always kept up with, like, back in the '80s when the mobile brick phone came out, I had to figure out how to get one. [laughs] And then I took it apart and then put it back together just to see how it works, so yeah. But it was a huge one, by the way. I mean, it was like someone got another and broke it, and they thought it was broken. And they're like, "This doesn't work. You could take this piece of junk." I'm like, "Okay." [laughs]

VICTORIA: Like, oh, great. I sure will, yeah. Now, I love technology.
And I think a lot of people perceive artificial intelligence as being this super futuristic, potentially harmful, maybe economic negative impact. So what, from your perspective, can AI do for local economics or for people who may not have access to that advanced technology?

LEONARD: Well, that's the key, and that's what we're looking to do with AIEDC. When you look at the small and midsize businesses, it's not what people think, or their perception is. A lot of those in the U.S. it's the backbone of the United States, our economy, literally. And in other parts of the world, it's the same where it could be a one or two mom-and-pop shops. That's where that name comes from; it's literally two people. And they're trying to start something to build their own life over time because they're using their labor to maybe build wealth or somehow a little bit not. And when I mean wealth, it's always relative. It's enough to sustain themselves or just put food on the table and be able to control their own destiny to the best of their ability. And so what we're looking to do is make a mobile app maker that's 5G that lives in the cloud, that's 5G compliant, that will allow small and midsize businesses to create their own iOS or Android mobile app with no-code or low-code, basically like creating an email. That's how simple we want it to be. When you create your own email, whether you use Microsoft, Google, or whatever you do, and you make it that simple. And there's a simple version, and there could be complexity added to it if they want. That would be the back office digitization or customization, but that then gets them on board with digitization. It's intriguing that McKinsey just came out with a report stating that in 2023, in order to be economically viable, and this was very recent, that all companies would need to have a digitization strategy.
And so when you look at small businesses, and you look at things like COVID-19, or the COVID current ongoing issue and that disruption, this is global. And you look at even the Ukrainian War or the Russian-Ukrainian War, however you term it, invasion, war, special operation, these are disruptions. And then, on top of that, we look at climate change which has been accelerating in the last two years more so than it was prior to this that we've experienced. So this is something that everyone can see is self-evident. I'm not even focused on the cause of the problem. My brain and the way I think, and my team, we like to focus on solutions. My chairman is a former program director of NASA who managed 1,200 engineers that built the International Space Station; what was it? 20-30 years ago, however, that is. And he helped lead and build that from Johnson Center. And so you're focused on solutions because if you're building the International Space Station, you can only focus on solutions and anticipate the problems but not dwell on them. And so that kind of mindset is what I am, and it's looking to help small businesses do that to get them on board with digitization and then in customization. And then beyond that, use our system, which is called M.I.N.D. So we own these...we own patents, three patents, trademarks, and service marks related to artificial intelligence that are in the field of economics. And we will utilize DEVS...we plan to do that which is a suite of system specifications to predict regional economic issues like the weather in a proactive way, not reactive. A lot of economic situations are reactive. It's reactive to the Federal Reserve raising interest rates or lowering rates, Wall Street, you know, moving money or not moving money. It is what it is. I mean, I don't judge it. I think it's like financial engineering, and that's fine. It's profitability. But then, at the end of the day, if you're building something, it's like when we're going to go to space. 
When rockets launch, they have to do what they're intended to do. Like, I know that Blue Origin just blew up recently. Or if they don't, they have a default, and at least I heard that the Blue Origin satellite, if it were carrying passengers, the passengers would have been safe because it disembarked when it detected its own problem. So when you anticipate these kinds of problems and you apply them to the local small business person, you can help them forecast and predict better like what weather prediction has done. And we're always improving that collectively for weather prediction, especially with climate change, so that it can get to near real-time as soon as possible or close a window versus two weeks out versus two days out as an example.

VICTORIA: Right. Those examples of what you call a narrow economic prediction.

LEONARD: Correct. It is intriguing when you say narrow economic because it wouldn't be narrow AI. But it would actually get into AGI if you added more variables, which we would. The more variables you added in tenancies...so if you're looking at events, the system events discretion so discrete event system specification you would specify what they really, really need to do to have those variables. But at some point, you're working on a system, what I would call AGI. But AGI, in my mind, the circles I run in at least or at least most of the scientists I talk to it's not artificial superintelligence. And so the general public thinks AGI...and I've said this to Stephen Ibaraki, who's the founder of AI for Good at Global Summit at the United Nations, and one of his interviews as well. It's just Artificial General Intelligence, I think, has been put out a lot by Hollywood and entertainment and so forth, and some scientists say certain things. We won't be at artificial superintelligence. We might get to Artificial General Intelligence by 2030 easily, in my opinion.
But that will be narrow AI, but it will cover what we look at it in the field as cross-domain, teaching a system to look at different variables because right now, it's really narrow. Like natural language processing, it's just going to look at language and infer from there, and then you've got backward propagation that's credit assignment and fraud and detection. Those are narrow data points. But when you start looking at something cross-domain...who am I thinking of? Pedro Domingos who wrote the Master Algorithm, which actually, Xi Jinping has a copy of, the President of China, on his bookshelf in his office because they've talked about that, and these great minds because Stephen Ibaraki has interviewed these...and the founder of Google Brain and all of these guys. And so there's always this debate in the scientific community of what is narrow AI and what it's not. But at the end of the day, I just like Pedro's definition of it because he says the master algorithm will be combining all five, so you're really crossing domains, which AI hasn't done that. And to me, that will be AGI, but that's not artificial superintelligence. And artificial superintelligence is when it becomes very, you know, like some of the movies could say, if we as humanity just let it run wild, it could be crazy.

VICTORIA: One of my questions is the future of AI more like iRobot or Bicentennial Man?

LEONARD: Well, you know, interesting. That's a great question, Victoria. I see most of AI literally as iRobot, as a tool more than anything, except at the end when it implied...so it kind of did two things in that movie, but a wonderful movie to bring up. And I like Will Smith perfectly. Well, I liked him a lot more before --

VICTORIA: I think iRobot is really the better movie.

LEONARD: Yeah, so if people haven't seen iRobot, I liked Will Smith, the actor. But iRobot showed you two things, and it showed you, one, it showed hope. Literally, the robot...because a lot of people put AI and robots.
And AI by itself is the brain or the mind; I should say hardware are the robots or the brain. Software...AI in and of itself is software. It's the mind itself. That's why we have M.I.N.D Machine Intelligence NeuralNetwork Database. We literally have that. That's our acronym and our slogan and everything. And it's part of our patents. But its machine intelligence is M.I.N.D, and we own that, you know; the company owns it. And so M.I.N.D...we always say AI powered by M.I.N.D. We're talking about that software side of, like, what your mind does; it iterates and thinks, the ability to think itself. Now it's enclosed within a structure called, you know, for the human, it's called a brain, the physical part of it, and that brain is enclosed within the body. So when you look at robots...and my chairman was the key person for robotics for the International Space Station. So when you look at robotics, you are putting that software into hardware, just like your cell phone. You have the physical, and then you have the actual iOS, which is the operating system. So when you think about that, yeah, iRobot was good because it showed how these can be tools, and they were very, in the beginning of the movie, very helpful, very beneficial to humanity. But then it went to a darker side and showed where V.I.K.I, which was an acronym as well, I think was Virtual Interactive Kinetic technology of something. Yeah, I believe it was Virtual Interactive Kinetic inference or technology or something like that, V.I.K.I; I forgot the last I. But that's what it stood for. It was an acronym to say...and then V.I.K.I just became all aware and started killing everyone with robots and just wanted to say, you know, this is futile. But then, at the very, very end, V.I.K.I learned from itself and says, "Okay, I guess this isn't right." Or the other robot who could think differently argued with V.I.K.I, and they destroyed her. And it made V.I.K.I a woman in the movie, and then the robot was the guy. 
But that shows that it can get out of hand. But it was intriguing to me that they had her contained within one building. This wouldn't be artificial superintelligence. And I think sometimes Hollywood says, "Just take over everything from one building," no. It wouldn't be on earth if it could. But that is something we always have to think about. We have to think about the worst-case scenarios. I think every prudent scientist or business person or anyone should do that, even investors, I mean, if you're investing something for the future. But you also don't focus on it. You don't think about the best-case scenario, either. But there's a lot of dwelling on the worst-case scenario versus the good that we can do given we're looking at where humanity is today. I mean, we're in 2022, and we're still fighting wars that we fought in 1914.

VICTORIA: Right. Which brings me to my next question, which is both, what are the most exciting opportunities to innovate in the AI space currently? And conversely, what are the biggest challenges that are facing innovation in that field?

LEONARD: Ooh, that's a good question. I think, in my opinion, it's almost the same answer; one is...but I'm in a special field. And I'm surprised there's not a lot of competition for our company. I mean, it's very good for me and the company's sense. It's like when Mark Zuckerberg did Facebook, there was Friendster, and there was Myspace, but they were different. They were different verticals. And I think Mark figured out how to do it horizontally, good or bad. I'm talking about the beginning of when he started Facebook, now called Meta. But I'm saying utilizing AI in economics because a lot of times AI is used in FinTech and consumerism, but not economic growth where we're really talking about growing something organically, or it's called endogenous growth. Because I studied Paul Romer's work, who won the Nobel Prize in 2018 for economic science. And he talked about the nature of ideas.
And we were working on something like that in Stanford. And I put out a book in January 2017 talking about cryptocurrencies, artificial intelligence but about the utilization of it, but not the speculation. I never talked about speculation. I don't own any crypto; I would not. It's only once it's utilized in its PureTech form will it create something that it was envisioned to do by the protocol that Satoshi Nakamoto sort of created. And it still fascinates me that people follow Bitcoin protocol, even for the tech and the non-tech, but they don't know who Satoshi is. But yeah, it's a white paper. You're just following a white paper because I think logically, the world is going towards that iteration of evolution. And that's how AI could be utilized for good in an area to focus on it with economics and solving current problems. And then going forward to build a new economy where it's not debt-based driven or consumer purchase only because that leaves a natural imbalance in the current world structure. The western countries are great. We do okay, and we go up and down. But the emerging and developing countries just get stuck, and they seem to go into a circular loop. And then there are wars as a result of these things and territory fights and so forth. So that's an area I think where it could be more advanced is AI in the economic realm, not so much the consumer FinTech room, which is fine. But consumer FinTech, in my mind, is you're using AI to process PayPal. That's where I think Elon just iterated later because PayPal is using it for finance. You're just moving things back and forth, and you're just authenticating everything. But then he starts going on to SpaceX next because he's like, well, let me use technology in a different way. And I do think he's using AI on all of his projects now.

VICTORIA: Right. So how can that tech solve real problems today?
Do you see anything even particular about Southern California, where we're both at right now, where you think AI could help predict some outcomes for small businesses or that community?

LEONARD: I'm looking to do it regionally then globally. So I'm part of this Southern Cal Innovation Hub, which is just AI. It's an artificial intelligence coordination between literally San Diego County, Orange County, and Los Angeles County. And so there's a SoCal Innovation Hub that's kind of bringing it together. But there are all three groups, like; I think the CEO in Orange County is the CEO of Leadership Alliance. And then in San Diego, there's another group I can't remember their name off the top of my head, and I'm talking about the county itself. So each one's representing a county because, you know. And then there's one in Northern California that I'm also associated with where if you look at California as its own economy in the U.S., it's still pretty significant as an economic cycle in the United States, period. That's why so many politicians like California because they can sway the votes. So yeah, we're looking to do that once, you know, we are raising capital. We're crowdfunding currently. Our total raise is about 6 million. And so we're talking to venture capitalists, private, high net worth investors as well. Our federal funding is smaller. It's just like several hundred thousand because most people can only invest a few thousand. But I always like to try to give back. If you tell people...if you're Steve Jobs, like, okay, I've got this Apple company. In several years, you'll see the potential. And people are like, ah, whatever, but then they kick themselves 15 years later. [laughs] Like, oh, I wish I thought about that Apple stock for $15 when I could. But you give people a chance, and you get the word out, and you see what happens. Once you build a system, you share it. There are some open-source projects.
But I think the open source, like OpenAI, as an example, Elon Musk funds that as well as Microsoft. They both put a billion dollars into it. It is an open-source project. OpenAI claims...but some of the research does go back to Microsoft to be able to see it. And DeepMind is another research for AI, but they're owned by Google. And so, I'm also very focused on democratizing artificial intelligence for the benefit of everyone. I really believe that needs to be democratized in a sense of tying it to economics and making it utilized for everyone that may need it for the benefit of humanity where it's profitable and makes money, but it's not just usurping.

MID-ROLL AD: As life moves online, brick-and-mortar businesses are having to adapt to survive. With over 18 years of experience building reliable web products and services, thoughtbot is the technology partner you can trust. We provide the technical expertise to enable your business to adapt and thrive in a changing environment. We start by understanding what's important to your customers to help you transition to intuitive digital services your customers will trust. We take the time to understand what makes your business great and work fast yet thoroughly to build, test, and validate ideas, helping you discover new customers. Take your business online with design-driven digital acceleration. Find out more at tbot.io/acceleration or click the link in the show notes for this episode.

VICTORIA: With that democratizing it, is there also a need to increase the understanding of the ethics around it and when there are certain known use cases for AI where it actually is discriminatory and plays to systemic problems in our society? Are you familiar with that as well?

LEONARD: Yes, absolutely. Well, that's my whole point. And, Victoria, you just hit the nail on the head. Truly democratizing AI in my mind and in my brain the way it works is it has opened up for everyone.
Because if you really roll it back, okay, companies now we're learning...we used to call it several years ago UGC, User Generated Content. And now a lot of people are like, okay, if you're on Facebook, you're the product, right? Or if you're on Instagram, you're the product. And they're using you, and you're using your data to sell, et cetera, et cetera. But user-generated content it's always been that. It's just a matter of the sharing of the economic. That's why I keep going back to economics. So if people were, you know, you wouldn't have to necessarily do advertising if you had stakeholders with advertising, the users and the company, as an example. If it's a social media company, just throwing it out there, so let's say you have a social media...and this has been talked about, but I'm not the first to introduce this. This has been talked about for over ten years, at least over 15 years. And it's you share as a triangle in three ways. So you have the user and everything else. So take your current social media, and I won't pick on Facebook, but I'll just use them, Facebook, Instagram, or Twitter. Twitter's having issues recently because Elon is trying to buy them or get out of buying them. But you just looked at that data, and then you share with the user base. What's the revenue model? And there needs to be one; let me be very clear. There has to be incentive, and there has to be profitability for people that joined you earlier, you know, joined the corporation, or become shareholders, or investors, or become users, or become customers. They have to be able to have some benefit, not extreme greater than everyone else but a great benefit from coming in earlier by what they contributed at the time. And that is what makes this system holistic in my opinion, like Reddit or any of these bloggers. 
But you make it where they use their time and the users, and you share it with the company and then the data and so forth, and whatever revenue economic model you have, and it's a sort of a three-way split. It's just not always equal. And that's something that I think in economics, we're still on a zero-sum game, I win, you lose sort of economic model globally. That's why there's a winner of a war and a loser of a war. But in reality, as you know, Victoria, there are no winners of any war. So it's funny, [laughs] I was just saying, well, you know, because of the economic model, but Von Neumann, who talked about that, also talked about something called a non-zero-sum game when he talked about it in mathematics that you can win, and I can win; we just don't win equally because they never will match that. So if I win, I may win 60; you win 40. Or you may win 60, I win 40, and we agree to settle on that. It's an agreement versus I'm just going to be 99, and you'll be 1%, or I'm just going to be 100, and you're at 0. And I think that our economic model tends to be a lot of that, like, when you push forth and there needs to be more of that. When you talk about the core of economics...and I go way back, you know, prior to the Federal Reserve even being started. I just look at the world, and it's always sort of been this land territorial issue of what goods are under the country. But we've got technology where we can mitigate a lot of things and do the collective of help the earth, and then let's go off to space, all of space. That's where my brain is focused on.

VICTORIA: Hmm. Oh yeah, that makes sense to me. I think that we're all going to have to evolve our economic models here in the future. I wonder, too, as you're building your startup and you're building your company, what are some of the technology trade-offs you're having to make in the stack of the AI software that you're building?

LEONARD: Hmm. Good question.
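The zero-sum versus non-zero-sum distinction LS attributes to von Neumann can be sketched with two small payoff tables. This is a minimal illustration, not from the episode: the 60/40 settlement numbers echo his example, while the "fight" payoffs are assumed to show how a non-constant total lets both sides come out ahead.

```python
# Zero-sum (constant-sum): every outcome divides the same fixed pie,
# so one side's gain is exactly the other's loss.
war = {
    ("A wins", "B loses"): (100, 0),
    ("B wins", "A loses"): (0, 100),
}

# Non-zero-sum: the total payoff differs by outcome, so both sides
# can win -- just not equally. Settling yields a 60/40 split, while
# fighting shrinks the pie for everyone (payoffs assumed).
trade = {
    ("settle", "settle"): (60, 40),
    ("fight", "fight"): (10, 5),
}

def is_constant_sum(game):
    """True if every outcome of the game yields the same total payoff."""
    return len({a + b for a, b in game.values()}) == 1

print(is_constant_sum(war))    # True: totals are always 100
print(is_constant_sum(trade))  # False: settling grows the pie
```

The check is just a set comprehension over outcome totals: a game is zero-sum (up to a constant) exactly when that set has one element.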
But clarify, this may be a lot deeper dive because that's a general question. And I don't want to...yeah, go ahead.

VICTORIA: Because when you're building AI, and you're going to be processing a lot of data, I know many data scientists that are familiar with tools like Jupyter Notebooks, and R, and Python. And one issue that I'm aware of is keeping the environments the same, so everything that goes into building your app and having those infrastructure as code for your data science applications, being able to afford to process all that data. [laughs] And there are just so many factors that go into building an AI app versus building something that's more easy, like a web-based user form. So just curious if you've encountered those types of trade-offs or questions about, okay, how are we going to actually build an app that we can put out on everybody's phone and that works responsibly?

LEONARD: Oh, okay. So let me be very clear, but I won't give too much of the secret sauce away. But I can define this technically because this is a technical audience. This is not...so what you're really talking about is two things, and I'm clear about this, though. So the app maker won't really read and write a lot of data. It'll just be the app where people could just get on board digitalization simple, you know, process payments, maybe connect with someone like American Express square, MasterCard, whatever. And so that's just letting them function. That's sort of small FinTech in my mind, you know, just transaction A to B, B to A, et cetera. And it doesn't need to be peer-to-peer and all of the crypto. It doesn't even need to go that level yet. That's just level one. Then they will sign up for a service, which is because we're really focused on artificial intelligence as a service. And that, to me, is the next iteration for AI. I've been talking about this for about three or four years now, literally, in different conferences and so forth for people who haven't hit it.
But that we will get to that point where AI will become AI as a service, just like SaaS is. We're still at the, you know, most of the world on the legacy systems are still software as a service. We're about to hit AI as a service because the world is evolving. And this is true; they did shut it down. But you did have okay, so there are two case points which I can bring up. So JP Morgan did create something called a Coin, and it was using AI. And it was a coin like crypto, coin like a token, but they called it a coin. But it could process, I think, something like...I may be off on this, so to the sticklers that will be listening, please, I'm telling you I may be off on the exact quote, but I think it was about...it was something crazy to me, like 200,000 of legal hours and seconds that it could process because it was basically taking the corporate legal structure of JP Morgan, one of the biggest banks. I think they are the biggest bank in the U.S. JPMorgan Chase. And they were explaining in 2017 how we created this, and it's going to alleviate this many hours of legal work for the bank. And I think politically; something happened because they just pulled away. I still have the original press release when they put it out, and it was in the media. And then it went away. I mean, no implementation [laughs] because I think there was going to be a big loss of jobs for it. And they basically would have been white-collar legal jobs, most specifically lawyers literally that were working for the bank. And when they were talking towards investment, it was a committee. I was at a conference. And I was like, I was fascinated by that. And they were basically using Bitcoin protocol as the tokenization protocol, but they were using AI to process it. And it was basically looking at...because legal contracts are basically...you can teach it with natural language processing and be able to encode and almost output it itself and then be able to speak with each other. 
Another case point was Facebook. They had...what was it? Two AI systems. They began to create their own language. I don't know if you remember that story or heard about it, and Facebook shut it down. And this was more like two years ago, I think, when they were saying Facebook was talking, you know, when they were Facebook, not Meta, so maybe it was three years ago. And they were talking, and they were like, "Oh, Facebook has a language. It's talking to each other." And it created its own little site language because it was two AI bots going back and forth. And then the engineers at Facebook said, "We got to shut this down because this is kind of getting out of the box." So when you talk about AI as a service, yes, the good and the bad, and what you take away is AWS, Oracle, Google Cloud they do have services where it doesn't need to cost you as much anymore as it used to in the beginning if you know what you're doing ahead of time. And you're not just running iterations or data processing because you're doing guesswork versus, in my opinion, versus actually knowing exactly specifically what you're looking for and the data set you're looking to get out of it. And then you're talking about just basically putting in containers and clustering it because it gets different operations. And so what you're really looking at is something called an N-scale graph data that can process data in maybe sub seconds at that level, excuse me. And one of my advisors is the head of that anyway at AGI laboratory. So he's got an N graph database that can process...when we implement it, we'll be able to process data at the petabyte level at sub-seconds, and it can run on platforms like Azure or AWS, and so forth.

VICTORIA: Oh, that's interesting. So it sounds like cloud providers are making compute services more affordable. You've got data, the N-scale graph data, that can run more transactions more quickly.
And I'm curious if you see any future trends since I know you're a futurist around quantum computing and how that could affect capacity for -- LEONARD: Oh [laughs] We haven't even gotten there yet. Yes. Well, if you look at N-scale, if you know what you're doing and you know what to look for, then the quantum just starts going across different domains as well but at a higher hit rate. So there's been some quantum computers online. There's been several...well, Google has their quantum computer coming online, and they've been working on it, and Google has enough data, of course, to process. So yeah, they've got that data, lots of data. And quantum needs, you know, if it's going to do something, it needs lots of data. But then the inference will still be, I think, quantum is very good at processing large, large, large amounts of data. We can just keep going if you really have a good quantum computer. But it's really narrow. You have to tell it exactly what it wants, and it will do it in what we call...which is great like in P or NP square or P over NP which is you want to do it in polynomial time, not non-polynomial, polynomial time which is...now speaking too fast. Okay, my brain is going faster than my lips. Let me slow it down. So when you start thinking about processing, if we as humans, let's say if I was going to process A to Z, and I'm like, okay, here is this equation, if I tell you it takes 1000 years, it's of no use to us, to me and you Victoria because we're living now. Now, the earth may benefit in 1000 years, but it's still of no use. But if I could take this large amount of data and have it process within minutes, you know, worst case hours...but then I'll even go down to seconds or sub-seconds, then that's really a benefit to humanity now, today in present term. And so, as a futurist, yes, as the world, we will continue to add data. 
We're doing it every day, and we already knew this was coming ten years ago, 15 years ago, 20 years ago, even actually in the '50s when we were in the AI winter. We're now in AI summer. In my words, I call it the AI summer. So as you're doing this, that data is going to continue to increase, and quantum will be needed for that. But then the specific need...quantum is very good at looking at a specific issue, specifically for that very narrow. Like if you were going to do the trajectory to Jupiter or if we wanted to send a probe to Jupiter or something, I think we're sending something out there now from NASA, and so forth, then you need to process all the variables, but it's got one trajectory. It's going one place only. VICTORIA: Gotcha. Well, that's so interesting. I'm glad I asked you that question. And speaking of rockets going off to space, have you ever seen a SpaceX launch from LA? LEONARD: Actually, I saw one land but not a launch. I need to go over there. It's not too far from me. But you got to give credit where credit's due and Elon has a reusable rocket. See, that's where technology is solving real-world problems. Because NASA and I have, you know, my chairman, his name is Alexander Nawrocki, you know, he's Ph.D., but I call him Rocki. He goes by Rocki like I go by LS. But it's just we talk about this like NASA's budget. [laughs] How can you reduce this? And Elon says they will come up with a reusable rocket that won't cost this much and be able to...and that's the key. That was the kind of Holy Grail where you can reuse the same rocket itself and then add some little variables on top of it. But the core, you wouldn't constantly be paying for it. And so I think where the world is going...and let me be clear, Elon pushes a lot out there. He's just very good at it. But I'm also that kind of guy that I know that Tesla itself was started by two Stanford engineers. 
Elon came on later, like six months, and then he invested, and he became CEO, which was a great investment for Elon Musk. And then CEO I just think it just fit his personality because it was something he loved. But I also have studied for years Nikola Tesla, and I understand what his contributions created where we are today with all the patents that he had. And so he's basically the father of WiFi and why we're able to communicate in a lot of this. We've perfected it or improved it, but it was created by him in the 1800s. VICTORIA: Right. And I don't think he came from as fortunate a background as Elon Musk, either. Sometimes I wonder what I could have done born in similar circumstances. [laughter] And you certainly have made quite a name for yourself. LEONARD: Well, I'm just saying, yeah, he came from very...he did come from a poor area of Russia which is called the Russian territory, to be very honest, Eastern Europe, definitely Eastern Europe. But yeah, I don't know once you start thinking about that [laughs]. You're making me laugh, Victoria. You're making me laugh. VICTORIA: No, I actually went camping, a backpacking trip to the Catalina Island, and there happened to be a SpaceX launch that night, and we thought it was aliens because it looked wild. I didn't realize what it was. But then we figured it was a launch, so it was really great. I love being here and being close to some of this technology and the advancements that are going on. I'm curious if you have some thoughts about...I hear a lot about or you used to hear about Silicon Valley Tech like very Northern California, San Francisco focus. But what is the difference in SoCal? What do you find in those two communities that makes SoCal special? [laughs] LEONARD: Well, I think it's actually...so democratizing AI. I've been in a moment like that because, in 2015, I was in Dubai, and they were talking about creating silicon oasis. 
And so there's always been this model of, you know, because they were always, you know, the whole Palo Alto thing is people would say it and it is true. I mean, I experienced it. Because I was in a two-year program, post-graduate program executive, but we would go up there...I wasn't living up there. I had to go there maybe once every month for like three weeks, every other month or something. But when you're up there, it is the air in the water. It's just like, people just breathe certain things. Because around the world, and I would travel to Japan, and China, and other different parts of Asia, Vietnam, et cetera and in Africa of course, and let's say you see this and people are like, so what is it about Silicon Valley? And of course, the show, there is the Hollywood show about it, which is pretty a lot accurate, which is interesting, the HBO show. But you would see that, and you would think, how are they able to just replicate this? And a lot of it is a convergence. By default, they hear about these companies' access because the key is access, and that's what we're...like this podcast. I love the concept around it because giving awareness, knowledge, and access allows other people to spread it and democratize it. So it's just not one physical location, or you have to be in that particular area only to benefit. I mean, you could benefit in that area, or you could benefit from any part of the world. But since they started, people would go there; engineers would go there. They built company PCs, et cetera. Now that's starting to spread in other areas like Southern Cal are creating their own innovation hubs to be able to bring all three together. And those three are the engineers and founders, and idea makers and startups. And you then need the expertise. I'm older than 42; I'm not 22. [laughs] So I'm just keeping it 100, keeping it real. So I'm not coming out at 19. I mean, my son's 18. 
And I'm not coming out, okay, this my new startup, bam, give me a billion dollars, I'm good. And let me just write off the next half. But when you look at that, there's that experience because even if you look at Mark Zuckerberg, I always tell people that give credit where credit is due. He brought a senior team with him when he was younger, and he didn't have the experience. And his only job has been Facebook out of college. He's had no other job. And now he's been CEO of a multi-billion dollar corporation; that's a fact. Sometimes it hurts people's feelings. Like, you know what? He's had no other job. Now that can be good and bad, [laughs] but he's had no other jobs. And so that's just a credit, like, if you can surround yourself with the right people and be focused on something, it can work to the good or the bad for your own personal success but then having that open architecture. And I think he's been trying to learn and others versus like an Elon Musk, who embraces everything. He's just very open in that sense. But then you have to come from these different backgrounds. But let's say Elon Musk, Mark Zuckerberg, let's take a guy like myself or whatever who didn't grow up with all of that who had to make these two ends meet, figure out how to do the next day, not just get to the next year, but get to the next day, get to the next week, get to the next month, then get to the next year. It just gives a different perspective as well. Humanity's always dealing with that. Because we had a lot of great engineers back in the early 1900s. They're good or bad, you know, you did have Nikola Tesla. You had Edison. I'm talking about circa around 1907 or 1909, prior to World War I. America had a lot of industries. They were the innovators then, even though there were innovations happening in Europe, and Africa, and China, as well and Asia. But the innovation hub kind of created as the America, quote, unquote, "industrial revolution." 
And I think we're about to begin a new revolution sort of tech and an industrial revolution that's going to take us to maybe from 20...we're 2022 now, but I'll say it takes us from 2020 to 2040 in my head. VICTORIA: So now that communities can really communicate across time zones and locations, maybe the hubs are more about solving specific problems. There are regional issues. That makes a lot more sense. LEONARD: Yes. And collaborating together, working together, because scientists, you know, COVID taught us that. People thought you had to be in a certain place, but then a lot of collaboration came out of COVID; even though it was bad globally, even though we're still bad, if people were at home, they start collaborating, and scientists will talk to scientists, you know, businesses, entrepreneurs, and so forth. But if Orange County is bringing together the mentors, the venture capital, or at least Southern California innovation and any other place, I want to say that's not just Silicon Valley because Silicon Valley already has it; we know that. And that's that region. It's San Jose all the way up to...I forgot how far north it's past San Francisco, actually. But it's that region of area where they encompass the real valley of Silicon Valley if you're really there. And you talk about these regions. Yes, I think we're going to get to a more regional growth area, and then it'll go more micro to actually cities later in the future. But regional growth, I think it's going to be extremely important globally in the very near term. I'm literally saying from tomorrow to the next, maybe ten years, regional will really matter. And then whatever you have can scale globally anyway, like this podcast we're doing. This can be distributed to anyone in the world, and they can listen at ease when they have time. VICTORIA: Yeah, I love it. It's both exciting and also intimidating. [laughs] And you mentioned your son a little bit earlier. 
And I'm curious, as a founder and someone who spent a good amount of time in graduate and Ph.D. programs, if you feel like it's easy to connect with your son and maintain that balance and focusing on your family while you're building a company and investing in yourself very heavily. LEONARD: Well, I'm older, [laughs] so it's okay. I mean, I've mentored him, you know. And me and his mom have a relationship that works. I would say we have a better relationship now than when we were together. It is what it is. But we have a communication level. And I think she was just a great person because I never knew my real father, ever. I supposedly met him when I was two or one; I don't know. But I have no memories, no photos, nothing. And that was just the environment I grew up in. But with my son, he knows the truth of everything about that. He's actually in college. I don't like to name the school because it's on the East Coast, and it's some Ivy League school; that's what I will say. And he didn't want to stay on the West Coast because I'm in Orange County and his mom's in Orange County. He's like, "I want to get away from both of you people." [laughter] And that's a joke, but he's very independent. He's doing well. When he graduated high school, he graduated with 4.8 honors. He made the valedictorian. He was at a STEM school. VICTORIA: Wow. LEONARD: And he has a high GPA. He's studying computer science and economics as well at an Ivy League, and he's already made two or three apps at college. And I said, "You're not Mark, so calm down." [laughter] But anyway, that was a recent conversation. I won't go there. But then some people say, "LS, you should be so happy." What is it? The apple doesn't fall far from the tree. But this was something he chose around 10 or 11. I'm like, whatever you want to do, you do; I'll support you no matter what. And his mom says, "Oh no, I think you programmed him to be like you." [laughs] I'm like, no, I can't do that. 
I just told him the truth about life. And he's pretty tall. VICTORIA: You must have -- LEONARD: He played basketball in high school a lot. I'm sorry? VICTORIA: I was going to say you must have inspired him. LEONARD: Yeah. Well, he's tall. He did emulate me in a lot of ways. I don't know why. I told him just be yourself. But yes, he does tell me I'm an inspiration to that; I think because of all the struggles I've gone through when I was younger. And you're always going through struggles. I mean, it's just who you are. I tell people, you know, you're building a company. You have success. You can see the future, but sometimes people can't see it, [laughs] which I shouldn't really say, but I'm saying anyway because I do that. I said this the other night to some friends. I said, "Oh, Jeff Bezos' rocket blew up," going, you know, Blue Origin rocket or something. And then I said Elon will tell Jeff, "Well, you only have one rocket blow up. I had three, [laughter] SpaceX had three." So these are billionaires talking to billionaires about, you know, most people don't even care. You're worth X hundred billion dollars. I mean, they're worth 100 billion-plus, right? VICTORIA: Right. LEONARD: I think Elon is around 260 billion, and Jeff is 160 or something. Who cares about your rocket blowing up? But it's funny because the issues are still always going to be there. I've learned that. I'm still learning. It doesn't matter how much wealth you have. You just want to create wealth for other people and better their lives. The more you search on bettering lives, you're just going to have to wake up every day, be humble with it, and treat it as a new day and go forward and solve the next crisis or problem because there will be one. There is not where there are no problems, is what I'm trying to say, this panacea or a utopia where you personally, like, oh yeah, I have all this wealth and health, and I'm just great. Because Elon has had divorce issues, so did Jeff Bezos.
So I told my son a lot about this, like, you never get to this world where it's perfect in your head. You're always going to be doing things. VICTORIA: That sounds like an accurate future prediction if I ever heard one. [laughs] Like, there will be problems. No matter where you end up or what you choose to do, you'll still have problems. They'll just be different. [laughs] LEONARD: Yeah, and then this is for women and men. It means you don't give up. You just keep hope alive, and you keep going. And I believe personally in God, and I'm a scientist who actually does. But I look at it more in a Godly aspect. But yeah, I just think you just keep going, and you keep building because that's what we do as humanity. It's what we've done. It's why we're here. And we're standing on the shoulders of giants, and I just always considered that from physicists and everyone. VICTORIA: Great. And if people are interested in building something with you, you have that opportunity right now to invest via the crowdfunding app, correct? LEONARD: Yes, yes, yes. They can do that because the company is still the same company because eventually, we're going to branch out. My complete vision for AIEDC is using artificial intelligence for economic development, and that will spread horizontally, not just vertically. Vertically right now, just focus on just a mobile app maker digitization and get...because there are so many businesses even globally, and I'm not talking only e-commerce. So when I say small to midsize business, it can be a service business, car insurance, health insurance, anything. It doesn't have to be selling a particular widget or project, you know, product. And I'm not saying there's nothing wrong with that, you know, interest rates and consumerism. But I'm not thinking about Shopify, and that's fine, but I'm talking about small businesses. And there's the back office which is there are a lot of tools for back offices for small businesses. 
But I'm talking about they create their own mobile app more as a way to communicate with their customers, update them with their customers, and that's key, especially if there are disruptions. So let's say that there have been fires in California. In Mississippi or something, they're out of water. In Texas, last year, they had a winter storm, electricity went out. So all of these things are disruptions. This is just in the U.S., And of course, I won't even talk about Pakistan, what's going on there and the flooding and just all these devastating things, or even in China where there's drought where there are these disruptions, and that's not counting COVID disrupts, the cycle of business. It literally does. And it doesn't bubble up until later when maybe the central banks and governments pay attention to it, just like in Japan when that nuclear, unfortunately, that nuclear meltdown happened because of the earthquake; I think it was 2011. And that affected that economy for five years, which is why the government has lower interest rates, negative interest rates, because they have to try to get it back up. But if there are tools and everyone's using more mobile apps and wearables...and we're going to go to the metaverse and all of that. So the internet of things can help communicate that. So when these types of disruptions happen, the flow of business can continue, at least at a smaller level, for an affordable cost for the business. I'm not talking about absorbing costs because that's meaningless to me. VICTORIA: Yeah, well, that sounds like a really exciting project. And I'm so grateful to have this time to chat with you today. Is there anything else you want to leave for our listeners? LEONARD: If they want to get involved, maybe they can go to our crowdfunding page, or if they've got questions, ask about it and spread the word. 
Because I think sometimes, you know, they talk about the success of all these companies, but a lot of it starts with the founder...but not a founder. If you're talking about a startup, it starts with the founder. But it also stops with the innovators that are around that founder, male or female, whoever they are. And it also starts with their community, building a collective community together. And that's why Silicon Valley is always looked at around the world as this sort of test case of this is how you create something from nothing and make it worth great value in the future. And I think that's starting to really spread around the world, and more people are opening up to this. It's like the crowdfunding concept. I think it's a great idea, like more podcasts. I think this is a wonderful idea, podcasts in and of themselves, so people can learn from people versus where in the past you would only see an interview on the business news network, or NBC, or Fortune, or something like that, and that's all you would understand. But this is a way where organically things can grow. I think the growth will continue, and I think the future's bright. We just have to know that it takes work to get there. VICTORIA: That's great. Thank you so much for saying that and for sharing your time with us today. I learned a lot myself, and I think our listeners will enjoy it as well. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobot.fm. You can find me on Twitter @victori_ousg. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time. ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success. Special Guest: Leonard S. Johnson.

The Stephen Wolfram Podcast
History of Science and Technology Q&A (October 20, 2021)

The Stephen Wolfram Podcast

Play Episode Listen Later Oct 7, 2022 77:17


Stephen Wolfram answers questions from his viewers about the history of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa Questions include: How much of your work on cellular automata was influenced by Ulam's work during the Manhattan Project? - How do you approach studying the history of technology to inform your work on current projects? Do you do very targeted studies when starting a project? How much historical context is enough? - Is there a list of Wolfram-recommended history of science and technology books? - What was it about ancient Greece that allowed for great advancements in math and science and great thinkers? - Could you talk about the history of Ed Fredkin's work and if it's similar to your work on cellular automata? - In 2001: A Space Odyssey, the alien monolith is a Von Neumann probe. Is it rectangular because of the cellular automata inspiration? - What is the book top right with the horse's head? Just curious! - What do you think is the significance of the Antikythera mechanism? How close do you think the Greeks were to a technological civilization? - Have you read Asimov's Foundation? Do you think psychohistory could actually be a real science with real predictive power? Does it need to find pockets of computational reducibility? - You said the cellular automata experiments you did in the 80s could have been done in Los Alamos, why do you think those weren't done then?

Razib Khan's Unsupervised Learning
Ananyo Bhattacharya: The Life of John von Neumann

Razib Khan's Unsupervised Learning

Play Episode Listen Later Jun 13, 2022 85:33 Very Popular


Who was the smartest human of the 20th century? Though intellectual celebrity probably dictates that the majority would answer Albert Einstein, another candidate is the mathematician John von Neumann. Today on Unsupervised Learning Razib talks to science journalist Ananyo Bhattacharya, author of The Man from the Future: The Visionary Life of John von Neumann, and erstwhile physicist and editor at Nature. They discuss the life and science of a scholar whose mental acuity was so preternatural that he was affectionately labeled a “Martian” by his colleagues. Razib and Bhattacharya discuss the social context of von Neumann's upbringing in the haute bourgeoisie of the late Austro-Hungarian Empire (his family was elevated to the nobility when von Neumann was ten), a milieu that facilitated his insatiable intellectual appetites and provided him an incomparable set of peers that would ensure he never became complacent. Then, Bhattacharya notes that Von Neumann was not exceptional at every intellectual endeavor. He may have made original contributions to mathematics, physics, economics, statistics and computing, but non-polymath mortals may take comfort that he was known to be a mediocre chess player and a life-threatening driver. To sum up, they consider some of the aforementioned contributions that the “Martian” made to human knowledge before dying prematurely from cancer at the age of 53.

Universe Today Podcast
812: Quasar Dust, Grabby Aliens and Who Needs von Neumann probes | Q&A 181

Universe Today Podcast

Play Episode Listen Later Apr 27, 2022 33:22 Very Popular


In this week's questions and answers show, I talk about supermassive black hole nucleosynthesis, the threat of grabby aliens and why we would ever bother building Von Neumann Probes. 00:00 Start 01:00 [Tatooine] Are elements being formed in quasars? 03:50 [Coruscant] What will the coverage of the Moon landings be like? 06:39 [Hoth] How close will Comet C/2014 UN271 get to us? 08:29 [Naboo] Why send out Von Neumann Probes? 09:56 [Kamino] Will we actually develop the technology for Von Neumann Probes? 12:34 [Bespin] Why do colliding black holes turn mass into gravitational waves? 13:56 [Mustafar] Should we broadcast signals into space? 18:05 [Alderaan] Would bowl-spin habitats help with lower gravity? 21:07 [Dagobah] What will the first pictures from Webb be? 22:31 [Yavin] How do we know the Oort Cloud exists? 23:31 [Mandalore] What telescopes can fly with Starship? 26:46 [Geonosis] Could a heavy suit mimic Earth gravity? 28:41 [Corellia] Why do people want to colonize other planets? Want to be part of the questions show? Ask a short question on any video on my channel. I gather a bunch up each week and answer them here.

Universe Today Podcast
Episode 794: Q&A 174: Could We Mine Jupiter for Hydrogen? And More...

Universe Today Podcast

Play Episode Listen Later Mar 14, 2022 42:58 Very Popular


In this week's episode, I explain how we could use Jupiter as a source of fuel for our fusion reactors, what it means to say there's a scientific consensus, and if gravitational waves can trigger earthquakes. 00:00 Start 01:26 Could we mine Jupiter for hydrogen? 03:24 What does "scientific consensus" mean? 07:04 Could gravitational waves trigger earthquakes? 08:47 Is there really a need to build Dyson Spheres? 10:52 Could aliens know our history? 12:33 When will there be good pictures from JWST? 14:19 What's holding up Starship? 16:21 What happens to ISS if Russia pulls out? 19:35 Would life be better at a K-star? 22:12 Where can we invest in space mining? 23:15 How do we know the source of gravitational waves? 24:22 Could the Solar System leave the Milky Way? 27:06 Are we Von Neumann probes? 28:17 Does SpaceX really land rockets? 29:29 Why aren't there more rovers going to the ice on the Moon and Mars? 30:43 How can Webb get solar power? 31:35 Are there any other uses for ISS? 33:18 Do neutron stars have crusts? 34:32 Will LUVOIR get built? 36:55 Would a diamond spaghettify entering a black hole? 38:11 Does life need a Jupiter to protect it? 39:47 Could we survive just living in spacesuits? Want to be part of the questions show? Ask a short question on any video on my channel. I gather a bunch up each week and answer them here.
