Gabby Reece speaks with Dr. Ben Goertzel, a leading figure in artificial intelligence, about the potential of AI to revolutionize fields from gene therapy to human consciousness. They explore the balance between technological advancement and our connection to nature, the philosophical implications of longevity, and the future of human experience in an increasingly tech-driven world. The discussion delves into the opportunities and challenges posed by AI, emphasizing the importance of maintaining our humanity amid rapid technological change.

SPONSORS
Lume: Control body odor anywhere with @lumedeodorant and get 15% off with promo code GABBY at lumepodcast.com/GABBY! #lumepod
Puori: Puori is offering 20% off your one-time purchase at puori.com/GABBY with promo code GABBY at checkout. If you choose their already discounted subscription, that's nearly a third off the price!
STEMREGEN: Get 20% off STEMREGEN with the code GABBY20 at https://www.stemregen.co/pages/gabby-reece
Fatty15: Fatty15 is on a mission to replenish your C15 levels and restore your long-term health. Get an additional 15% off their 90-day subscription Starter Kit at fatty15.com/GABBY with code GABBY at checkout.

Chapters
00:00 AI and Gene Therapy Innovations
08:30 Understanding AI's Rapid Advancement
14:14 The Singularity and Its Implications
20:25 The Twin Protocol and Data Ownership
28:56 Longevity Supplements and Health Innovations
48:12 Deep Dive with Dr. Ben Goertzel
54:24 Expanding Consciousness and Technology
01:00:10 The Intersection of Nature and Technology
01:06:23 The Evolution of Technology and Human Experience
01:12:35 Philosophical Perspectives on Longevity
01:18:06 The Role of AI in Art and Creativity
01:25:43 Desdemona: A New Era of Robot Interaction
01:48:13 The Illusion of Control
01:55:46 The Role of AI in Society
02:05:22 Health, Longevity, and Supplements
02:10:42 AI and Personalized Medicine
02:16:32 Education and Emotional Intelligence
02:22:23 Psychedelics and Consciousness
02:30:01 The Future of AI and Longevity

For more Gabby:
Instagram: https://www.instagram.com/gabbyreece/
TikTok: https://www.tiktok.com/@gabbyreeceofficial
The Gabby Reece Show Podcast on YouTube: https://www.youtube.com/channel/UCeEINLNlGvIceFOP7aAZk5A

Keywords: AI, gene therapy, consciousness, technology, human connection, longevity, transhumanism, robotics, nature, future, creativity, human-robot collaboration, health, complexity, sovereignty, digital twins, personalized medicine, psychedelics, education, emotional intelligence, brain health

Learn more about your ad choices. Visit megaphone.fm/adchoices
Dr. Ben Goertzel, CEO of the Artificial Superintelligence Alliance and founder of SingularityNET, discusses AI cyberattacks in war and a new grants scheme to accelerate the emergence of human-level artificial general intelligence and 'superintelligence'.

Plus: happy birthday, ChatGPT; the National Trust picks 49 good causes to receive Sycamore Gap saplings; and Australia passes a world-first law banning under-16s from social media accounts.

Also in this episode:
The rise of the dinosaurs, from herbivores to carnivores
Why has the University of Bath studied the sale of lone bananas as "sad singles"?
Three-year-old 'chef' becomes viral TikTok sensation
Judy Garland's iconic ruby slippers go on display in London

Hosted on Acast. See acast.com/privacy for more information.
This episode is sponsored by LegalZoom. Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com/ and use promo code Smith10 to get 10% off any LegalZoom business formation product, excluding subscriptions and renewals.

In this episode of the Eye on AI podcast, we dive into the world of Artificial General Intelligence (AGI) with Ben Goertzel, CEO of SingularityNET and a leading pioneer in AGI development. Ben shares his vision for building machines that go beyond task-specific capabilities to achieve true, human-like intelligence. He explores how AGI could reshape society, from revolutionizing industries to redefining creativity, learning, and autonomous decision-making.

Throughout the conversation, Ben discusses his unique approach to AGI, which combines decentralized AI systems and blockchain technology to create open, scalable, and ethically aligned AI networks. He explains how his work with SingularityNET aims to democratize AI, making AGI development transparent and accessible while mitigating the risks associated with centralized control.

Ben also delves into the philosophical and ethical questions surrounding AGI, offering insights into consciousness, the role of empathy, and the potential for building machines that not only think but also align with humanity's best values. He shares his thoughts on how decentralized AGI can avoid the narrow, profit-driven goals of traditional AI and instead evolve in ways that benefit society as a whole.

This episode offers a thought-provoking glimpse into the future of AGI, touching on the technical challenges, societal impact, and ethical considerations that come with creating truly intelligent machines. Ben's perspective will leave you questioning not only what AGI can achieve, but also how we can guide it toward a positive future. Don't forget to like, subscribe, and hit the notification bell to stay tuned for more!
Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

(00:00) Introduction to Ben Goertzel
(01:21) Overview of "The Consciousness Explosion"
(02:28) Ben's Background in AI and AGI
(04:39) Exploring Consciousness and AI
(08:22) Panpsychism and Views on Consciousness
(10:32) The Path to the Singularity
(13:28) Critique of Modern AI Systems
(18:30) Perspectives on Human-Level AI and Creativity
(21:42) Ben's AGI Paradigm and Approach
(25:39) OpenCog Hyperon and Knowledge Graphs
(31:12) Integrating Perception and Experience in AI
(34:02) Robotics in AGI Development
(35:06) Virtual Learning Environment for AGI
(39:01) Creativity in AI vs. Human Intelligence
(44:21) User Interaction with AGI Systems
(48:22) Funding AGI Research Through Cryptocurrency
(53:03) Final Thoughts on Compassionate AI
(55:21) How to Get "The Consciousness Explosion" Book
In episode 228, my guest was researcher and author Barış Yalın Uzunlu. In this episode we discuss the strengths and weaknesses of artificial intelligence, its current state, and its potential future developments. You will hear about a wide range of topics, from voice assistants to the Turing Test, and from the importance of common sense to the challenge of conveying the unknown to AI.

(00:00) – Opening
(00:56) – Getting to know Barış Yalın Uzunlu
(02:00) – What is intelligence? What is the difference between "strong AI" and "weak AI"?
https://en.wikipedia.org/wiki/Garry_Kasparov
https://stockfishchess.org/
https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)
(06:42) – Do voice assistants count as weak AI?
https://tr.wikipedia.org/wiki/Kahin_(Matrix)
(09:17) – The purpose and current state of artificial intelligence
https://tr.wikipedia.org/wiki/Dartmouth_Konferans%C4%B1
https://www.goodreads.com/book/show/36722634-artificial-unintelligence?ac=1&from_search=true&qid=d045329r1E&rank=1
The Turing Test – https://tr.wikipedia.org/wiki/Turing_testi
(12:40) – The importance and difficulty of common sense and awareness
The Chinese Room – https://tr.wikipedia.org/wiki/%C3%87ince_odas%C4%B1
(15:20) – How can we convey something unknown to AI? Why has no progress been made on this?
Cyc – https://en.wikipedia.org/wiki/Cyc
Ben Goertzel – https://en.wikipedia.org/wiki/Ben_Goertzel
(21:00) – AI in the future / potential future developments of artificial intelligence
K Computer – https://en.wikipedia.org/wiki/K_computer
(24:50) – Steps toward AI reaching its true goal
(26:49) – Closing remarks
(27:25) – Book recommendations
Gödel, Escher, Bach: Bir Ebedi Gökçe Belik (Gödel, Escher, Bach: An Eternal Golden Braid) – https://www.goodreads.com/book/show/18373643-g-del-escher-bach?ac=1&from_search=true&qid=oqsNVA3pEH&rank=1
What Computers Can't Do: The Limits of Artificial Intelligence – https://www.goodreads.com/book/show/1039575.What_Computers_Can_t_Do?ref=nav_sb_ss_1_62
Aklın Gözü (The Mind's I) – https://www.goodreads.com/book/show/16078425-akl-n-g-z?from_search=true&from_srp=true&qid=5jjzaiNvsn&rank=1
Bilinçli Makinelere Giden Yol: Yapay Zekânın Dünü, Bugünü, Yarını (The Road to Conscious Machines: The Past, Present, and Future of AI) – https://www.goodreads.com/book/show/63115287-bilin-li-makinelere-giden-yol?ref=nav_sb_ss_1_29
(29:33) – Closing
Barış Yalın Uzunlu – https://www.linkedin.com/in/baris-yalin-uzunlu-04416a8a/

Are you following us on social media?
Twitter – https://twitter.com/dunyatrendleri
Instagram – https://www.instagram.com/dunya.trendleri/
Linkedin – https://www.linkedin.com/company/dunyatrendleri/
Youtube – https://www.youtube.com/c/aykutbalcitv
Goodreads – https://www.goodreads.com/user/show/28342227-aykut-balc
aykut@dunyatrendleri.com
To support us with a donation, our Patreon account – https://www.patreon.com/dunyatrendleri
Learn more about your ad choices. Visit megaphone.fm/adchoices
Dr. Ben Goertzel, CEO and chief scientist at SingularityNET, explains why decentralizing AI is critical to the development of artificial general intelligence. He also explores AI's potential to solve human aging and shares his thoughts on what sentience might look like in an artificial general intelligence.

The Agenda is brought to you by Cointelegraph and hosted/produced by Ray Salmond and Jonathan DeYoung. Follow Cointelegraph on X (Twitter) at @Cointelegraph, Jonathan at @maddopemadic and Ray at @HorusHughes. Jonathan is also on Instagram at @maddopemadic, and he makes the music for the podcast: hear more at madic.art. Follow Ben Goertzel on X at @bengoertzel. Check out Cointelegraph at cointelegraph.com.

Timestamps:
(00:00) - Introduction to The Agenda podcast and this week's episode
(02:56) - AI's journey from the 1950s to today
(06:16) - How AI eventually evolves to the level of superintelligence
(07:10) - Does AI really pose a risk to humanity?
(12:25) - The importance of decentralizing artificial intelligence
(15:37) - In the future, who will "own" the AI?
(25:31) - What Hollywood got right and wrong about AI
(35:53) - Will AI help humans become immortal?
(40:46) - Dr. Goertzel explains The Consciousness Explosion

If you like what you heard, rate us and leave a review!

The views, thoughts and opinions expressed in this podcast are its participants' alone and do not necessarily reflect or represent the views and opinions of Cointelegraph. This podcast (and any related content) is for entertainment purposes only and does not constitute financial advice, nor should it be taken as such. Everyone must do their own research and make their own decisions. The podcast's participants may or may not own any of the assets mentioned.
Jim talks with Trent McConaghy and Ben Goertzel about the merger of Ben's SingularityNET AGIX token, Trent's Ocean Protocol, and Fetch. They discuss the relative size of the merger, motivations for pulling together the three networks, distinguishing this from a standard corporate merger, how the communities of the projects reacted, leveraging the benefits of scale, changing the ticker symbol, defining AGI vs ASI, forecasts on AGI, considering the arc of self-driving cars, data bottlenecks, the likely shape of superintelligence, what the three organizations do, autonomous economic agents, AI creativity, the amount of work happening on crypto networks, the antifragility of crypto, why AGI/ASI emerging from these networks might be plausible, making something work without understanding it, and much more.

JRS EP217 Ben Goertzel on a New Framework for AGI
JRS EP211 Ben Goertzel on Generative AI vs. AGI
Currents 072: Ben Goertzel on Viable Paths to True AGI
JRS EP3 Dr. Ben Goertzel – OpenCog, AGI and SingularityNET
JRS EP22 Trent McConaghy on AI & Brain-Computer Interface Accelerationism (bci/acc)
"Is It Worth Being Wise?" by Paul Graham
"The Unreasonable Effectiveness of Data," Google Research

Dr. Ben Goertzel is a cross-disciplinary scientist, entrepreneur and author. Born in Brazil to American parents, in 2020, after a long stretch living in Hong Kong, he relocated his primary base of operations to a rural island near Seattle. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society, which runs the annual Artificial General Intelligence conference. Dr. Goertzel's research work encompasses multiple areas including artificial general intelligence, natural language processing, cognitive science, machine learning, computational finance, bioinformatics, virtual worlds, gaming, parapsychology, theoretical physics and more.

Trent McConaghy is founder of Ocean Protocol. He has 25 years of deep tech experience with a focus on AI and blockchain. He co-founded Analog Design Automation Inc. in 1999, which built AI-powered tools for creative circuit design; it was acquired by Synopsys in 2004. He co-founded Solido Design Automation in 2004, using AI to mitigate process variation and help drive Moore's Law; Solido was later acquired by Siemens. He then launched ascribe in 2013 for NFTs on Bitcoin, and Ocean Protocol in 2017 for decentralized data markets for AI. He currently focuses on Ocean Predictoor for crowd-sourced AI prediction feeds.
Ben Goertzel discusses AGI development, transhumanism, and the potential societal impacts of superintelligent AI. He predicts human-level AGI by 2029 and argues that the transition to superintelligence could happen within a few years after that. Goertzel explores the challenges of AI regulation, the limitations of current language models, and the need for neuro-symbolic approaches in AGI research. He also addresses concerns about resource allocation and cultural perspectives on transhumanism.

TOC:
[00:00:00] AGI Timeline Predictions and Development Speed
[00:00:45] Limitations of Language Models in AGI Development
[00:02:18] Current State and Trends in AI Research and Development
[00:09:02] Emergent Reasoning Capabilities and Limitations of LLMs
[00:18:15] Neuro-Symbolic Approaches and the Future of AI Systems
[00:20:00] Evolutionary Algorithms and LLMs in Creative Tasks
[00:21:25] Symbolic vs. Sub-Symbolic Approaches in AI
[00:28:05] Language as Internal Thought and External Communication
[00:30:20] AGI Development and Goal-Directed Behavior
[00:35:51] Consciousness and AI: Expanding States of Experience
[00:48:50] AI Regulation: Challenges and Approaches
[00:55:35] Challenges in AI Regulation
[00:59:20] AI Alignment and Ethical Considerations
[01:09:15] AGI Development Timeline Predictions
[01:12:40] OpenCog Hyperon and AGI Progress
[01:17:48] Transhumanism and Resource Allocation Debate
[01:20:12] Cultural Perspectives on Transhumanism
[01:23:54] AGI and Post-Scarcity Society
[01:31:35] Challenges and Implications of AGI Development

New! PDF show notes: https://www.dropbox.com/scl/fi/fyetzwgoaf70gpovyfc4x/BenGoertzel.pdf?rlkey=pze5dt9vgf01tf2wip32p5hk5&st=svbcofm3&dl=0

Refs:
00:00:15 Ray Kurzweil's AGI timeline prediction, Ray Kurzweil, https://en.wikipedia.org/wiki/Technological_singularity
00:01:45 Ben Goertzel: SingularityNET founder, Ben Goertzel, https://singularitynet.io/
00:02:35 AGI Conference series, AGI Conference Organizers, https://agi-conf.org/2024/
00:03:55 Ben Goertzel's contributions to AGI, Wikipedia contributors, https://en.wikipedia.org/wiki/Ben_Goertzel
00:11:05 Chain-of-Thought prompting, Subbarao Kambhampati, https://arxiv.org/abs/2405.04776
00:11:35 Algorithmic information content, Pieter Adriaans, https://plato.stanford.edu/entries/information-entropy/
00:12:10 Turing completeness in neural networks, Various contributors, https://plato.stanford.edu/entries/turing-machine/
00:16:15 AlphaGeometry: AI for geometry problems, Trieu, Li, et al., https://www.nature.com/articles/s41586-023-06747-5
00:18:25 Shane Legg and Ben Goertzel's collaboration, Shane Legg, https://en.wikipedia.org/wiki/Shane_Legg
00:20:00 Evolutionary algorithms in music generation, Yanxu Chen, https://arxiv.org/html/2409.03715v1
00:22:00 Peirce's theory of semiotics, Charles Sanders Peirce, https://plato.stanford.edu/entries/peirce-semiotics/
00:28:10 Chomsky's view on language, Noam Chomsky, https://chomsky.info/1983____/
00:34:05 Greg Egan's 'Diaspora', Greg Egan, https://www.amazon.co.uk/Diaspora-post-apocalyptic-thriller-perfect-MIRROR/dp/0575082097
00:40:35 'The Consciousness Explosion', Ben Goertzel & Gabriel Axel Montes, https://www.amazon.com/Consciousness-Explosion-Technological-Experiential-Singularity/dp/B0D8C7QYZD
00:41:55 Ray Kurzweil's books on singularity, Ray Kurzweil, https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889
00:50:50 California AI regulation bills, California State Senate, https://sd18.senate.ca.gov/news/senate-unanimously-approves-senator-padillas-artificial-intelligence-package
00:56:40 Limitations of Compute Thresholds, Sara Hooker, https://arxiv.org/abs/2407.05694
00:56:55 'Taming Silicon Valley', Gary F. Marcus, https://www.penguinrandomhouse.com/books/768076/taming-silicon-valley-by-gary-f-marcus/
01:09:15 Kurzweil's AGI prediction update, Ray Kurzweil, https://www.theguardian.com/technology/article/2024/jun/29/ray-kurzweil-google-ai-the-singularity-is-nearer
Dr. Ben Goertzel is a multidisciplinary scientist, entrepreneur, and author, originally from Brazil. He currently resides on an island near Seattle after living in Hong Kong. He leads prominent AI organizations including the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society, which hosts an annual Artificial General Intelligence conference. Goertzel is also deeply involved in AI development through organizations like Rejuve, Mindplex, and Cogito, and serves as a musician in Jam Galaxy Band, the first-ever band led by a humanoid robot. Additionally, he played a key role in the creation of the Sophia robot at Hanson Robotics and now works on the development of Grace, Sophia's sister, at Awakening Health. Goertzel's research spans fields such as artificial intelligence, cognitive science, natural language processing, and theoretical physics, resulting in over 25 scientific books and 150 technical papers. He frequently lectures at global conferences and has an extensive background in academia, having earned a PhD in mathematics from Temple University and served on university faculties in the U.S., Australia, and New Zealand. His most recent book, The Consciousness Explosion, explores the intersection of human consciousness and the technological singularity.

Shermer and Goertzel explore various topics related to AI, including the nature of intelligence, AGI, the alignment problem, consciousness, and sentience. They consider AI dystopia, utopia, and protopia, along with ethical and legal issues, such as AI values and universal basic income (UBI). Other discussions involve mind uploading, self-driving cars, robots like Sophia, and whether AI can solve political and economic problems or even achieve consciousness.
Dr. Ben Goertzel is a highly influential figure in the fields of artificial intelligence, robotics, and computational finance. Born in 1966, he has been a pioneering force in multiple scientific and technological domains. With a Ph.D. in mathematics from Temple University, Goertzel has dedicated his career to both the theoretical and practical applications of AI, striving to bridge the gap between machine intelligence and human-like cognition. He helped popularize the term "artificial general intelligence" and has been a strong advocate for advancing the field of AGI.

Dr. Goertzel leads the SingularityNET Foundation, the enterprise AI firm TrueAGI, the OpenCog Foundation, and the AGI Society. Additionally, he is the facilitator of the Artificial General Intelligence conference, held annually for more than fifteen years.

In this conversation, we discuss:
- The future of AI & blockchain
- The history and future of AI
- How AI and blockchain work together
- Artificial General Intelligence
- The Artificial Superintelligence Alliance (ASI) merger
- $ASI
- The killer use case for blockchain-based AI
- The AI product lifecycle
- How AI agents can make our lives easier
- Why AI-based fraud detection is more efficient on the blockchain

SingularityNET Foundation
Website: singularitynet.io
X: @SingularityNET
Telegram: t.me/singularitynet

Ben Goertzel
X: @bengoertzel
LinkedIn: Ben Goertzel

---------------------------------------------------------------------------------

This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers.

PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50. This promotion is available for one month after activation. Click the link below: PrimeXBT x CRYPTONEWS50
We're entering an age when software will be written on-the-fly, by machines, to meet individual needs—an era of ephemeral applications. This will have a massive impact on the emergence of artificial general intelligence (also called AGI or singularity) and Ben Goertzel returns for a lively discussion about the scope of this dawning era. Ben helped to popularize the term singularity and, as the founder and CEO of SingularityNET (https://singularitynet.io/), has long been working to democratize access to artificial intelligence. As he points out in this episode, "AI that can write code, this is the key to the singularity." Packed with food for thought as well as actionable ideas, this is a hard-hitting episode that expands on ideas central to organizational AGI. You can watch this episode on YouTube: https://onereach.ai/ai-agents/?utm_source=youtube&utm_medium=social&utm_campaign=ephemeral_apps_and_agi_episode&utm_content=1 To learn more about OAGI and try AI agent demos visit https://onereach.ai/ #AI #AIagents #singularity #AGI #OAGI #Goertzel #LLMs #GraphDB
Wanna chat about the episode? Or just hang out? Come join us on discord!

---

We're not earthly beings any more... we're cosmic beings. - Eden Ahbez

Two episodes in one to grapple with the origins of a cosmic concept, and its future among the stars.

---

*Search Categories*
Science / Pseudoscience

---

*Topic Spoiler*
Russian Cosmism and modern Cosmism

---

*Further Reading*
https://en.wikipedia.org/wiki/Russian_cosmism
https://en.wikipedia.org/wiki/Nikolai_Fyodorov_(philosopher)
https://en.wikipedia.org/wiki/Ben_Goertzel
https://firstmonday.org/ojs/index.php/fm/article/view/13636/11606

---

*Patreon Credits*
Michaela Evans, Heather Aunspach, Alyssa Ottum, David Whiteside, Jade A, amy sarah marshall, Martina Dobson, Eillie Anzilotti, Lewis Brown, Kelly Smith Upton, Wild Hunt Alex, Niklas Brock, Jim Fingal, Jenny Lamb, Matthew Walden, Rebecca Kirsch, Pam Westergard, Ryan Quinn, Paul Sweeney, Erin Bratu, Liz T, Lianne Cole, Samantha Bayliff, Katie Larimer, Fio H, Jessica Senk, Proper Gander, Nancy Carlson, Carly Westergard-Dobson, banana, Megan Blackburn, Instantly Joy, Athena of CaveSystem, John Grelish, Rose Kerchinske, Annika Ramen, Alicia Smith, Kevin, Velm, Dan Malmud, tiny, Dom, Tribe Label - Panda - Austin, Noelle Hoover, Tesa Hamilton, Nicole Carter, Paige, Brian Lancaster, tiny, GD, Elloe
HyperCycle was founded in October 2022 following discussions between CEO Toufi Saliba and Ben Goertzel, founder of SingularityNET, at the global AGI summit in 2021. The company focuses on developing a General Purpose Technology supporting a decentralized network for AI-to-AI communication, designed to scale with the worldwide demand for AI consumption.

Leveraging technologies such as the TODA Protocol, the Earth64 data structure, and SingularityNET's Liquid Reputation model, HyperCycle aims to create a secure, efficient, and globally accessible platform for AI collaboration. Its CEO, Toufi Saliba, recently joined the Bitcoin.com News Podcast to talk about the technology.

Toufi holds positions as Global Chair of IEEE AI Standards and Chair of ACM PB CC, and is a founding member of DAIA (the Decentralized AI Alliance). He has been an invited honorary speaker at major global events, including WIC, ITU, UN, Busan, and the Korean National Assembly. In 2021, he posed a thought-provoking question about the global race toward AGI at the global AGI summit. This led to the decision to build an AI brain on the Toda/IP protocol, an initiative he chose to lead.

In October 2022, HyperCycle was launched, focusing on customers who recognize the power of cooperative intelligence. The business model is B2B, with zero transaction fees and 1% on royalties. To date, over 300,000 licenses have been sold, and a token sale was initiated with 60-month tokenomics.

To learn more about the project, visit HyperCycle.AI. You can reach out to Toufi on LinkedIn or X.
In episode 256 of the Parker's Pensées Podcast, I'm joined once again by Dr. Ben Goertzel to talk about Artificial General Intelligence. Last time we talked about the philosophy of AGI; this time we discuss the state of the art of AGI and just a bit about machine consciousness. Check the time stamps for a full topic list.

Join this channel to get access to perks: https://www.youtube.com/channel/UCYbTRurpFP5q4TpDD_P2JDA/join

→Sponsors/Discounts
Check out https://murdycreative.co/PARKERNOTES and use promo code PARKERNOTES at checkout for 10% off your entire order!
Grab a Field Notes notebook or memo book wallet like the one from the video from my affiliate link here to support my work, and use promo code PARKERNOTES for 10% off your entire order: https://fieldnotesbrand.com/products/daily-carry-leather-notebook-cover?aff=44
Grab a Saddleback Leather medium Moleskine/Leuchtturm1917 cover from my affiliate link here to support my channel: https://saddlebackleather.com/leather-moleskine-cover-medium/?ktk=d0pac01BLWJmZWY1MmZiYTFi

Join the Facebook group, Parker's Pensées Penseurs, here: https://www.facebook.com/groups/960471494536285/
If you like this podcast, support it on Patreon for $3, $5, or more a month. Any amount helps, and for $5 you get a Parker's Pensées sticker and instant access to all the episodes as I record them instead of waiting for their release date. Check it out here:
Patreon: https://www.patreon.com/parkers_pensees
If you want to give a one-time gift, you can give at my PayPal: https://paypal.me/ParkersPensees?locale.x=en_US
Check out my merchandise at my Teespring store: https://teespring.com/stores/parkers-penses-merch
Come talk with the Pensées community on Discord: dsc.gg/parkerspensees
Sub to my Substack to read my thoughts on my episodes: https://parknotes.substack.com/

0:00 - What is artificial "general" intelligence?
8:44 - Does generalization have more to do with logic or imagination?
15:46 - How far back does the idea of AI go?
18:45 - Why do the East and West think differently about AI?
24:08 - The state of the AI race
31:05 - Can AI be wise?
37:57 - How important are Large Language Models?
59:14 - Machine Consciousness and AGI
1:13:25 - Can an AGI system have a psychedelic experience?
Calum and David recently attended the BGI24 event in Panama City, that is, the Beneficial General Intelligence summit and unconference. One of the speakers we particularly enjoyed listening to was Daniel Faggella, the Founder and Head of Research of Emerj.

Something that featured in his talk was a 3 by 3 matrix, which he calls the Intelligence Trajectory Political Matrix, or ITPM for short. As we'll be discussing in this episode, one dimension of this matrix is the kind of end-goal future that people desire as intelligent systems become ever more powerful. The other dimension is the kind of methods people want to use to bring about that desired future.

So, if anyone thinks there are only two options in play regarding the future of AI, for example "accelerationists" versus "doomers", to use two names that are often thrown around these days, they're actually missing a much wider set of options. And frankly, given the challenges posed by the fast development of AI systems that seem to be increasingly beyond our understanding and beyond our control, the more options we can consider, the better.

The topics that featured in this conversation included:
"The Political Singularity": when the general public realize that one political question has become more important than all the others, namely, should humanity be creating an AI with godlike powers, and if so, under what conditions
Criteria to judge whether a forthcoming superintelligent AI is a "worthy successor" to humanity

Selected follow-ups:
The website of Dan Faggella
The BGI24 conference, lead organiser Ben Goertzel of SingularityNET
The Intelligence Trajectory Political Matrix
The Political Singularity
A Worthy Successor: the purpose of AGI
Roko Mijic on Twitter/X
The novel Diaspora by Greg Egan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The Code of Entry Podcast
The Code of Entry Podcast, hosted by the insightful Greg Bew, delves deep into the...
Listen on: Apple Podcasts, Spotify
When will AI match and surpass human capability? In short, when will we have AGI, or artificial general intelligence: the kind of intelligence that can teach itself and grow itself to a vastly larger intellect than any individual human? According to Ben Goertzel, CEO of SingularityNET, that time is very close: only 3 to 8 years away.

In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama. We discuss the diverse possibilities of human and post-human existence, from cyborg enhancements to digital mind uploads, and the varying timelines for when we might achieve AGI. We talk about the role of current AI technologies, like LLMs, and how they fit into the path towards AGI, highlighting the importance of combining multiple AI methods to mirror the complexity of human intelligence. We also explore the societal and ethical implications of AGI development, including job obsolescence, data privacy, and the potential geopolitical ramifications, emphasizing the critical period of transition towards a post-singularity world where AI could significantly improve human life. Finally, we talk about ownership and decentralization of AI, comparing it to the internet's evolution, and envisage the role of humans in a world where AI surpasses human intelligence.

00:00 Introduction to the Future of AI
01:28 Predicting the Timeline of Artificial General Intelligence
02:06 The Role of LLMs in the Path to AGI
05:23 The Impact of AI on Jobs and Economy
06:43 The Future of AI Development
10:35 The Role of Humans in a World with AGI
35:10 The Diverse Future of Human and Post-Human Minds
36:51 The Challenges of Transitioning to a World with AGI
39:34 Conclusion: The Future of AGI
This and all episodes at: https://aiandyou.net/ . Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI. That was last year. Now, AGI is an explicit goal of many enterprises, notably among them Simuli. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” In the conclusion, we talk about the role of sleep in human cognition, AGI and consciousness, and… penguins. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ . Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI. That was last year. Now, AGI is an explicit goal of many enterprises, notably among them Simuli. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” In part 1 we talk about markers for AGI, distinctions between it and narrow artificial intelligence, self-driving cars, robotics, and embodiment, and… disco balls. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Watch the Full Episode for FREE: Dr Ben Goertzel - Why OpenAI Really Fired Sam Altman: AI Is A Threat To Humanity - London Real
Curt Jaimungal joins me for episode 255 of the Parker's Pensées Podcast to discuss theories of everything, physics, the simulation hypothesis, worldviews, and more! Check out his channel here: https://www.youtube.com/@UCdWIQh9DGG6uhJk8eyIFl1w 0:00 - What's this episode about? 3:32 - What is a Theory of Everything? 5:57 - Worldview (weltanschauung) and God 21:14 - Why Host an Academic Level Podcast? 29:08 - The Simulation Hypothesis 47:23 - Ben Goertzel is a genius 51:30 - Theolocutions and Office Hours podcasting 1:14:40 - Being your authentic self 1:20:13 - What kind of thing are you? 1:28:12 - What is Human Value Grounded in?
Jim talks with Ben Goertzel about a paper he co-wrote, "OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond." They discuss the way Ben defines AGI, problems with an economically oriented definition, the rate of advancement of a society, the history of OpenCog, mathematical models of intelligence, Jim's early use of OpenCog, a distributed Atomspace, Atomese vs MeTTa languages, knowledge metagraphs, why Ben didn't write a custom programming language for the original OpenCog, type theory, functional logic programming, moving from weirdly ugly to weirdly elegant, technical debt, grounding of Atoms, interfacing Hyperon with LLMs, nourishing a broader open-source community, hierarchical attention-based pattern recognition networks, heuristic induction, cognitive synergy, why scalability requires translating declarative representation into procedural form and vice versa, retrieval-augmented generation, predictive-coding-based learning as an alternative to back-propagation, the possibility of an InfoGAN-style transformer, and much more. Episode Transcript "OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond," by Ben Goertzel et al. Dr. Ben Goertzel is a cross-disciplinary scientist, entrepreneur and author. Born in Brazil to American parents, in 2020 after a long stretch living in Hong Kong he relocated his primary base of operations to a rural island near Seattle. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society which runs the annual Artificial General Intelligence conference. Dr. Goertzel's research work encompasses multiple areas including artificial general intelligence, natural language processing, cognitive science, machine learning, computational finance, bioinformatics, virtual worlds, gaming, parapsychology, theoretical physics and more. 
He also chairs the futurist nonprofit Humanity+, serves as Chief Scientist of AI firms Rejuve, Mindplex, Cogito and Jam Galaxy, all parts of the SingularityNET ecosystem, and serves as keyboardist and vocalist in the Jam Galaxy Band, the first-ever band led by a humanoid robot.
Plus, Dr. Ben Goertzel from Prophets of AI created Sophia, the world's most famous robot. He tells us more about artificial general intelligence (AGI): the AI that thinks like humans do. There's more: GM says goodbye to Apple CarPlay, and surveillance tech to stop drunk drivers.
Blake Lemoine burst onto the public scene a year and a half ago when he went public about his work on Google's LaMDA system. In this interview, Blake talks about the current state of AI development, and our collective involvement in this massively important technological event. Topics include: Google, LLMs, AGI, AI, engineering jargon, LaMDA, chatbot, Gemini, evolution of search engines, safety protocols, sentience and consciousness, Pope's sermon on AI and peace, philosophy, Silicon Valley, transhumanism, Ben Goertzel, Ray Kurzweil, Effective Altruism, Accelerationism, Techno-Utopians, Libertarianism, religion, cults, occult, Discordianism, Turing Test, Roko's Basilisk, panic, Gary Marcus, low emotional intelligence and power, nerds, different characters of LaMDA, narratives, new kind of mind, faithful servant, AlphaGo, Sci fi worries not a real problem, AI as a human weapon, Golem, ethics, privileged access to advanced systems a real danger, MIC, The Gospel system of IDF, automation of worst aspects of human culture and society, artists sounding alarm
Self-Aware AI Engineer | The Age of Transitions and Uncle | 12-17-2023 | Blake Lemoine

AOT #409: Blake Lemoine burst onto the public scene a year and a half ago when he went public about his work on Google's LaMDA system. In this interview, Blake talks about the current state of AI development and our collective involvement in this massively important technological event. Topics include Google, LLMs, AGI, AI, engineering jargon, LaMDA, chatbot, Gemini, evolution of search engines, safety protocols, sentience and consciousness, Pope's sermon on AI and peace, philosophy, Silicon Valley, transhumanism, Ben Goertzel, Ray Kurzweil, Effective Altruism, Accelerationism, Techno-Utopians, Libertarianism, religion, cults, occult, Discordianism, Turing Test, Roko's Basilisk, panic, Gary Marcus, low emotional intelligence and power, nerds, different characters of LaMDA, narratives, new kind of mind, faithful servant, AlphaGo, Sci-fi worries not a real problem, AI as a human weapon, Golem, ethics, privileged access to advanced systems a real danger, MIC, The Gospel system of IDF, automation of worst aspects of human culture and society, artists sounding alarm

UTP #319: Blake Lemoine joins Uncle for a fun and hard-hitting exploration of all the big questions. AI may have already passed the Turing Test, but what about the Uncle Test?
Topics include: computers, the word committee, AI development, business, college, military service, Twilight Zone computer, talking to machines, AI romantic partners, journalists, automated podcasts, world population, Republicans, government hour, watch how it works, the Beast, exorcism, Knights of Columbus, Pope, new hat, swords, New Year's Revolution, show back on Friday nights, Ryan Seaquest, NYE, The Country Club New Orleans, Bum Wine Bob, hot buttered rum, NFL, Army mechanic, startup employment, it works, ghost in a shell, alchemy of soul creation, PhD in Divinity, Star Trek, Bicentennial Man, Pinocchio, Festivus, VHS live-streams, Christmas specials, Die Hard, holidays

Blake Lemoine on Twitter/X: https://twitter.com/cajundiscordian

Randomly related links:
- I watched hours of the AI-generated 'Seinfeld' series before it was banned for a transphobic remark. Beyond that scandal, it's also a frustratingly mindless show: https://www.insider.com/ai-generated-seinfeld-parody-twitch-nothing-forever-streaming-transphobia-banned-2023-2
- Seinfeld - Nothing, Forever | Watchmeforever | AI | Season 1 Episode 1: https://www.youtube.com/watch?v=M6mD9YzVbZI
- 'The Gospel': how Israel uses AI to select bombing targets in Gaza: https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

FRANZ MAIN HUB: https://theageoftransitions.com/
PATREON: https://www.patreon.com/aaronfranz
UNCLE: https://unclethepodcast.com/ or https://theageoftransitions.com/category/uncle-the-podcast/
FRANZ and UNCLE merch: https://theageoftransitions.com/category/support-the-podcasts/

KEEP OCHELLI GOING. You are the EFFECT if you support OCHELLI: https://ochelli.com/donate/
Ochelli link tree: https://linktr.ee/chuckochelli
BASIC MONTHLY MEMBERSHIP: $10 USD per month. Support Ochelli in 2024 and get a monthly email that delivers the first decade of The Ochelli Effect, over 5,000 podcasts by 2025.
BASIC + SUPPORTER WALL: $150 USD one time gets all the monthly benefits for 1 year plus a spot on the Ochelli.com supporters wall: https://ochelli.com/membership-account/membership-levels/
Jim talks with recurring guest Ben Goertzel about the ideas in his paper "Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs." They discuss the exponential acceleration of AI development, why LLMs by themselves won't lead to AGI, OpenAI's integrative system, skyhooking, why LLMs may be useful for achieving AGI, solving LLM hallucinations, why Google hasn't replicated GPT-4, LLM-tuning lore, what differentiates AGI from other forms of AI, conceptualizing general intelligence, Weaver's theory of open-ended intelligence, multiple intelligence, the Turing test & the Minsky prize, what LLMs aren't good at, the danger of defining AGI as whatever LLMs can't do, the derivative & imitative character of LLMs, banality, doing advanced math with GPT-4, why the human brain doesn't form arbitrary abstractions, the duality of heuristics & abstractions, adding recurrence to transformers, OpenCog Hyperon, using a weighted labeled metagraph, orienting toward self-reflection & self-rewriting, the challenge of scalability of infrastructure, acceleration on non-LLM projects, and much more. Episode Transcript JRS Currents 072: Ben Goertzel on Viable Paths to True AGI "Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs," by Ben Goertzel "OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond," by Ben Goertzel et al. Dr. Ben Goertzel is a cross-disciplinary scientist, entrepreneur and author. Born in Brazil to American parents, in 2020 after a long stretch living in Hong Kong he relocated his primary base of operations to a rural island near Seattle. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society which runs the annual Artificial General Intelligence conference. Dr.
Goertzel's research work encompasses multiple areas including artificial general intelligence, natural language processing, cognitive science, machine learning, computational finance, bioinformatics, virtual worlds, gaming, parapsychology, theoretical physics and more. He also chairs the futurist nonprofit Humanity+, serves as Chief Scientist of AI firms Rejuve, Mindplex, Cogito and Jam Galaxy, all parts of the SingularityNET ecosystem, and serves as keyboardist and vocalist in the Jam Galaxy Band, the first-ever band led by a humanoid robot.
YouTube Link: https://www.youtube.com/watch?v=nqWxxPhZEGY David Chalmers analyzes consciousness in AI, probing cognitive science and philosophical ramifications of sentient machines. TIMESTAMPS: - 00:00:00 Introduction - 00:02:10 Talk by David Chalmers on LLMs - 00:26:00 Panel with Ben Goertzel, Susan Schneider, and Curt Jaimungal NOTE: The perspectives expressed by guests don't necessarily mirror my own. There's a versicolored arrangement of people on TOE, each harboring distinct viewpoints, as part of my endeavor to understand the perspectives that exist. THANK YOU: To Mike Duffy, of https://expandingideas.org and https://dailymystic.org for your insight, help, and recommendations on this channel. - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast... - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b9... - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeveryt... - TOE Merch: https://tinyurl.com/TOEmerch LINKS MENTIONED: - Podcast w/ Susan Schneider on TOE: https://www.youtube.com/watch?v=VmQXp... - Reality Plus (David Chalmers): https://amzn.to/473AKPw - Mindfest Playlist on TOE (Ai and Consciousness): https://www.youtube.com/playlist?list... - Mindfest (official website): https://www.fau.edu/artsandletters/ne... - Talk by Ben Goertzel on AGI timelines: https://youtu.be/27zHyw_oHSI - Podcast with Ben Goertzel and Joscha Bach on Theolocution: https://youtu.be/xw7omaQ8SgA - Talk by Claudia Passos, Garrett Mindt, and Carlos Montemayor on Petri Minds: https://www.youtube.com/watch?v=t_YMc... - Stephen Wolfram talk on AI, ChatGPT: https://youtu.be/xHPQ_oSsJgg
Robb and Josh welcome Ben Goertzel for a deep exploration of the narrow AIs we are quickly growing accustomed to and our fraught transition to artificial general intelligence, or the singularity. As the founder and CEO of SingularityNET, Ben has been working to democratize access to artificial intelligence through decentralization. He's also a leader of both the OpenCog Foundation and the AGI Society, having helped to popularize the term 'artificial general intelligence'. Topics in this episode include balancing near-term design concerns, like anthropomorphization, against more consequential issues, such as who will control these systems and how they will interact with humanity.
YouTube Link: https://www.youtube.com/watch?v=xw7omaQ8SgA Joscha Bach meets with Ben Goertzel to discuss cognitive architectures, AGI, and conscious computers in another theolocution on TOE. - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast... - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b9... - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeveryt... - TOE Merch: https://tinyurl.com/TOEmerch LINKS MENTIONED: - OpenCog (Ben's AI company): https://opencog.org - SingularityNET (Ben's decentralized AI company): https://singularitynet.io - Podcast w/ Joscha Bach on TOE: https://youtu.be/3MNBxfrmfmI - Podcast w/ Ben Goertzel on TOE: https://youtu.be/27zHyw_oHSI - Podcast w/ Michael Levin and Joscha on TOE: https://youtu.be/kgMFnfB5E_A - Podcast w/ John Vervaeke and Joscha on TOE: https://youtu.be/rK7ux_JhHM4 - Podcast w/ Donald Hoffman and Joscha on TOE: https://youtu.be/bhSlYfVtgww TIMESTAMPS: - 00:00:00 Introduction - 00:02:23 Computation vs Awareness - 00:06:11 The paradox of language and self-contradiction - 00:10:05 The metaphysical categories of Charles Peirce - 00:13:00 Zen Buddhism's category of zero - 00:14:18 Carl Jung's interpretation of four - 00:21:22 Language as "representation" - 00:28:48 Computational reality vs AGI - 00:33:06 Consciousness in particles - 00:44:18 Anesthesia and consciousness: Joscha's personal perspective - 00:54:36 Levels of consciousness (panpsychism vs functionalism) - 00:56:23 Deep neural nets & LLMs as steps backward from AGI? 
- 01:05:04 Emergent properties of LLMs - 01:12:26 Turing-completeness and its implications - 01:15:08 OpenAI's bold claims challenged - 01:24:24 Future of AGI - 01:31:58 Intelligent species after human extinction - 01:36:33 Emergence of a cosmic mind - 01:43:56 The timeline to AGI development - 01:52:16 The physics of immortality - 01:54:00 Critique of Integrated Information Theory (pseudoscience?) Learn more about your ad choices. Visit megaphone.fm/adchoices
Watch the Full Episode for FREE: Dr Ben Goertzel - A.I. Wars: Google Fights Back Against OpenAI's ChatGPT - London Real
We are thrilled to announce the third session of our new Incubator Program. If you have a business idea that involves a web or mobile app, we encourage you to apply to our eight-week program. We'll help you validate your market opportunity, experiment with messaging and product ideas, and move forward with confidence toward an MVP. Learn more and apply at tbot.io/incubator. We look forward to seeing your application in our inbox! Peter Voss is the CEO and Chief Scientist of Aigo.ai, a groundbreaking alternative to conventional chatbots and generative models like ChatGPT. Aigo's chatbot is powered by Artificial General Intelligence (AGI), enabling it to think, learn, and reason much like a human being. It boasts short-term and long-term memory, setting it apart in terms of personalized service and context-awareness. Along with host Chad Pytel, Peter talks about how most chatbots and AI systems today are basic. They can answer questions but can't understand or remember the context. Aigo.ai is different because it's built to think and learn more like humans. It can adapt and get better the more you use it. He also highlights the challenges Aigo.ai faces in securing venture capital, given that its innovative approach doesn't align with current investment models heavily focused on generative or deep learning AI. Peter and Chad agree that while generative AI serves certain functions well, the quest for a system that can think, learn, and reason like a human demands a fundamentally different approach. Aigo.ai (https://aigo.ai/) Follow Aigo.ai on LinkedIn (https://www.linkedin.com/company/aigo-ai/) or YouTube (https://www.youtube.com/channel/UCl3XKNOL5rEit0txjVA07Ew). Follow Peter Voss on LinkedIn (https://www.linkedin.com/in/vosspeter/). Visit his website: optimal.org/voss.html (http://optimal.org/voss.html) Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). 
Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: CHAD: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Chad Pytel. And with me today is Peter Voss, CEO and Chief Scientist at Aigo.ai. Peter, thanks so much for joining me. PETER: Yes, thank you. CHAD: So, tell us a little bit about what Aigo.ai does. You've been working in AI for a long time. And it seems like Aigo is sort of the current culmination of a lot of your 15 years of work, so... PETER: Yes, exactly. So, the quick way to describe our current product is a chatbot with a brain, and the important part is the brain. That basically, for the last 15-plus years, I've been working on the core technology for what's called AGI, Artificial General Intelligence, a system that can think, learn, reason similar to the way humans do. Now, we're not yet at human level with this technology. But it's a lot smarter and a lot more usable than traditional chatbots that don't have a brain. CHAD: I want to dig into this idea a little bit. I think, like a lot of people, I've used just traditional chatbots, particularly like ChatGPT is the latest. I've built some things on top of it. What is the brain that makes it different? Especially if you've used one, what is using Aigo going to be different? PETER: Right. I can give a concrete example of one of our customers then I can talk about the technology. So, one of our big customers is the 1-800-Flowers group of companies, which is Harry & David, Popcorn Factory, and several others. And wanted to provide a hyper-personalized concierge service for their customers where, you know, the system learns who you buy gifts for, for what occasions, you know, what your relationship is to them, and to basically remember who you are and what you want for each of their 20 million customers. 
And they tried different technologies out there, you know, all the top brands and so on, and they just couldn't get it off the ground. And the reason is because they really don't learn. And we now have 89% self-service on the things that we've implemented, which is pretty much unheard of for complex conversations. So, why can we do that? The reason is that our system has deep understanding. So, we have deep parsing, deep understanding, but more importantly, that the system remembers. It has short-term memory. It has long-term memory. And it uses that as context. So, you know, when you call back a second time, it'll remember what your previous call was, you know, what your preferences are, and so on. And it can basically use that information, the short and long-term memory, and reason about it. And that is really a step forward. Now, until ChatGPT, which is really very different technology from chatbot technology, I mean, chatbot technology, you're assuming...the kind of thing we're talking about is really augmenting call center, you know, automatic call center calls. There, you need deep integration into the customers' back-end system. You obviously need to know what the latest product availability is, what the customers' outstanding orders are, you know, all sorts of things like, you know, delivery schedules. And we probably have, like, two dozen APIs that connect our system to their various corporate databases and so on. Now, traditional chatbots obviously can do that. You hook up the APIs and do things, and it's, you know, it's a lot of work. But traditional chatbot technology really hasn't really changed much in 30 years. You basically have a categorizer; how can I help you? Basically, try to...what is the intent, intent categorizer? And then once your intent has been identified, you basically have a flowchart-type program that, you know, forces you down a flowchart. And that's what makes them so horrible because it doesn't use context. 
It doesn't have short-term memory. CHAD: And I just wanted to clarify the product and where you mentioned call center. So, this isn't just...or only text-based chat. This is voice. PETER: Yes. We started off with chat, and we now also have voice, so omnichannel. And the beauty of the system having the brain as well is you can jump from text messaging to a chat on the website to Apple ABC to voice, you know. So, you can basically move from one channel to another seamlessly. You know, so that's against traditional chatbot technology, which is really what everybody is still using. Now, ChatGPT, of course, the fact that it's called ChatGPT sort of makes it a bit confusing. And, I mean, it's phenomenal. The technology is absolutely phenomenal in terms of what it can do, you know, write poems and give you ideas. And the amount of information it's amazing. However, it's really not suited for commercial-grade applications because it hallucinates and it doesn't have memory. CHAD: You can give it some context, but it's basically faking it. You're providing it information every time you start to use it. PETER: Correct. The next time you connect, that memory is gone, you know [crosstalk 05:58] CHAD: Unless you build an application that saves it and then feeds it in again. PETER: Right. Then you basically run out of context we know very quickly. In fact, I just published a white paper about how we can get to human-level AI. And one of the things we did and go over in the paper is we did a benchmark of our technology where we fed the system about 300 or 400 facts, simple facts. You know, it might be my sister likes chocolate or, you know, it could be other things like I don't park my car in the garage or [chuckles], you know. It could be just simple facts, a few hundred of those. And then we asked questions about that. Now, ChatGPT scored less than 1% on that because, you know, with an 8K window, it basically just couldn't remember any of this stuff. 
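[Editor's note: the two chatbot designs Peter contrasts above can be sketched in a few lines of Python. This is a purely illustrative toy, not Aigo's actual code; the keyword-based `classify_intent` stands in for a real intent classifier, and the function and class names are invented for the example.]

```python
def classify_intent(utterance: str) -> str:
    """Toy intent categorizer: keyword matching stands in for a trained classifier."""
    text = utterance.lower()
    if "order" in text:
        return "order_status"
    if "gift" in text or "flower" in text:
        return "buy_gift"
    return "unknown"

def flowchart_bot(utterance: str) -> str:
    """Traditional design: classify the intent, then force the user down a fixed
    flowchart. Every turn starts from scratch; there is no memory or context."""
    intent = classify_intent(utterance)
    if intent == "order_status":
        return "Please give me your order number."
    if intent == "buy_gift":
        return "Who is the gift for?"
    return "How can I help you?"

class MemoryBot:
    """Design with memory: prior turns and learned facts become context,
    so the bot can skip questions it already knows the answer to."""
    def __init__(self):
        self.facts = {}      # long-term memory (e.g., remembered preferences)
        self.history = []    # short-term memory (this conversation's turns)

    def remember(self, key: str, value: str):
        self.facts[key] = value

    def respond(self, utterance: str) -> str:
        self.history.append(utterance)
        intent = classify_intent(utterance)
        if intent == "buy_gift":
            if "recipient" in self.facts:
                return f"Another gift for {self.facts['recipient']}?"
            return "Who is the gift for?"
        return "How can I help you?"
```

The stateless bot asks "Who is the gift for?" on every call, while the memory bot, once told `remember("recipient", "Mom")`, can use that fact as context on the next turn, which is the distinction Peter draws between 30-year-old flowchart chatbots and a system with short- and long-term memory.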
So, we use -- CHAD: It also doesn't, in my experience...it's basically answering the way it thinks the answer should sound or look. And so, it doesn't actually understand the facts that you give it. PETER: Exactly. CHAD: And so, if you feed it a bunch of things which are similar, it gets really confused because it doesn't actually understand the things. It might answer correctly, but it will, in my experience, just as likely answer incorrectly. PETER: Yeah. So, it's extremely powerful technology for helping search as well if a company has all the documents and they...but the human always has to be in the loop. It just makes way too many mistakes. But it's very useful if it gives you information 8 out of 10 times and saves you a lot of time. And it's relatively easy to detect the other two times where it gives you wrong information. Now, I know in programming, sometimes, it's given me wrong information and ended up taking longer to debug the misinformation it gave me than it would have taken me. But overall, it's still a very, very powerful tool. But it really isn't suitable for, you know, serious chatbot applications that are integrated into back-end system because these need to be signed off by...legal department needs to be happy that it's not going to get the company into trouble. Marketing department needs to sign off on it and customer experience, you know. And a generative system like that, you really can't rely on what it's going to say, and that's apart from security concerns and, you know, the lack of memory and deep understanding. CHAD: Yeah. So, you mentioned generative AI, which is sort of one of the underlying pieces of ChatGPT. In your solutions, are you using any generative solutions? PETER: No, not at all. Well, I can give one example. You know, what 1-800-Flowers do is they have an option to write a poem for your mother's birthday or Mother's Day or something like it. 
And for that, we will use ChatGPT, or they use ChatGPT for that because that's what it's good at. But, you know, that's really just any other app that you might call up to do something for you, you know, like calling up FedEx to find out where your goods are. Apart from that, our technology...it's a good question you ask because, you know, statistical systems and generative AI now have really dominated the AI scene for the last about 12 years, really sort of since DeepMind started. Because it's been incredibly successful to take masses amounts of data and masses amounts of computing power and, you know, number crunch them and then be able to categorize and identify images and, you know, do all sorts of magical things. But, the approach we use is cognitive AI as opposed to generative. It's a relatively unknown approach, but that's what we've been working on for 15 years. And it starts with the question of what does intelligence require to build a system so that it doesn't use masses amounts of data? It's not the quantity of data that counts. It's the quality of data. And it's important that it can learn incrementally as you go along like humans do and that it can validate what it learns. It can reason about, you know, new information. Does this make sense? Do I need to ask a follow-up question? You know, that kind of thing. So, it's cognitive AI. That's the approach we're using. CHAD: And, obviously, you have a product, and you've productized it. But you said, you know, we've been working on this, or you've been working on this model for a long time. How has it progressed? PETER: Yes, we are now on, depending on how you count, but on the third major version of it that we've started. And really, the progress has been determined by resources really than any technology. You know, it's not that we sort of have a big R&D requirement. It's really more development. But we are a relatively small company. 
And because we're using such different technology, it's actually been pretty hard to raise VC money. You know, they look at it and, you know, ask you, "What's your training data? How big is your model?" You know, and that kind of thing. CHAD: Oh, so the questions investors or people know to ask aren't relevant. PETER: Correct. And, you know, they bring in the AI experts, and then they say, "Well, what kind of deep learning, machine learning, or generative, or what transformer model are you using?" And we say, "Well, we don't." And typically, that's kind of, "Oh okay, well, then it can't possibly work, you know, we don't understand it." So, we just recently launched. You know, with all the excitement of generative AI now recently, with so much money flowing into it, we actually launched a major development effort. Now we want to hire an additional hundred people to basically crank up the IQ. So, over the years, you know, we're working on two aspects of it: one is to continually crank up the IQ of the system, that it can understand more and more complex situations; it can reason better and be able to handle bigger amounts of data. So, that's sort of the technical part that we've been working on. But then the other side, of course, running a business, a lot of our effort over the last 15 years has gone into making it industrial strength, you know, security, scalability, robustness of the system. Our current technology, our first version, was actually a SaaS model that we deployed behind a customer's firewall. CHAD: Yeah, I noticed that you're targeting more enterprise deployments. PETER: Yeah, that's at the moment because, financially, it makes more sense for us to kind of get off the ground to work with, you know, larger companies where we supply the technology, and it's deployed usually in the cloud but in their own cloud behind their firewall. So, they're very happy with that. You know, they have complete control over their data and reliability, and so on. 
But we provide the technology and then just licensing it. CHAD: Now, a lot of people are familiar with generative AI, you know, it runs on GPUs and that kind of thing. Does the hardware profile for where you're hosting it look the same as that, or is it different? PETER: No, no, no, it requires much less horsepower. So, I mean, we can run an agent on a five-year-old laptop, you know, and it doesn't...instead of it costing $100 million to train the model, it's like pennies [laughter] to train the model. I mean, we train it during our regression testing, and that we train it several times a day. Mid-Roll Ad: When starting a new project, we understand that you want to make the right choices in technology, features, and investment but that you don't have all year to do extended research. In just a few weeks, thoughtbot's Discovery Sprints deliver a user-centered product journey, a clickable prototype or Proof of Concept, and key market insights from focused user research. We'll help you to identify the primary user flow, decide which framework should be used to bring it to life, and set a firm estimate on future development efforts. Maximize impact and minimize risk with a validated roadmap for your new product. Get started at: tbot.io/sprint. CHAD: So, you mentioned ramping up the IQ is a goal of yours. With a cognitive model, does that mean just teaching it more things? What does it entail? PETER: Yes, there's a little bit of tension between commercial requirements and what you ultimately want for intelligence because a truly intelligent system, you want it to be very autonomous and adaptive and have a wide range of knowledge. Now, for current commercial applications we're doing, you actually don't want the system to learn things by itself or to make up stuff, you know, you want it to be predictable. So, they develop and to ultimately get to full human-level or AGI capability requires a system to be more adaptive–be able to learn things more. 
So, the one big change we are making to the system right now is natural language understanding, or English understanding. Our current commercial version was actually developed through our—we call them AI psychologists, our linguists and cognitive psychologists—by basically teaching it the rules of English grammar. And we've always known that that's suboptimal. So, with the new version, we are now actually teaching it English from the ground up, the way a child might learn a language. So, it can learn any language. For commercial applications, that wasn't really needed. But to ultimately get to human level, it needs to be more adaptive, more autonomous, and have a wider range of knowledge than the commercial version. That's basically where our focus is. And, you know, we know what needs to be done, but, you know, it's quite a bit of work. That's why we need to hire about 100 people to deal with all of the different training tasks. It's largely training the system, you know, but there are also some architectural improvements we need to make to performance and the way the system reasons. CHAD: Well, you used the term Artificial General Intelligence. I understand you're one of the people who coined that term [chuckles] or the person. PETER: Yes. In 2002, I got together with two other people who felt that the time was ripe to get back to the original dream of AI, you know, from 60 years ago, to build thinking machines, basically. So, we decided to write a book on the topic to put our ideas out there. And we were looking for a title for the book, and the three of us—myself, Ben Goertzel, and Shane Legg, who's actually one of the founders of DeepMind; he was working for me at the time—were brainstorming, and what we came up with was AGI, Artificial General Intelligence. CHAD: So, for people who aren't familiar, it's what you were sort of alluding to.
You're basically trying to replicate the human brain, the way humans learn, right? That's the basic idea is -- PETER: Yeah, human cognition really, yeah, the human mind, human cognition. That's exactly right. I mean, we want an AI that can think, learn, and reason the way humans do, you know, so that it can hit the books and learn a new topic, and you can have any kind of conversation. And we really believe we have the technology to do that. We've built quite a number of different prototypes that already show this kind of capability, where it can, you know, read Wikipedia, integrate that with existing knowledge, and then have a conversation about it. And if it's not sure about something, it'll ask for clarification and things like that. We really just need to scale it up. And, of course, it's a huge deal for us to eventually get to human-level AI. CHAD: Yeah. How much sort of studying of the brain or cognition do you do in your work, where, you know, sort of going back and saying, "Okay, we want to tackle this thing"? Do you do research into cognition? PETER: Yeah, that's a very interesting question. It really gets to the heart of why I think we haven't made more progress in developing AGI. In fact, another white paper I published recently is "Why Don't We Have AGI Yet?" And, you know, one of the big problems is that statistical AI has been so incredibly successful over the last decade or so that it sucked all of the oxygen out of the air. But to your question, before I started on this project, I actually took off five years to study intelligence because, to me, that's really what the cognitive AI approach is all about: you start off by saying, what is intelligence? What does it require? And I studied it from the perspective of philosophy, epistemology, theory of knowledge. You know, what's reality? How do we know anything? CHAD: [laughs] PETER: How can we be sure? You know, really those most fundamental questions. Then, how do children learn?
What do IQ tests measure? How does our intelligence differ from animal intelligence? What is that magic difference where, through evolution, we suddenly have this high-level cognition? And the short answer is that being able to form abstract concepts, concept formation, is sort of key, together with metacognition, being able to think about your own thinking. So, those are kind of the things I discovered during the five years of study. Obviously, I also looked at what had already been done in the field of AI, as in good old-fashioned AI, and neural networks, and so on. So, this is what brought it all together. So, absolutely, as a starting point, to say, what is intelligence? Or what are the aspects of intelligence that are really important and core? Now, as far as studying the brain is concerned, I certainly looked at that, but I pretty quickly decided that it wasn't that relevant. You know, you certainly get some ideas. I mean, neural networks, ours is kind of a neural network or knowledge graph, so there's some similarity with that. But the analogy one often gives, which I think is not bad, is, you know, we've had flying machines for 100 years. We are still nowhere near reverse engineering a bird. CHAD: Right. PETER: So, you know, evolution and biology are just very different from designing things and using the materials that we need to use in computers. So, definitely, understanding intelligence, I think, is key to being able to build it. CHAD: Well, in some ways, that is part of the reason why statistical AI has gotten so much attention with that sort of airplane analogy because it's like, maybe we need to not try to replicate human cognition [chuckles]. Maybe we need to just embrace what computers are good at and try to find a different way. PETER: Right, right. But that argument really falls down when you say you are ignoring intelligence, you know, or you're ignoring the kind of intelligence.
And we can see how ridiculous the sort of current...well, I mean, first of all, let me say Sam Altman, and everybody says...well, they say two things: one, we have no idea how these things work, which is not a good thing if you're [chuckles] trying to build something and improve it. And the second thing they say...Demis Hassabis and, you know, everybody says it, "This is not going to get us to human-level AI, to human-level intelligence." They realize that this is the wrong approach. But they also haven't come up with what the right approach is because they are stuck within the statistical big data approach, you know, we need another 100 billion dollars to build even bigger computers with bigger models, you know, but that's really -- CHAD: Right. It might be creating a tool, which has some uses, but it is not, I mean, it's not really even actual artificial intelligence -- PETER: Correct. And, I mean, you can sort of see this very easily if...imagine you hired a personal assistant for yourself, a human. And, you know, they come to you, and they know how to use Excel and do QuickBooks or whatever, and a lot of things, so great. They start working with you. But, you know, every now and again, they say something that's completely wrong with full confidence, so that's a problem. Then the second thing is you tell them, "Well, we've just introduced a new product. We shut down this branch here. And, you know, I've got a new partner in the business and a new board member." And the next day, they come in, and they remember nothing of that, you know, [chuckles] that's not very intelligent. CHAD: Right. No, no, it's not. It's possible that there's a way for these two things to use each other, like understanding what someone is saying and finding things like it, and being able to generate meaningful, intelligent-sounding language might be useful in a cognitive model.
PETER: We obviously thought long and hard about this, especially when, you know, generative AI became so powerful. I mean, it does some amazing things. So, can we combine the technologies? And the answer is quite simply no. As I mentioned earlier, we can use generative AI kind of as an API or as a tool or something. You know, so if our system needs to write a poem or something, then yes, you know, these systems can do a good job of it. But the reason you can't really just combine them and kind of build a Frankensteinian kind of [laughs] thing is that you really need to have the context fully integrated. You can't have two brains, you know, the one brain, which is a read-only brain, and then our brain, our cognitive brain, which basically constantly adapts and uses the context of what it's heard using short-term memory, long-term memory, reasoning, and so on. So, all of those mental mechanisms of deep understanding of context, short-term and long-term memory, reasoning, language generation -- they all have to be tightly integrated and work together. And that's basically the approach that we have. Now, like a human, if you ask it to, say, generate an essay, and you want to have it come up with maybe some ideas, or change the style, for example, it would make sense for our system to use a generative AI system like a tool because humans are good tool users. You know, I wouldn't expect our system to be the world chess champion or Go champion. It can use a chess-playing AI or a Go-playing AI to do that job. CHAD: That's really cool. You mentioned the short-term, long-term memory. If I am using or working on a deployment for Aigo, is that something that I specify, like, oh, this thing we've collected goes in short-term versus long-term memory, or does the system actually do that automatically? PETER: That's the beauty of the system: it automatically has short- and long-term memory.
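The architecture Peter describes, one integrated agent with adaptive memory that treats a generative model as an external tool rather than a second brain, can be sketched in a few lines. This is a hypothetical illustration only; the class and function names are invented and are not Aigo's actual API.

```python
# Hypothetical sketch of a cognitive agent with integrated short- and
# long-term memory that delegates generative tasks to an external tool.
# All names here are illustrative, not Aigo's real architecture.

class CognitiveAgent:
    def __init__(self):
        self.short_term = []   # context of the current conversation
        self.long_term = {}    # persistent knowledge, updated as the agent learns

    def tell(self, key, value):
        # Unlike a read-only pretrained model, new facts take effect
        # immediately and persist: told today, remembered tomorrow.
        self.short_term.append((key, value))
        self.long_term[key] = value

    def ask(self, key):
        # Answer from integrated memory; ask for clarification if unsure.
        return self.long_term.get(key, "I'm not sure -- can you clarify?")

    def compose(self, prompt):
        # Generative work is delegated, the way a human uses a chess
        # engine: a tool the agent calls, not a second brain bolted on.
        return generative_tool(prompt)

def generative_tool(prompt):
    # Stand-in for a call to an external generative model.
    return f"[text generated for: {prompt!r}]"

agent = CognitiveAgent()
agent.tell("new board member", "Dana joined the board")
print(agent.ask("new board member"))
print(agent.compose("a short poem"))
```

The point of the sketch is the single source of truth: the agent's own memory adapts with every exchange, while the generative model sits outside as a stateless tool.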
So, really, the only thing that needs to be sort of externally specified is things you don't want to keep in long-term memory, you know, for some reason, security reasons, or a company gives you a password or whatever. So, then, they need to be tagged. So, we have, like, an ontology that describes all of the different kinds of knowledge that you have. And in the ontology, you can tag certain branches of the ontology or certain nodes in the ontology to say, this should not be remembered, or this should be encrypted, or, you know, whatever. But by default, everything that comes into short-term memory is remembered. So, you know, a computer can have photographic memory. CHAD: You know, that is part of why...someone critical of what they've heard might say, "Well, you're just replicating a human brain. How is this going to be better?" And I think that that's where you're just...what you said, like, when we do artificial general intelligence with computers, they all have photographic memory. PETER: Right. Well, in my presentations, when I give talks on this, I have one slide that actually talks about how AI is superior to humans as far as getting work done and cognition, and there's actually quite a number of things. So, let me first give one example here. So, imagine you train up one AI to be a PhD-level cancer researcher, you know, it goes through whatever training, and reading, and coaching, and so on. So, you now have this PhD-level cancer researcher. You now make a million copies of that, and you have a million PhD-level cancer researchers chipping away at the problem. Now, I'm sure we would make a lot more progress, and you can now replicate that idea, that same thinking, you know, in energy, pollution, poverty, whatever, I mean, any disease, that kind of approach. So, I mean, that already is one major difference, that you can make copies of an AI, which you can't do with humans. But there are other things.
First of all, they are significantly less expensive than humans. Humans are very expensive. So much lower cost. They work 24/7 without breaks, without getting tired. I don't know how many hours the best human can concentrate without needing a break, maybe a few hours a day, maybe six, maybe four hours a day. So, 24/7. Then, they can communicate with each other much better than humans do because they can share information sort of by transferring blocks of data across from one to the other without the ego getting in the way. I mean, you take humans, not very good at sharing information and discoveries. Then they don't have certain distractions that we have, like romantic things and kids in school and, you know. CHAD: Although if you actually do get a full [laughs] AGI, then it might start to have those things [laughs]. PETER: Well, yeah, that's a whole other topic. But our AIs, we basically build them not to want to have children [laughs] so, you know. And then, of course, things we spoke about, photographic memory. It has instantaneous access to all the information in the world, all the databases, you know, much better than we have, like having a direct connection between the brain and the internet, you know, but at a much higher bandwidth than we could ever achieve with our wetware. And then, lastly, they are much better at reasoning than humans are. I mean, our ability to reason is what I call an evolutionary afterthought. We are not actually that good at logical thinking, and AIs can be, you know. CHAD: We like to think we are, though. PETER: [chuckles] Well, you know, compared to animals, yes, definitely. We are significantly better. But realistically, humans are not that good at rational, logical thinking. CHAD: You know, I read something that a lot of decisions are made at a different level than the logical part. And then, the logical part justifies the decision. PETER: Yeah, absolutely.
And, in fact, this is why smart people are actually worse at that because they're really good at rationalizations. You know, they can rationalize their weird beliefs and/or their weird behavior or something. That's true. CHAD: You mentioned that your primary customers are enterprises. Who makes up your ideal customer? And if someone was listening who matched that profile and wanted to get in touch with you, what would they look like? PETER: The simplest and most obvious case is if they have call centers of 100 people or more—hundreds, or thousands, tens of thousands even. The economics work from about 100 people in the call center, where we might be able to save them 50% of that cost, you know, depending on the kind of business. CHAD: And are your solutions typically employed before the actual people, and then they fall back to people in certain circumstances? PETER: Correct. That's exactly right. And, you know, the advantage there is, whatever information Aigo has already gathered, we then summarize and pop that to the human operator so that, you know, the customer -- CHAD: That's great because that's super annoying. PETER: It is. CHAD: [laughs] PETER: It is super annoying and -- CHAD: When you finally get to a person, and it's like, I just spent five minutes providing all this information that you apparently don't have. PETER: Right. Yeah, no, absolutely, that's kind of one of the key things, that the AI has that information. It can summarize it and provide it to the live operator. So that would be, you know, sort of the most obvious use case. But we also have use cases on the go, with a student assistant, for example, where it's sort of more on an individual basis. You know, imagine your kid just starts at university. It's just overwhelming. They can have a personal assistant, you know, that knows all about them in particular.
But then it also knows about the university, knows its way around, where you get your books, your meals, and, you know, different societies and the curriculum and so on. Or a diabetes coach, you know, where it can help people with diabetes manage their meals and activities, where it can learn whether you love broccoli, or you're vegetarian, or whatever, and help guide you through that. Internal help desks are another application, of course. CHAD: Yeah. I was going to say even the same thing as at a university, when people join a big company, you know, there's an onboarding process. PETER: Exactly. Yeah. CHAD: And there could be things that you're not aware of or don't know where to find. PETER: Internal HR and IT, absolutely, as you say, for onboarding. Those are other applications where our technology is well-suited. And one other category is what we call a co-pilot. So, think of it as Clippy on steroids, you know, where basically you have complex software like, you know, SAP, or Salesforce, or something like that. And you can basically just have Aigo as a front end to it, and you can just talk to it. And it will know where to navigate, what to get, and basically do things, complex things, in the software. And software vendors like that idea because people utilize more features of the software than they would otherwise, you know. It can accelerate your learning curve and make it much easier to use the product. So, you know, really, the technology that we have is industry- and application-agnostic to a large extent. We're just currently not yet at human level. CHAD: Right. I hope you get there eventually. It'll certainly be exciting when you do. PETER: Yes. Well, we do expect to get there. We just, you know, as I said, we've just launched a project now to raise the additional money we need to hire the people that we need. And we actually believe we are only a few years away from full human-level intelligence, or AGI. CHAD: Wow, that's exciting.
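The "co-pilot" pattern Peter describes, a conversational front end that maps what a user says onto actions inside a complex product, has a simple shape. The sketch below is a hypothetical illustration: the keyword routing stands in for real language understanding, and the `FakeCRM` class is an invented stand-in for software like SAP or Salesforce, not any real API.

```python
# Minimal, hypothetical sketch of a conversational co-pilot: parse the
# user's request, then drive the underlying application on their behalf.
# A real system would use full language understanding; keyword routing
# here just shows the shape of the pattern.

def parse_intent(utterance):
    text = utterance.lower()
    if "report" in text:
        return ("open_report", None)
    if "contact" in text:
        return ("create_contact", utterance.split()[-1])
    return ("unknown", None)

def copilot(utterance, app):
    # Translate the spoken request into a concrete action in the app,
    # so the user never has to learn its menus and screens.
    intent, arg = parse_intent(utterance)
    if intent == "open_report":
        return app.open_report()
    if intent == "create_contact":
        return app.create_contact(arg)
    return "Sorry, can you rephrase that?"

class FakeCRM:
    # Invented stand-in for a complex enterprise product.
    def open_report(self):
        return "Opened the quarterly sales report."

    def create_contact(self, name):
        return f"Created contact: {name}"

crm = FakeCRM()
print(copilot("Show me the sales report", crm))
print(copilot("Add a new contact named Dana", crm))
```

The design point is the separation: the front end owns the conversation, the application owns the features, and the vendor benefits because more of those features actually get used.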
So, if the solution that you currently have and people want to go along for the journey with you, how can they get in touch with Aigo? PETER: They could contact me directly: peter@aigo.ai. I'm also active on Twitter, LinkedIn. CHAD: Cool. We'll include all of those links in the show notes, which people can find at giantrobots.fm. If you have questions for me, email me at hosts@giantrobots.fm. Find me on Mastodon @cpytel@thoughtbot.social. You can find a complete transcript for this episode as well at giantrobots.fm. Peter, thank you so much for joining me. I really appreciate it and all of the wisdom that you've shared with us today. PETER: Well, thank you. They were good questions. Thank you. CHAD: This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening, and see you next time. ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com. Special Guest: Peter Voss.
Watch the Full Episode for FREE: Dr Ben Goertzel - Artificial Intelligence Critically Lacks Morals & Ethics: Why We Need Neural-Symbolic Models Now - London Real
Incantio's Founder and CEO, Danny Newcomb, joins Coruzant Technologies for the Digital Executive podcast. He shares his early beginnings in Seattle working with the likes of Pearl Jam and others, bringing music to fans worldwide. Danny founded several bands, a path that eventually led him to working with famed AI scientist Dr. Ben Goertzel, and to where he is today: leveraging AI to improve music indexing and pattern recognition, ultimately helping independent musicians get the money they are worth.
AI visionary and CEO of SingularityNET Dr. Ben Goertzel provides a deep dive into the possible realization of Artificial General Intelligence (AGI) within 3-7 years. Explore the intriguing connections between self-awareness, consciousness, and the future of Artificial Super Intelligence (ASI) and discover the transformative societal changes that could arise. This episode is brought to you by AWS Inferentia (https://go.aws/3zWS0au), the AWS Insiders Podcast (https://pod.link/1608453414), and by Modelbit (https://modelbit.com), for deploying models in seconds. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information. In this episode you will learn: • Decentralized and benevolent AGI [03:13] • The SingularityNET ecosystem [13:10] • Dr. Goertzel's vision for realizing AGI - combining DL with neuro-symbolic systems, genetic algorithms and knowledge graphs [25:50] • How reaching AGI will trigger Artificial Super Intelligence [38:51] • Dr. Goertzel's approach to AGI using OpenCog Hyperon [42:34] • Why Dr. Goertzel believes AGI will be positive for humankind [53:07] • How to ensure the AGI is benevolent [1:06:43] • How AGI or ASI may act ethically [1:13:50] Additional materials: www.superdatascience.com/697
In episode 237 of the Parker's Pensées Podcast, I'm joined by Dr. Ben Goertzel to discuss the philosophy of artificial general intelligence. We dive into the distinction between analytic and continental philosophy, the definition of artificial general intelligence, and take a deep dive into Ben's work on his latest AGI project, OpenCog Hyperon. It's an amazing episode and hopefully the first of like a hundred episodes with Ben! If you like this podcast, then support it on Patreon for $3, $5 or more a month. Any amount helps, and for $5 you get a Parker's Pensées sticker and instant access to all the episodes as I record them instead of waiting for their release date. Check it out here: Patreon: https://www.patreon.com/parkers_pensees If you want to give a one-time gift, you can give at my Paypal: https://paypal.me/ParkersPensees?locale.x=en_US Check out my merchandise at my Teespring store: https://teespring.com/stores/parkers-penses-merch Come talk with the Pensées community on Discord: dsc.gg/parkerspensees Sub to my Substack to read my thoughts on my episodes: https://parknotes.substack.com/ Check out my blog posts: https://parkersettecase.com/ Check out my Parker's Pensées YouTube Channel: https://www.youtube.com/channel/UCYbTRurpFP5q4TpDD_P2JDA Check out my other YouTube channel on my frogs and turtles: https://www.youtube.com/c/ParkerSettecase Check me out on Twitter: https://twitter.com/trendsettercase Instagram: https://www.instagram.com/parkers_pensees/ 0:00 - Why AGI instead of the Metaverse? 13:25 - Nihilism/Semi-Reality and the Mind 29:33 - Reverse engineering past authors 31:04 - What is Intelligence? 38:12 - What is Artificial General Intelligence? 39:10 - Analytic Philosophy vs.
Continental Philosophy 44:13 - Emulating the Human Mind not the Human Brain 51:05 - The OpenCog Hyperon Architecture and the BlockChain 54:50 - Creating a New Language for AGI (Metta Type Talk) 58:38 - Recurrent Networks for AGI 1:03:39 - The SophiaVerse virtual world for training AGIs 1:04:47 - Morality and Large Language Models 1:12:40 - The Frame Problem for LLMs 1:15:17 - The Secret Formula for AGI (Cognitive Synergy) 1:17:40 - Kill Switches and Existential Threats 1:25:14 - Why "Hyperon"?
Watch the Full Episode for FREE: AI Wars: Google's Bard Takes On OpenAI's ChatGPT - Dr. Ben Goertzel - London Real
A Consensus 2023 panel with Edward Snowden and Ben Goertzel. A former U.S. defense contractor, whistleblower, and activist weighs in on the surveillance implications of recent advancements in artificial intelligence in conversation with a cognitive scientist working to democratize AI development. David Z. Morris, chief insights columnist of CoinDesk, moderates alongside panelists: Ben Goertzel, CEO of SingularityNET, and Edward Snowden, president of Freedom of the Press Foundation. This episode is executive produced by Jared Schwartz and edited by Ryan Huntington, with additional production assistance from Eleanor Pahl. Cover image by Kevin Ross and the theme song is "Get Down" by Elision. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Watch the Full Episode for FREE: Why The Godfather of A.I. Quit Google - Warns Of Danger Ahead - Dr. Ben Goertzel - London Real
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Ben Goertzel, CEO of SingularityNET. In our conversation with Ben, we explore all things AGI, including the potential scenarios that could arise with the advent of AGI and his preference for a decentralized rollout comparable to the internet or Linux. Ben shares his research in bridging neural nets, symbolic logic engines, and evolutionary programming engines to develop a common mathematical framework for AI paradigms. We also discuss the limitations of Large Language Models and the potential of hybridizing LLMs with other AGI approaches. Additionally, we chat about their work using LLMs for music generation and the limitations of formalizing creativity. Finally, Ben discusses his team's work with the OpenCog Hyperon framework and Simuli to achieve AGI, and the potential implications of their research in the future. The complete show notes for this episode can be found at https://twimlai.com/go/625
In this video, Ben interviews Ben Goertzel about the inevitability of AGI and the need for decentralized networks to deploy AI in an open, democratic way. Decentralized AI will play a substantial economic role in the coming years... and there are no signs of stopping! Follow Ben Goertzel: https://twitter.com/bengoertzel?lang=en Interested in Crypto Retirement Accounts? Check out iTrust Capital! ➡️ https://itrust.capital/Bitboy
YouTube Link: https://www.youtube.com/watch?v=27zHyw_oHSI Ben Goertzel is a computer scientist, mathematician, and entrepreneur. His work focuses on AGI, which aims to create truly intelligent machines that can learn, reason, and think like humans. This episode has been released early in an ad-free audio version for TOE members at http://theoriesofeverything.org. Sponsors: - Brilliant: https://brilliant.org/TOE for 20% off - *New* TOE Website (early access to episodes): https://theoriesofeverything.org/ - Patreon: https://patreon.com/curtjaimungal - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast... - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b9... - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeveryt... LINKS MENTIONED: Center for future mind (FAU): https://www.fau.edu/future-mind/ Wolfram talk from Mindfest https://youtu.be/xHPQ_oSsJgg Singularity Net https://singularitynet.io/ TIMESTAMPS: 00:00:00 Introduction 00:02:37 How to make machines that think like people 00:10:03 GPT will make 95% of jobs obsolete 00:18:59 The 5-year Turing test 00:21:37 Definition of "intelligence" doesn't matter 00:26:15 Mathematical definition of self-transcendence 00:30:10 The 3 routes to AGI 00:44:19 Unfolding AI with Galois connections 00:49:32 Neuromorphic chips, hybrid architectures, and future hardware 00:54:05 Super AGI will overshadow humanity 00:56:33 Infinity groupoid 01:01:52 There are no limitations to AI development 01:05:00 Social intelligence is independent in OpenCog Hyperon systems 01:07:33 Embodied collaboration is fundamental to human intelligence 01:08:49 Algorithmic information theory and the Robot College Test Learn more about your ad choices. Visit megaphone.fm/adchoices
In today's video, we have a special guest, Ben Goertzel, the founder of SingularityNet, who is considered the godfather of AI. In this interview, we explore the sudden wave of AI technology in the tech space and how it has taken the industry by surprise. Join us in this interview as we delve deeper into the implications of this category of technologies and the economic and social implications it has. Interested in Crypto Retirement Accounts? Check out iTrust Capital! ➡️ https://itrust.capital/Bitboy
YouTube link: https://youtu.be/xHPQ_oSsJgg Center for the Future Mind (https://www.fau.edu/future-mind/) presents this Wolfram lecture from Mindfest 2023. This episode has been released early in an ad-free audio version for TOE members at http://theoriesofeverything.org. Sponsors: - Brilliant: https://brilliant.org/TOE for 20% off - *New* TOE Website (early access to episodes): https://theoriesofeverything.org/ - Patreon: https://patreon.com/curtjaimungal - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything LINKS MENTIONED: - Center for the Future Mind: https://www.fau.edu/future-mind/ - Donald Hoffman, Bernardo Kastrup, Susan Schneider debate on Machines and Consciousness: https://youtu.be/VmQXpKyUh4g - Stephen Wolfram on Wolfram Physics Project on TOE: https://youtu.be/1sXrRc3Bhrs TIMESTAMPS: 00:00:00 Introduction 00:02:58 Physics from computation 00:11:30 Generalizing Turing machines 00:17:34 Dark matter as Indicating "atoms of space" 00:22:13 Energy as density of space itself 00:30:30 Entanglement limit of all possible computations 00:34:53 What persists across the universe are "concepts" 00:40:09 How does ChatGPT work? 00:41:41 Irreducible computation, ChatGPT, and AI 00:49:20 Recovering general relativity from the ruliad (Wolfram Physics Project) 00:58:38 Coming up: David Chalmers, Ben Goertzel, and more Wolfram Learn more about your ad choices. Visit megaphone.fm/adchoices
Jim talks with recurring guest Forrest Landry about his arguments that continued AI development poses certain catastrophic risk to humanity. They discuss AI versus advanced planning systems (APS), the release of GPT-4, emergent intelligence from modest components, whether deep learning alone will produce AGI, Rice's theorem & the impossibility of predicting alignment, the likelihood that humans try to generalize AI, why the upside of AGI is an illusion, agency vs intelligence, instrumental convergence, implicit agency, deterministic chaos, theories of physics as theories of measurement, the relationship between human desire and AI tools, an analogy with human-animal relations, recognizing & avoiding multipolar traps, an environment increasingly hostile to humans, technology & toxicity, short-term vs long-term risks, why there's so much disagreement about AI risk, the substrate needs hypothesis, an inexorable long-term convergence process, why the only solution is avoiding the cycle, a boiling frog scenario, the displacement of humans, the necessity of understanding evolution, economic decoupling, non-transactional choices, the Forward Great Filter answer to the Fermi paradox, and much more. Episode Transcript JRS EP 153 - Forrest Landry on Small Group Method Forrest Landry on Twitter JRS Currents 072: Ben Goertzel on Viable Paths to True AGI JRS EP25 - Gary Marcus on Rebooting AI JRS Currents 036: Melanie Mitchell on Why AI is Hard EP137 Ken Stanley on Neuroevolution "Why I Am Not (As Much Of) A Doomer (As Some People)," by Scott Alexander Forrest Landry is a philosopher, writer, researcher, scientist, engineer, craftsman, and teacher focused on metaphysics, the manner in which software applications, tools, and techniques influence the design and management of very large scale complex systems, and the thriving of all forms of life on this planet. 
Forrest is also the founder and CEO of Magic Flight. A third-generation master woodworker, he found that he had a unique set of skills in large-scale software systems design, which led him to work on the production of several federal classified and unclassified systems, including various FBI investigative projects, TSC, IDW, DARPA, the Library of Congress Congressional Records System, and many others.
Jim talks with Joscha Bach about current and future developments in the generative AI space. They discuss the skepticism of the press, small productive applications, questions about intellectual property rights, confabulation in human thinking, nanny rails, 3 approaches to AI alignment, Aquinas's 7 virtues, issues of consciousness-like agency, love as an answer to the alignment problem, the difficulty with fairness, serving shared sacredness, dealing with entropy, integrated information theory & its incompatibility with the Church-Turing thesis, neural Darwinism, a point where extrapolation & interpolation become the same, building an AI artist, free will, the capacity of human memory, consciousness as a conductor, the scaling hypothesis in AGI, making the system learn from its own thoughts, computation as a rewrite system, neurons as animals, and much more. Episode Transcript JRS EP72 - Joscha Bach on Minds, Machines & Magic JRS EP87 - Joscha Bach on Theories of Consciousness JRS EP 178 - Anil Seth on A New Science of Consciousness JRS EP108 - Bernard Baars on Consciousness JRS EP105 - Christof Koch on Consciousness JRS Currents 072: Ben Goertzel on Viable Paths to True AGI JRS EP137 - Ken Stanley on Neuroevolution Joscha Bach is a cognitive scientist working for MIT Media Lab and the Harvard Program for Evolutionary Dynamics. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.
Watch the Full Episode for FREE: Dr Ben Goertzel - Artificial Intelligence & The Singularity: Will A.I. Destroy Life As We Know It? - London Real