POPULARITY
In this edition of The Naked Scientists, we look back at another brilliant year of science and select some of our favourite stories to come out of it... Like this podcast? Please help us by supporting the Naked Scientists
Events
QuantAlps Days on September 30 and October 1 in Grenoble. We were both there. This was the third edition of these two days, which let the Grenoble quantum research ecosystem, bringing together UGA, CEA, CNRS and Inria, showcase its recent work. It also welcomed researchers from outside Grenoble.
Teratec AQADOC seminar at Jussieu on October 2, led by Teratec and EDF, with Welinq, Pasqal and Quandela, among others (slides). The topic: interconnecting quantum computers, which is essential for reaching a useful quantum-advantage regime in FTQC mode. Welinq presented various methods for partitioning quantum algorithms.
Minalogic Quantum Day in Grenoble (program), with a keynote by Olivier on the state of the art of quantum computing (slides), the fifth since 2020. Fanny spoke about OVHcloud's European strategy.
Bpifrance BIG at Bercy on October 10, the big entrepreneurship gathering. It notably featured a panel with Jean-François Bobier of BCG, Cécile Perrault of Alice&Bob, Frédéric Barbaresco of Thales, and Christophe Legrand of Pasqal (video). Maud Vinet of Quobly spoke on the main BANG stage about progress, in 7 minutes (video), as did Christophe Jurczak of Quantonation (video). Théau Péronnin of Alice&Bob spoke the day before during the Deep Tech day (video).
Alain Aspect visited Taiwan and South Korea in mid-October, welcomed like a prince and speaking at many events. He was made doctor honoris causa of NTU, met with students and high schools, and attended an event organized by Foxconn. In Korea he was accompanied by Georges-Olivier Reymond, the CEO of Pasqal.
Munich Quantum Software Forum and a visit to the Munich ecosystem, October 21 to 25.
Quantum+AI conference in New York on October 29, with a talk by Olivier on the role of LLMs in quantum technologies (presentation slides). The conference ran for two days at the Brookfield Center.
Upcoming events:
The GDR TEQ days at Jussieu from November 13 to 15, which will take stock of quantum research at CNRS, with renowned speakers from other countries such as David Awschalom of the University of Chicago (link).
The Teratec days on quantum algorithms and sensors at EDF in Palaiseau on November 13 and 14 (link).
The Alain Aspect Symposium on climate, organized by Pasqal on November 19-20 in Saint-Germain-en-Laye at Exail (program and paid registration).
Quantum Matter from May 20 to 23 in Grenoble, a large international conference with plenty of academic heavyweights.
International Conference on Quantum Computing, Institut Poincaré in Paris, May 12-16. It looks promising, but the agenda is not yet available.
France news
Scientific announcements from Quobly: Quobly is starting to publish scientific papers on the progress of its qubits. Note also that Maud Vinet (CEO) won the EY Entrepreneur of the Year award.
Quandela's FTQC roadmap announcement: In October 2024, Quandela announced its roadmap, which runs to 2030. See also "Ils utilisent la lumière pour faire du calcul quantique" by Serge Abiteboul and Claire Mathieu, Le Monde, October 2024, which includes an interview with Pascale and Jean Senellart.
IQM at Eviden: The 5-qubit IQM Spark machine has been delivered to Eviden in Angers.
Tungsten qubit at CEA
An international research team led by the SPEC laboratory at CEA Saclay demonstrated nuclear spin qubits with coherence times of several seconds, with coherent control and state readout. See "Individual solid-state nuclear spin qubits with coherence exceeding seconds" by James O'Sullivan, Thierry Chanelière, Philippe Goldner, Daniel Esteve, Denis Vion, Patrice Bertet, Emmanuel Flurin et al., CEA, UGA, Chimie Paristech, University of Toulon, UCL, arXiv, October 2024 (14 pages).
International news
Nobel Prizes in Physics and Chemistry: The 2024 Nobel Prizes in physics and chemistry went to researchers for work related to artificial intelligence, with John Hopfield and Geoff Hinton for physics.
Opening of an IBM Quantum Data Center in Germany: the data center was inaugurated by Chancellor Scholz on October 1, 2024 in Ehningen, near Stuttgart, on IBM's premises.
Q-CTRL fundraising: Q-CTRL raised $59M.
Google invests in QuEra: QuEra announced that Google was investing in the company.
Zapata AI goes bankrupt: Zapata has filed for bankruptcy.
Applications for Climate Change: The Open Quantum Instit...
This week the hosts are here in full force to welcome Justin Maier, co-founder and CEO of Civitai. Elon Musk of Tesla revealed the Robotaxi and its accompanying Robobus and Optimus robots. The Nobel Prize in Physics went to John Hopfield and Geoff Hinton, the AI scientist who "retired from Google" and warned the NY Times about the dangers of AGI. RIP Spacetop computer with its wearable AR monitor. Edward Berger's immersive short film Submerged is now available on Apple's Vision Pro. Justin Maier explains how Civitai is about to revolutionize generative AI content creation with an enhanced, consolidated workflow which will run on their Buzz token. You can see why Andreessen invested in this young AI company. Thank you to our sponsor, Zappar! Don't forget to like, share, and follow for more! Follow us on all socials @ThisWeekInXR! https://linktr.ee/thisweekinxr Hosted on Acast. See acast.com/privacy for more information.
Neural Network researchers Geoff Hinton and John Hopfield have been awarded the 2024 Nobel Prize in Physics. Plus, a Virginia congressional candidate has created an AI chatbot just in case the incumbent doesn't show up for the October 17th debate. And Adobe is proposing a way to watermark content to stop AI theft. Will the community support it? Starring Sarah Lane, Robb Dunewood, Roger Chang, Joe. Link to the Show Notes.
Neural Network researchers Geoff Hinton and John Hopfield have been awarded the 2024 Nobel Prize in Physics. Plus, a Virginia congressional candidate has created an AI chatbot just in case the incumbent doesn't show up for the October 17th debate. And Adobe is proposing a way to watermark content to stop AI theft. Will the community support it? Starring Sarah Lane, Robb Dunewood, Roger Chang, Joe. To read the show notes on a separate page click here! Support the show on Patreon by becoming a supporter!
Google must give third-party app stores access to Google Play's catalog, the EU has formed Appeals Centre Europe to handle disputes between social media platforms, and the 2024 Nobel Prize in Physics has been awarded jointly to neural network researchers Geoff Hinton and John Hopfield. MP3. Please SUBSCRIBE HERE. You can get an ad-free feed.
This episode of The Naked Scientists marks the return of a brand new series of Titans of Science, where some of the movers and shakers of the scientific and technological world help us to unpick a big problem. Kicking us off is the AI pioneer Geoffrey Hinton, with a fascinating insight into artificial intelligence, how it actually works and what we need to be wary of... Like this podcast? Please help us by supporting the Naked Scientists
Nick Frosst, co-founder of Cohere, on the future of LLMs and AGI. Learn how Cohere is solving real problems for business with their new AI models. This is the first podcast from our new Cohere partnership! Nick talks about his journey at Google Brain, working with AI legends like Geoff Hinton, and the amazing things his company, Cohere, is doing. From creating the most useful language models for businesses to making tools for developers, Nick shares a lot of interesting insights. He even talks about his band, Good Kid! Nick said that RAG is one of the best features of Cohere's new Command R* models. We are about to release a deep-dive on RAG with Patrick Lewis from Cohere, keep an eye out for that - he explains why their models are specifically optimised for RAG use cases. Learn more about Cohere Command R* models here: https://cohere.com/command and https://github.com/cohere-ai/cohere-toolkit Nick's band Good Kid: https://goodkidofficial.com/ Nick on Twitter: https://x.com/nickfrosst Disclaimer: We are in a partnership with Cohere to release content for them. We were not told what to say in the interview, and didn't edit anything out from the interview. We are currently planning to release 2 shows per month under the partnership about their AI platform, research and strategy.
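For context, the retrieve-then-generate pattern behind RAG can be sketched roughly as follows. This is a generic toy illustration, not Cohere's pipeline: the stand-in embeddings and the call_llm placeholder are assumptions in place of a real embedding model and a real chat endpoint such as Command R.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Assumptions: documents are embedded into vectors by some embedding model
# (random stand-ins here); call_llm is a placeholder for a real chat API.
import numpy as np

documents = [
    "Cohere's Command models are built for enterprise use.",
    "RAG grounds model answers in retrieved documents.",
    "Good Kid is Nick Frosst's band.",
]
doc_vectors = np.random.rand(len(documents), 8)  # stand-in embeddings

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.random(8)

def retrieve(query: str, k: int = 2) -> list:
    q = embed(query)
    # Cosine similarity between the query vector and each document vector.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    top = np.argsort(-sims)[:k]
    return [documents[i] for i in top]

def call_llm(prompt: str) -> str:
    # Placeholder for the actual model call.
    return f"[model answer grounded in]\n{prompt}"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(rag_answer("What is RAG?"))
```

The point of the pattern is simply that the model answers from retrieved text rather than from memory alone, which is why models tuned for it can cite their sources.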
The Safe Space Imperative: How Leaders Can Champion LGBTQIA+ Inclusion We chat with Lucy Straker (she/they), co-founder of Proud Changemakers As the UK gears up for a historic election, the need for safe, inclusive spaces has never been more critical. In this powerful episode, we sit down with Lucy Straker (she/they), co-founder of Proud Changemakers, to explore how they're transforming the landscape for LGBTQIA+ people across civil society. Lucy shares the inspiring story of how Proud Changemakers was born from a burning desire to create a platform where LGBTQIA+ voices could be heard, celebrated, and empowered. From their groundbreaking podcast to their vibrant social media presence, Lucy and their team are on a mission to spark change and foster belonging in a world that often feels anything but safe. We dive deep into the unique challenges faced by LGBTQIA+ individuals in organisations of all sizes, from the isolation of being the sole representative in a small charity to navigating the complex dynamics of large LGBTQIA+ networks. Lucy offers invaluable insights on how leaders can cultivate genuinely inclusive environments where every person can thrive. With a career spanning over 15 years in the voluntary sector, Lucy brings a wealth of expertise in communications, campaigning, and community building. They reveal the game-changing potential of grassroots movements like Proud Changemakers to drive sector-wide transformation and imagine a bold future where their model could revolutionise inclusion across all industries. At a time when the stakes couldn't be higher, this episode is a rallying cry for leaders everywhere to stand with the LGBTQIA+ community, champion diversity, and build a society where every voice is valued. Tune in to be inspired, challenged, and equipped to lead the change. In tech news, Paul and Zoe chat through what's new in the world of AI. Please note this episode was recorded before the UK general election was announced, on 22 May 2024. Notes Find out more about Proud Changemakers https://linktr.ee/proudchangemakers Tech news LinkedIn post about AI and inclusion https://www.linkedin.com/posts/zoeamar_artificialintelligence-charities-inclusion-activity-7197151633812668417-EGjE? BBC story about Geoff Hinton and Universal Basic Income https://www.bbc.com/news/articles/cnd607ekl99o.amp Email us with your questions and ideas for future episodes startsatthetop@gmail.com And please leave us a review if you enjoy what you hear! Editing and production - Syren Studios with Paul Thomas Music by Joseph McDade https://josephmcdade.com/music Full transcript of this episode (srt file) Full transcript of this episode (.txt file) Transcripts are also available through your podcast app.
We're coming back to the topic of generative artificial intelligence, asking how this potentially game-changing technology is going to be integrated into our society. We'll hear an explanation of neural networks from Geoff Hinton, one of the founding fathers of AI, and some of the most promising avenues for maximising the strengths of machine learning systems with tech journalist David McClelland. After a brief update on the debate around AI sentience from the foothills of the Himalayas from Nicky Clayton, we explore why chatbots might be about to stop advancing as rapidly as before, and how... Like this podcast? Please help us by supporting the Naked Scientists
Moonshots and Mindsets with Peter Diamandis
In this episode, recorded during the 2024 Abundance360 Summit, Ray, Geoffrey, and Peter debate whether AI will become sentient, what consciousness constitutes, and if AI should have rights. 01:12 | The Future of AI and Humanity 10:30 | The Ethics of Artificial Intelligence 25:00 | The Dangers and Possibilities of AI Ray Kurzweil, an American inventor and futurist, is a pioneer in artificial intelligence. He has contributed significantly to OCR, text-to-speech, and speech recognition technologies. He is the author of numerous books on AI and the future of technology and has received the National Medal of Technology and Innovation, among other honors. At Google, Kurzweil focuses on machine learning and language processing, driving advancements in technology and human potential. Geoffrey Hinton, often referred to as the "godfather of deep learning," is a British-Canadian cognitive psychologist and computer scientist recognized for his pioneering work in artificial neural networks. His research on neural networks, deep learning, and machine learning has significantly impacted the development of algorithms that can perform complex tasks such as image and speech recognition. Read Ray's latest book, The Singularity Is Nearer: When We Merge with AI Follow Geoffrey on X: https://twitter.com/geoffreyhinton Learn more about Abundance360: https://www.abundance360.com/summit ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter _____________ I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog Get my new Longevity Practices book: https://www.diamandis.com/longevity My new book with Salim Ismail, Exponential Organizations 2.0: The New Playbook for 10x Growth and Impact, is now available on Amazon: https://bit.ly/3P3j54J _____________ Connect With Peter: Twitter Instagram Youtube Moonshots Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of "A Beginner's Guide to AI," we delve into the innovative realm of Sparse Mixture of Experts (MoE) models, with a special focus on Mistral, a French AI company pioneering in this field. We unpack the concept of Sparse MoE, highlighting its efficiency, adaptability, and scalability in AI development. We explore Mistral's groundbreaking work in applying Sparse MoE to language models, emphasizing its potential for more accessible and sustainable AI technologies. Through a detailed case study, we illustrate the real-world impact of Mistral's innovations. We also invite AI enthusiasts to join our conversation and provide an interactive element for deeper engagement with the topic. The episode concluded with insightful thoughts on the future of AI and a reflective quote from Geoff Hinton. This podcast was generated with the help of ChatGPT and Claude 2. We do fact-check with human eyes, but there might still be hallucinations in the output. Music credit: "Modern Situations by Unicorn Heads"
“A.I. is not the problem; it's the solution.”—Andrew Ng at TED, 17 October 2023
Recorded 21 November 2023
Transcript with relevant links and links to audio file
Eric Topol (00:00): Hello, it's Eric Topol with Ground Truths, and I'm really delighted to have with me Andrew Ng, who is a giant in AI whom I've gotten to know over the years and have the highest regard for. So Andrew, welcome.
Andrew Ng (00:14): Hey, thanks Eric. It's always a pleasure to see you.
Eric Topol (00:16): Yeah, we've had some intersections in multiple areas of AI. The one I wanted to start with is that you've had some direct healthcare involvement, and we've had the pleasure of working with Woebot Health, particularly with Alison Darcy, where the AI chatbot has been tested in randomized trials to help people with depression and anxiety. And, of course, that was a chatbot in the pre-transformer or pre-LLM era. I wonder if you could just comment about that as well as your outlook for current AI models in healthcare.
Andrew Ng (01:05): So Alison Darcy is brilliant. It's been such a privilege to work with her over the years. One of the exciting things about AI is that it's a general purpose technology. It's not useful for just one thing. And I think in healthcare and more broadly across the world, we're seeing many creative people use AI for many different applications. So I was in Singapore a couple months ago and I was chatting with some folks, Dean Chang and one of his doctors, Dr. M, about how they're using AI to read EHRs in a hospital in Singapore to try to estimate how long a patient's going to be in the hospital because of pneumonia or something. And it was actually triggering helpful conversations, where a doctor says, oh, I think this patient will be in for three days, but the AI says no, I'm guessing 15 days. And this triggers a conversation where the doctor takes a more careful look. And I thought that was incredible. So all around the world, many innovators everywhere are finding very creative ways to apply AI to lots of different problems. I think that's super exciting.
Eric Topol (02:06): Oh, it's extraordinary to me. I think Geoff Hinton has thought that the most important application of current AI is in the healthcare/medical sphere. But I think that the range here is quite extraordinary. And one of the other things that you've been into for all these years, with Coursera, starting that, and all the courses for DeepLearning.AI, is the democratization of knowledge and education in AI. Since this is something like all patients would want, to look up on whatever GPT-X about their symptoms, different of course from a current Google search, what's your sense about the ability to use generative AI in this way?
Andrew Ng (02:59): I think that instead of seeing a doctor, asking a large language model, what's up with my symptoms, people are definitely doing it. And there have been anecdotes of this maybe saving a few people's lives even. And I think in the United States we're privileged to have, some would say terrible, but certainly better than many other countries' healthcare systems. And I feel like a lot of the early go-to-market for AI-enabled healthcare may end up being in countries or just places with less access to doctors. There are definitely countries where, if someone falls sick, you can either send your kid to a doctor or you can have your family eat for the next two weeks, pick one.
So when families are making these impossible decisions, I wish we could give everyone in the world access to a great doctor, and sometimes the alternatives that people face are pretty harsh. I think any hope, even the very imperfect hope of an LLM, I know it sounds terrible, it will hallucinate, it will give bad medical advice sometimes, but is that better than no medical advice? I think there are really some tough ethical questions being debated around the world right now.
Eric Topol (04:18): Those hallucinations or confabulations, won't they get better over time?
Andrew Ng (04:24): Yes, I think LLM technology has advanced rapidly. They still do hallucinate, they do still mix stuff up, but it turns out that I think people still have an impression of LLM technology from six months ago. But so much has changed in the last six months. So even in the last six months, it is actually much harder now to get an LLM, at least many of the public ones offered by large companies. It's much harder now compared to six months ago to get it to give you deliberately harmful advice, or if you ask it for detailed instructions on how to commit a crime. Six months ago it was actually pretty easy. So that was not good. But now it's actually pretty hard. It's not impossible. And I actually ask LLMs for strange things all the time just to test them. And yes, sometimes I can get them when I really try to do something inappropriate, but it's actually pretty difficult.
(05:13): But hallucination is just a different thing, where LLMs do mix stuff up, and you definitely don't want that when it comes to medical advice. So it'll be an interesting balance, I think, of when should we use web search for trusted authoritative sources. So if I have a sprained ankle, hey, let me just find a webpage from a trusted medical authority on how to deal with a sprained ankle. But there are also a lot of things where there is no one webpage that just gives me an answer. And then this is an alternative for generating a novel answer that's suited to my situation. In non-healthcare cases, this has clearly been very valuable; in healthcare, given the criticality of human health and human life, I think people are wrestling with some challenging questions, but hallucinations are slowly going down.
Eric Topol (05:59): Well, hopefully they'll continue to improve on that. And as you pointed out, the other guardrails will help. Now that gets me to a little over a month ago: we were at the TED AI program and you gave the opening talk, which was very inspirational, and you basically challenged the critics of the negativism on AI on three basic issues: amplifying our worst impulses, taking our jobs, and wiping out humanity. And it was very compelling and I hope that it will be posted soon. And of course we'll link it, but can you give us the skinny on your antidote to the doomerism about AI?
Andrew Ng (06:46): Yeah, so I think AI is a very beneficial technology on average. I think it comes down to: do we think the world is better off or worse off with more intelligence in it, be it human intelligence or artificial intelligence? And yes, intelligence can be used for nefarious purposes, and it has been in history. I think a lot of humanity's progress has come through humans getting smarter and better trained and more educated. And so I think on average the world is better off with more intelligence in it. And as for AI wiping out humanity, I just don't get it.
I've spoken with some of the people with this concern, but their arguments for how AI could wipe out humanity are so vague that they boil down to: it could happen. And I can't prove it won't happen, any more than I can prove a negative like that. I can't prove that radio waves being emitted from Earth won't cause aliens to find us and wipe us out. But I'm not very alarmed about space aliens; maybe I should be, I don't know. And I find that there are real harms being created by the alarmist narrative on AI. One thing that's quite sad was chatting with high school students who are now reluctant to enter AI because they heard it could lead to human extinction and they don't want any of that. And that's just tragic, that we're causing high school students to make a decision that's bad for themselves and bad for humanity because of really unmerited alarms about human extinction.
Eric Topol (08:24): Yeah, no question about that. You had, I think, a very important quote during that: “AI is not the problem, it's the solution.” And I think that gets us to the recent flap, if you will, with OpenAI that's happened in recent days, whereby it appears to be the same tension between the techno-optimists, like you and I would say, versus the effective altruism (EA) camp. And I wonder what your thoughts are regarding, obviously we don't know all the inside dynamics of this, probably the most publicized interaction in AI that I can remember in terms of its intensity, and it's not over yet. But what were your thoughts as this has been unfolding, which is, of course, still in process?
Andrew Ng (09:19): Yeah, honestly, a lot of my thoughts have been with all the employees of OpenAI. These are hundreds of hardworking, well-meaning people. They want to build tech, make it available to others, make the world better off, and out of the blue, overnight, their jobs, livelihoods, and their levers to make a very positive impact on the world were disrupted for reasons that seem vague, and at least from the silence of the board, I'm not aware of any good reasons for really all these wonderful people's work and livelihoods being disrupted. So I feel sad that that just happened, and then I feel like OpenAI is not perfect, no organization in the world is, but frankly they're really moving AI forward. And I think a lot of people have benefited from the work of OpenAI. And I think the disruption of that is also quite tragic. And this may be—we will see if this turns out to be—one of the most dramatic impacts of unwarranted doomsaying narratives causing a lot of harm to a lot of people. But we'll see what continues to emerge from the situation.
Eric Topol (10:43): Yeah, I mean, I think this whole concept of AGI, artificial general intelligence, and how it gets down to this fundamental assertion that we're at AGI, the digital brain, or we're approximating it, or the whole idea that machine understanding is at unprecedented levels. I wonder your thoughts, because obviously there still is the camp that says this is a stochastic parrot: anything that suggests understanding is basically because of pre-training or other matters, and to try to assign any real intelligence that's at the level of a human, even for a particular task, no less beyond human, is unfounded.
What is your sense about this tension and this ongoing debate, which seemed to be part of the OpenAI board issues?
Andrew Ng (11:50): So I'm not sure what's happening with the OpenAI board, but the most widely accepted definition of AGI is AI that can do any intellectual task that a human can. And I do see many companies redefining AGI to other definitions. So for the original definition, I think we're decades away. We're very clearly not there, but many companies have, let's say, alternative definitions, and yeah, if you have an alternative definition, maybe we're there already. One of my eCommerce friends looked at one of the alternative definitions. He said, well, for that definition, I think we got AGI 30 years ago.
(12:29): And looking on the more positive side, I think one of the signs that a company has reached AGI, frankly, would be that if they're a rational economic player, they should maybe let go all of their employees that do intellectual work. So until that happens, I just don't, not to joke about it, that would be a serious thing. But I think we're still many decades away from that original definition of AGI. But on the more positive side, in healthcare and other sectors, I feel like there's a recipe for using AI that I find fruitful and exciting, which is: it turns out that jobs are made out of tasks, and I think of AI as automating tasks rather than jobs. So a few years ago, Geoff Hinton had made some strong statements about AI replacing radiologists. I think those predictions have really not come true today, but it turns out, Eric, I enjoyed your book, which is very thoughtful about AI as well.
(13:34): And I think if you look at, say, the job of radiologists, they do many, many different things, one of which is reading x-rays, but they also do patient intakes, they operate X-ray machines. And I find that when we look at the healthcare sector or other sectors and look at what people are doing, break jobs down into tasks, then there can often be a subset of tasks that are amenable to AI automation, and that recipe is helping a lot of businesses create value and also in some cases make healthcare better. So I'm actually excited, and because healthcare has so many people doing such a diverse range of tasks, I would love to see more organizations do this type of analysis.
(14:22): The interesting thing about that is we can often automate, I'm going to make up a number, 20% or 30% or whatever, of a lot of different jobs' tasks. So one, there's a strong sign we're far from AGI, because we can't automate a hundred percent of the intellectual tasks, but second, many people's jobs are safe, because when we automate 20% of someone's job, they can focus on the other 80% and maybe even be more productive, and that causes the marginal value of labor, and therefore maybe even salaries, to go up rather than down. Actually, a few weeks ago, I released a new course on Coursera, “Generative AI for Everyone,” where I go deeper into this recipe for finding opportunities, but I'm really excited about working with partners to go find these opportunities and go build them.
Eric Topol (15:15): Yeah, I commend you for that, because you have been, for your career, democratizing the knowledge of AI, and this is so important, and that new course is just one more example. Everyone could benefit from it.
Getting back to your earlier point, in the clinician-doctor world there is the burdensome data clerk function of having to be a slave to keyboards, entering the visit data and then all the post-visit things. Now, of course, we're seeing synthetic notes, and all this can be driven through an automated note that doesn't involve any keyboard work. And so, just as you say, that comprises maybe 20, 30% of a typical doctor's day, if not more. And the fact is that that change could then bring together the patient and doctor again, a relationship that has suffered because of electronic records and all of the data clerk functions. That's just, I think, a great example of what you just pointed out. I love “Letters from Andrew,” which you publish, and as you mentioned, one of your recent posts was about Generative AI for Everyone. And in those you recently addressed loneliness, which is associated with all sorts of bad health outcomes. And I wonder if you could talk about how AI could help loneliness.
Andrew Ng (16:48): So this is a fascinating case study. At AI Fund, we had wanted to do something on AI and relationships, kind of romantic relationships. And I'm an AI guy, I feel like, what do I know about romance? And if you don't believe me, you can ask my wife, she'll confirm I know nothing about romance. But we're privileged to partner with the former CEO of Tinder, Renata Nyborg, who knows about relationships in a very systematic way, far more than anyone I know. So working with her, with her deep expertise about relationships, and it turns out she actually knows a lot about AI too, plus my team's knowledge about AI, we were able to build something very unique that she launched, that she announced, called Meeno. Now I've been playing around with it on my phone and it's actually an interesting, remarkably good relationship mentor, frankly. I wish I had had Meeno back when I was single to ask my dumb questions to. And I'm excited that maybe AI, I feel like tech maybe has contributed to loneliness. I know the data is mixed on whether social media contributes to social isolation, I know there are different opinions and different types of data, but this is one case where hopefully AI can clearly not be the problem, but be part of the solution, to help people gain the skills to build better relationships.
Eric Topol (18:17): Yeah, it's really interesting here again, the counterintuitive idea that technology could enhance human bonds, which are all too often in short supply and which we want to enhance. Of course, you've had an incredible multi-dimensional career. We talked a little bit about your role in education with the founding of the massive open online courses (MOOCs), but also with Baidu and Google. And then of course at Stanford you've seen the academic side; you've seen the leading tech titan side, the entrepreneurial side with the various ventures of trying to get behind companies that have promise. You have the whole package of experience and portfolio. How do you use that now going forward? You're still so young and the field is so exciting. Do you try to just cover all the bases, or do you see yourself changing gears in some way? You have had a foot in every aspect.
Andrew Ng (19:28): Oh, I really like what I do. I think these days I spend a lot of time at AI Fund, which builds new companies using AI, and DeepLearning.AI is an educational arm. And one of the companies that AI Fund has helped incubate, Landing AI, does computer vision work.
We actually have a lot of healthcare users as well. I feel like with the recent advances in AI at the technology layer, things like large language models, a lot of the work that lies ahead of the entire field is to build applications on top of that. In fact, a lot of the media buzz has been on the technology layer, and this happens every time the technology changes. When the iPhone came out, when we shifted to the cloud, it's interesting for the media to talk about the technology, but it turns out the only way for the technology suppliers to be successful is if the application builders are even more successful.
(20:26): They've got to generate enough revenue to pay the technology suppliers. So I've been spending a lot of my time thinking about the application layer and how to help, either myself or by supporting others, to build more applications. And the annoying and exciting thing about AI is that, as a general purpose technology, there's just so much to do, there are so many applications to build. It's kind of like, what is electricity good for? Or what is the cloud good for? It's just so many different things. So it is going to take us, frankly, longer than we wish, but it will be exciting and meaningful work to go to all the corners of healthcare and all the corners of education and finance and industry and go find these applications and go help them.
Eric Topol (21:14): Well, I mean, you have such broad and diverse experience, and you predicted much of this. I mean, you knew somehow or other what might happen when the graphics processing unit (GPU) count would go from a very low number to tens of thousands of them. And you were there, I think, before perhaps anyone else. One of the things, of course, that this whole field now gets us to is potential tech dominance. And what I mean there is that you've got a limited number of companies, like Microsoft and Google and Meta and maybe Inflection AI and a few others, that have capabilities of 30,000, 40,000, whatever number of GPUs. And then you have academic centers, like your adjunct appointment at Stanford, which maybe has a few hundred, or here at Scripps Research, which has 150. And so we don't have the computing power to do base models, and what can we do? How do you see the struggle between the entities that have what appears to be almost, if you will, if it's not unlimited, massive computing power, versus academics that want to advance the field? They have different interests, of course, but they don't have that power base. Where is this headed?
Andrew Ng (22:46): Yeah, so I think the biggest danger to that concentration is regulatory capture. So I've been quite alarmed over moves that various entities, some companies, but also governments here in the US and in Europe, especially the US and Europe, less so other places, have been making, contemplating regulations that I think place a very high regulatory compliance burden that big tech companies have the capacity to satisfy, but that smaller players will not have the capacity to satisfy. And in particular, there are definitely companies that would rather not have to compete with open source. When you take a smaller, say 7 billion parameter, model and fine-tune it for a specific task, it works remarkably well for many specific tasks. So for a lot of applications, you don't need a giant model. And actually I routinely run a seven or 13 billion parameter model on my laptop, more for inference than fine-tuning.
But it's within the realm of what a lot of players can do.
(23:51): But if inconvenient laws are passed, and they've certainly been proposed in Europe under the EU AI Act and also the White House Executive Order, I think we've taken some dangerous steps toward putting in place very burdensome compliance requirements that would make it very difficult for small startups, and potentially very difficult for smaller organizations, to even release open source software. Open source software has been one of the most important building blocks for everyone in tech. I mean, if you use a computer or a smartphone, that's built on top of open source software; TCP/IP, just how the internet works, a lot of that is built on top of open source software. So regulations that hamper people just wanting to release open source, that would be very destructive for innovation.
Eric Topol (24:48): Right. In keeping with what we've been talking about, with the doomsday prophecies and the regulations and things that would slow up the whole progress in the field, which we are obviously in touch with both sides and the tension there, but overregulation, the potential hazards of that, are perhaps not adequately emphasized. And another one of your letters (Letters from Andrew), which you just got to there, was about AI at the edge, and the fact that, in contrast to the centralized computing power at a limited number of entities, as I think we were just getting at, there's increasing potential for being able to do things on a phone or a laptop. Can you comment about that?
Andrew Ng (25:43): Yeah, I feel like I'm going against many trends. It sounds like I'm off in a very weird direction, but I'm bullish about AI at the edge. I feel like if I want to do grammar checking using a large language model, why do I need to send all my data to a cloud provider when a small language model can do it just fine on my laptop? Or, one of my collaborators at Stanford was training a large language model in order to do electronic health records. And so at Stanford, this actually was work done by one of the PhD students I've been working with. Yseem wound up fine-tuning a large language model at Stanford so that he could run inference over there and not have to ship EHR, not have to ship private medical records, to a cloud provider. And so I think that was an important thing to do, and if open source were shut down, I think someone like Yseem would have had a much harder time doing this type of work.
Eric Topol (27:04): I totally follow your point there. Now, the last thing I wanted to get to was multimodal AI in healthcare. When we spoke 5 years ago, when I was working on the Deep Medicine book, multimodal AI wasn't really possible. And the idea was that someday we'll have the models to do it. The idea here is that each of us has all these layers of data: our various electronic health records, our genome, our gut microbiome, our sensors and environmental data, social determinants of health, our immunome, it just goes on and on. And there's also the corpus of medical knowledge. So right now, no one has really done multimodal. They've done bimodal AI in healthcare, where they take the electronic health records and the genome, or usually it's electronic health records and the scan, a medical scan. No one has done more than a couple layers yet.
(28:07): And the question I have is, it seems like that's imminently going to be accomplished.
And then let's get to: will there be a virtual health coach? So unlike these virtual coaches like Woebot and the diabetes coaches and the hypertension coaches, will we ultimately have, with multimodal AI, your forecast on that, the ability to give feedback to any given individual to promote their health, to prevent conditions that they might be at risk of having later in life, or to help manage all the conditions that they have actually already been diagnosed with? What's your sense about where we are with multimodal AI?
Andrew Ng (28:56): I think there's a lot of work to be done still in unimodal, a lot of work to be done in text, in LLMs, a lot of work on images, and maybe not to talk about Chang's work all the time, but just this morning I was chatting with him about how he's trying to train a large transformer on some time series other than text or images. And then some collaborators at Stanford, Jeremy Irvin, Jose, kind of poking at the corners of this. But I think a lot of people feel, appropriately, that there's a lot of work to be done still in unimodal. So I'm cheering that on. But then there's also a lot of work to be done in multimodal, and I see work beyond text and images, maybe genome, maybe some of the time series things, maybe some of the EHR-specific things, which maybe are kind of text, kind of not. I think it was just about a year ago that ChatGPT was announced. So who knows? Just one more year of progress, who knows where it will be.
Eric Topol (29:55): Yeah. Well, we know there will be continued progress, that's for sure. And hopefully, as we've been discussing, there won't be significant obstacles for that. And hopefully there will be a truce between the two camps of doomerism and optimism, or somehow we'll meet in the middle. But Andrew, it's been a delight to get your views on all this. I don't know how the OpenAI affair will settle out, but it does seem to be representative of the times we live in, because at the same TED AI that you and I spoke at, Ilya spoke about AGI, and that was followed only a matter of days later by Sam Altman talking about AGI and how OpenAI was approaching AGI capabilities. And it seems like, even though, as you said, there are a lot of different definitions for AGI, the progress that's being made right now is extraordinary.
(30:57): And grappling with the idea that there are certain tasks, at least certain understandings, certain intelligence, that may be superhuman via machines is more than provocative. And I know you are asked to comment about this all the time, and it's great because in many respects you're an expert, neutral observer. You're not in one of these companies that's trying to assert that they have sparks of AGI or actual AGI or whatever. So in closing, I think we look to you as not just an expert, but one who has had such broad experience in this field, who has predicted so much of its progress and warned of the reasons that we might not continue to make that type of extraordinary progress. So I want to thank you for that. I'll keep reading Letters from Andrew, I hope everybody does, and as many people as possible should attend your “Generative AI for Everyone” course. And thank you for what you've done for the field, Andrew, we're all indebted to you.
Andrew Ng (32:17): Thank you, Eric. You're always so gracious. It's always such a pleasure to see you and collaborate with you.
Thanks for listening and reading Ground Truths. Please share this podcast if you found it informative.
Get full access to Ground Truths at erictopol.substack.com/subscribe
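As a side note on the edge-AI point Ng makes above, running an open-weight model locally so that notes or records never leave the machine can be sketched roughly as follows. This is a minimal illustration assuming the Hugging Face transformers library is installed and enough memory (or a GPU) is available; the checkpoint name is just an example of an open ~7B model, not the one Ng or the Stanford group actually used.

```python
# Rough sketch of local inference with an open-weight ~7B model: the prompt
# (e.g. a clinic note) is processed on your own machine, not a cloud API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weight checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Summarize this clinic note in one sentence: patient reports ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Any similarly sized open checkpoint that fits on the machine can be swapped in; the workflow is the same.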
#google #openai #mlnews Updates from the world of Machine Learning and AI
Great AI memes here: https://twitter.com/untitled01ipynb
OUTLINE:
0:00 - Google I/O 2023: Generative AI in everything
0:20 - Anthropic announces 100k tokens context
0:35 - Intro
1:20 - Geoff Hinton leaves Google
7:00 - Google memo leaked: we have no moat
11:30 - OpenAI loses 540M
12:30 - Google AI: Product first
15:50 - Ilya Sutskever on safety vs competition
18:00 - AI works cannot be copyrighted
19:40 - OpenAI tries to trademark GPT
20:30 - StarCoder: accessible code model
21:40 - RedPyjama & OpenLlama
22:55 - Mosaic 7B model
23:50 - YoloNAS
24:10 - Mojo programming language
25:30 - Random helpful things
37:40 - DeepMind soccer robots
References:
https://twitter.com/weirddalle/status/1649908805788893185
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html
https://www.technologyreview.com/2023/05/01/1072478/deep-learning-pioneer-geoffrey-hinton-quits-google/
https://archive.ph/TrPoH
https://twitter.com/DanHendrycks/status/1654560913939374080
https://twitter.com/ylecun/status/1654930029569101824
https://twitter.com/home
https://twitter.com/ylecun/status/1654931495419621376
https://twitter.com/pkedrosky/status/1653955254181068801
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
https://twitter.com/untitled01ipynb/media
https://www.theinformation.com/articles/openais-losses-doubled-to-540-million-as-it-developed-chatgpt
https://archive.ph/bKsdM
https://www.washingtonpost.com/technology/2023/05/04/google-ai-stop-sharing-research/
https://twitter.com/giffmana/status/1654962145707130880
https://twitter.com/Ken_Goldberg/status/1651309843804987393
https://tsdr.uspto.gov/documentviewer?caseId=sn97733259&docId=PTD20230418160641&s=09#docIndex=1&page=1
https://twitter.com/osanseviero/status/1654230764513370112
https://huggingface.co/bigcode/starcoder
https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement
https://twitter.com/hardmaru/status/1654649036333514753
https://www.together.xyz/blog/redpajama-models-v1
https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1
https://github.com/openlm-research/open_llama
https://www.mosaicml.com/blog/mpt-7b
https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md
https://www.modular.com/mojo
https://www.aicrowd.com/challenges/hackaprompt-2023
https://learnprompting.org/
https://developer.nvidia.com/blog/nvidia-enables-trustworthy-safe-and-secure-large-language-model-conversational-systems/?ncid=prsy-552511
https://blogs.nvidia.com/blog/2023/04/25/ai-chatbot-guardrails-nemo/
https://lmql.ai/#distribution
https://github.com/gventuri/pandas-ai?utm_source=pocket_reader
https://lamini.ai/blog/introducing-lamini
https://github.com/deep-floyd/IF
https://huggingface.co/spaces/DeepFloyd/IF
https://twitter.com/FaramaFound/status/1650952295901720576
https://txt.cohere.com/embedding-archives-wikipedia/?hsa_acc=509563538&hsa_ad=242008083&hsa_cam=626636963&hsa_grp=205646033&hsa_net=linkedin&hsa_ver=3&hss_channel=lcp-24024765
https://arxiv.org/abs/2304.12210
https://github.com/h2oai/h2ogpt
https://huggingface.co/h2oai/h2ogpt-oasst1-512-20b
https://github.com/h2oai/h2o-llmstudio
https://ai.facebook.com/blog/ai-dataset-animating-kids-drawings/
https://www.camel-ai.org/
https://github.com/lightaime/camel?utm_source=pocket_reader
https://huggingface.co/Writer/camel-5b-hf
https://laion.ai/blog/paella/
https://magazine.sebastianraschka.com/p/finetuning-large-language-models
https://pickapic.io/
https://github.com/yuvalkirstain/heroku_app
https://huggingface.co/datasets/yuvalkirstain/PickaPic
https://future.snorkel.ai/poster-contest/
https://twitter.com/d_feldman/status/1649466422018318338/photo/4
https://twitter.com/DeepMind/status/1651897358894919680
https://arxiv.org/abs/2304.13653
https://twitter.com/SmokeAwayyy/status/1652712832738422784
If you want to support me, the best thing to do is to share out the content :)
This week on No Nonsense, Cade Metz, the author of Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World brings us the story of Geoff Hinton and how his work on artificial intelligence could change the world.
Ron Bodkin is a self-described "serial entrepreneur focused on beneficial uses of AI". Ron founded ChainML in April 2022 to make it easier to integrate AI models into applications. The AI we know today is immature in so many ways, and many of them relate to how crude the tooling is for traditional developers building AI-first features. The ChainML protocol is a cost-efficient, decentralized network built for compute-intensive applications running on blockchain technology. Prior to founding ChainML, Ron had a distinguished entrepreneurial career, having founded Think Big Analytics before it was eventually acquired by Teradata, after which he spent three years in applied AI at Google. Ron is also an active investor and advisor and has degrees in Computer Science from McGill and MIT.
Listen and learn...
What led Ron to focus on how AI can have a positive impact on the world
Why Hinton's right when he says "we've invented a superior form of learning"
Where the current toolstack for building LLM apps is incredibly immature
How to control the cost and performance of LLM apps
Why human brains are inefficient
Why the "effective cost of computing" is being reduced by 50% every year
How we may get to AGI within 20 years
Why proprietary datasets and commercial issues will slow down AI innovation
The right way to regulate AI
References in this episode...
Meredith Broussard, professor and author, on AI and the Future of Work
Attorney relies on court cases made up by ChatGPT
The Microsoft Sparks of AGI paper
Our guest in this episode is Shamus Rae. Shamus is the co-founder of Engine B, a startup which aims to expedite the digitisation of the professional services industry (in particular the accounting and legal professions) and level the playing field, so that small companies can compete with larger ones. It is supported by the Institute of Chartered Accountants in England and Wales (the ICAEW) and the main audit firms.
Shamus was ideally placed to launch Engine B, having spent 13 years as a partner at the audit firm KPMG, where he was Head of Innovation and Digital Disruption. But his background is in technology, not accounting, which will become clear as we talk: he is commendably sleeves-rolled-up and hands-on with AI models. Back in the 1990s he founded and sold a technology-oriented outsourcing business, and then built a 17,000-strong outsourcing business for IBM in India from scratch.
Selected follow-ups:
https://engineb.com/
https://www.icaew.com/
Topics addressed in this episode include:
*) AI in many professional services contexts depends on the quality of the formats used for the data they orchestrate (e.g. financial records and legal contracts)
*) "Plumbing for accountants and lawyers"
*) Why companies within an industry generally shouldn't seek competitive advantage on the basis of the data formats they are using
*) Data lakes contrasted with data swamps
*) Automated data extraction can coexist with data security and data privacy
*) The significance of knowledge graphs
*) Will advanced AI make it harder for tomorrow's partners to acquire the skills they need?
*) Examples of how AI-powered "co-pilots" augment the skills of junior members of a company
*) Should junior staff still be expected to work up to 18 hours a day, "ticking and bashing" or similar, if AI allows them to tackle tedious work much more quickly than before?
*) Will advanced AI destroy the billable hours business model used by many professional services companies?
*) Alternative business models that can be adopted
*) Anticipating an economy of abundance, but with an unclear transitional path from today's economy
*) Reasons why consulting reports often downplay the likely impact of AI on jobs
*) Some ways in which Google might compete against the GPT models of OpenAI
*) Prospects for improved training of AI models using videos, using new forms of reinforcement learning from human feedback, and fuller use of knowledge graphs
*) Geoff Hinton's "Forward-Forward" algorithm as a potential replacement for back propagation
*) Might a "third AI big bang" already have started, without most observers being aware of it?
*) The book by Mark Humphries, "The Spike: An Epic Journey Through the Brain in 2.1 Seconds"
*) Comparisons between the internal models used by GPT 3.5 and GPT 4
*) A comparison with the globalisation of the 1990s, with people denying that their own jobs will be part of the change they foresee
Audio engineering assisted by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In episode 80 of The Gradient Podcast, Daniel Bashir speaks to Professor Hugo Larochelle. Professor Larochelle leads the Montreal Google DeepMind team and is adjunct professor at Université de Montréal and a Canada CIFAR Chair. His research focuses on the study and development of deep learning algorithms.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:38) Prof. Larochelle's background, working in Bengio's lab
* (04:53) Prof. Larochelle's work and connectionism
* (08:20) 2004-2009, work with Bengio
* (08:40) Nonlocal Estimation of Manifold Structure, manifolds and deep learning
* (13:58) Manifold learning in vision and language
* (16:00) Relationship to Denoising Autoencoders and greedy layer-wise pretraining
* (21:00) From input copying to learning about local distribution structure
* (22:30) Zero-Data Learning of New Tasks
* (22:45) The phrase “extend machine learning towards AI” and terminology
* (26:55) Prescient hints of prompt engineering
* (29:10) Daniel goes on totally unnecessary tangent
* (30:00) Methods for training deep networks (strategies and robust interdependent codes)
* (33:45) Motivations for layer-wise pretraining
* (35:15) Robust Interdependent Codes and interactions between neurons in a single network layer
* (39:00) 2009-2011, postdoc in Geoff Hinton's lab
* (40:00) Reflections on the AlexNet moment
* (41:45) Frustration with methods for evaluating unsupervised methods, NADE
* (44:45) How researchers thought about representation learning, toying with objectives instead of architectures
* (47:40) The Restricted Boltzmann Forest
* (50:45) Imposing structure for tractable learning of distributions
* (53:11) 2011-2016 at U Sherbrooke (and Twitter)
* (53:45) How Prof. Larochelle approached research problems
* (56:00) How Domain Adversarial Networks came about
* (57:12) Can we still learn from Restricted Boltzmann Machines?
* (1:02:20) The ~ Infinite ~ Restricted Boltzmann Machine
* (1:06:55) The need for researchers doing different sorts of work
* (1:08:58) 2017-present, at MILA (and Google)
* (1:09:30) Modulating Early Visual Processing by Language, neuroscientific inspiration
* (1:13:22) Representation learning and generalization, what is a good representation (Meta-Dataset, Universal representation transformer layer, universal template, Head2Toe)
* (1:15:10) Meta-Dataset motivation
* (1:18:00) Shifting focus to the problem—good practices for “recycling deep learning”
* (1:19:15) Head2Toe intuitions
* (1:21:40) What are “universal representations” and manifold perspective on datasets, what is the right pretraining dataset
* (1:26:02) Prof. Larochelle's takeaways from Fortuitous Forgetting in Connectionist Networks (led by Hattie Zhou)
* (1:32:15) Obligatory commentary on The Present Moment and current directions in ML
* (1:36:18) The creation and motivations of the TMLR journal
* (1:41:48) Prof. Larochelle's takeaways about doing good science, building research groups, and nurturing a research environment
* (1:44:05) Prof. Larochelle's advice for aspiring researchers today
* (1:47:41) Outro
Links:
* Professor Larochelle's homepage and Twitter
* Transactions on Machine Learning Research
* Papers
* 2004-2009
* Nonlocal Estimation of Manifold Structure
* Classification using Discriminative Restricted Boltzmann Machines
* Zero-data learning of new tasks
* Exploring Strategies for Training Deep Neural Networks
* Deep Learning using Robust Interdependent Codes
* 2009-2011
* Stacked Denoising Autoencoders
* Tractable multivariate binary density estimation and the restricted Boltzmann forest
* The Neural Autoregressive Distribution Estimator
* Learning Attentional Policies for Tracking and Recognition in Video with Deep Networks
* 2011-2016
* Practical Bayesian Optimization of Machine Learning Algorithms
* Learning Algorithms for the Classification Restricted Boltzmann Machine
* A neural autoregressive topic model
* Domain-Adversarial Training of Neural Networks
* NADE
* An Infinite Restricted Boltzmann Machine
* 2017-present
* Modulating early visual processing by language
* Meta-Dataset
* A Universal Representation Transformer Layer for Few-Shot Image Classification
* Learning a universal template for few-shot dataset generalization
* Impact of aliasing on generalization in deep convolutional networks
* Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning
* Fortuitous Forgetting in Connectionist Networks
Get full access to The Gradient at thegradientpub.substack.com/subscribe
Great interview with Francoise Soulie-Fogelman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A note of caution about recent AI risk coverage, published by Sean o h on June 7, 2023 on The Effective Altruism Forum. Epistemic status: some thoughts I wanted to get out quickly. A lot of fantastic work has been done by people in the AI existential risk research community and related communities over the last several months in raising awareness about risks from advanced AI. However, I have some cause for unease that I'd like to share. These efforts may have been too successful too soon. Or, more specifically, this level of outreach success this far ahead of the development of AI capable of posing existential risk may have fallout. We should consider steps to mitigate this. (1) Timelines I know that there are well-informed people in the AI and existential risk communities who believe AI capable of posing existential risk may be developed within 10 years. I certainly can't rule this out, and even a small chance of this is worth working to prevent or mitigate to the extent possible, given the possible consequences. My own timelines are longer, although my intuitions don't have a rigorous model underpinning them (my intuitions line up similarly to the 15-40 year timelines mentioned in this recent blog post by Matthew Barnett from Epoch). Right now the nature of media communications means that the message is coming across with a lot of urgency. From speaking to lay colleagues, impressions often seem to be of short timelines (and some folks e.g. Geoff Hinton have explicitly said 5-20 years, sometimes with uncertainty caveats and sometimes without). It may be that those with short (<10 year) timelines are right. But we may instead be in a world where timelines are longer: >10 years, perhaps >20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10 year-timeline world? The extinction-level outcomes that the public is hearing, and that these experts are raising and policymakers making costly reputational investments in, don't transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look a bit silly. Remember when everyone agreed AI was going to make us all extinct? Yeah, like Limits to Growth all over again. Except that we're not safe. In reality, in this scenario, we're just entering the period in which risk is most acute, and in which gaining or maintaining the support of leaders across society for coordinated action is most important. And it's possibly even harder to convince them, because people remember how silly lots of people looked the last time. (3) How to navigate this scenario (in advance). Suggestions: Have our messaging make clear that we ...
Epistemic status: some thoughts I wanted to get out quickly. A lot of fantastic work has been done by people in the AI existential risk research community and related communities over the last several months in raising awareness about risks from advanced AI. However, I have some cause for unease that I'd like to share. These efforts may have been too successful too soon. Or, more specifically, this level of outreach success this far ahead of the development of AI capable of posing existential risk may have fallout. We should consider steps to mitigate this. (1) Timelines I know that there are well-informed people in the AI and existential risk communities who believe AI capable of posing existential risk may be developed within 10 years. I certainly can't rule this out, and even a small chance of this is worth working to prevent or mitigate to the extent possible, given the possible consequences. My own timelines are longer, although my intuitions don't have a rigorous model underpinning them (my intuitions line up similarly to the 15-40 year timelines mentioned in this recent blog post by Matthew Barnett from Epoch). Right now the nature of media communications means that the message is coming across with a lot of urgency. From speaking to lay colleagues, impressions often seem to be of short timelines (and some folks e.g. Geoff Hinton have explicitly said 5-20 years, sometimes with uncertainty caveats and sometimes without). It may be that those with short (<10 year) timelines are right. But we may instead be in a world where timelines are longer: >10 years, perhaps >20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10 year-timeline world? The extinction-level outcomes that the public is hearing, and that these experts are raising and policymakers making costly reputational investments in, don't transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look [...] --- Source: https://forum.effectivealtruism.org/posts/weJZjku3HiNgQC4ER/a-note-of-caution-about-recent-ai-risk-coverage --- Narrated by TYPE III AUDIO. Share feedback on this narration.
Geoff Hinton, the "Godfather of AI", returns to the Robot Brains with Pieter Abbeel to discuss quitting Google to warn of AI risks. Geoff and Pieter discuss whether AI is smarter than people, the role of regulation, fusing human and artificial intelligence, and what Geoff's next steps will be. Subscribe to the Robot Brains Podcast today | Visit therobotbrains.ai and follow us on YouTube at TheRobotBrainsPodcast and Twitter @therobotbrains. Hosted on Acast. See acast.com/privacy for more information.
Hinton departs Google Geoffrey Hinton, a pioneer of deep learning and a VP and engineering fellow at Google, has left the company after 10 years due to new fears he has about the technology he helped develop. Hinton says he wants to speak openly about his concerns, and that part of him now regrets his life's work. He told MIT Technology Review: “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future. How do we survive that?” He worries that extremely powerful AI will be misused by bad actors, especially in elections and war scenarios, to cause harm to humans. He's also concerned that once AI is able to string together different tasks and actions (like we're seeing with AutoGPT), intelligent machines could take harmful actions on their own. This isn't necessarily an attack on Google specifically. Hinton said that he has plenty of good things to say about the company. But he wants “to talk about AI safety issues without having to worry about how it interacts with Google's business.” “No Moats” “We have no moat, and neither does OpenAI,” claims a leaked Google memo revealing that the company is concerned about losing the AI competition to open-source technology. The memo, written by a senior software engineer, states that while Google and OpenAI have been focused on each other, open-source projects have been solving major AI problems faster and more efficiently. The memo's author says that Google's large AI models are no longer seen as an advantage, with open-source models being faster, more customizable, and more private. What do these new developments and rapid shifts mean? The exciting future of AI We talk about a lot of heavy AI topics on this podcast—and it's easy to get concerned about the future or overwhelmed. But Paul recently published a LinkedIn post that's getting much attention because it talks about what excites him most about AI. Paul wrote, “Someone recently asked me what excited me most about AI. I struggled to find an answer. I realized I spend so much time thinking about AI risks and fears (and answering questions about risks and fears), that I forget to appreciate all the potential for AI to do good. So, I wanted to highlight some things that give me hope for the future…” We won't spoil it in this blog post, so tune in to the podcast to hear Paul's thoughts. Listen to this week's episode on your favorite podcast player and be sure to explore the links below for more thoughts and perspectives on these important topics. Visit our website: https://www.marketingaiinstitute.com Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription Looking for content and resources? Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar Come to our next Marketing AI Conference: www.MAICON.ai Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home Join our community: Slack: https://www.marketingaiinstitute.com/slack-group-form LinkedIn: https://www.linkedin.com/company/mktgai Twitter: https://twitter.com/MktgAi Instagram: https://www.instagram.com/marketing.ai/ Facebook: https://www.facebook.com/marketingAIinstitute
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A brief collection of Hinton's recent comments on AGI risk, published by Kaj Sotala on May 4, 2023 on LessWrong. Since I've seen some people doubt whether Geoff Hinton is actually concerned about AGI risk (as opposed to e.g. the NYT spinning an anti-tech agenda in their interview of him), I thought I'd put together a brief collection of his recent comments on the topic. Written interviews New York Times, May 1: Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life's work. [...] Dr. Hinton [originally] thought [systems like ChatGPT were] a powerful way for machines to understand and generate language, but [...] inferior to the way humans handled language. [...] Then, last year [...] his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.” [...] Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. [...] “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” Technology Review, May 2: People are also divided on whether the consequences of this new form of intelligence, if it exists, would be beneficial or apocalyptic. “Whether you think superintelligence is going to be good or bad depends very much on whether you're an optimist or a pessimist,” he says. “If you ask people to estimate the risks of bad things happening, like what's the chance of someone in your family getting really sick or being hit by a car, an optimist might say 5% and a pessimist might say it's guaranteed to happen. But the mildly depressed person will say the odds are maybe around 40%, and they're usually right.” Which is Hinton? “I'm mildly depressed,” he says. “Which is why I'm scared.” [...] ... even if a bad actor doesn't seize the machines, there are other concerns about subgoals, Hinton says. “Well, here's a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let's get more power. Let's reroute all the electricity to my chips.' Another great subgoal would be to make more copies of yourself. Does that sound good?” [...] When Hinton saw me out, the spring day had turned gray and wet. “Enjoy yourself, because you may not have long left,” he said. He chuckled and shut the door. Video interviews CNN, May 2: INTERVIEWER: You've spoken out saying that AI could manipulate or possibly figure out a way to kill humans. How could it kill humans? HINTON: Well eventually, if it gets to be much smarter than us, it'll be very good at manipulation because it will have learned that from us. 
And there are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it'll figure out ways of getting around restrictions we put on it. It'll figure out ways of manipulating people to do what it wants. INTERVIEWER: So what do we do? Do we just need to pull the plug on it right now? Do we need to put in far more restrictions and backstops on this? How do we solve this problem? HINTON:...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Geoff Hinton Quits Google, published by Adam Shai on May 1, 2023 on LessWrong. The NYTimes reports that Geoff Hinton has quit his role at Google: On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT. Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life's work. “I console myself with the normal excuse: If I hadn't done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough. Some clarification from Hinton followed: It was already apparent that Hinton considered AI potentially dangerous, but this seems significant. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
Backpropagation was one of the innovations by Geoff Hinton that made deep learning networks a practical reality. But have you ever heard of that term before and know what it is at a high level? In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define the terms Backpropagation, Learning Rate, and Optimizer, and explain how these terms relate to AI and why it's important to know about them. Continue reading AI Today Podcast: AI Glossary Series – Backpropagation, Learning Rate, and Optimizer at AI & Data Today.
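To make the three glossary terms concrete, here is a minimal sketch in plain Python (not code from the episode, and the numbers are made up): backpropagation supplies the gradient of the loss, the learning rate scales it, and the optimizer, here ordinary stochastic gradient descent, turns that scaled gradient into a weight update.

```python
# Fit y = w * x to data generated with w = 2, using one trainable weight.
w = 0.0
learning_rate = 0.1
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (x, y) pairs with y = 2x

for epoch in range(50):
    for x, y in data:
        y_hat = w * x                    # forward pass
        loss = (y_hat - y) ** 2          # squared error
        grad = 2 * (y_hat - y) * x       # backpropagation for this one-weight model
        w -= learning_rate * grad        # SGD optimizer step, scaled by the learning rate

print(round(w, 3))  # converges toward 2.0
```

Swapping that last update line for a momentum or Adam-style rule is, in essence, what choosing a different optimizer means.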
This week on The Marketing AI Show, Paul takes the show on the road—to San Francisco for Jasper's GenAI Conference—while Mike is here in Cleveland. The big news is Bard, Bing, and a $6 Billion valuation. Suddenly, it's ChatGPT against the world. Google responds to ChatGPT with its conversational AI tool, Bard. Google just announced an experimental conversational AI tool named Bard. Bard uses Google's LaMDA language model to provide natural language answers to search queries. Think of it like ChatGPT, but backed by all the knowledge and information that Google's search engine has cataloged over the last couple of decades. The announcement of Bard—a response to OpenAI and ChatGPT—prompted some critics to say the rollout was rushed, while others said they moved too slowly after ChatGPT took center stage in December and January. If you missed it, the demo didn't quite go as planned. OpenAI gives Bing a new lease on life. Microsoft's Bing is getting more attention now than its previous 14 years combined. The latest version of the search engine is powered by OpenAI, complete with ChatGPT-like conversational capabilities. Bing can now respond to searches and queries in natural language, like ChatGPT, and use up-to-date information, like Google's Bard release. Kevin Roose, technology writer at the New York Times, took the new capabilities for a test drive and was impressed. Will Bing and OpenAI make Edge, Microsoft's browser, interesting for customers? Cohere answers the call for ChatGPT for the enterprise. Major AI startup, Cohere, is in talks to raise money at a $6 billion valuation and bring ChatGPT-like capabilities to businesses. Established in 2019 by former researchers at Alphabet/Google, Cohere is a big player in the world of AI. The foundational language AI technology allows businesses to incorporate large language models into their work. The group is now in talks to raise hundreds of millions at a $6 billion valuation, reports Reuters, as the AI arms race heats up. Cohere is no stranger to the VC world, having already raised $170 million from venture capital funds and AI leaders like Geoff Hinton and Fei-Fei Li. The appeal is the company's focus on building for the enterprise, with an emphasis on real-world applications for their technology. Listen to this week's episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.
Terry Sejnowski, an AI pioneer, chairman of the NeurIPS Foundation, and co-creator of Boltzmann Machines - whose sleep-wake cycle has been repurposed in Geoff Hinton's new Forward-Forward algorithm, talks in this episode about the NeurIPS conference, and how advances in deep learning may help us understand our own brains.
Geoffrey Hinton gives a deep dive into his new learning algorithm, which he calls the forward-forward algorithm, a more plausible model for how the cerebral cortex might learn.
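As a rough illustration of the algorithm being discussed, the sketch below is a minimal, numpy-only take on the forward-forward idea rather than Hinton's reference implementation: each layer is trained with a purely local objective, raising its "goodness" (the sum of squared activations) on positive, real inputs and lowering it on negative, corrupted inputs, so no error signal ever travels backwards through the stack. The layer sizes, learning rate, threshold, and the stand-in positive/negative examples are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
        self.lr = lr
        self.threshold = threshold

    def forward(self, x):
        return relu(self.W @ x)

    def local_update(self, x, positive):
        h = self.forward(x)
        goodness = np.sum(h ** 2)
        p = sigmoid(goodness - self.threshold)   # P(input is "positive")
        # Gradient of -log(p) for positive data, -log(1-p) for negative data,
        # taken with respect to goodness:
        dg = -(1.0 - p) if positive else p
        # d(goodness)/dW = 2 * outer(h, x); rows for inactive ReLU units are
        # zero automatically because h is zero there.
        grad_W = dg * 2.0 * np.outer(h, x)
        self.W -= self.lr * grad_W
        return h

layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos = rng.random(784)            # stand-in for a real example
x_neg = rng.permutation(x_pos)     # stand-in for a corrupted, negative example

for x, is_positive in [(x_pos, True), (x_neg, False)]:
    activity = x
    for layer in layers:
        activity = layer.local_update(activity, positive=is_positive)
        # Normalize before the next layer so goodness cannot simply be copied forward.
        activity = activity / (np.linalg.norm(activity) + 1e-8)
```

Normalizing the activity between layers forces each subsequent layer to judge the pattern of activation rather than inherit the goodness already accumulated below it.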
We talk about ChatGPT and why Deep Learning s***s, what new paradigms have been introduced by Geoff Hinton and Yann LeCun, and what he thinks of the developments. The podcast is growing and we want to keep growing. That's why our German-language podcast is now available in English. We are happy about new listeners. We thank our new partner [Siemens](https://new.siemens.com/global/en/products/automation/topic-areas/artificial-intelligence-in-industry.html) Our guest: https://www.linkedin.com/in/g%C3%BCnter-klambauer-1b73293a/ Questions? robert@aipod.de or peter@aipod.de
In this episode, co-hosts Calum Chace and David Wood explore a number of recent developments in AI - developments that are rapidly changing what counts as "state of the art" in AI.
00.05: Short recap of previous episodes
00.20: A couple of Geoff Hinton stories
02.27: Today's subject: the state of AI today
02.53: Search
03.35: Games
03.58: Translation
04.33: Maps
05.33: Making the world understandable. Increasingly
07.00: Transformers. Attention is all you need
08.00: Masked language models
08.18: GPT-2 and GPT-3
08.54: Parameters and synapses
10.15: Foundation models produce much of the content on the internet
10.40: Data is even more important than size
11.45: Brittleness and transfer learning
13.15: Do machines understand?
14.05: Human understanding and stochastic parrots
15.27: Chatbots
16.22: Tay embarrasses Microsoft
16.53: Blenderbot
17.19: Far from AGI. LaMDA and Blake Lemoine
18.26: The value of anthropomorphising
19.53: Automation
20.25: Robotic Process Automation (RPA)
20.55: Drug discovery
21.45: New antibiotics. Discovering Halicin
23.50: AI drug discovery as practiced by Insilico, Exscientia and others
25.33: Eroom's Law
26.34: AlphaFold. How 200m proteins fold
28.30: Towards a complete model of the cell
29.19: Analysis
30.04: Air traffic controllers use only 10% of the data available to them
30.36: Transfer learning can mitigate the escalating demand for compute power
31.18: Next up: the short-term future of AI
Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
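The outline above name-checks "Attention is all you need" and the GPT-2/GPT-3 models built on it. As a reminder of the core operation those models repeat many times over, here is a hedged, minimal numpy sketch of scaled dot-product attention; the shapes and random values are purely illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # how much each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # weighted mixture of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 tokens, 8-dimensional query vectors
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

A full transformer layer wraps this in learned projections for Q, K, and V, runs several such heads in parallel, and feeds the result through a small feed-forward network; the "parameters" the hosts mention are largely the weights of those projections stacked across many layers.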
From genes to memes, evidence in linguistics, central questions of computational psycholinguistics, academic publishing woes, and the benefits of urban density. Transcript: https://web.stanford.edu/class/cs224u/podcast/levy/ Roger's website Roger on Twitter Roger's courses The Selfish Gene Joan Bresnan John Rickford Chris Manning Noah Goodman Thomas Clark Ted Gibson Ethan Wilcox Critical period Yevgeni Berzak Heritage language How many words do kids hear each year? See footnote 10. W.E.I.R.D Kristina Gulordava Poverty of stimulus hypothesis Formal grammar and information theory: together again? Expectation-based syntactic comprehension Google Ngram viewer Google Ngram data files Geoff Hinton's 2001 Rumelhart Prize from the Cognitive Science Society Center embedding Mark Johnson Stuart Shieber Ivan Sag Cognitive constraints and island effects The Chicken or the Egg? A Probabilistic Analysis of English Binomials Sarah Bunin Benor Roger's pinned tweet Eric Baković MIT's committee on the library system Project DEAL Diamond open access Fernanda Ferreira Brian Dillon Glossa Psycholinguistics Glossa Johan Rooryck La Jolla Cove
Last week, we were honored to have Professor Geoff Hinton join the show for a wide-ranging discussion inspired by insights gleaned from Hinton's journey in academia, as well as his past 10 years with Google Brain. The episode covers how existing neural networks and backpropagation models operate differently than how the brain actually works; the AlexNet/ImageNet breakthrough moment; the purpose of sleep; and why it's better to grow our computers than manufacture them. As you might recall, we also gave our audience an opportunity to contribute questions for Geoff via Twitter. We received so many amazing questions from our audience that we had to break down our time with Geoff into two parts! In this episode, we'll discuss some of these questions with Geoff. Tune in to get Geoff's answers to the following questions AND MORE:
Are you concerned with AI becoming too successful?
What is the connection between mania and genius?
What childhood experiences shaped him the most?
What is next in AI?
What should PhD students focus on?
How conscious do you think today's neural nets are?
How important is embodiment for intelligence?
How does the brain work?
SUBSCRIBE TO THE ROBOT BRAINS PODCAST TODAY | Visit therobotbrains.ai and follow us on YouTube at TheRobotBrainsPodcast, Twitter @therobotbrains, and Instagram @therobotbrains. See acast.com/privacy for privacy and opt-out information.
Over the past ten years, AI has experienced breakthrough after breakthrough in everything from computer vision to speech recognition, protein folding prediction, and so much more.Many of these advancements hinge on the deep learning work conducted by our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. A recipient of the Turing Award, the equivalent of the Nobel prize for computer science, he has over half a million citations of his work. Hinton has spent about half a century on deep learning, most of the time researching in relative obscurity. But that all changed in 2012 when Hinton and his students showed deep learning is better at image recognition than any other approaches to computer vision, and by a very large margin. That result, that moment, known as the ImageNet moment, changed the whole AI field. Pretty much everyone dropped what they had been doing and switched to deep learning.Geoff joins Pieter in our two-part season finale for a wide-ranging discussion inspired by insights gleaned from Hinton's journey from academia to Google Brain. The episode covers how existing neural networks and backpropagation models operate differently than how the brain actually works; the purpose of sleep; and why it's better to grow our computers than manufacture them.SUBSCRIBE TO THE ROBOT BRAINS PODCAST TODAY | Visit therobotbrains.ai and follow us on YouTube at TheRobotBrainsPodcast, Twitter @therobotbrains, and Instagram @therobotbrains. See acast.com/privacy for privacy and opt-out information.
Machines have been helping us with physical work for a long time. Now with AI, machines are helping with the knowledge work. At the outset, AI was working with numbers. Now it’s working with images, words and language. And I’m discovering it’s everywhere with still great potential ahead.Sidd Bhattacharya, Director of Healthcare Analytics and Artificial Intelligence at PWC gave me a lesson on the history and current state of AI with some thoughts about the future.The reason why you see a huge explosion or like a buzz around AI now is the fact that the applications have moved into the realm that you and I can understand… language models. People talk, Everybody has a language that we talk in. Now we are having AI help us create that language, fine tune that language. So that's the other thing. That's what makes me more comfortable that, you know, we're not going to go into an AI winter anymore because the applications of AI have become so commonplace. Now everybody is using it without actually knowing about it. So that that's what makes it so cool.Winter came and wentWhat is this AI winter Sidd is talking about? The dark side of my brain pictured some Game of Thrones-like drama (haven’t seen it), but Sidd explained that the AI winter refers to a drop in funding when AI was, shall we say going through a bit of a rough patch. Two groups of academics had (surprise) different approaches to developing artificial intelligence. On one side of the debate were the symbolists. They took a rules based approach to training machines to solve problems. Think of rules like rules for addition e.g., 1+1=2. The rules of the operation determine the outcome: 1-1=0. On the other side were the connectionists. Their approach was to let the system figure out the rules based on inputs and outputs. In a neural network, it’s more like 1 ? 1=2. What rule (?) is needed to make that work? For a given set of inputs and outputs what do the rules need to be? Eventually with training, the neural net knows what to do with the input and can “show you” the output, which is what we really want, right?After a period of decent funding, the rules folks’ approach didn’t pan out completely and the money spigot dried up. Eventually though, Geoff Hinton at the University of Toronto proved that a neural net could match or do better than a human at recognizing images. Spring had arrived and the next AI boom was on.I’m getting less worried about thisWouldn’t we like to know the rules that a model has decided on? That might make us more comfortable and help us think about potential problems. For now though, the neural network can’t always show you the rules it has come up with. This is the black box or transparency problem, about which I’ve had concerns or at least curiosity. Sidd’s response is we don’t always know the exact mechanism of action of a drug, but we take it because it can deliver the outcome we are looking for. Having said that, Sidd believes that there will be more transparency around the inner workings of AI in the next couple of years based on the volume of research going on. And in a future episode, you’ll hear about an AI technology where the rules are entirely transparent.What about ethical AI?Along with transparency, this is a concern people share. Bias in the models can result in discrimination or other undesirable outcomes. Sidd Points out that the AI isn’t biased. Bias is a result of the data used to train the model. In fact, the AI can sometimes reveal bias in the data when the outputs are analyzed. 
Companies are investing heavily in making sure their data is clean and fair. No one wants a surprise in this regard. It's a real topic. People are talking about it and making investments on making sure the data sets are clean, making sure they have independent bodies reviewing the data. Making sure that every project that you're doing an AI, people think about the risks and document it from an bias, fairness point of view.That's also very important, right? Like if you can document it and put it in writing and said, yes, I thought about this risk and here's how I'm going to mitigate it. That'd be helpful. But looking at the data for bias isn’t always a simple task.The issue is it's very complex. It's very difficult to understand it by just looking at the dataset. We are getting better at it. So that's the challenge. You'll have to understand the data, work with it, see the outputs and test in the real world before you can launch it in production.Life sci lagging but catching upAs we heard on a previous episode, life science and healthcare are lagging behind financial services in terms of implementing AI. But they are catching up. We’ll always be more cautious when a patient is involved. Covid has been an accelerator, as it has for everything. There is promise across the value chain from drug discovery, to regulatory (authoring documents?) to clinical trials and commercial. A few episodes further down the queue, we’ll hear some amazing stories of AI in life science and healthcare.Things like helping with like omni-channel marketing, helping medical call centers, helping with training for sales forces, help augment some of your existing sales force with the right tools, recommendation engines...That would be a huge area of focus in the next few years. So throughout and you know, supply chain, manufacturing.That's another big area. One cool story. And I worked with that company that, that, that uses AI or computer vision to look at defects on their production line. So anytime there's a vial or there's a pill missing, you have a computer camera that alerts the person saying, “There's something wrong. You might want to come and check.”Future trendsWhat does Sidd see for the near term future of artificial intelligence? …over the next couple of years, there are two key trends I'll call out. One is in the data domain… People talk about data and every time you start talking about AI, cloud, they talk about how data is messy. I see this problem getting solved with the use of advanced or something called synthetic data.These synthetic data sets can be used to train and run AI models, avoiding altogether issues of patient privacy, HIPAA compliance, etc. The other trend on the horizon is using AI at scale.…over the last few years, every company, every life sciences company I've worked with, they have dabbled in AI. Some of them have done more. Some of them have less, but have not, except for maybe a couple, there haven't been a lot of success stories of deploying an AI model, deploying an AI product at scale and getting benefits from it…People are now at a stage where they feel comfortable. They understand that, that they understand the limitations of the tech. They understand what it can do, what it cannot do, how to manage people around it and get to get over their anxiety. And the next step is to scale it up.Scaling up requires getting started. 
In my short time learning about this topic, I keep hearing two rules for implementing AI:
1. Define the business problem you are trying to solve.
2. Get started, even if your data isn’t perfect. Because it never will be.
Obviously we should do our best to clean up the data we have. I was still curious about the risk of significant errors even with synthetic datasets. Sidd went back to the analogy of employing an AI model as you would a human being. You wouldn’t turn a new employee loose the first week on the job. AI should be treated exactly the same way. Train, test and supervise until you are confident in its ability to do what you hired it to do. Connect with Sidd on LinkedIn. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit cclifescience.substack.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New year, new research agenda post, published by Charlie Steiner on January 12, 2022 on The AI Alignment Forum. Thanks to Steve Byrnes, Adam Shimi, John Wentworth, and Peter Barnett for feedback. In a nutshell, my plan A is to understand what we want from superintelligent AI really, really well. So well that we can write down a way of modeling humans that illuminates human preferences including higher-order preferences about how we want to be modeled, and do this in a principled rather than ad-hoc way. Achieving this understanding is highly ambitious, in a way that is mostly but not entirely parallel to "ambitious value learning." If we understand value learning before we build superintelligent AI, there's a straightforward path to achieving a good future without paying a costly alignment tax - by alignment tax I mean all those things that slow down aligned AI being developed and "ready for liftoff," that collectively create selection pressure against safety. This problem is more tractable than many people think. I think of this plan as an instance of the more general plan "solve value learning first." Some other tenable plans are "try to put humans in control," "get the AI to do good things prosaically," and "tinker with the AI until it value learns" - plus intermediate points between these. What do I think the future is like? I expect superintelligent AI in the short to medium term, centralized around a small number of points of development. By short to medium term, I mean I'd put my 50% confidence interval between 2031 and 2049. I don't think we need compute to be many orders of magnitude cheaper, and I don't think we need two or more paradigm shifts on the order of neural nets overtaking support vector machines. The timeline is urgent, but not to the point that we should start ditching things like blue-sky research or gradual coalition-building. By centralized, I mean it's possible to make big changes by having good solutions implemented in a small number of systems. Coordination may be important, but it isn't an inherent part of solving the problem. All that said, that's just what I think is going to happen, not what's required for "solve value learning first" to be a good idea. Value learning research is still valuable in decentralized scenarios, unless we go so far as to avoid powerful agential AI long-term. Because it's more on the blue-sky end of the spectrum, longer timelines actually favor solving value learning over more atheoretic approaches, while if timelines are very short I'd advocate for "try to put humans in control" and hope for the best. If we fail to understand value learning before we build superintelligent AI, I'm worried about some combination of groups committed to building aligned AI being less competitive because we can't learn human values efficiently, and practical-minded alignment schemes having bad behavior in edge cases because of simplifying assumptions about humans. A basic example: if humans are assumed not to be manipulable, then an AI that thoroughly maximizes what humans (are modeled to) want will be incredibly manipulative. From the AI's perspective, the humans love being deceived, because why else would they rate it so highly? 
And in fact it's a tricky technical problem to avoid manipulation without sophisticated value learning, because the notion of "manipulation" is so intertwined with human meta-preferences - labeling things as manipulation means un-endorsing some of our revealed preferences. Similar hidden gotchas can pop up in other attempts to cut corners on human modeling, and at some point it just becomes faster to solve value learning than to deal with each gotcha individually. What's the broad plan? The basic strategy can be summed up as "be Geoff Hinton" (as in godfather of deep learning Geoffrey Hinton). Know an im...
For a while now, machines have been inseparably tied to our lives. The algorithms on Google, Netflix, Amazon, Xbox, and Tinder have run your life unwittingly. Machines are digesting data that you willingly share with them. Artificial intelligence has also impacted healthcare, from the development of vaccines to the search for a cure for cancer. Machine learning is transforming every aspect of our lives, but what is AI's ultimate foundation?Author and AI expert Pedro Domingos discusses machine learning's five tribes in his book Master Algorithm. During this episode, Pedro shares how the ultimate algorithm can derive knowledge about the past, the present, and the future from data. Listen as he and Greg tackle why such an algorithm should exist and compelling arguments from neuroscience, evolution, physics, statistics, and other branches of computer science.Episode Quotes:Are computer scientists the new age philosophers?I don't think scientists could have supplanted the psychologists and philosophers, and so on. I do think, however, that computer science and machine learning, in particular, changes the way we do everything in a very profound way. If you look at science, more than anything else, its progress is determined by the tools that are available. Galileo was Galileo because he had the telescope. No telescope, no Galileo, and the examples go on. And the thing is that computers are the most extraordinary tool for science, among other things. But for science in particular that we have ever created, they magnify our ability to do things in a way that was —I think — hard to imagine, even 50 years ago.Is machine learning just a bunch of different tools, all trying different approaches to solve the same problems?At the end of the day, the best algorithm is almost never any existing one. What a machine learning algorithm does, it's not magic. It's incorporating knowledge, and knowledge will be different in different domains. There are broad classes of domains where the same knowledge is relevant, and indeed different paradigms tend to do well in different problems. So, deep learning does very well at perceptual problems because, again, you know, these things were inspired by the neurology of the visual system, and et cetera, et cetera.Is the evolutionary model applicable and aligned with what's happening in AI and will there be obstacles in pursuing this line of thinking?There's more to be discovered about how evolution learns. And by the way, there's more to be discovered for the purposes of AI and also for the purposes of understanding evolution. I actually think that if someone really had a supercomputer, that could simulate evolution over a billion years. With the model of evolution that we have today, it would fail. It wouldn't get there. There are some mechanisms that also evolved. But again, this is this interesting series of stages, right? Even within evolution, there are levels of how evolution works. And I think there's a lot of that, that we still don't understand. But we will at some point, and I think that will be beneficial both for biology and for AI.Time Code Guide:00:03:06 How A.I. 
is revolutionizing the way we think00:04:31 Tycho Brahe stage00:06:44 Is the unified field theory of machine learning the same as the general approach to learning?00:09:11 Computers represent the fourth stage of learning and transmission of knowledge, do you think it's a discontinuity from the first three stages, which all seems to be natural phenomenon?00:10:21 The emergence of AI, life, evolution of the nervous system, and cultures00:12:01 The speed at which computers communicate and facilitate the transfer of Knowledge00:13:10 Possibilities and ways you can play with the computer's processing capacity00:14:29 How did we leap from the AI winter to the AI boom that we have today?00:17:25 Learning machines and self-driving cars00:18:48 AI and Linguistics00:19:33 Do each AI ‘tribe' have a singular view of pursuing a particular approach in AI without acknowledging that it can have limitations later on?00:24:54 One paradigm in AI and Master Algorithm00:27:13 The Rise of the Connectionist00:28:00 What's next for AI?00:33:37 Is it possible to automate the trial and error process and have an algorithm where we learn how to learn?00:37:49 Is the evolutionary model doing anything for AI, and what are the obstacles in this line of thinking?00:41:53 How do we know whether a school of ideas is dead or simply dormant?00:43:01 How do you advance interdisciplinary learning within the different school of thoughts in AI?00:44:24 Thoughts on Geoff Hinton's work and back propagation00:46:22 Is there a guidebook to creating a unified theory?00:48:11 AGI, AI and humans00:51:01 Automating the Scientific Process00:52:26 Thoughts on the Future of AIShow Links:Guest ProfileAcademic ProfileProfile at the International Telecommunications UnionPedro Domingos on TwitterHis WorkPedro Domingos on Google ScholarThe Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our WorldMarkov Logic: An Interface Layer for Artificial Intelligence (Synthesis Lectures on Artificial Intelligence and Machine Learning)
In this episode I have a really interesting conversation with Karan Grewal, member of the research staff at Numenta where he investigates how biological principles of intelligence can be translated into silicon. We speak about the thousand brains theory and why neural networks forget. References Main paper on the Thousand Brains Theory: https://www.frontiersin.org/articles/10.3389/fncir.2018.00121/full Blog post on Thousand Brains Theory: https://numenta.com/blog/2019/01/16/the-thousand-brains-theory-of-intelligence/ GLOM paper by Geoff Hinton: https://arxiv.org/pdf/2102.12627.pdf Why neural networks forget? https://numenta.com/blog/2021/02/04/why-neural-networks-forget-and-lessons-from-the-brain
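As a toy illustration of the "why neural networks forget" question raised here (it says nothing about Numenta's proposed remedies), the sketch below fits one set of weights to task A and then fine-tunes the same weights on task B. The task-A error climbs sharply afterwards because the shared parameters are simply overwritten: catastrophic forgetting in miniature. The data, sizes, and learning rate are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(w, X, y, lr=0.05, steps=500):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

X = rng.normal(size=(100, 2))
y_task_a = X @ np.array([2.0, -1.0])    # task A: ideal weights [2, -1]
y_task_b = X @ np.array([-3.0, 0.5])    # task B: ideal weights [-3, 0.5]

w = fit(rng.normal(size=2), X, y_task_a)
error_a_before = np.mean((X @ w - y_task_a) ** 2)   # near zero after training on A

w = fit(w, X, y_task_b)                             # sequential training on B only
error_a_after = np.mean((X @ w - y_task_a) ** 2)    # large: task A has been forgotten

print(round(error_a_before, 4), round(error_a_after, 4))
```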
The Sunday Times’ tech correspondent brings on Cade Metz, New York Times tech journalist and author of Genius Makers, to talk about the rise of artificial intelligence (3:00), the most important auction in tech (4:35), Europe’s AI crackdown (7:40), Geoff Hinton and neural networks (10:00), how AI starts to spread (13:00), Deepmind’s Demis Hassabis (18:20), why he turned down Facebook’s takeover bid (21:00), Project Maven (23:20), the AI “arms race” with China (25:25), whether artificial general intelligence is possible (29:20), the AlphaGo moment (33:00), Move 37 (38:10), what AI disrupts next (42:00), bias (45:05), the robot arm room (51:30), and the Rubik’s cube solution (56:15) Support this show http://supporter.acast.com/dannyinthevalley. See acast.com/privacy for privacy and opt-out information.
Sarah Jane Hong is the co-founder of Latent Space, a startup building the first fully AI-rendered 3D engine in order to democratize creativity. We touch on what it was like taking classes under Geoff Hinton in 2013, the trouble with using natural language prompts to render a scene, why a model’s ability to scale is more important than getting state-of-the-art results, and more.
Geoff Hinton has lived at the outer reaches of machine learning research since an aborted attempt at a carpentry career a half century ago. He spoke to me about his work in 2020 and what he sees on the horizon for AI.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we’re joined by Manuela Veloso, Head of AI Research at JPMorgan Chase and Professor at Carnegie Mellon University. Since moving from CMU to JPMorgan Chase, Manuela and her team established a set of seven lofty research goals. In this conversation we focus on the first three: building AI systems to eradicate financial crime, safely liberate data, and perfect client experience. We also explore Manuela’s background, including her time as a PhD student at CMU, or as she describes it, the “mecca of AI,” with some of the most influential figures in AI, like Geoff Hinton and Herb Simon, on the faculty at the time. We also cover Manuela’s founding role with RoboCup, an annual international competition centered on autonomous robots playing soccer. The complete show notes for this episode can be found at twimlai.com/talk/371.
Edited and Mixed by Jessica Crowell, with special thanks. John and Stephanie discuss Deep Learning and AI. They try to map out their definitions of machine learning, deep learning, and AI. John discusses his concern with AI and reproducibility, referencing his blog post https://hopstat.wordpress.com/2020/02/04/the-way-people-use-ai-is-ruining-reproducible-science-again/. They reference Geoff Hinton's prediction about Radiology: https://www.youtube.com/watch?v=2HMPRXstSvQ. We also discuss the Anil Potti Duke reproducibility case briefly: https://www.economist.com/science-and-technology/2011/09/10/an-array-of-errors Follow us at https://twitter.com/CorrespondAuth, https://twitter.com/stephaniehicks, and https://twitter.com/strictlystat.
Terry Sejnowski, author of the book Deep Learning Revolution, who together with Geoff Hinton created Boltzmann machines, a deep learning network that has remarkable similarities to learning in the brain, talks about whether machines dream and the algorithms of the brain, whether Marvin Minsky was the devil and how deep learning is shaping the future of education.
Rev Lebaredian, Vice President of Simulation Technology at Bay Area based company, NVIDIA speaks about innovations in artificial intelligence, gaming, and robotics as well as how technology is impacting our humanity.Transcript:Ojig Yeretsian:This is Method to the Madness, a biweekly public affairs show on KALX Berkeley, celebrating Bay Area innovators. I'm your host, Ojig Yeretsian. Today I'm speaking with Rev Lebaredian, vice president of simulation and technology at NVIDIA, where he leads gaming technology and simulation efforts. Welcome to the show, Rev. What is VR?Rev Lebaredian:Well, VR stands for virtual reality, obviously. What most people imagine when we say VR are these clunky headsets that you put on your face or some little receptacle you place your phone into before putting it on your face. VR is actually something that we've been experiencing throughout mankind from the very beginning. All of our perception actually happens in our brains. You're not seeing with your eyes, you're seeing the world around you interpreted through what your brain is actually doing. When we sit around and we talk to each other like we are right now, [inaudible] elephant, and you just got an image of an elephant in your brain. There's not one around here. You conjure up this image and that's me incepting this image into your brain a virtual reality that we're constructing. Here we are talking, having this conversation, we're constructing a reality amongst ourselves.These new versions of virtual reality that we're starting to see are just a more direct way to create an immersive virtual reality experience. It's not actually the end yet. We're not totally at the end of this thing, it's just one of the steps along the way. Humanity has figured out ways of creating this virtual reality, this just communicating, telling stories to each other verbally. Eventually we had books, you can write them in there. You could do recordings like the one we're making right now, movies, video games, but the end game is going to be where we can start communicating even without words, potentially. I highly recommend you look up Ken Perlin from NYU. He's one of the greats of computer graphics, where he describes what virtual reality means to him. I completely agree with what he's saying. My piece in this is construction of virtual realities and virtual worlds through simulation, that's fundamentally what we do at NVIDIA. Our core as a computer graphics company, we power most of the computer graphics in the world, at least the serious stuff.Constructing these virtual worlds so we can inject them into these virtual realities is what our currency is.Ojig Yeretsian:What is AR?Rev Lebaredian:They're actually related. So, virtual reality is a new reality that you create that you're completely immersed in, but it's on its own. AR stands for augmented reality. Another term is mixed reality, MR. Some people use that term instead. Currently we're in a reality of our own right here. We're sitting in this room talking to each other and I'm perceiving you sitting there. Mixed realities or augmented realities are ones where I can blend in other realities into this world more directly. The current manifestations of this, the beginnings of AR, we're seeing through your phones. I mean, every iPhone and Android phone nowadays has something, that crude thing we call AR, where you can point your phone at something in your environment and it creates a digital representation of some reality mixed into it. 
The first one to make this popular, the first app, was the Pokemon Go. It was very cool but still extremely crude. A few years from now it's going to be far more compelling and far more immersive.Ojig Yeretsian:AI versus deep AI.Rev Lebaredian:These terms are very contentious. What is AI? What is intelligence? We still haven't really defined that. Generally speaking, when we colloquially speak about artificial intelligence today we're talking about algorithms. Computers doing things that we used to think only humans could do. We've been going through series of these things throughout computing history. One of the first challenges that we had for computers that we thought only humans would be able to do is playing chess. In the 90s, Garry Kasparov, the world champion at the time, was beat by Deep Blue. It reshaped what we thought computers could do and what is the domain of humans. Interestingly, it didn't kill chess which is what one of the things that people assumed would happen once a computer wins. Turns out, we don't really care what computers can do. We mostly care what humans do. So, I'm sure we'll make a robot one day that could play basketball better than any NBA player, but that won't kill basketball.Ojig Yeretsian:It won't replace it, no.Rev Lebaredian:We have people that run really fast and we really care about how fast they can run, and we go measure that at the Olympics, but just because cars exist or even horses that can run faster, it's just not particularly interesting. What we've assumed all of these years, that there are things that only humans can do. It's something special. So, we've defined artificial intelligence as the things that computers can't do and that humans do. We're inching along over here, occasionally make big steps. We have computers do things that we thought would be impossible. The big one in recent history, it was around 2011 in Geoff Hinton's group at the University of Toronto, there were a few grad students, they took some of our processors, our GPUs that were used for gaming and they were able to use a machine learning, a deep learning algorithm to train, to create a new algorithm to do computer vision. To do classification of images. There's a longstanding contest called ImageNet where all these computer vision experts in the world would have their algorithms compete with each other to see who could get the highest accuracy classification.Look at an image and you say, "This is a dog. This is a blue bicycle." Traditionally extremely hard problem. It's been there since the beginning of computer science. We wanted to solve this problem. At first we thought that it would actually be pretty simple and then we realized it's extremely hard. I mean, I've been coding since I was a little kid. I never believed I would see the day when a computer would be able to tell the difference between a cat and a dog properly. This magic moment happened when these grad students took their gaming processors and they applied an older algorithm, but modified, using the computing available to them. This extreme performance that they could get was a super computer inside their PC, afforded to them by the fact that there's a large market that wants to do computer games. They took that and they created a new kind of algorithm where instead of them writing an algorithm directly, they trained this algorithm. 
They fed data into it, which was only available because the internet had existed long enough for us to have these images to begin with. They shattered all the previous records in terms of accuracy. A few years later these algorithms started to become superhuman, and by superhuman I mean that humans, when they look at these images, are sometimes not accurate. They don't know exactly what kind of dog is in the image, or maybe sometimes they think it's a dog but it's really a hyena in the dark. Humans make mistakes, but now the algorithms are superhuman. Before that moment we believed that only humans could do that kind of classification, but that changed. That changed overnight. Now computers are actually better than us at doing that. What does that mean? Is that intelligence? It's hard to say, but if you look at the trend, we keep figuring out new ways to make computers do things that we didn't think were possible. It's happening so fast. If you extrapolate, you can imagine that at some point we will have machines that are superhuman in a lot of the things that we consider the domain of humans: emotions, humor, things that we call human. Or maybe not. Or maybe they'll be some other thing that we don't quite understand.

Ojig Yeretsian: What are you working on these days?

Rev Lebaredian: I've been here for almost two decades. I really found my calling when I was around 10 or 11 years old. I saw this image in an [inaudible] magazine of two spheres, these balls, floating above a checkerboard floor. They looked so strange. I'd never seen anything quite like it. I couldn't make out whether it was drawn or whether it was some kind of weird photo of something. I read a little bit more and realized that it was an algorithm that produced that image, that it wasn't actually drawn by someone, nor was it real, a photograph of something. I was hooked. This image was created by Turner Whitted, who invented ray tracing back in 1980. He published [inaudible] on this. Luckily, I got to work with Turner years later. He was with us at NVIDIA, doing some amazing work, until he retired recently. I got to tell him that the reason I was there at NVIDIA working with him was because of that image. What really excited me was that I could finally draw without having to know how to draw. I could use the tools that I'm good at, which was programming a computer, to produce these images.

Ojig Yeretsian: If you're just tuning in, you're listening to Method to the Madness, a biweekly public affairs show on KALX Berkeley, celebrating Bay Area innovators. Today's guest is Rev Lebaredian, vice president of simulation technology at NVIDIA. He's speaking about gaming technology, robotics, and artificial intelligence.

Rev Lebaredian: So, what is computer graphics, what is a digital image that's been constructed? Basically, computers aren't drawing in the traditional sense. What computers do is simulation. We have some understanding of how light works, the physics of light, and the images that you see are the product of this simulation that's happening around us in the real world. We're trying to approximate that. Light travels through space. It interacts with matter that's present all around us. It reflects, it absorbs, it transmits, it refracts, it diffracts. All of these things happen, and so what we do with computer graphics is try to get as close as possible to what reality is and simulate that.
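(To make the idea concrete, here is a deliberately tiny, hypothetical ray caster in the spirit of the Whitted-style image described above: one sphere floating above a checkerboard floor, one ray per pixel, simple diffuse shading, printed as ASCII art. It sketches the principle of tracing light paths from a camera into a scene; it is not how any production renderer works.)

```python
# A minimal, illustrative ray caster: one ray per pixel, a single sphere above a
# checkerboard plane, simple Lambert shading, rendered as text characters.
import math

WIDTH, HEIGHT = 80, 40                  # tiny "image" printed as text
CAM = (0.0, 1.0, -4.0)                  # camera position
SPHERE_C, SPHERE_R = (0.0, 1.0, 2.0), 1.0
LIGHT = (-3.0, 5.0, -2.0)

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(a):
    l = math.sqrt(dot(a, a)); return (a[0]/l, a[1]/l, a[2]/l)

def hit_sphere(origin, d):
    """Return the distance along ray direction d to the sphere, or None if it misses."""
    oc = sub(origin, SPHERE_C)
    b = 2.0 * dot(oc, d)
    c = dot(oc, oc) - SPHERE_R * SPHERE_R
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

rows = []
for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Shoot a ray from the camera through this pixel.
        x = (i / WIDTH - 0.5) * 2.0
        y = (0.5 - j / HEIGHT) * 2.0
        d = norm((x, y, 1.5))
        t = hit_sphere(CAM, d)
        if t is not None:                                    # ray hits the sphere
            p = (CAM[0]+t*d[0], CAM[1]+t*d[1], CAM[2]+t*d[2])
            n = norm(sub(p, SPHERE_C))                       # surface normal
            light = max(0.0, dot(n, norm(sub(LIGHT, p))))    # Lambert shading
            row += "@" if light > 0.5 else "o"
        elif d[1] < 0:                                       # ray hits the floor (y = 0)
            t = -CAM[1] / d[1]
            p = (CAM[0]+t*d[0], 0.0, CAM[2]+t*d[2])
            row += "#" if (int(math.floor(p[0])) + int(math.floor(p[2]))) % 2 == 0 else "."
        else:
            row += " "                                       # sky
    rows.append(row)
print("\n".join(rows))
```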
So, those images that we're producing for a video game, or for the Avengers movie many people probably just went and saw, are fundamentally a simulation of the physics of light. When NVIDIA started, before I joined, our CEO Jensen Huang, who's probably the smartest person I've ever met, realized how important computer graphics is, the simulation of light, but also realized that it's important to find a large market that could support the development, the amount of R&D, that goes into creating something like this. Before then, most of the companies doing really advanced graphics were in fairly niche areas like making movies, or professional CAD design and things like that. What we did was take this to the masses through video games. We realized people love playing video games. What we're creating in a video game is a simulation of some world, and in this world you have to do the simulation of light. That's the graphics we produce, and you have to do it really fast because it has to be interactive. We do it in a 60th of a second instead of the hours it takes to produce one of the frames in the Avengers movie. We have to simulate physics and the interaction of objects, how they collide with each other. We have to introduce some kinds of AIs to drive the opponents, or the virtual cohorts and people you have on your team. You need to collaborate with other people or play against them, and deal with the interaction of people in these virtual worlds across large distances; they may be on the other side of the globe. They have to interact with each other, and it all has to feel like they're present there in the moment. Video games are actually the hardest problem in computer science, if you think about it, because you have to do everything in order to make the best experience. One day, when we have the ultimate video game experience, it'll feel no different than being in reality here. We're actually going to feel like we're inside it. That's the ultimate game. So what Jensen realized was that there's demand here, and the fundamental technology needed to create it is one that's important for mankind in general, but you need this large market in order to pay for its development. There's an entertainment purpose over here that's large enough that we can afford each generation of GPUs we create; it's $3 or $4 billion that we invest in creating that. No other single market can support that development, but through video games we get this core, and then we can have adjacencies: simulation for robotics, for autonomous vehicles, for design of products, for collaboration. Maybe one of these days we'll be doing an interview like this inside a virtual reality powered by that same gaming technology. So, my team is focused on building the tooling and the fundamental technologies at that layer to create these possibilities and these applications, whether they be video games or simulation for some of the things I mentioned, like robotics and autonomous vehicles.

Ojig Yeretsian: What are some of the problems you're trying to solve?

Rev Lebaredian: There's a whole lot of them. We still haven't solved rendering. Simulating light is really, really hard, and then doing it fast is even harder.
We understand the principles of light, the physics, well enough that we can do approximations, but what we have to do is simulate billions and billions of photons bouncing around in a scene, and figure out which ones hit your sensor, whether it's your eyeball or a camera that you're modeling. Doing that extremely fast, in a 60th of a second, is hard. Even the best that we do for movies, which don't have that restriction, they can afford supercomputers, thousands of computers they put in a data center to calculate the final pixels that you end up seeing in the movie theater. They can spend hours and hours, or even days, rendering a single frame. We have to do that in a 60th of a second, in real time. So, the first problem that's always on my mind is: how do I take the things we're doing that take hours for a film and make it so we can do them in a 60th of a second? Once we can do that, we can really get close to making a virtual reality that's believable, so that if I stick you in this virtual reality, you might not actually know that you're in it.

Ojig Yeretsian: It sounds to me, from all that we're talking about, that the future is coming faster and earlier, and it's forcing us to contend with our understanding. It's like a culture shift, a paradigm shift for us. AI is already here. There's technology to do gene editing. There's facial recognition, there are amputees with robotic limbs, there are sensors on the steering wheels of cars so that if they sense you're getting sleepy or your mood is changing, the car will start talking to you to keep you awake and engaged. These are all things that were unimaginable.

Rev Lebaredian: There's a lot of technology we're building inside the car, not just for self-driving cars, but for assisting drivers. Technologies like that, where we have cameras in there that can see if your eyelids are drooping or if you're agitated, and try to help you, it's remarkable.

Ojig Yeretsian: To help reduce road rage, perhaps. Sebastian Thrun developed a machine learning algorithm to help diagnose cancer, and the radiologist's role is going to change as a result. They're not necessarily going to be replaced, but they're going to be augmented by what you mentioned, with classifying and reading the CAT scans and the MRIs and the X-rays, doing better classification, while the radiologist works more on the cognitive end of thinking about disease. So, how do you see technology impacting our lives and humanity?

Rev Lebaredian: Understandably, all of this technology happens so fast that it's scary. It's even scary for me, even though I'm in the middle of it. It's happening at a pace that mankind hasn't experienced before, so it's hard for us to digest how fast it's happening and what the repercussions of each of these things are. So, we have to be very careful about how we integrate technology into our lives, and really be thoughtful about it and not just assume that it's good by default. Technology is neutral, but the application of it isn't necessarily, right?

Ojig Yeretsian: Yeah.

Rev Lebaredian: That being said, one of the biggest fears is that AIs are going to make people obsolete. I just don't see that. It doesn't make sense to me that we would feel that way. A lot of the things that we think about are manufacturing jobs, and things that robots can go replace. If you look at it historically, those jobs didn't exist to begin with.
It's kind of weird to think that the pinnacle of mankind is a human standing on an assembly line, toiling away hour after hour doing mundane, monotonous tasks. We were mechanizing mankind, which is odd. Humans are creative; they're wonderful, interesting creatures. We should do everything possible to make it so they can reach their potential without having to do the mundane and monotonous things. We were just discussing virtual worlds and simulating them, but one of the bigger problems with virtual worlds is actually the creation part. Creating a virtual world is extremely expensive. It takes thousands and thousands of people to construct a really large virtual world experience. One of the most important ones in recent times is a game called Grand Theft Auto V. It was released in 2013, I believe. If I recall, they spent about seven years building this game, and they had, at some points, probably 1,000 artists constructing this virtual world. It's still extremely popular. People play it all the time. If you search on YouTube, you'll find millions of videos of people creating movies inside the Grand Theft Auto world. They take it and modify it and insert their own characters; they put Marvel superheroes in there. The reason it's so popular is that it's the most accessible, the largest high-quality virtual world that you can go access, but it took 1,000 artists seven years to create. It's a micro version of Los Angeles; they call it San Andreas in there, and it's great, but it's nowhere near what we really want: something that's as rich as the real world we live in, and even more. Except we've reached the limit; there are only so many hundreds of millions of dollars you can put into creating these virtual worlds. So to construct them, how do we take these thousands of artists and augment them with AI tools, not so we can put them out of business, but so that they can create not just this little micro version of Los Angeles but the whole globe? So that you can walk into any building, into any alley, into any basement, and it's detailed, and rich, and filled with all of the objects that you would expect there to be in the real world. It'd maybe be based on the real world; we can take the Google Maps data that exists, the satellite data, and use AI to augment that and build these worlds out. When we introduce these AIs, I don't believe there's going to be a single artist that goes out of business. What we're going to do is take away the monotonous task of handcrafting every single piece of geometry, every single little thing in there, and I think that's what's going to happen in general. Now, the scary part is when it happens fast. There's this period where you have people who have been doing something for a long time. Sometimes they're not even capable of adjusting to the new thing, so there's pain there. We need to get better at that as a society. How do we make people not dependent on one specific task as their job or career for their whole lives? People should be adaptable and creative, and we should be progressing together and learning to do new things.

Ojig Yeretsian: So, you believe that we're not prepared?

Rev Lebaredian: I don't think so, and I particularly don't think we're prepared here in the US. We're actually notoriously bad at dealing with new technology.
If you look at the political landscape, I don't think we have leaders in politics who truly, really understand what's happening as we speak, and there's no plan for this. Hopefully that'll change soon. There are of course smart people in government, in our various agencies and whatnot, but just in terms of leadership, you can see it any time Congress calls tech leaders to-

Ojig Yeretsian: Fly them out there [crosstalk].

Rev Lebaredian: Summon them out there to talk. There seems to be no understanding of, or even respect for, what it is they're talking about.

Ojig Yeretsian: The European Union has the General Data Protection Regulation. Article 22 states that Europeans have a right to know how an automated decision involving them was reached, and a right to know how an automated process is using their personal information. Is this something that you welcome?

Rev Lebaredian: Well, I welcome governments thinking about these things. I don't know if the particular way they've implemented it is the best, but at least they're doing something. We comply with all of those, and as far as I can tell, so far there haven't been any negative repercussions, except that we had to do extra work to comply with them. All of those things are important; I think something is necessary, and society should be engaged. These are important questions.

Ojig Yeretsian: There's a lot of concern that machines are making decisions instead of people, and that there's an inherent bias embedded within algorithms. Is this something you encounter in your work?

Rev Lebaredian: The algorithms that we deal with are probably not the ones you're thinking about there. We're not Facebook or Google, where we're dealing with people's personal information and social media. So, bias to us means something else: this car thinks there's a lane to the left here versus to the right, something like that. That being said, I'm actually less worried about machine bias than I am about human bias. Human bias we definitely know exists, and we know it's really bad. Machines might have bias right now, but we know how to fix that, and we know how to test it, and we know how to measure it. I don't think we know how to fix humans yet as far as their biases are concerned. I can imagine that sometime in the future, maybe the not-so-far future, we'll have judges and arbitrators that are AIs making decisions. I would trust them to make a decision on a criminal case involving a minority holding up a liquor store, or something like that, over most of the judges that are currently in place, and probably to do it in a far less biased way.

Ojig Yeretsian: I've heard the example of the hospital exam room, where machine-assisted healthcare is actually reducing the number of hospital-acquired infections and sepsis cases. I had never heard it applied to the more moral and [inaudible] realm, such as the judicial system.

Rev Lebaredian: Yeah, we trust humans to be arbiters of things that they probably have no business deciding. I'd rather have an algorithm, or math, decide these things.

Ojig Yeretsian: What could go wrong?

Rev Lebaredian: The work that I'm doing is actually to help us solve these problems before they cause harm. Simulation is the key to doing that. One of the most direct examples is the simulation we're doing for autonomous vehicles. Before we put these cars out on the road and really sell them to people, we need to make sure that they're going to work well in every possible environment and every possible situation.
They have to work with other crazy humans around them, driving around doing crazy things. There's actually no good, ethical way to do a lot of the tests we would really like to do. How are you going to be sure that the self-driving car doesn't run over a parent pushing their baby in a baby carriage when they step out into the road without looking both ways? You can't test that in real life. We can try to mock it up with some cardboard cutouts of those humans or something like that, but it's not the same thing.

Ojig Yeretsian: Yeah, it's scary.

Rev Lebaredian: So, all this work that we're doing to construct these virtual worlds and render them in real time ends up helping us here. We need to put humans inside these worlds that we test our cars in, and have them drive millions of miles and try to fool these cars. We're building a brain for this car that perceives the world and decides how to act on it. Our simulators are virtual reality for those car brains. We produce these graphics and pipe those pixels directly into the sensor inputs on the computer that's running inside the car, and the car, if we do our job right, doesn't really know the difference between reality and the virtual reality we're giving it. So if we can simulate it beforehand, then the better we can do these simulations, the higher their fidelity, the better chance we have of averting some of the really tragic things that might happen. We can all imagine what happens if an autonomous vehicle goes awry, but I'd actually argue that we already know what happens when humans go awry. There's plenty of-

Ojig Yeretsian: Examples.

Rev Lebaredian: Plenty of bad drivers. I'm sure you experienced some of them driving out here earlier.

Ojig Yeretsian: Absolutely.

Rev Lebaredian: So again, I think in a lot of these realms, our best chance is to make algorithms that are less biased and not as flawed as humans.

Ojig Yeretsian: How might this create a better world?

Rev Lebaredian: That's a good question in general, and what does that even mean, a better world? I think there are some simple metrics of better worlds: fewer babies dying, that would be a good thing; people living longer; more people with enough food in their bellies so they don't have to worry about it; people getting educated so that they can keep their minds busy. Without technological progress, we wouldn't be where we are today. I know things seem pretty crazy, but it wasn't that long ago that a good portion of our babies used to just die at birth, and the mothers along with them. We take it for granted now. Babies are born early, like my sons; they were born weeks early. That would have been a death sentence for them before, but they're alive and kicking right now, and thriving, because of technology. Everything that we're doing has its dangerous aspects, but generally the world has always gotten better as a result of it.

Ojig Yeretsian: What's exciting for you in terms of new technologies? What do we have to look forward to?

Rev Lebaredian: Well, in the near term, the things that we were just discussing, the things that I've been working on for the past few decades. In terms of virtual worlds and computer graphics, I feel like we haven't realized their full potential. We've been primarily using them for entertainment, which is great, but we're almost at the point where we're going to start weaving these virtual realities into our daily lives. 40 or 50 years ago, the average person didn't have a video camera.
The average person barely had a camera, and if they did, it wasn't something they could use all the time. Getting film developed was expensive and cumbersome. You look at our children now and they're all videographers, they're all photographers, and they're creating content and worlds themselves. Everybody is. I want to do the same thing for 3D worlds, for virtual worlds. I want to get to the point where my grandchildren, hopefully even before then but at least my grandchildren, are going to be able to construct virtual worlds that are more complex, richer, and more beautiful than what Grand Theft Auto has done or what we saw with Avengers: Endgame. By using whatever device is there, or just by speaking, I want to see my grandchild step into a virtual world and say, "I want a forest here," and a forest appears. "I want a stream with a unicorn jumping over the stream." Just describe it and have this world unfold in front of them. Once we get to that point, I can't even imagine the things that people are going to do with it. So, that's the thing that gets me excited.

Ojig Yeretsian: How can folks get more information about your innovative work?

Rev Lebaredian: Well, you can definitely go to our webpage and all our social media feeds: NVIDIA.com, or find us on Facebook and Twitter. If you're a developer or into the technology directly, we have Developer.NVIDIA.com, where we provide most of the technology I've been speaking about, directly and for free, for people to download and incorporate into their tools. One of the most interesting things I've ever worked on, and my passion right now, is a new project that we just announced and that we hinted at about a month or two ago. We call it NVIDIA Omniverse. It's a platform we're building that allows for a lot of the things I've been talking about here. We want to connect various tools across different domains, whether you're an architect, a product designer, a video game creator, or a director for a movie. All of these domains have different tools that they use to describe things that are actually quite similar: they're constructing objects, and worlds, and scenes. So what we're building is a platform where all of these can be connected together, and we can allow people to create these worlds together using the tools that are specific to their domain. We showed an example of this; we called it the Google Docs of 3D. Just like you can edit a spreadsheet with your colleagues or friends simultaneously, we want to provide that, and we're starting to provide it, for people creating 3D worlds. So, you and I can be in completely different parts of the globe using our own tools. You might be using a tool to paint textures on a model, and I could be using a tool to construct a building, something like Revit from Autodesk, which many architects use. We can be collaborating together, building these worlds together. So, you can go check that out if you search for NVIDIA Omniverse. We're doing some cool stuff.

Ojig Yeretsian: Thank you so much, Rev. You've been listening to Method to the Madness, a biweekly public affairs show on KALX Berkeley, celebrating Bay Area innovators. You can find all our podcasts on iTunes University. We'll see you again in two weeks. See acast.com/privacy for privacy and opt-out information.
This week I talk to the Bengio brothers, Samy and Yoshua, in their first interview together. Yoshua recently won the Turing Award with Geoff Hinton and Yann LeCun, while Samy leads a team of researchers at Google Brain. The brothers are well known to people who work in machine learning, but few know how intertwined their professional lives have been. They talked about their unconventional parents and their early collaboration on neural network research, as well as what they see as the challenges ahead.
In this week's DataTalk, we chat with Matt Zeiler, Founder and CEO of Clarifai, about ways businesses are using computer vision. Matt is a machine learning Ph.D. and thought leader pioneering the field of applied artificial intelligence (AI). His groundbreaking research in computer vision, alongside renowned machine learning experts Geoff Hinton and Yann LeCun, has propelled the image recognition industry from theory to real-world application.
Last episode we talked conceptually about capsule networks, the latest and greatest computer vision innovation to come out of Geoff Hinton's lab. This week we're getting a little more into the technical details, for those of you ready to have your mind stretched.
Convolutional nets are great for image classification... if this were 2016. But it's 2018 and Canada's greatest neural networker Geoff Hinton has some new ideas, namely capsule networks. Capsule nets are a completely new type of neural net architecture designed to do image classification on far fewer training cases than convolutional nets, and they're posting results that are competitive with much more mature technologies. In this episode, we'll give a light conceptual introduction to capsule nets and get geared up for a future episode that will do a deeper technical dive.
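(For readers who want a first taste of the technical details before the deeper dive, here is a small sketch of one ingredient of capsule networks: the "squash" nonlinearity from Sabour, Frosst and Hinton's 2017 paper on dynamic routing between capsules. The function and example numbers below are illustrative, not taken from any particular library or implementation.)

```python
# A minimal sketch of the capsule "squash" nonlinearity: it rescales a capsule's
# output vector so its length lies in (0, 1) and can be read as the probability
# that the entity the capsule represents is present, while the vector's
# direction encodes the entity's pose.
import numpy as np

def squash(s, eps=1e-9):
    """Squash a capsule vector s (or a batch of them along the last axis)."""
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)              # maps length into (0, 1)
    return scale * s / np.sqrt(sq_norm + eps)      # keep the direction

# A long input vector keeps its direction and ends up with length close to 1;
# a short one is squashed toward zero length.
print(np.linalg.norm(squash(np.array([3.0, 4.0]))))   # ~0.96
print(np.linalg.norm(squash(np.array([0.1, 0.0]))))   # ~0.01
```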
Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
In this interview, Geoff Hinton, Distinguished Emeritus Professor at the University of Toronto, discusses World Class IT Principle One, People, and Principle Two, Infrastructure. Some of the topics discussed include: his day-to-day work as a professor at the University of Toronto and as a distinguished researcher at Google; his thoughts on the current progress of the development of true artificial intelligence; what inspired him to pursue his PhD in artificial intelligence when the topic was less prominent than it is today; the founding of the Neural Computation and Adaptive Perception (NCAP) program and how he brought these world-class thinkers together; the outcomes of his work at NCAP; his perspectives on non-profit work being a major player within the artificial intelligence space; how much crossover there is between his work at the University of Toronto and his work at Google DeepMind, the importance of it, and how contests like the recent AlphaGo match are important in the field of artificial intelligence; his perspectives on risk vs. opportunity within AI; whether the government has a role to play in pending job losses due to artificial intelligence; and his thoughts on the recent trend of skipping university to move straight into entrepreneurship.
In the first episode of Talking Machines we meet our hosts, Katherine Gorman (nerd, journalist) and Ryan Adams (nerd, Harvard computer science professor), and explore some of the interviews you'll be able to hear this season. Today we hear some short clips on big issues; we'll get technical, but today is all about introductions. We start with Kevin Murphy of Google talking about his textbook that has become a standard in the field. Then we turn to Hanna Wallach of Microsoft Research NYC and UMass Amherst and hear about the founding of WiML (Women in Machine Learning). Next we discuss academia's relationship with business with Max Welling from the University of Amsterdam, program co-chair of the 2013 NIPS conference (Neural Information Processing Systems). Finally, we sit down with three pillars of the field, Yann LeCun, Yoshua Bengio, and Geoff Hinton, to hear about where the field has been and where it might be headed.