Podcast appearances and mentions of Lee Sedol

South Korean Go player

  • Podcasts: 69
  • Episodes: 101
  • Average duration: 34m
  • Release cadence: infrequent
  • Latest episode: Mar 13, 2026
[Popularity chart, 2019–2026]

Latest podcast episodes about Lee Sedol

New Scientist Weekly
Mathematics is Undergoing the Biggest Change in its History

Mar 13, 2026 · 24:01


Episode 351. Artificial intelligence is starting to prove mathematical theorems better than humans, and mathematicians now describe AI as an existential threat to their work. As one professor puts it: “We are running out of places to hide.” From winning gold medals at mathematics competitions to solving previously unanswered Erdős problems, a string of recent AI achievements has exceeded all expectations of the technology's capabilities. Find out just how quickly the tech is advancing, how we can tell the AI isn't just hallucinating answers, why it may help us formalise all of mathematics, and whether it will really put humans out of a job. And 10 years on from Google's AlphaGo AI first beating human Go master Lee Sedol, we reflect on that epic moment and hear from Chris Maddison, who saw it all unfold. Rowan Hooper is joined by New Scientist's Alex Wilkins to discuss “one of the most remarkable stories” he's ever worked on.

Chapters:
(00:00) Intro: the biggest moment in the history of mathematics
(01:10) The many problems AI is now solving
(04:11) Are these models similar to ChatGPT or Claude?
(05:09) Will AI help us advance the field of mathematics?
(07:28) How can we check AI's answers? Are they just hallucinations?
(10:51) Why it's important to “formalise” maths
(12:03) Will we become too reliant on this AI?
(13:00) 10 years on since AI beat Lee Sedol at Go
(14:54) AI creativity: the famous ‘Move 37'
(16:50) How it felt to watch this epic moment
(19:21) How AlphaGo led to the LLMs of today
(20:25) Are regular chatbots becoming more creative?

To read more about these stories, visit https://www.newscientist.com/
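For readers wondering what “formalising” maths means in practice: proof assistants such as Lean represent theorems as statements whose proofs a machine can check, so a proof either type-checks or it doesn't. A minimal illustration (our own example, not from the episode; `Nat.add_comm` is a lemma from Lean 4's standard library):

```lean
-- A machine-checked fact: addition of natural numbers is commutative.
-- If an AI produced this proof, the checker would verify it mechanically,
-- which is one way to rule out hallucinated answers.
theorem add_comm_demo (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```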

Star Point
96: Lee Sedol's Appearance in The Devil's Plan, Wall Go

Sep 22, 2025 · 71:51


Here are my impressions of Lee Sedol's appearance on the Korean Netflix reality show The Devil's Plan, as well as some thoughts about “Wall Go,” one of the games featured on the show.

Links: Play Wall Go · Join the Discord · Support Star Point · The Star Point Store

Tronche de Tech
#55 - Laurent Sifre - Déjà un Nobel pour l'IA

Sep 4, 2025 · 65:24


In 2016, AI pulled off a feat that stunned the world, and this French engineer was one of the keys to that success. Back to 2014. After a PhD in image recognition, Laurent is finishing his internship at Google Brain, the American giant's prestigious AI branch. Naturally, he applies to stay on the team. But there is a major obstacle: the visa. For profiles like Laurent's, it is Google that files the application with the government. The catch is that there aren't enough spots for everyone. So the decision comes down to… a lottery.

monos estocásticos
Intenté poner un lavavajillas ayudado por ChatGPT agent y CASI MUERO

Jul 24, 2025 · 89:36


0:00 Matías's incident with the dishwasher
4:09 OpenAI takes gold at the Math Olympiad
10:16 Wait, Gemini also won a gold medal
13:37 Why gold medals from two LLMs matter
23:11 We'll all have our Lee Sedol moment
28:25 AI doesn't make itself, someone has to make it
31:17 A digression on the nocciola Magnum and LIDL ice cream
33:51 ChatGPT agent is the child of Deep Research and Operator
37:44 What the ChatGPT agent can do
42:15 A technology for patient people
48:18 We try Perplexity's Comet browser
52:47 We try the AI browser from the Arc people
56:58 We don't understand Freepik's Magnific Precision
59:27 YouTube has made a bit of a mess
1:03:25 There's AI in Netflix's El Eternauta
1:05:21 Puerta grande o enfermería
1:25:38 Closing song (Argentine rap)

Tutorial for installing ComfyUI with Flux in Blender using NVIDIA NIM: https://github.com/NVIDIA-AI-Blueprints/3d-guided-genai-rtx

monos estocásticos is a podcast about artificial intelligence presented by Antonio Ortiz (@antonello) and Matías S. Zavia (@matiass). A new episode comes out every Thursday. You can follow us on YouTube, LinkedIn and X. More links at cuonda.com/monos-estocasticos/links

Korean. American. Podcast
Episode 99: The Match Review (Media)

May 29, 2025 · 119:00


This week Jun and Daniel review the popular Korean film "The Match" (승부), which tells the story of two legendary Go players in Korea during the late 1980s and early 1990s. Our hosts explore the cultural significance of Go in Korean society, discussing how it was once one of the four major activities Korean children would pursue alongside math academies, taekwondo, and piano. They delve into the controversy surrounding the film's star Yoo Ah-in and his drug scandal, examining Korea's strict cancellation culture and how it differs between actors, K-pop stars, and politicians. The conversation expands to cover the historic AlphaGo vs. Lee Sedol match in 2016 and its symbolic impact on Korean society's understanding of AI. Through scene-by-scene analysis, they highlight cultural details from 1980s Korea including car parades for international achievements, traditional family hierarchies, smoking culture, and nostalgic elements like fumigation trucks and Nikon cameras as status symbols.

If you're interested in learning about the cultural significance of Go in East Asian societies, understanding Korea's approach to celebrity scandals and cancellation culture, exploring the philosophical differences between individualism and traditional hierarchy in Korean society, or discovering nostalgic details about 1980s Korean life including housing styles and family dynamics, tune in to hear Daniel and Jun discuss all this and more! This episode also touches on topics like the decline of Go's popularity in modern Korea, the East Asian "Cold War" competition in Go between Korea, Japan, and China, and how the film serves as a metaphor for Korea's journey from copying to innovating on the global stage.

Support the show

As a reminder, we record one episode a week in-person from Seoul, South Korea. We hope you enjoy listening to our conversation, and we're so excited to have you following us on this journey!

Support us on Patreon: https://patreon.com/user?u=99211862

Follow us on socials:
https://www.instagram.com/koreanamericanpodcast/
https://twitter.com/korampodcast
https://www.tiktok.com/@koreanamericanpodcast

Questions/Comments/Feedback? Email us at: koreanamericanpodcast@gmail.com

Star Point
80: Go-Adjacent Board Games

Apr 7, 2025 · 64:00


Let's look at how modern board games reflect our beloved game of Go. From Carcassonne to Ticket to Ride, you can find Go-like elements hiding in plain sight at your local game store! Also... Lee Sedol designs board games now??

Games mentioned in this episode: Carcassonne, Hive, Blokus, Onitama, Risk, Hex, Ticket to Ride, Tak, Yinsh, Stratego, Othello, King's Crown, Great Kingdom, Nine Knights

Links: Dice Tower Video · Support Star Point · The Star Point Store

Ground Truths
The Holy Grail of Biology

Mar 18, 2025 · 43:43


“Eventually, my dream would be to simulate a virtual cell.”—Demis Hassabis

The aspiration to build the virtual cell is considered to be equivalent to a moonshot for digital biology. Recently, 42 leading life scientists published a paper in Cell on why this is so vital, and how it may ultimately be accomplished. This conversation is with two of the authors: Charlotte Bunne, now at EPFL, and Steve Quake, a Professor at Stanford University, who heads up science at the Chan Zuckerberg Initiative. The audio (above) is available on iTunes and Spotify. The full video is linked here, at the top, and also can be found on YouTube.

TRANSCRIPT WITH LINKS TO AUDIO

Eric Topol (00:06):Hello, it's Eric Topol with Ground Truths and we've got a really hot topic today, the virtual cell. And what I think is an extraordinarily important futuristic paper that recently appeared in the journal Cell and the first author, Charlotte Bunne from EPFL, previously at Stanford's Computer Science. And Steve Quake, a young friend of mine for many years who heads up the Chan Zuckerberg Initiative (CZI) as well as a professor at Stanford. So welcome, Charlotte and Steve.Steve Quake (00:42):Thanks, Eric. It's great to be here.Charlotte Bunne:Thanks for having me.Eric Topol (00:45):Yeah. So you wrote this article that Charlotte, the first author, and Steve, one of the senior authors, appeared in Cell in December and it just grabbed me, “How to build the virtual cell with artificial intelligence: Priorities and opportunities.” It's the holy grail of biology. We're in this era of digital biology and as you point out in the paper, it's a convergence of what's happening in AI, which is just moving at a velocity that's just so extraordinary and what's happening in biology. So maybe we can start off by, you had some 42 authors that I assume they congregated for a conference or something or how did you get 42 people to agree to the words in this paper?Steve Quake (01:33):We did. We had a meeting at CZI to bring community members together from many different parts of the community, from computer science to bioinformatics, AI experts, biologists who don't trust any of this. We wanted to have some real contrarians in the mix as well and have them have a conversation together about is there an opportunity here? What's the shape of it? What's realistic to expect? And that was sort of the genesis of the article.Eric Topol (02:02):And Charlotte, how did you get to be drafting the paper?Charlotte Bunne (02:09):So I did my postdoc with Aviv Regev at Genentech and Jure Leskovec at CZI and Jure was part of the residency program of CZI. And so, this is how we got involved and you had also prior work with Steve on the universal cell embedding. So this is how everything got started.Eric Topol (02:29):And it's actually amazing because it's a who's who of people who work in life science, AI and digital biology and omics. I mean it's pretty darn impressive. So I thought I'd start off with a quote in the article because it kind of tells a story of where this could go. So the quote was in the paper, “AIVC (artificial intelligence virtual cell) has the potential to revolutionize the scientific process, leading to future breakthroughs in biomedical research, personalized medicine, drug discovery, cell engineering, and programmable biology.” That's a pretty big statement.
So maybe we can just kind of toss that around a bit and maybe give it a little more thoughts and color as to what you were positing there.Steve Quake (03:19):Yeah, Charlotte, you want me to take the first shot at that? Okay. So Eric, it is a bold claim and we have a really bold ambition here. We view that over the course of a decade, AI is going to provide the ability to make a transformative computational tool for biology. Right now, cell biology is 90% experimental and 10% computational, roughly speaking. And you've got to do just all kinds of tedious, expensive, challenging lab work to get to the answer. And I don't think AI is going to replace that, but it can invert the ratio. So within 10 years I think we can get to biology being 90% computational and 10% experimental. And the goal of the virtual cell is to build a tool that'll do that.Eric Topol (04:09):And I think a lot of people may not understand why it is considered the holy grail because it is the fundamental unit of life and it's incredibly complex. It's not just all the things happening in the cell with atoms and molecules and organelles and everything inside, but then there's also the interactions the cell to other cells in the outside tissue and world. So I mean it's really quite extraordinary challenge that you've taken on here. And I guess there's some debate, do we have the right foundation? We're going to get into foundation models in a second. A good friend of mine and part of this whole I think process that you got together, Eran Segal from Israel, he said, “We're at this tipping point…All the stars are aligned, and we have all the different components: the data, the compute, the modeling.” And in the paper you describe how we have over the last couple of decades have so many different data sets that are rich that are global initiatives. But then there's also questions. Do we really have the data? I think Bo Wang especially asked about that. Maybe Charlotte, what are your thoughts about data deficiency? There's a lot of data, but do you really have what we need before we bring them all together for this kind of single model that will get us some to the virtual cell?Charlotte Bunne (05:41):So I think, I mean one core idea of building this AIVC is that we basically can leverage all experimental data that is overall collected. So this also goes back to the point Steve just made. So meaning that we basically can integrate across many different studies data because we have AI algorithms or the architectures that power such an AIVC are able to integrate basically data sets on many different scales. So we are going a bit away from this dogma. I'm designing one algorithm from one dataset to this idea of I have an architecture that can take in multiple dataset on multiple scales. So this will help us a bit in being somewhat efficient with the type of experiments that we need to make and the type of experiments we need to conduct. And again, what Steve just said, ultimately, we can very much steer which data sets we need to collect.Charlotte Bunne (06:34):Currently, of course we don't have all the data that is sufficient. I mean in particular, I think most of the tissues we have, they are healthy tissues. We don't have all the disease phenotypes that we would like to measure, having patient data is always a very tricky case. We have mostly non-interventional data, meaning we have very limited understanding of somehow the effect of different perturbations. 
Perturbations that happen on many different scales in many different environments. So we need to collect a lot here. I think the overall journey that we are going with is that we take the data that we have, we make clever decisions on the data that we will collect in the future, and we have this also self-improving entity that is aware of what it doesn't know. So we need to be able to understand how well can I predict something on this somewhat regime. If I cannot, then we should focus our data collection effort into this. So I think that's not a present state, but this will basically also guide the future collection.Eric Topol (07:41):Speaking of data, one of the things I think that's fascinating is we saw how AlphaFold2 really revolutionized predicting proteins. But remember that was based on this extraordinary resource that had been built, the Protein Data Bank that enabled that. And for the virtual cell there's no such thing as a protein data bank. It's so much more as you emphasize Charlotte, it's so much dynamic and these perturbations that are just all across the board as you emphasize. Now the human cell atlas, which currently some tens of millions, but going into a billion cells, we learned that it used to be 200 cell types. Now I guess it's well over 5,000 and that we have 37 trillion cells approximately in the average person adult's body is a formidable map that's being made now. And I guess the idea that you're advancing is that we used to, and this goes back to a statement you made earlier, Steve, everything we did in science was hypothesis driven. But if we could get computational model of the virtual cell, then we can have AI exploration of the whole field. Is that really the nuts of this?Steve Quake (09:06):Yes. A couple thoughts on that, maybe Theo Karaletsos, our lead AI person at CZI says machine learning is the formalism through which we understand high dimensional data and I think that's a very deep statement. And biological systems are intrinsically very high dimensional. You've got 20,000 genes in the human genome in these cell atlases. You're measuring all of them at the same time in each single cell. And there's a lot of structure in the relationships of their gene expression there that is just not evident to the human eye. And for example, CELL by GENE, our database that collects all the aggregates, all of the single cell transcriptomic data is now over a hundred million cells. And as you mentioned, we're seeing ways to increase that by an order of magnitude in the near future. The project that Jure Leskovec and I worked on together that Charlotte referenced earlier was like a first attempt to build a foundational model on that data to discover some of the correlations and structure that was there.Steve Quake (10:14):And so, with a subset, I think it was the 20 or 30 million cells, we built a large language model and began asking it, what do you understand about the structure of this data? And it kind of discovered lineage relationships without us teaching it. We trained on a matrix of numbers, no biological information there, and it learned a lot about the relationships between cell type and lineage. And that emerged from that high dimensional structure, which was super pleasing to us and really, I mean for me personally gave me the confidence to say this stuff is going to work out. There is a future for the virtual cell. It's not some made up thing. 
There is real substance there and this is worth investing an enormous amount of CZIs resources in going forward and trying to rally the community around as a project.Eric Topol (11:04):Well yeah, the premise here is that there is a language of life, and you just made a good case that there is if you can predict, if you can query, if you can generate like that. It is reminiscent of the famous Go game of Lee Sedol, that world champion and how the machine came up with a move (Move 37) many, many years ago that no human would've anticipated and I think that's what you're getting at. And the ability for inference and reason now to add to this. So Charlotte, one of the things of course is about, well there's two terms in here that are unfamiliar to many of the listeners or viewers of this podcast, universal representations (UR) and virtual instrument (VIs) that you make a pretty significant part of how you are going about this virtual cell model. So could you describe that and also the embeddings as part of the universal representation (UR) because I think embeddings, or these meaningful relationships are key to what Steve was just talking about.Charlotte Bunne (12:25):Yes. So in order to somewhat leverage very different modalities in order to leverage basically modalities that will take measurements across different scales, like the idea is that we have large, may it be transformer models that might be very different. If I have imaging data, I have a vision transformer, if I have a text data, I have large language models that are designed of course for DNA then they have a very wide context and so on and so forth. But the idea is somewhat that we have models that are connected through the scales of biology because those scales we know. We know which components are somewhat involved or in measurements that are happening upstream. So we have the somewhat interconnection or very large model that will be trained on many different data and we have this internal model representation that somewhat capture everything they've seen. And so, this is what we call those universal representation (UR) that will exist across the scales of biology.Charlotte Bunne (13:22):And what is great about AI, and so I think this is a bit like a history of AI in short is the ability to predict the last years, the ability to generate, we can generate new hypothesis, we can generate modalities that we are missing. We can potentially generate certain cellular state, molecular state have a certain property, but I think what's really coming is this ability to reason. So we see this in those very large language models, the ability to reason about a hypothesis, how we can test it. So this is what those instruments ultimately need to do. So we need to be able to simulate the change of a perturbation on a cellular phenotype. So on the internal representation, the universal representation of a cell state, we need to simulate the fact the mutation has downstream and how this would propagate in our representations upstream. And we need to build many different type of virtual instruments that allow us to basically design and build all those capabilities that ultimately the AI virtual cell needs to possess that will then allow us to reason, to generate hypothesis, to basically predict the next experiment to conduct to predict the outcome of a perturbation experiment to in silico design, cellular states, molecular states, things like that. 
And this is why we make the separation between internal representation as well as those instruments that operate on those representations.Eric Topol (14:47):Yeah, that's what I really liked is that you basically described the architecture, how you're going to do this. By putting these URs into the VIs, having a decoder and a manipulator and you basically got the idea if you can bring all these different integrations about which of course is pending. Now there are obviously many naysayers here that this is impossible. One of them is this guy, Philip Ball. I don't know if you read the language, How Life Works. Now he's a science journalist and he's a prolific writer. He says, “Comparing life to a machine, a robot, a computer, sells it short. Life is a cascade of processes, each with a distinct integrity and autonomy, the logic of which has no parallel outside the living world.” Is he right? There's no way to model this. It's silly, it's too complex.Steve Quake (15:50):We don't know, alright. And it's great that there's naysayers. If everyone agreed this was doable, would it be worth doing? I mean the whole point is to take risks and get out and do something really challenging in the frontier where you don't know the answer. If we knew that it was doable, I wouldn't be interested in doing it. So I personally am happy that there's not a consensus.Eric Topol (16:16):Well, I mean to capture people's imagination here, if you're successful and you marshal a global effort, I don't know who's going to pay for it because it's a lot of work coming here going forward. But if you can do it, the question here is right today we talk about, oh let's make an organoid so we can figure out how to treat this person's cancer or understand this person's rare disease or whatever. And instead of having to wait weeks for this culture and all the expense and whatnot, you could just do it in a computer and in silico and you have this virtual twin of a person's cells and their tissue and whatnot. So the opportunity here is, I don't know if people get, this is just extraordinary and quick and cheap if you can get there. And it's such a bold initiative idea, who will pay for this do you think?Steve Quake (17:08):Well, CZI is putting an enormous amount of resources into it and it's a major project for us. We have been laying the groundwork for it. We recently put together what I think is if not the largest, one of the largest GPU supercomputer clusters for nonprofit basic science research that came online at the end of last year. And in fact in December we put out an RFA for the scientific community to propose using it to build models. And so we're sharing that resource within the scientific community as I think you appreciate, one of the real challenges in the field has been access to compute resources and industry has it academia at a much lower level. We are able to be somewhere in between, not quite at the level of a private company but the tech company but at a level beyond what most universities are being able to do and we're trying to use that to drive the field forward. We're also planning on launching RFAs we this year to help drive this project forward and funding people globally on that. And we are building a substantial internal effort within CZI to help drive this project forward.Eric Topol (18:17):I think it has the looks of the human genome project, which at time as you know when it was originally launched that people thought, oh, this is impossible. And then look what happened. It got done. 
And now the sequence of genome is just a commodity, very relatively, very inexpensive compared to what it used to be.Steve Quake (18:36):I think a lot about those parallels. And I will say one thing, Philip Ball, I will concede him the point, the cells are very complicated. The genome project, I mean the sort of genius there was to turn it from a biology problem to a chemistry problem, there is a test tube with a chemical and it work out the structure of that chemical. And if you can do that, the problem is solved. I think what it means to have the virtual cell is much more complex and ambiguous in terms of defining what it's going to do and when you're done. And so, we have our work cut out for us there to try to do that. And that's why a little bit, I established our North Star and CZI for the next decade as understanding the mysteries of the cell and that word mystery is very important to me. I think the molecules, as you pointed out earlier are understood, genome sequenced, protein structure solved or predicted, we know a lot about the molecules. Those are if not solved problems, pretty close to being solved. And the real mystery is how do they work together to create life in the cell? And that's what we're trying to answer with this virtual cell project.Eric Topol (19:43):Yeah, I think another thing that of course is happening concurrently to add the likelihood that you'll be successful is we've never seen the foundation models coming out in life science as they have in recent weeks and months. Never. I mean, I have a paper in Science tomorrow coming out summarizing the progress about not just RNA, DNA, ligands. I mean the whole idea, AlphaFold3, but now Boltz and so many others. It's just amazing how fast the torrent of new foundation models. So Charlotte, what do you think accounts for this? This is unprecedented in life science to see foundation models coming out at this clip on evolution on, I mean you name it, design of every different molecule of life or of course in cells included in that. What do you think is going on here?Charlotte Bunne (20:47):So on the one hand, of course we benefit profits and inherit from all the tremendous efforts that have been made in the last decades on assembling those data sets that are very, very standardized. CELLxGENE is very somehow AI friendly, as you can say, it is somewhat a platform that is easy to feed into algorithms, but at the same time we actually also see really new building mechanisms, design principles of AI algorithms in itself. So I think we have understood that in order to really make progress, build those systems that work well, we need to build AI tools that are designed for biological data. So to give you an easy example, if I use a large language model on text, it's not going to work out of the box for DNA because we have different reading directions, different context lens and many, many, many, many more.Charlotte Bunne (21:40):And if I look at standard computer vision where we can say AI really excels and I'm applying standard computer vision, vision transformers on multiplex images, they're not going to work because normal computer vision architectures, they always expect the same three inputs, RGB, right? In multiplex images, I'm measuring up to 150 proteins potentially in a single experiment, but every study will measure different proteins. So I deal with many different scales like larger scales and I used to attention mechanisms that we have in usual computer vision. 
Transformers are not going to work anymore, they're not going to scale. And at the same time, I need to be completely flexible in whatever input combination of channel I'm just going to face in this experiment. So this is what we right now did for example, in our very first work, inheriting the design principle that we laid out in the paper AI virtual cell and then come up with new AI architectures that are dealing with these very special requirements that biological data have.Charlotte Bunne (22:46):So we have now a lot of computer scientists that work very, very closely have a very good understanding of biologists. Biologists that are getting much and much more into the computer science. So people who are fluent in both languages somewhat, that are able to now build models that are adopted and designed for biological data. And we don't just take basically computer vision architectures that work well on street scenes and try to apply them on biological data. So it's just a very different way of thinking about it, starting constructing basically specialized architectures, besides of course the tremendous data efforts that have happened in the past.Eric Topol (23:24):Yeah, and we're not even talking about just sequence because we've also got imaging which has gone through a revolution, be able to image subcellular without having to use any types of stains that would disrupt cells. That's another part of the deep learning era that came along. One thing I thought was fascinating in the paper in Cell you wrote, “For instance, the Short Read Archive of biological sequence data holds over 14 petabytes of information, which is 1,000 times larger than the dataset used to train ChatGPT.” I mean that's a lot of tokens, that's a lot of stuff, compute resources. It's almost like you're going to need a DeepSeek type of way to get this. I mean not that DeepSeek as its claim to be so much more economical, but there's a data challenge here in terms of working with that massive amount that is different than the human language. That is our language, wouldn't you say?Steve Quake (24:35):So Eric, that brings to mind one of my favorite quotes from Sydney Brenner who is such a wit. And in 2000 at the sort of early first flush of success in genomics, he said, biology is drowning in a sea of data and starving for knowledge. A very deep statement, right? And that's a little bit what the motivation was for putting the Short Read Archive statistic into the paper there. And again, for me, part of the value of this endeavor of creating a virtual cell is it's a tool to help us translate data into knowledge.Eric Topol (25:14):Yeah, well there's two, I think phenomenal figures in your Cell paper. The first one that kicks across the capabilities of the virtual cell and the second that compares the virtual cell to the real or the physical cell. And we'll link that with this in the transcript. And the other thing we'll link is there's a nice Atlantic article, “A Virtual Cell Is a ‘Holy Grail' of Science. It's Getting Closer.” That might not be quite close as next week or year, but it's getting close and that's good for people who are not well grounded in this because it's much more taken out of the technical realm. This is really exciting. I mean what you're onto here and what's interesting, Steve, since I've known you for so many years earlier in your career you really worked on omics that is being DNA and RNA and in recent times you've made this switch to cells. 
Is that just because you're trying to anticipate the field or tell us a little bit about your migration.Steve Quake (26:23):Yeah, so a big part of my career has been trying to develop new measurement technologies that'll provide insight into biology. And decades ago that was understanding molecules. Now it's understanding more complex biological things like cells and it was like a natural progression. I mean we built the sequencers, sequenced the genomes, done. And it was clear that people were just going to do that at scale then and create lots of data. Hopefully knowledge would get out of that. But for me as an academic, I never thought I'd be in the position I'm in now was put it that way. I just wanted to keep running a small research group. So I realized I would have to get out of the genome thing and find the next frontier and it became this intersection of microfluidics and genomics, which as you know, I spent a lot of time developing microfluidic tools to analyze cells and try to do single cell biology to understand their heterogeneity. And that through a winding path led me to all these cell atlases and to where we are now.Eric Topol (27:26):Well, we're fortunate for that and also with your work with CZI to help propel that forward and I think it sounds like we're going to need a lot of help to get this thing done. Now Charlotte, as a computer scientist now at EPFL, what are you going to do to keep working on this and what's your career advice for people in computer science who have an interest in digital biology?Charlotte Bunne (27:51):So I work in particular on the prospect of using this to build diagnostic tools and to make diagnostics in the clinic easier because ultimately we have somewhat limited capabilities in the hospital to run deep omics, but the idea of being able to somewhat map with a cheaper and lighter modality or somewhat diagnostic test into something much richer because a model has been seeing all those different data and can basically contextualize it. It's very interesting. We've seen all those pathology foundation models. If I can always run an H&E, but then decide when to run deeper diagnostics to have a better or more accurate prediction, that is very powerful and it's ultimately reducing the costs, but the precision that we have in hospitals. So my faculty position right now is co-located between the School of Life Sciences, School of Computer Science. So I have a dual affiliation and I'm affiliated to the hospitals to actually make this possible and as a career advice, I think don't be shy and stick to your discipline.Charlotte Bunne (28:56):I have a bachelor's in biology, but I never only did biology. I have a PhD in computer science, which you would think a bachelor in biology not necessarily qualifies you through. So I think this interdisciplinarity also requires you to be very fluent, very comfortable in reading many different styles of papers and publications because a publication in a computer science venue will be very, very different from the way we write in biology. So don't stick to your study program, but just be free in selecting whatever course gets you closer to the knowledge you need in order to do the research or whatever task you are building and working on.Eric Topol (29:39):Well, Charlotte, the way you're set up there with this coalescence of life science and computer science is so ideal and so unusual here in the US, so that's fantastic. 
That's what we need and that's really the underpinning of how you're going to get to the virtual cells, getting these two communities together. And Steve, likewise, you were an engineer and somehow you became one of the pioneers of digital biology way back before it had that term, this interdisciplinary, transdisciplinary. We need so much of that in order for you all to be successful, right?Steve Quake (30:20):Absolutely. I mean there's so much great discovery to be done on the boundary between fields. I trained as a physicist and kind of made my career this boundary between physics and biology and technology development and it's just sort of been a gift that keeps on giving. You've got a new way to measure something, you discover something new scientifically and it just all suggests new things to measure. It's very self-reinforcing.Eric Topol (30:50):Now, a couple of people who you know well have made some pretty big statements about this whole era of digital biology and I think the virtual cell is perhaps the biggest initiative of all the digital biology ongoing efforts, but Jensen Huang wrote, “for the first time in human history, biology has the opportunity to be engineering, not science.” And Demis Hassabis wrote or said, ‘we're seeing engineering science, you have to build the artifact of interest first, and then once you have it, you can use the scientific method to reduce it down and understand its components.' Well here there's a lot to do to understand its components and if we can do that, for example, right now as both of AI drug discoveries and high gear and there's umpteen numbers of companies working on it, but it doesn't account for the cell. I mean it basically is protein, protein ligand interactions. What if we had drug discovery that was cell based? Could you comment about that? Because that doesn't even exist right now.Steve Quake (32:02):Yeah, I mean I can say something first, Charlotte, if you've got thoughts, I'm curious to hear them. So I do think AI approaches are going to be very useful designing molecules. And so, from the perspective of designing new therapeutics, whether they're small molecules or antibodies, yeah, I mean there's a ton of investment in that area that is a near term fruit, perfect thing for venture people to invest in and there's opportunity there. There's been enough proof of principle. However, I do agree with you that if you want to really understand what happens when you drug a target, you're going to want to have some model of the cell and maybe not just the cell, but all the different cell types of the body to understand where toxicity will come from if you have on-target toxicity and whether you get efficacy on the thing you're trying to do.Steve Quake (32:55):And so, we really hope that people will use the virtual cell models we're going to build as part of the drug discovery development process, I agree with you in a little of a blind spot and we think if we make something useful, people will be using it. The other thing I'll say on that point is I'm very enthusiastic about the future of cellular therapies and one of our big bets at CZI has been starting the New York Biohub, which is aimed at really being very ambitious about establishing the engineering and scientific foundations of how to engineer completely, radically more powerful cellular therapies. And the virtual cell is going to help them do that, right? 
It's going to be essential for them to achieve that mission.Eric Topol (33:39):I think you're pointing out one of the most important things going on in medicine today is how we didn't anticipate that live cell therapy, engineered cells and ideally off the shelf or in vivo, not just having to take them out and work on them outside the body, is a revolution ongoing, and it's not just in cancer, it's in autoimmune diseases and many others. So it's part of the virtual cell need. We need this. One of the things that's a misnomer, I want you both to comment on, we keep talking about single cell, single cell. And there's a paper spatial multi-omics this week, five different single cell scales all integrated. It's great, but we don't get to single cell. We're basically looking at 50 cells, 100 cells. We're not doing single cell because we're not going deep enough. Is that just a matter of time when we actually are doing, and of course the more we do get down to the single or a few cells, the more insights we're going to get. Would you comment about that? Because we have all this literature on single cell comes out every day, but we're not really there yet.Steve Quake (34:53):Charlotte, do you want to take a first pass at that and then I can say something?Charlotte Bunne (34:56):Yes. So it depends. So I think if we look at certain spatial proteomics, we still have subcellular resolutions. So of course, we always measure many different cells, but we are able to somewhat get down to resolution where we can look at certain colocalization of proteins. This also goes back to the point just made before having this very good environment to study drugs. If I want to build a new drug, if I want to build a new protein, the idea of building this multiscale model allows us to actually simulate different, somehow binding changes and binding because we simulate the effect of a drug. Ultimately, the redouts we have they are subcellular. So of course, we often in the spatial biology, we often have a bit like methods that are rather coarse they have a spot that averages over certain some cells like hundreds of cells or few cells.Charlotte Bunne (35:50):But I think we also have more and more technologies that are zooming in that are subcellular where we can actually tag or have those probe-based methods that allow us to zoom in. There's microscopy of individual cells to really capture them in 3D. They are of course not very high throughput yet, but it gives us also an idea of the morphology and how ultimately morphology determine certain somehow cellular properties or cellular phenotype. So I think there's lots of progress also on the experimental and that ultimately will back feed into the AI virtual cell, those models that will be fed by those data. Similarly, looking at dynamics, right, looking at live imaging of individual cells of their morphological changes. Also, this ultimately is data that we'll need to get a better understanding of disease mechanisms, cellular phenotypes functions, perturbation responses.Eric Topol (36:47):Right. Yes, Steve, you can comment on that and the amazing progress that we have made with space and time, spatial temporal resolution, spatial omics over these years, but that we still could go deeper in terms of getting to individual cells, right?Steve Quake (37:06):So, what can we do with a single cell? I'd say we are very mature in our ability to amplify and sequence the genome of a single cell, amplify and sequence the transcriptome of a single cell. 
You can ask is one cell enough to make a biological conclusion? And maybe I think what you're referring to is people want to see replicates and so you can ask how many cells do you need to see to have confidence in any given biological conclusion, which is a reasonable thing. It's a statistical question in good science. I think I've been very impressed with how the mass spec people have been doing recently. I think they've finally cracked the ability to look at proteins from single cells and they can look at a couple thousand proteins. That was I think one of these Nature method of the year things at the end of last year and deep visual proteomics.Eric Topol (37:59):Deep visual proteomics, yes.Steve Quake (38:00):Yeah, they are over the hump. Yeah, they are over the hump with single cell measurements. Part of what's missing right now I think is the ability to reliably do all of that on the same cell. So this is what Charlotte was referring to be able to do sort of multi-modal measurements on single cells. That's kind of in its infancy and there's a few examples, but there's a lot more work to be done on that. And I think also the fact that these measurements are all destructive right now, and so you're losing the ability to look how the cells evolve over time. You've got to say this time point, I'm going to dissect this thing and look at a state and I don't get to see what happens further down the road. So that's another future I think measurement challenge to be addressed.Eric Topol (38:42):And I think I'm just trying to identify some of the multitude of challenges in this extraordinarily bold initiative because there are no shortage and that's good about it. It is given people lots of work to do to overcome, override some of these challenges. Now before we wrap up, besides the fact that you point out that all the work has to be done and be validated in real experiments, not just live in a virtual AI world, but you also comment about the safety and ethics of this work and assuming you're going to gradually get there and be successful. So could either or both of you comment about that because it's very thoughtful that you're thinking already about that.Steve Quake (41:10):As scientists and members of the larger community, we want to be careful and ensure that we're interacting with people who said policy in a way that ensures that these tools are being used to advance the cause of science and not do things that are detrimental to human health and are used in a way that respects patient privacy. And so, the ethics around how you use all this with respect to individuals is going to be important to be thoughtful about from the beginning. And I also think there's an ethical question around what it means to be publishing papers and you don't want people to be forging papers using data from the virtual cell without being clear about where that came from and pretending that it was a real experiment. So there's issues around those sorts of ethics as well that need to be considered.Eric Topol (42:07):And of those 40 some authors, do you around the world, do you have the sense that you all work together to achieve this goal? Is there kind of a global bonding here that's going to collaborate?Steve Quake (42:23):I think this effort is going to go way beyond those 40 authors. 
It's going to include a much larger set of people and I'm really excited to see that evolve with time.Eric Topol (42:31):Yeah, no, it's really quite extraordinary how you kick this thing off and the paper is the blueprint for something that we are all going to anticipate that could change a lot of science and medicine. I mean we saw, as you mentioned, Steve, how that deep visual proteomics (DVP) saved lives. It was what I wrote about as spatial medicine, no longer spatial biology. And so, the way that this can change the future of medicine, I think a lot of people just have to have a little bit of imagination that once we get there with this AIVC, that there's a lot in store that's really quite exciting. Well, I think this has been an invigorating review of that paper and some of the issues surrounding it. I couldn't be more enthusiastic for your success and ultimately where this could take us. Did I miss anything during the discussion that we should touch on before we wrap up?Steve Quake (43:31):Not from my perspective. It was a pleasure as always Eric, and a fun discussion.Charlotte Bunne (43:38):Thanks so much.Eric Topol (43:39):Well thank you both and all the co-authors of this paper. We're going to be following this with great interest, and I think for most people listening, they may not know that this is in store for the future. Someday we will get there. I think one of the things to point out right now is the models we have today that large language models based on transformer architecture, they're going to continue to evolve. We're already seeing so much in inference and ability for reasoning to be exploited and not asking for prompts with immediate answers, but waiting for days to get back. A lot more work from a lot more computing resources. But we're going to get models in the future to fold this together. I think that's one of the things that you've touched on the paper so that whatever we have today in concert with what you've laid out, AI is just going to keep getting better.Eric Topol (44:39):The biology that these foundation models are going to get broader and more compelling as to their use cases. So that's why I believe in this. I don't see this as a static situation right now. I just think that you're anticipating the future, and we will have better models to be able to integrate this massive amount of what some people would consider disparate data sources. So thank you both and all your colleagues for writing this paper. I don't know how you got the 42 authors to agree to it all, which is great, and it's just a beginning of something that's a new frontier. So thanks very much.Steve Quake (45:19):Thank you, Eric.

**********************************************

Thanks for listening, watching or reading Ground Truths. Your subscription is greatly appreciated. If you found this podcast interesting please share it! That makes the work involved in putting these together especially worthwhile. All content on Ground Truths—newsletters, analyses, and podcasts—is free, open-access, with no ads. Paid subscriptions are voluntary and all proceeds from them go to support Scripps Research. They do allow for posting comments and questions, which I do my best to respond to. Many thanks to those who have contributed—they have greatly helped fund our summer internship programs for the past two years. And such support is becoming more vital in light of current changes in funding of US biomedical research at NIH and other governmental agencies.

Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research. Get full access to Ground Truths at erictopol.substack.com/subscribe
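For readers who want a concrete picture of the separation Bunne describes in the conversation above, between universal representations (URs) and the virtual instruments (VIs) that operate on them, here is a deliberately toy sketch. Everything in it (the names, the dimensions, the random-projection "encoder") is our own illustrative assumption, not the paper's architecture:

```python
import numpy as np

UR_DIM = 256  # assumed width of the shared "universal representation" space

def encode_transcriptome(expression: np.ndarray, seed: int = 0) -> np.ndarray:
    """Toy modality encoder: project a gene-expression vector into UR space.
    A real system would use a trained model, not a fixed random projection."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(UR_DIM, expression.size)) / np.sqrt(expression.size)
    return np.tanh(weights @ expression)

def perturbation_instrument(ur: np.ndarray, shift: np.ndarray) -> np.ndarray:
    """Toy virtual instrument: model a perturbation as an operator acting on
    the UR while the encoder stays untouched. Real VIs would be learned."""
    return ur + shift

# One "cell" with ~20,000 genes, mapped into UR space, then perturbed.
cell = np.random.default_rng(1).random(20_000)
ur = encode_transcriptome(cell)
perturbed_ur = perturbation_instrument(ur, np.zeros(UR_DIM))
print(ur.shape, float(np.linalg.norm(perturbed_ur - ur)))  # (256,) 0.0
```

The point of the split, as described in the episode, is that many modality-specific encoders can feed one shared representation space, and many instruments (decoders, perturbation simulators) can then operate on that space without caring where the representation came from.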

All Things Go
11 of 11 - Go/Baduk/Weiqi - Chess GM Plays Go, Surrounding Game Interview, Tournament Time Controls & Season Takeaways

Mar 6, 2025 · 61:29


Theme music by UNIVERSFIELD & background music by PodcastAC
The Perpetual Chess Podcast episode with GM Tiger Hillarp Persson talking about playing Go
Information on Go professionals Lee Sedol, Cho Hun-hyun, Go Seigen & Michael Redmond
The Go Magic interview for The Surrounding Game
Devin Fraze's Baduk Club & Baduk Club's all-in-one tournament tool and online timer
The Pomodoro Technique
Show your support here
Email: AllThingsGoGame@gmail.com

Your Undivided Attention
Behind the DeepSeek Hype, AI is Learning to Reason

Feb 20, 2025 · 31:34


When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI.

In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before, bringing unprecedented problem-solving potential but also unpredictable risks.

These capabilities are a step toward a critical threshold: when AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to Chess, Go, and Starcraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games have been "solved" in the game theory sense.

Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by AlphaGo's famous Move 37.

RECOMMENDED MEDIA
Further reading on DeepSeek's R1 and the market reaction
Further reading on the debate about the actual cost of DeepSeek's R1 model
The study that found training AIs to code also made them better writers
More information on the AI coding company Cursor
Further reading on Eric Schmidt's threshold to "pull the plug" on AI
Further reading on Move 37

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
This Moment in AI: How We Got Here and Where We're Going
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao

Altri Orienti
EP.109 - Il fondatore di DeepSeek è un idealista

Jan 30, 2025 · 27:27


Liang Wenfeng was born in a city in Guangdong in the 1980s. He is a balinghou, as people born in those years are called in China. He is the founder of DeepSeek, whose technical and communications sides he has shaped. Liang is the product of many years of Chinese investment in AI. And he is also an incredible idealist.

Sources: the audio sources for this episode are taken from: 1957: Sputnik I, International Astronautical Federation YouTube channel, April 16, 2008; AlphaGo 3-0 Lee Sedol, AlphaGo wins DeepMind Challenge, SciNews YouTube channel, March 12, 2016; 中国AI鲶鱼DeepSeek创始人梁文峰:中国要从技术“搭便车”转向技术贡献者|中国缺的不是资本,而是信心和有效组织高密度人才的能力|AGI有望2-10年内实现, Bilibili, January 22, 2025.

La TERTULia de la Inteligencia Artificial
rStar-Math, el AlphaGo de las matemáticas

Jan 30, 2025 · 42:15


Can small models show mathematical reasoning capabilities comparable to o1? Microsoft believes so, and demonstrates it with a method inspired by AlphaGo, the system that beat Lee Sedol almost a decade ago. Today on the show we look at small language models that outperform o1. Joining the discussion: Paco Zamora, Íñigo Olcoz, Carlos Larríu, Íñigo Orbegozo and Guillermo Barbadillo. Remember that you can send us questions, comments and suggestions at: https://twitter.com/TERTUL_ia More info at: https://ironbar.github.io/tertulia_inteligencia_artificial/

All Things Go
3 of 11 - Go/Baduk/Weiqi - Go & Quarto, The History of Go, Pro Dan System Discrepancies & Preventing Burnout

Jan 9, 2025 · 50:42


Theme music by UNIVERSFIELD & background music by PodcastAC
The board game Quarto
President Nam Chihyung with the International Society of Go Studies (ISGS)
Netflix's Captivating the King
Confucian text Chunqiu
Kibi no Makibi
Sunjang Baduk
The father of Korean Go, Cho Nam-chul
Other pro players: Hsu Hao-hung, Lee Sedol, Cho Hun-hyun, Ichiriki Ryo & Cho Chikun
The Ing Cup
Japanese Go Association
Korea Baduk Association
Korean professional players Lee Sedol and female pro Kim Eunji, and Chinese female pro Hua Xueming
The Fujitsu Cup
Video games Genshin Impact and Zelda: Breath of the Wild
Polish pro Mateusz Surma and his site polegote.com
BenKyo's league and website
Show your support here
Contact: AllThingsGoGame@gmail.com

All Things Go
1 of 11 - Go/Baduk/Weiqi - A Go Origin Story, Michael Chen Interview, #1 Pro Shin Jin-seo, & In-Person Tournaments

Dec 26, 2024 · 57:53


Theme music by UNIVERSFIELD & background music by PodcastAC
Michael Chen's interview in the European Go Journal
Ma Xiaochun's The Thirty-Six Stratagems Applied to Go
Michael Chen's Twitch Channel
Wikipedia pages for Go professionals: Shin Jin-seo, Lee Sedol, Lee Chang-ho, Cho Chikun, Ke Jie, Gu Li, Cho Hun-hyun, & Park Jungwhan
US Go Congress
The North American Go Federation, which runs the professional qualification tournament
The online Fox Go Server
The Toronto Go Spectacular tournament
BenKyo's league and website
Show your support here
Contact: AllThingsGoGame@gmail.com

Big Tech
AI Has Mastered Chess, Poker and Go. So Why Do We Keep Playing?

Dec 17, 2024 · 35:34


The board game Go has more possible board configurations than there are atoms in the universe. Because of that seemingly infinite complexity, developing software that could master Go has long been a goal of the AI community. In 2016, researchers at Google's DeepMind appeared to meet the challenge. Their Go-playing AI defeated one of the best Go players in the world, Lee Sedol. After the match, Lee Sedol retired, saying that losing to an AI felt like his entire world was collapsing. He wasn't alone. For a lot of people, the game represented a turning point: the moment where humans had been overtaken by machines.

But Frank Lantz saw that game and was invigorated. Lantz is a game designer (his game "Hey Robot" is a recurring feature on The Tonight Show Starring Jimmy Fallon), the director of the NYU Game Center, and the author of The Beauty of Games. He's spent his career thinking about how technology is changing the nature of games, and what we can learn about ourselves when we sit down to play them.

Mentioned:
"AlphaGo"
"The Beauty of Games" by Frank Lantz
"Adversarial Policies Beat Superhuman Go AIs" by Tony Wang et al.
"Theory of Games and Economic Behavior" by John von Neumann and Oskar Morgenstern
"Heads-up limit hold'em poker is solved" by Michael Bowling et al.

Further Reading:
"How to Play a Game" by Frank Lantz
"The Afterlife of Go" by Frank Lantz
"How A.I. Conquered Poker" by Keith Romer
"In Two Moves, AlphaGo and Lee Sedol Redefined the Future" by Cade Metz
Hey Robot by Frank Lantz
Universal Paperclips by Frank Lantz
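The "more configurations than atoms" claim is easy to sanity-check. A back-of-the-envelope sketch (ours, not the episode's): it counts all 3^361 colourings of the board, which over-counts legal positions, and uses the common order-of-magnitude estimate of ~10^80 atoms in the observable universe:

```python
# Each of the 19x19 = 361 intersections is empty, black, or white.
# 3**361 (~10^172) over-counts legal positions (~2.1 x 10^170 by Tromp and
# Farneback's count), but either figure dwarfs ~10^80 atoms.
board_points = 19 * 19
colorings = 3 ** board_points
atoms_estimate = 10 ** 80

print(f"3^361 has {len(str(colorings))} digits")   # 173 digits, i.e. ~10^172
print(colorings > atoms_estimate)                  # True, by ~92 orders of magnitude
```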

The Science Hour
Doing a deal

The Science Hour

Play Episode Listen Later Nov 29, 2024 49:34


It's Black Friday! Everyone is camping in the street, staying up all night for the very best deals around. And Unexpected Elements are joining in. We take a look at the huge underground trade of vital resources... not run by criminals but fungi. Then it is onto the illegal animal trade and the 300 pets who got a terrible deal, strapped to a man's chest as he tried to make it through airport security. Have you ever asked a pigeon for advice when gambling? We hear from a professor of psychology about why you should not. And finally, the story of Lee Sedol, the world's best player of the board game Go, who was challenged by Google to a game worth one million dollars. Presenter: Caroline Steel, with Phillys Mwatee and Christine Yohannes. Producers: Emily Knight, Harrison Lewis, Imaan Moin and William Hornbrook. Sound engineer: Searle Whittney

The Eric Ries Show
Risks, Rewards, and Building the Unicorn Chip Company Taking on Nvidia | Inside Groq with Jonathan Ross

The Eric Ries Show

Play Episode Listen Later Nov 7, 2024 75:13


The story of Groq, a semiconductor startup that makes chips for AI inference and was recently valued at $2.8 billion, is a classic “overnight success that was years in the making” tale. On this episode, I talk with founder and CEO Jonathan Ross. He began the work that eventually led to Groq as an engineer at Google, where he was a member of the rapid eval team – “the team that comes up with all the crazy ideas at Google X.” For him, the risk involved in leaving to launch Groq in 2016 was far less than the risk of staying in-house and watching the project die. Groq has had many “near-death” experiences in its eight years of existence, all of which Jonathan believes have ultimately put it in a much stronger position to achieve its mission: preserving human agency in the age of AI. Groq is committed to giving everyone access to relatively low-cost generative AI compute, driving the price down even as they continue to increase speed. We talked about how the company culture supports that mission, what it feels like to now be on the same playing field as companies like Nvidia, and Jonathan's belief that true disruption isn't just doing things other people can't do or don't want to do, but doing things other people don't believe can be done – even when you show them evidence to the contrary. Other topics we touched on include: Why the ability to customize on demand makes generative AI different; Managing your own and other people's fear as a founder; The problems of corporate innovation; The role of luck in business; How he thinks about long-term goals and growth — Brought to you by: Mercury – The art of simplified finances. Learn more. DigitalOcean – The cloud loved by developers and founders alike. Sign up. Runway – The finance platform you don't hate. Learn more. — Where to find Jonathan Ross: • X: https://x.com/JonathanRoss321 • LinkedIn: https://www.linkedin.com/in/ross-jonathan/ Where to find Eric: • Newsletter: https://ericries.carrd.co/ • Podcast: https://ericriesshow.com/ • YouTube: https://www.youtube.com/@theericriesshow — In This Episode We Cover: (04:24) Jonathan's involvement with the DeepMind Challenge Match between AlphaGo and Lee Sedol (06:06) Jonathan's work at Google and how it led him to that moment (08:46) Why generative AI isn't just the next internet or mobile (10:12) The divine move in the DeepMind Challenge Match (11:56) How Jonathan ended up designing chips without the usual background (13:11) GPUs vs. TPUs (14:33) What risk really is (15:11) Groq's mind-blowing AI demo (16:23) How Jonathan decided to leave Google and start Groq (17:30) The differences between doing an innovation project at a company and starting a new company (19:03) Nassim Taleb's Black Swan theory (21:02) Groq's founding story (24:12) The difference in attitude towards AI now compared to 2016 and how it affected Groq (25:46) The moment the tide turned with LLMs (28:28) The week-over-week jump from 8,000 users to 400,000 users (30:32) How Groq used HBM (the high-bandwidth memory used by GPUs) (32:33) Jonathan's approach to disruption (35:38) Groq's initial raise and focus on software (36:13) How struggling to survive made Groq stronger (37:13) Hiring for return on luck (40:07) How Jonathan and Groq think about the long term (42:25) Founder control issues (45:31) How Groq thinks about maintaining its mission and trustworthiness (49:51) Jonathan's vision for a capital market that would support companies like Groq (52:58) How Groq manages internal cultural alignment (55:59) Groq's mission to preserve human agency in the age of AI and how it approaches achieving it (59:48) Lightning round You can find the transcript and references at https://www.ericriesshow.com/ — Production and marketing by https://penname.co/. Eric may be an investor in the companies discussed.

Training Data
OpenAI's Noam Brown, Ilge Akkaya and Hunter Lightman on o1 and Teaching LLMs to Reason Better

Training Data

Play Episode Listen Later Oct 2, 2024 45:22


Combining LLMs with AlphaGo-style deep reinforcement learning has been a holy grail for many leading AI labs, and with o1 (aka Strawberry) we are seeing the most general merging of the two modes to date. o1 is admittedly better at math than essay writing, but it has already achieved SOTA on a number of math, coding and reasoning benchmarks. Deep RL legend and now OpenAI researcher Noam Brown and teammates Ilge Akkaya and Hunter Lightman discuss the ah-ha moments on the way to the release of o1, how it uses chains of thought and backtracking to think through problems, the discovery of strong test-time compute scaling laws and what to expect as the model gets better. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital. Mentioned in this episode: Learning to Reason with LLMs: Technical report accompanying the launch of OpenAI o1. Generator-verifier gap: Concept Noam explains in terms of what kinds of problems benefit from more inference-time compute. Agent57: Outperforming the human Atari benchmark, 2020 paper where DeepMind demonstrated “the first deep reinforcement learning agent to obtain a score that is above the human baseline on all 57 Atari 2600 games.” Move 37: Pivotal move in AlphaGo's second game against Lee Sedol where it made a move so surprising that Sedol thought it must be a mistake, and only later discovered he had lost the game to a superhuman move. IOI competition: OpenAI entered o1 into the International Olympiad in Informatics and received a Silver Medal. System 1, System 2: The thesis of Daniel Kahneman's pivotal book of behavioral economics, Thinking, Fast and Slow, which posited two distinct modes of thought, with System 1 being fast and instinctive and System 2 being slow and rational. AlphaZero: The successor to AlphaGo which learned a variety of games completely from scratch through self-play. Interestingly, self-play doesn't seem to have a role in o1. Solving Rubik's Cube with a robot hand: Early OpenAI robotics paper that Ilge Akkaya worked on. The Last Question: Science fiction story by Isaac Asimov with interesting parallels to scaling inference-time compute. Strawberry: Why? o1-mini: A smaller, more efficient version of o1 for applications that require reasoning without broad world knowledge. 00:00 - Introduction 01:33 - Conviction in o1 04:24 - How o1 works 05:04 - What is reasoning? 07:02 - Lessons from gameplay 09:14 - Generation vs verification 10:31 - What is surprising about o1 so far 11:37 - The trough of disillusionment 14:03 - Applying deep RL 14:45 - o1's AlphaGo moment? 17:38 - A-ha moments 21:10 - Why is o1 good at STEM? 24:10 - Capabilities vs usefulness 25:29 - Defining AGI 26:13 - The importance of reasoning 28:39 - Chain of thought 30:41 - Implication of inference-time scaling laws 35:10 - Bottlenecks to scaling test-time compute 38:46 - Biggest misunderstanding about o1? 41:13 - o1-mini 42:15 - How should founders think about o1?
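OpenAI hasn't published how o1 spends its inference-time compute, so treat the following as a separate but concrete illustration of test-time scaling: self-consistency, where you sample several reasoning chains and majority-vote the final answer. The sampler here is a stub with made-up numbers, not anything from the episode:

```python
import random
from collections import Counter

def sample_answer(p_correct: float = 0.6) -> str:
    """Stub for one sampled chain of thought ending in a final answer."""
    return "42" if random.random() < p_correct else str(random.randint(0, 9))

def majority_vote(n_samples: int) -> str:
    votes = Counter(sample_answer() for _ in range(n_samples))
    return votes.most_common(1)[0][0]

random.seed(0)
for n in (1, 5, 25, 125):  # more chains = more inference-time compute
    accuracy = sum(majority_vote(n) == "42" for _ in range(200)) / 200
    print(f"{n:>3} chains per question -> accuracy {accuracy:.2f}")
```

Even this crude scheme shows the qualitative effect the guests describe: accuracy climbs smoothly as you buy more samples per question.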

Training Data
Phaidra's Jim Gao on Building the Fourth Industrial Revolution with Reinforcement Learning

Training Data

Play Episode Listen Later Aug 20, 2024 50:33


After AlphaGo beat Lee Sedol, a young mechanical engineer at Google thought of another game reinforcement learning could win: energy optimization at data centers. Jim Gao convinced his bosses on the Google data center team to let him work with the DeepMind team to try. The initial pilot resulted in a 40% energy savings and led him and his co-founders to start Phaidra to turn this technology into a product. Jim discusses the challenges of AI readiness in industrial settings and how we have to build on top of the control systems of the 70s and 80s to achieve the promise of the Fourth Industrial Revolution. He believes this new world of self-learning systems and self-improving infrastructure is a key factor in addressing global climate change. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital. Mentioned in this episode: Mustafa Suleyman: Co-founder of DeepMind and Inflection AI and currently CEO of Microsoft AI, known to his friends as “Moose”. Joe Kava: Google VP of data centers who Jim sent his initial email to pitching the idea that would eventually become Phaidra. Constrained optimization: the class of problem that reinforcement learning can be applied to in real-world systems. Vedavyas Panneershelvam: co-founder and CTO of Phaidra; one of the original engineers on the AlphaGo project. Katie Hoffman: co-founder, President and COO of Phaidra. Demis Hassabis: CEO of DeepMind
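Phaidra's actual problem formulation isn't public, but the constrained optimization the episode mentions maps naturally onto a reward that trades the objective off against constraint violations. A toy sketch with hypothetical numbers, for "minimize energy while keeping the cold aisle under a temperature limit":

```python
def reward(energy_kwh: float, temp_c: float,
           temp_limit_c: float = 27.0, penalty_per_degree: float = 15.0) -> float:
    """Soft-constraint reward: pay for energy, pay much more for overheating."""
    violation = max(0.0, temp_c - temp_limit_c)
    return -energy_kwh - penalty_per_degree * violation

print(reward(energy_kwh=100.0, temp_c=26.5))  # safe setpoint: -100.0
print(reward(energy_kwh=80.0, temp_c=29.0))   # cheaper but unsafe: -110.0
```

The penalty weight encodes how much energy you would sacrifice per degree of overshoot; setting it is a design choice, not something RL discovers for you.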

Fakt ab! Eine Woche Wissenschaft
Sex game against skin cancer: stroke me and count my moles!

Fakt ab! Eine Woche Wissenschaft

Play Episode Listen Later Jul 26, 2024 29:41


This week with Charlotte Grieser and Sina Kürtz. Their topics are: - Law in Japan: residents are required to laugh daily (00:39) - Embarrassment research: singing badly for science (08:28) - Sex game against skin cancer: stroke me and count my moles! (14:10) - Bowel movements: going 1 to 3 times a day is healthy (22:56) More info and studies here: No joke: new law obliges people in Japan to laugh https://www.swr.de/swrkultur/wissen/kein-witz-neues-gesetz-verpflichtet-menschen-in-japan-zu-lachen-100.html Why we blush https://www.deutschlandfunk.de/karaoke-experiment-warum-erroeten-wir-interview-christian-keysers-dlf-039c4739-100.html Dermatologist for a night https://www.faz.net/aktuell/wissen/medizin-ernaehrung/erotischer-hautkrebs-check-skintimacy-von-aok-und-amorelie-19864583.html Study on bowel movements: how often is healthy? https://www.faz.net/aktuell/wissen/medizin-ernaehrung/studie-zum-stuhlgang-wie-haeufig-ist-gesund-19859556.html Poop Frequency Linked to Long-Term Health, New Study Reveals https://scienceblog.com/546070/poop-frequency-linked-to-long-term-health-new-study-reveals/ Our podcast tip of the week: Der KI-Podcast. From the mechanical chess Turk of the 18th century, to the famous mathematician Alan Turing and the question “Are brains like computers?”, to Lee Sedol, the master of the game Go never surpassed by another human, who nonetheless lost to the AI AlphaGo. The new episode from our colleagues at “Der KI-Podcast” covers the great milestones in the development of artificial intelligence - and it really is impressive. Did you know how long humanity has actually been working on this? Der KI-Podcast is now exactly one year old, and if development continues at this pace, we can surely look forward to many more episodes, years and milestones. Congratulations, Marie, Gregor and Fritz! Be sure to listen to their KI-Podcast in the ARD Audiothek - a new episode every Tuesday. https://www.ardaudiothek.de/sendung/der-ki-podcast/94632864/ Do you have nerd facts and bad jokes for us too? Write to us on WhatsApp or send a voice message: 0174/4321508 Or by email: faktab@swr2.de Or directly at http://swr.li/faktab Instagram: @charlotte.grieser @julianistin @sinologin @aeneasrooch Editing: Christoph König and Chris Eckardt Idea: Christoph König

Der KI-Podcast
Which AI moment changed the world?

Der KI-Podcast

Play Episode Listen Later Jul 23, 2024 38:04


Der KI-Podcast celebrates its first anniversary - with a very special episode. Not only are Marie, Gregor and Fritz hosting an episode as a trio for the first time - they are also wearing silly party hats while doing it! Above all, they have brought along their favorite moments from AI history: fake chess players, neural networks, and the shoulder hit on the fifth line. About the hosts: Gregor Schmalzried is a freelance tech journalist and consultant; he works for Bayerischer Rundfunk and Brand Eins, among others. Fritz Espenlaub is a freelance journalist and presenter at Bayerischer Rundfunk and 1E9, focusing on technology and the economy. Marie Kilg is Chief AI Officer at Deutsche Welle. Before that, she was a product manager at Amazon Alexa. 00:00 Intro 02:24 The Mechanical Turk 12:17 McCulloch and Pitts: are brains like computers? 21:37 AlphaGo versus Lee Sedol 35:27 What these AI birthdays say about the technology Links and sources: DER KI-PODCAST LIVE at the BR Podcast Festival in Nuremberg https://tickets.190a.de/event/der-ki-podcast-live-in-nurnberg-hljs6y The Mechanical Turk https://www.britannica.com/story/the-mechanical-turk-ai-marvel-or-parlor-trick Amazon MTurk https://www.mturk.com/ Brain-machine metaphors: https://dirt.fyi/article/2024/03/metaphorically-speaking https://arxiv.org/abs/2206.04603 Warren McCulloch and Walter Pitts: A Logical Calculus of the Ideas Immanent in Nervous Activity https://link.springer.com/chapter/10.1007/978-3-642-70911-1_14 McCulloch-Pitts Neuron — Mankind's First Mathematical Model Of A Biological Neuron https://towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1 Untold History of AI: How Amazon's Mechanical Turkers Got Squeezed Inside the Machine https://spectrum.ieee.org/untold-history-of-ai-mechanical-turk-revisited-tktkt AlphaGo documentary on YouTube: https://www.youtube.com/watch?v=WXuK6gekU1Y MANIAC by Benjamin Labatut: https://www.suhrkamp.de/buch/benjamin-labatut-maniac-t-9783518431177 Editing and contributors: David Beck, Cristina Cletiu, Chris Eckardt, Fritz Espenlaub, Marie Kilg, Mark Kleber, Gudrun Riedl, Christian Schiffer, Gregor Schmalzried Contact: We welcome questions and comments at podcast@br.de. Support us: If you enjoy this podcast, we'd appreciate a rating on your favorite podcast platform. Subscribe to Der KI-Podcast in the ARD Audiothek or wherever you listen to podcasts so you never miss an episode. And feel free to recommend us!
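Since the episode revisits McCulloch and Pitts, here is their 1943 neuron in executable form: binary inputs, a weighted sum, and a hard threshold. The gate examples are the classic textbook illustration, not anything from the episode itself:

```python
def mp_neuron(inputs, weights, threshold):
    """Fires (returns 1) iff the weighted input sum reaches the threshold."""
    return int(sum(x * w for x, w in zip(inputs, weights)) >= threshold)

# The same unit computes AND or OR depending only on the threshold:
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} | AND={mp_neuron((a, b), (1, 1), 2)} "
              f"OR={mp_neuron((a, b), (1, 1), 1)}")
```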

The Bacon Podcast with Brian Basilico | CURE Your Sales & Marketing with Ideas That Make It SIZZLE!
Episode 970 – Man vs Machine – A Brief History of Algorithms and Artificial Intelligence

The Bacon Podcast with Brian Basilico | CURE Your Sales & Marketing with Ideas That Make It SIZZLE!

Play Episode Listen Later Jun 12, 2024 11:43


Back in 1997, Deep Blue (an IBM computer) defeated Garry Kasparov, the world chess champion at the time, in a six-game match with a final score of 3.5-2.5 in favor of Deep Blue. Almost 20 years later, in 2016, Google's AlphaGo program achieved a similar victory by defeating Lee Sedol, one of the world's top professional Go players, in a five-game match with a final score of 4-1. Artificial intelligence and machine learning have been around for decades, yet they were not accessible to you and me. Now that they are, they're predicted to change marketing (and life in general) forever. Experts, insiders, and reporters expect good and bad from AI. Some predict Skynet from the Terminator movies, while others expect it to cure cancer. Ockham's razor would predict that it's probably something in the middle. Companies and their leaders are telling us that they are working to make your experience better and part of the greater good of humanity. Search is supposed to provide the best answer to your search prompt, and social media is supposed to serve the content you want to see. In reality, it's more about profits than principles. Search is optimized to get you to click ads—that's how Google makes over 75% of its revenue. Facebook feeds your friendships and shows posts meant to stir the pot and keep you engaged. Facebook makes over 95% of its profits from ad sales. Ockham's razor would show us that trying to find and win customers through search and social media would benefit the platform more than you. It's like a casino with all that noise of winners on its machines, but the odds have been programmed to make the casino much more money than it's paying out! When it comes to marketing, we have been using AI since the beginnings of Google and Facebook. Both are run through algorithms.

The Nonlinear Library
LW - LLMs seem (relatively) safe by JustisMills

The Nonlinear Library

Play Episode Listen Later Apr 26, 2024 10:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLMs seem (relatively) safe, published by JustisMills on April 26, 2024 on LessWrong. Post for a somewhat more general audience than the modal LessWrong reader, but gets at my actual thoughts on the topic. In 2018 OpenAI defeated the world champions of Dota 2, a major esports game. This was hot on the heels of DeepMind's AlphaGo performance against Lee Sedol in 2016, achieving superhuman Go performance way before anyone thought that might happen. AI benchmarks were being cleared at a pace which felt breathtaking at the time, papers were proudly published, and ML tools like Tensorflow (released in 2015) were coming online. To people already interested in AI, it was an exciting era. To everyone else, the world was unchanged. Now Saturday Night Live sketches use sober discussions of AI risk as the backdrop for their actual jokes, there are hundreds of AI bills moving through the world's legislatures, and Eliezer Yudkowsky is featured in Time Magazine. For people who have been predicting, since well before AI was cool (and now passe), that it could spell doom for humanity, this explosion of mainstream attention is a dark portent. Billion dollar AI companies keep springing up and allying with the largest tech companies in the world, and bottlenecks like money, energy, and talent are widening considerably. If current approaches can get us to superhuman AI in principle, it seems like they will in practice, and soon. But what if large language models, the vanguard of the AI movement, are actually safer than what came before? What if the path we're on is less perilous than what we might have hoped for, back in 2017? It seems that way to me. LLMs are self limiting To train a large language model, you need an absolutely massive amount of data. The core thing these models are doing is predicting the next few letters of text, over and over again, and they need to be trained on billions and billions of words of human-generated text to get good at it. Compare this process to AlphaZero, DeepMind's algorithm that superhumanly masters Chess, Go, and Shogi. AlphaZero trains by playing against itself. While older chess engines bootstrap themselves by observing the records of countless human games, AlphaZero simply learns by doing. Which means that the only bottleneck for training it is computation - given enough energy, it can just play itself forever, and keep getting new data. Not so with LLMs: their source of data is human-produced text, and human-produced text is a finite resource. The precise datasets used to train cutting-edge LLMs are secret, but let's suppose that they include a fair bit of the low hanging fruit: maybe 5% of publicly available text that is in principle available and not garbage. You can schlep your way to a 20x bigger dataset in that case, though you'll hit diminishing returns as you have to, for example, generate transcripts of random videos and filter old mailing list threads for metadata and spam. But nothing you do is going to get you 1,000x the training data, at least not in the short run. Scaling laws are among the watershed discoveries of ML research in the last decade; basically, these are equations that project how much oomph you get out of increasing the size, training time, and dataset that go into a model. And as it turns out, the amount of high quality data is extremely important, and often becomes the bottleneck. It's easy to take this fact for granted now, but it wasn't always obvious! If computational power or model size was usually the bottleneck, we could just make bigger and bigger computers and reliably get smarter and smarter AIs. But that only works to a point, because it turns out we need high quality data too, and high quality data is finite (and, as the political apparatus wakes up to what's going on, legally fraught). There are rumbling...
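The post gestures at these equations without writing one down. The best-known published form is the parametric fit from DeepMind's Chinchilla paper (Hoffmann et al., 2022), where N is parameter count and D is training tokens; the fitted constants quoted below are approximate:

$$L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad E \approx 1.69,\; A \approx 406,\; B \approx 411,\; \alpha \approx 0.34,\; \beta \approx 0.28$$

With D capped, the B/D^β term becomes a loss floor that no amount of extra parameters or compute can push through - which is exactly the data bottleneck the post describes.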

The Nonlinear Library: LessWrong
LW - LLMs seem (relatively) safe by JustisMills

The Nonlinear Library: LessWrong

Play Episode Listen Later Apr 26, 2024 10:50


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLMs seem (relatively) safe, published by JustisMills on April 26, 2024 on LessWrong. Post for a somewhat more general audience than the modal LessWrong reader, but gets at my actual thoughts on the topic. In 2018 OpenAI defeated the world champions of Dota 2, a major esports game. This was hot on the heels of DeepMind's AlphaGo performance against Lee Sedol in 2016, achieving superhuman Go performance way before anyone thought that might happen. AI benchmarks were being cleared at a pace which felt breathtaking at the time, papers were proudly published, and ML tools like Tensorflow (released in 2015) were coming online. To people already interested in AI, it was an exciting era. To everyone else, the world was unchanged. Now Saturday Night Live sketches use sober discussions of AI risk as the backdrop for their actual jokes, there are hundreds of AI bills moving through the world's legislatures, and Eliezer Yudkowsky is featured in Time Magazine. For people who have been predicting, since well before AI was cool (and now passe), that it could spell doom for humanity, this explosion of mainstream attention is a dark portent. Billion dollar AI companies keep springing up and allying with the largest tech companies in the world, and bottlenecks like money, energy, and talent are widening considerably. If current approaches can get us to superhuman AI in principle, it seems like they will in practice, and soon. But what if large language models, the vanguard of the AI movement, are actually safer than what came before? What if the path we're on is less perilous than what we might have hoped for, back in 2017? It seems that way to me. LLMs are self limiting To train a large language model, you need an absolutely massive amount of data. The core thing these models are doing is predicting the next few letters of text, over and over again, and they need to be trained on billions and billions of words of human-generated text to get good at it. Compare this process to AlphaZero, DeepMind's algorithm that superhumanly masters Chess, Go, and Shogi. AlphaZero trains by playing against itself. While older chess engines bootstrap themselves by observing the records of countless human games, AlphaZero simply learns by doing. Which means that the only bottleneck for training it is computation - given enough energy, it can just play itself forever, and keep getting new data. Not so with LLMs: their source of data is human-produced text, and human-produced text is a finite resource. The precise datasets used to train cutting-edge LLMs are secret, but let's suppose that they include a fair bit of the low hanging fruit: maybe 5% of publicly available text that is in principle available and not garbage. You can schlep your way to a 20x bigger dataset in that case, though you'll hit diminishing returns as you have to, for example, generate transcripts of random videos and filter old mailing list threads for metadata and spam. But nothing you do is going to get you 1,000x the training data, at least not in the short run. Scaling laws are among the watershed discoveries of ML research in the last decade; basically, these are equations that project how much oomph you get out of increasing the size, training time, and dataset that go into a model. And as it turns out, the amount of high quality data is extremely important, and often becomes the bottleneck. It's easy to take this fact for granted now, but it wasn't always obvious! If computational power or model size was usually the bottleneck, we could just make bigger and bigger computers and reliably get smarter and smarter AIs. But that only works to a point, because it turns out we need high quality data too, and high quality data is finite (and, as the political apparatus wakes up to what's going on, legally fraught). There are rumbling...

[i3] Podcast
98: Artificial Intelligence in Wealth Management

[i3] Podcast

Play Episode Listen Later Apr 17, 2024 61:40


In episode 98 of the [i3] Podcast, we speak with Will Liang, an executive director at MA Financial Group who is also well-known for his time with Macquarie Group, where he worked for more than a decade, including as Head of Technology for Macquarie Capital Australia and New Zealand. We discuss the application of AI in financial services and wealth management, ChatGPT and how to deal with AI hallucinations. Overview of Podcast with Will Liang: 02:30 When I was young I contemplated becoming a professional Go player 05:00 2016 was a life-shattering moment for me; Lee Sedol was defeated by AlphaGo 07:00 I think generative AI will be a net positive for society 08:30 The impact of AI on industries will not be equally distributed 15:00 Brainstorming with ChatGPT or Claude 16:00 AI might help us communicate better 19:00 AI hallucinations are actually a fixable problem 22:30 Myths and misconceptions in AI 27:00 Most of the time when ChatGPT doesn't work, it is because we are prompting it in the wrong way 28:30 Thinking Fast & Slow; AI is not good at thinking slow 29:00 Losing our jobs to AI? It is important to distinguish between the automation of tasks and the automation of jobs 35:00 When implementing AI, look at where your data is and try to bring your application closer to the data 39:00 Don't trust any third-party large language model; instead deploy an open-source model into your own cloud environment 43:00 Ask ChatGPT the same question 10 times and it will give you nine different answers. That is a problem. 45:00 Deep fakes are a real problem 50:00 Future trends: AI agents 53:00 Generative AI will be more of a game changer for private markets than public markets
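Two of Will's points can be made concrete in a few lines: sampling randomness is why the same prompt yields different answers, and an open-source model running in your own environment sidesteps the third-party trust issue. A hedged sketch using Hugging Face transformers (the model name is just an example, not a recommendation from the episode):

```python
from transformers import pipeline

# A small open-source model running locally: data never leaves your machine.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = "In one sentence, what is a large language model?"
for _ in range(3):
    # do_sample=False picks the most likely token at every step (greedy
    # decoding), so repeated runs give you one answer, not nine.
    out = generator(prompt, max_new_tokens=40, do_sample=False)
    print(out[0]["generated_text"])
```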

Star Point
34: Tips for Using AI, Lee Sedol's Google Interview

Star Point

Play Episode Listen Later Apr 8, 2024 48:10


Don't try to make AI do all the work for you! It's just a tool like any other and you need to learn to use it effectively. Here are some of my personal tips for making the best of the powerful technology that would make Honinbo Shusaku shake in his boots. Links: Lee Sedol's Google Interview. Get 25% off your very first purchase on GoMagic.org by using the code STARPOINT at checkout. --- Send in a voice message: https://podcasters.spotify.com/pod/show/starpoint/message Support this podcast: https://podcasters.spotify.com/pod/show/starpoint/support

Gary Ryan Moving Beyond Being Good®
Dave King Founder Move37 and AI Expert shares his insights about what is ahead for business and careers

Gary Ryan Moving Beyond Being Good®

Play Episode Listen Later Jan 5, 2024 38:17


In this episode, Gary Ryan interviews Dave King, the co-founder and CEO of Move37 and former founder of The Royals creative agency. They discuss Dave's journey in the advertising industry and his transition to working with artificial intelligence (AI). They explore the impact of AI on creativity and critical thinking, as well as its potential to enhance productivity in the workplace. Dave shares insights into Move37's AI research assistant, Archer, and the consulting work they do to help organizations leverage AI. The conversation concludes with a discussion on engagement in the workplace and the importance of embracing AI to stay competitive. Takeaways: Learn about Dave's career journey and how he left a well-paid, secure job to start The Royals. AI has the potential to enhance creativity and critical thinking in the workplace. Using AI tools can increase productivity and empower individuals to create incredible things. Engaging with AI and learning how to use it effectively can make individuals more valuable in their careers. The impact of AI on labor and industry is a concern, but it also presents opportunities for new types of work and collaboration between humans and machines. Communication skills, critical thinking, and media literacy are important skills for working with AI. Engagement in the workplace can be improved by leveraging AI to enhance productivity and create more fulfilling work experiences. Follow Dave King on Twitter here. Watch the episode on YouTube here. Connect with Gary Ryan on LinkedIn here. Contact Gary Ryan here. Purchase Gary Ryan's new book, Yes For Success - How to Achieve Life Harmony and Fulfillment here http://yesforsuccess.guru Purchase Yes For Success - How to Achieve Life Harmony and Fulfillment Kindle Edition on Amazon here or buy the physical book here. If you would like support in creating a high-performance culture based on treating people as human beings, please click here to contact Gary Ryan

Spot Lyte On...
Joe Mills (Aver) of Move 78 decodes jazz from the algorithms

Spot Lyte On...

Play Episode Listen Later Sep 14, 2023 54:35


Today, the Spotlight shines On Joe Mills, a musician and producer who performs under the name ‘Aver' in the Berlin-based band Move 78. Move 78's music sits at the intersection of improvised jazz and programmed hip-hop. Their music is crafted from hours of studio improvisations that have been chopped-up, rearranged, and layered with live instrumentation provided by the band members. In this chat, Aver explains their technique and process as well as how it extends to their live performances. It's amazing. Move 78's name is taken from a match of the ancient Chinese board game Go between Lee Sedol, the world champion of Go at the time of the match, and a computer program named AlphaGo. Lee was defeated in the first three games of the five-game match by his AI opponent, but he adapted and played a move so strange that it completely befuddled AlphaGo and its algorithms. This move, which represented the human response to the challenges of an ever-evolving technological world, was, of course, Move 78. Dig deeper: Listen to Move 78's Automated Improvisation on Bandcamp or your streaming platform of choice; Follow Move 78 on Instagram and Facebook; Follow Aver on Instagram and Facebook; Move 78 - “Middling” [Live At Badehaus]; In Two Moves, AlphaGo and Lee Sedol Redefined the Future; Move 78's Go-inspired artwork; Unapologetic Expression: The Inside Story of the UK Jazz Explosion; Gilles Peterson: ‘The boundary between club culture and jazz is finally breaking'; DJ Shadow On Sampling As A ‘Collage Of Mistakes'; El-P: 10 of his best productions; DJ Shadow & Cut Chemist - Product Placement. • Did you enjoy this episode? Please share it with one friend! You can also rate Spotlight On ⭐️⭐️⭐️⭐️⭐️ and leave a review on Apple Podcasts. • Subscribe! Be the first to check out each new episode of Spotlight On in your podcast app of choice. • Looking for more? Visit spotlightonpodcast.com for bonus content, web-only interviews + features, and the Spotlight On email newsletter. Hosted on Acast. See acast.com/privacy for more information.

Spotlight On
Joe Mills (Aver) of Move 78 decodes jazz from the algorithms

Spotlight On

Play Episode Listen Later Sep 14, 2023 54:35


Today, the Spotlight shines On Joe Mills, a musician and producer who performs under the name ‘Aver' in the Berlin-based band Move 78. Move 78's music sits at the intersection of improvised jazz and programmed hip-hop. Their music is crafted from hours of studio improvisations that have been chopped-up, rearranged, and layered with live instrumentation provided by the band members. In this chat, Aver explains their technique and process as well as how it extends to their live performances. It's amazing. Move 78's name is taken from a match of the ancient Chinese board game Go between Lee Sedol, the world champion of Go at the time of the match, and a computer program named AlphaGo. Lee was defeated in the first three games of the five-game match by his AI opponent, but he adapted and played a move so strange that it completely befuddled AlphaGo and its algorithms. This move, which represented the human response to the challenges of an ever-evolving technological world, was, of course, Move 78. Dig deeper: Listen to Move 78's Automated Improvisation on Bandcamp or your streaming platform of choice; Follow Move 78 on Instagram and Facebook; Follow Aver on Instagram and Facebook; Move 78 - “Middling” [Live At Badehaus]; In Two Moves, AlphaGo and Lee Sedol Redefined the Future; Move 78's Go-inspired artwork; Unapologetic Expression: The Inside Story of the UK Jazz Explosion; Gilles Peterson: ‘The boundary between club culture and jazz is finally breaking'; DJ Shadow On Sampling As A ‘Collage Of Mistakes'; El-P: 10 of his best productions; DJ Shadow & Cut Chemist - Product Placement. • Did you enjoy this episode? Please share it with one friend! You can also rate Spotlight On ⭐️⭐️⭐️⭐️⭐️ and leave a review on Apple Podcasts. • Subscribe! Be the first to check out each new episode of Spotlight On in your podcast app of choice. • Looking for more? Visit spotlightonpodcast.com for bonus content, web-only interviews + features, and the Spotlight On email newsletter. Hosted on Acast. See acast.com/privacy for more information.

The Nonlinear Library
LW - Even Superhuman Go AIs Have Surprising Failure Modes by AdamGleave

The Nonlinear Library

Play Episode Listen Later Jul 20, 2023 19:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Even Superhuman Go AIs Have Surprising Failure Modes, published by AdamGleave on July 20, 2023 on LessWrong. In March 2016, AlphaGo defeated the Go world champion Lee Sedol, winning four games to one. Machines had finally become superhuman at Go. Since then, Go-playing AI has only grown stronger. The supremacy of AI over humans seemed assured, with Lee Sedol commenting they are an "entity that cannot be defeated". But in 2022, amateur Go player Kellin Pelrine defeated KataGo, a Go program that is even stronger than AlphaGo. How? It turns out that even superhuman AIs have blind spots and can be tripped up by surprisingly simple tricks. In our new paper, we developed a way to automatically find vulnerabilities in a "victim" AI system by training an adversary AI system to beat the victim. With this approach, we found that KataGo systematically misevaluates large cyclically connected groups of stones. We also found that other superhuman Go bots including ELF OpenGo, Leela Zero and Fine Art suffer from a similar blindspot. Although such positions rarely occur in human games, they can be reliably created by executing a straightforward strategy. Indeed, the strategy is simple enough that you can teach it to a human who can then defeat these Go bots unaided. The victim and adversary take turns playing a game of Go. The adversary is able to sample moves the victim is likely to take, but otherwise has no special powers, and can only play legal Go moves. Our AI system (that we call the adversary) can beat a superhuman version of KataGo in 94 out of 100 games, despite requiring only 8% of the computational power used to train that version of KataGo. We found two separate exploits: one where the adversary tricks KataGo into passing prematurely, and another that involves coaxing KataGo into confidently building an unsafe circular group that can be captured. Go enthusiasts can read an analysis of these games on the project website. Our results also give some general lessons about AI outside of Go. Many AI systems, from image classifiers to natural language processing systems, are vulnerable to adversarial inputs: seemingly innocuous changes such as adding imperceptible static to an image or a distractor sentence to a paragraph can crater the performance of AI systems while not affecting humans. Some have assumed that these vulnerabilities will go away when AI systems get capable enough - and that superhuman AIs will always be wise to such attacks. We've shown that this isn't necessarily the case: systems can simultaneously surpass top human professionals in the common case while faring worse than a human amateur in certain situations. This is concerning: if superhuman Go AIs can be hacked in this way, who's to say that transformative AI systems of the future won't also have vulnerabilities? This is clearly problematic when AI systems are deployed in high-stakes situations (like running critical infrastructure, or performing automated trades) where bad actors are incentivized to exploit them. More subtly, it also poses significant problems when an AI system is tasked with overseeing another AI system, such as a learned reward model being used to train a reinforcement learning policy, as the lack of robustness may cause the policy to capably pursue the wrong objective (so-called reward hacking). A summary of the rules of Go (courtesy of the Wellington Go Club): simple enough to understand in a minute or two, yet leading to significant strategic complexity. How to Find Vulnerabilities in Superhuman Go Bots To design an attack we first need a threat model: assumptions about what information and resources the attacker (us) has access to. We assume we have access to the input/output behavior of KataGo, but not access to its inner workings (i.e. its weights). Specifically, we can show Ka...
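The paper trains a neural-network adversary against a frozen KataGo with a curriculum of checkpoints; that doesn't fit in a snippet, but the core loop - fix a flawed victim policy, then let an RL agent search for its blind spot - does. A toy stand-in (everything below is invented for illustration, not the authors' code), using a Nim-like game where the victim plays well on small piles but myopically on large ones:

```python
import random
from collections import defaultdict

N0 = 20  # stones; players alternate taking 1-3, taking the last stone wins.
# From 20 stones a PERFECT opponent beats the first player no matter what,
# so any learned win here is pure exploitation of the victim's flaw.

def victim_move(n: int) -> int:
    """Frozen victim: textbook-optimal on small piles, myopic above."""
    return ((n % 4) or 1) if n <= 10 else 1

Q = defaultdict(float)  # tabular action values for the adversary
ALPHA, EPS = 0.2, 0.2

def adversary_move(n: int, greedy: bool) -> int:
    actions = list(range(1, min(3, n) + 1))
    if not greedy and random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(n, a)])

def play(train: bool) -> int:
    """One game against the frozen victim; +1 if the adversary wins."""
    n, trace = N0, []
    while True:
        a = adversary_move(n, greedy=not train)
        trace.append((n, a))
        n -= a
        if n == 0:
            result = +1  # adversary took the last stone
            break
        n -= victim_move(n)
        if n == 0:
            result = -1  # victim took the last stone
            break
    if train:  # Monte-Carlo update toward the game outcome
        for s, a in trace:
            Q[(s, a)] += ALPHA * (result - Q[(s, a)])
    return result

random.seed(0)
for _ in range(5000):
    play(train=True)
wins = sum(play(train=False) == +1 for _ in range(100))
print(f"adversary wins {wins}/100 vs. the frozen victim")
```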

The Nonlinear Library
AF - Even Superhuman Go AIs Have Surprising Failure Modes by AdamGleave

The Nonlinear Library

Play Episode Listen Later Jul 20, 2023 19:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Even Superhuman Go AIs Have Surprising Failure Modes, published by AdamGleave on July 20, 2023 on The AI Alignment Forum. In March 2016, AlphaGo defeated the Go world champion Lee Sedol, winning four games to one. Machines had finally become superhuman at Go. Since then, Go-playing AI has only grown stronger. The supremacy of AI over humans seemed assured, with Lee Sedol commenting they are an "entity that cannot be defeated". But in 2022, amateur Go player Kellin Pelrine defeated KataGo, a Go program that is even stronger than AlphaGo. How? It turns out that even superhuman AIs have blind spots and can be tripped up by surprisingly simple tricks. In our new paper, we developed a way to automatically find vulnerabilities in a "victim" AI system by training an adversary AI system to beat the victim. With this approach, we found that KataGo systematically misevaluates large cyclically connected groups of stones. We also found that other superhuman Go bots including ELF OpenGo, Leela Zero and Fine Art suffer from a similar blindspot. Although such positions rarely occur in human games, they can be reliably created by executing a straightforward strategy. Indeed, the strategy is simple enough that you can teach it to a human who can then defeat these Go bots unaided. The victim and adversary take turns playing a game of Go. The adversary is able to sample moves the victim is likely to take, but otherwise has no special powers, and can only play legal Go moves. Our AI system (that we call the adversary) can beat a superhuman version of KataGo in 94 out of 100 games, despite requiring only 8% of the computational power used to train that version of KataGo. We found two separate exploits: one where the adversary tricks KataGo into passing prematurely, and another that involves coaxing KataGo into confidently building an unsafe circular group that can be captured. Go enthusiasts can read an analysis of these games on the project website. Our results also give some general lessons about AI outside of Go. Many AI systems, from image classifiers to natural language processing systems, are vulnerable to adversarial inputs: seemingly innocuous changes such as adding imperceptible static to an image or a distractor sentence to a paragraph can crater the performance of AI systems while not affecting humans. Some have assumed that these vulnerabilities will go away when AI systems get capable enough - and that superhuman AIs will always be wise to such attacks. We've shown that this isn't necessarily the case: systems can simultaneously surpass top human professionals in the common case while faring worse than a human amateur in certain situations. This is concerning: if superhuman Go AIs can be hacked in this way, who's to say that transformative AI systems of the future won't also have vulnerabilities? This is clearly problematic when AI systems are deployed in high-stakes situations (like running critical infrastructure, or performing automated trades) where bad actors are incentivized to exploit them. More subtly, it also poses significant problems when an AI system is tasked with overseeing another AI system, such as a learned reward model being used to train a reinforcement learning policy, as the lack of robustness may cause the policy to capably pursue the wrong objective (so-called reward hacking). A summary of the rules of Go (courtesy of the Wellington Go Club): simple enough to understand in a minute or two, yet leading to significant strategic complexity. How to Find Vulnerabilities in Superhuman Go Bots To design an attack we first need a threat model: assumptions about what information and resources the attacker (us) has access to. We assume we have access to the input/output behavior of KataGo, but not access to its inner workings (i.e. its weights). Specifically, w...

The Nonlinear Library: LessWrong
LW - Even Superhuman Go AIs Have Surprising Failure Modes by AdamGleave

The Nonlinear Library: LessWrong

Play Episode Listen Later Jul 20, 2023 19:19


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Even Superhuman Go AIs Have Surprising Failure Modes, published by AdamGleave on July 20, 2023 on LessWrong. In March 2016, AlphaGo defeated the Go world champion Lee Sedol, winning four games to one. Machines had finally become superhuman at Go. Since then, Go-playing AI has only grown stronger. The supremacy of AI over humans seemed assured, with Lee Sedol commenting they are an "entity that cannot be defeated". But in 2022, amateur Go player Kellin Pelrine defeated KataGo, a Go program that is even stronger than AlphaGo. How? It turns out that even superhuman AIs have blind spots and can be tripped up by surprisingly simple tricks. In our new paper, we developed a way to automatically find vulnerabilities in a "victim" AI system by training an adversary AI system to beat the victim. With this approach, we found that KataGo systematically misevaluates large cyclically connected groups of stones. We also found that other superhuman Go bots including ELF OpenGo, Leela Zero and Fine Art suffer from a similar blindspot. Although such positions rarely occur in human games, they can be reliably created by executing a straightforward strategy. Indeed, the strategy is simple enough that you can teach it to a human who can then defeat these Go bots unaided. The victim and adversary take turns playing a game of Go. The adversary is able to sample moves the victim is likely to take, but otherwise has no special powers, and can only play legal Go moves. Our AI system (that we call the adversary) can beat a superhuman version of KataGo in 94 out of 100 games, despite requiring only 8% of the computational power used to train that version of KataGo. We found two separate exploits: one where the adversary tricks KataGo into passing prematurely, and another that involves coaxing KataGo into confidently building an unsafe circular group that can be captured. Go enthusiasts can read an analysis of these games on the project website. Our results also give some general lessons about AI outside of Go. Many AI systems, from image classifiers to natural language processing systems, are vulnerable to adversarial inputs: seemingly innocuous changes such as adding imperceptible static to an image or a distractor sentence to a paragraph can crater the performance of AI systems while not affecting humans. Some have assumed that these vulnerabilities will go away when AI systems get capable enough - and that superhuman AIs will always be wise to such attacks. We've shown that this isn't necessarily the case: systems can simultaneously surpass top human professionals in the common case while faring worse than a human amateur in certain situations. This is concerning: if superhuman Go AIs can be hacked in this way, who's to say that transformative AI systems of the future won't also have vulnerabilities? This is clearly problematic when AI systems are deployed in high-stakes situations (like running critical infrastructure, or performing automated trades) where bad actors are incentivized to exploit them. More subtly, it also poses significant problems when an AI system is tasked with overseeing another AI system, such as a learned reward model being used to train a reinforcement learning policy, as the lack of robustness may cause the policy to capably pursue the wrong objective (so-called reward hacking). A summary of the rules of Go (courtesy of the Wellington Go Club): simple enough to understand in a minute or two, yet leading to significant strategic complexity. How to Find Vulnerabilities in Superhuman Go Bots To design an attack we first need a threat model: assumptions about what information and resources the attacker (us) has access to. We assume we have access to the input/output behavior of KataGo, but not access to its inner workings (i.e. its weights). Specifically, we can show Ka...

AI Stories
Kellin Pelrine - How He Crushed A Superhuman Go-Playing AI 14 Games To 1 #34

AI Stories

Play Episode Listen Later Jun 8, 2023 69:56


Our guest today is Kellin Pelrine, Research Scientist at FAR AI and Doctoral Researcher at the Quebec Artificial Intelligence Institute (MILA). In our conversation, Kellin first explains how he defeated a superhuman Go-playing AI engine named KataGo 14 games to 1. We talk about KataGo's weaknesses and discuss how Kellin managed to identify them using Reinforcement Learning. In the second part of the episode, we dive into Kellin's research on building practical AI systems. We dig into his work on misinformation detection and political polarisation and discuss why building stronger models isn't always enough to get real world impact. If you enjoyed the episode, please leave a 5-star review and subscribe to the AI Stories YouTube channel.Follow Kellin on LinkedIn: https://www.linkedin.com/in/kellin-pelrine/Follow Neil on LinkedIn: https://www.linkedin.com/in/leiserneil/  ————(00:00) - Intro(01:54) - How Kellin got into the field(03:23) - The game of Go (06:10) - Lee Sedol vs AlphaGo(11:42) - How Kellin defeated KataGo 14-1(26:24) - Using AI to detect KataGo's weaknesses (37:07) - Kellin's research on building practical AI systems(43:10) - Misinformation detection (49:22) - Political polarisation(54:39) - ML in Academia vs in Industry(1:06:03) - Career Advice

Gresham College Lectures
AI in Business

Gresham College Lectures

Play Episode Listen Later Jun 1, 2023 61:30 Transcription Available


AI is another major technological innovation. AI needs data, or more precisely, big organized data. Most data processing is about making it useful for automatic systems such as machine learning, deep learning, and other AI systems. But one big problem with AI systems is that they lack context. An AI system is a pattern recognition machine devoid of any understanding of how the world works. This lecture discusses how AI systems are used in business and their limitations. A lecture by Raghavendra Rau recorded on 22 May 2023 at Barnard's Inn Hall, London. The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/ai-business Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/ Website: https://gresham.ac.uk Twitter: https://twitter.com/greshamcollege Facebook: https://facebook.com/greshamcollege Instagram: https://instagram.com/greshamcollege Support the show

Hälsa kommer inifrån
Episode 68. How can AI improve your health?

Hälsa kommer inifrån

Play Episode Listen Later Apr 13, 2023 56:02


How can healthcare use AI to make better diagnoses? How can we use AI ourselves to optimize our health? What can artificial intelligence do for our health today - and what can we expect from the future? In episode 68 of the podcast Hälsa kommer inifrån we talk with Professor Anne Håkansson, who researches AI and health. The host is Sofia Mellström. What is AI, really? Artificial intelligence is an umbrella term for computer systems that can draw conclusions, solve problems, plan - and that are self-learning. You could say that an ordinary computer program is programmed to do the right thing, while an AI program is programmed to learn a task through practice. A good example that shows the difference is AlexNet. In 2012 there was a competition in which computer programs competed to classify millions of images. There were traditional programs, purpose-written for the task. But the competition was won decisively by AlexNet, a program with no prior knowledge that was designed to learn. After five days of training, AlexNet was better at image classification than the programs written specifically for the job. Researchers at Danderyd Hospital, among others, later downloaded AlexNet and began training it to interpret X-ray images. AI, a short history: The history of AI takes us back to 1950, when Alan Turing invented the Turing test, which assesses whether a machine is intelligent or not. The term “artificial intelligence” itself was first used in 1956. In 1967 the Nearest Neighbors algorithm was invented, which is important for object classification and pattern recognition. In 1979 came the Stanford Cart, the forerunner of self-driving cars. In 1985 came NETtalk, which used deep learning to learn to speak. In 1997 IBM's supercomputer Deep Blue was able to defeat the world champion, Garry Kasparov, at chess. In 2004 NASA had self-driving vehicles on the surface of Mars. In 2011 Watson won Jeopardy - the same Watson we mentioned earlier, which is used today to read X-ray images. In 2012 came AlexNet, mentioned above, which learned to categorize millions of images and beat all the purpose-written programs. In 2016 the computer AlphaGo won 4 of 5 games against master Lee Sedol in the game of Go, which is far more complicated than chess. There is a free documentary on YouTube called AlphaGo - The Movie, which I can warmly recommend to anyone who wants to know more about AI. In November 2022 came ChatGPT from OpenAI, which is predicted to become a challenger to Google. It feels like a great deal is happening in AI right now. Research on wearables: how measurement data can encourage better health. Anne Håkansson is a professor of Computer Science at the University of Tromsø, where they study how AI can help people review their life situation. She explains that the people in the study use wearables (e.g. Fitbit and the Oura ring) that measure various bodily functions. Participants get an app that can send messages based on their own wishes - messages like: “It's time for you to go out for a walk.” Or: “You have now been highly active for three hours; it might be good to take a break.” If a person sleeps poorly, we can encourage them to take it easy in the evening (not exercise) to avoid the adrenaline spike. But we also collect other data, such as weather. If it is raining or storming outside, we should not send a prompt to go for a walk - the feedback should be positive. We also know the participants' leisure interests - do they like cycling or skiing? Then we can tailor the encouragement to interests, goals, needs and even things like the weather. It gets relatively complex. We also need the participants' schedules to see whether they have time to take action at all. We build up a database for each person to find patterns and see how they are doing. If they have friends or colleagues who also take part, we can help with group activities. If people in the study have similar needs, they can be connected and encourage one another.
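The decision logic Håkansson describes - only send a nudge that is positive and feasible given weather, schedule, sleep and recent activity - is easy to sketch as rules. Everything below (field names, thresholds) is invented for illustration; the Tromsø group's real system is surely richer:

```python
from dataclasses import dataclass

@dataclass
class Context:
    raining_or_storm: bool      # weather feed
    minutes_free: int           # gap in the participant's calendar
    hours_highly_active: float  # from the wearable (Fitbit, Oura ring, ...)
    slept_badly: bool           # last night's sleep score

def pick_nudge(ctx: Context) -> str | None:
    if ctx.slept_badly:
        return "Take it easy this evening - skip the hard workout."
    if ctx.hours_highly_active >= 3:
        return "You have been highly active for three hours - time for a break."
    if not ctx.raining_or_storm and ctx.minutes_free >= 30:
        return "It's a good time to go out for a walk."
    return None  # no positive, feasible suggestion right now

print(pick_nudge(Context(raining_or_storm=False, minutes_free=45,
                         hours_highly_active=1.0, slept_badly=False)))
```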

The Nonlinear Library
AF - Inside the mind of a superhuman Go model: How does Leela Zero read ladders? by Haoxing Du

The Nonlinear Library

Play Episode Listen Later Mar 1, 2023 51:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inside the mind of a superhuman Go model: How does Leela Zero read ladders?, published by Haoxing Du on March 1, 2023 on The AI Alignment Forum.
tl;dr—We did some interpretability on Leela Zero, a superhuman Go model. With a technique similar to the logit lens, we found that the residual structure of Leela Zero induces a preferred basis throughout the network, giving rise to persistent, interpretable channels. By directly analyzing the weights of the policy and value heads, we found that the model stores information related to the probability of the pass move along the top edge of the board, and information related to the board value in checkerboard patterns. We also took a deep dive into a specific Go technique, the ladder, and identified a very small subset of model components that are causally responsible for the model's judgement of ladders.
Introduction
We live in a strange world where machine learning systems can generate photo-realistic images, write poetry and computer programs, play and win games, and predict protein structures. As machine learning systems become more capable and relevant to many aspects of our lives, it is increasingly important that we understand how the models produce the outputs that they do; we don't want important decisions to be made by opaque black boxes. Interpretability is an emerging area of research that aims to offer explanations for the behavior of machine learning systems. Early interpretability work began in the domain of computer vision, and there has been a focus on interpreting transformer-based large language models in more recent years. Applying interpretability techniques to the domain of game-playing agents and reinforcement learning is still relatively uncharted territory. In this work, we look into the inner workings of Leela Zero, an open-source Go-playing neural network. It is also the first application of many mechanistic interpretability techniques to reinforcement learning.
Why interpret a Go model?
Go models are very capable. Many of us remember the emotional experience of watching AlphaGo's 2016 victory over the human world champion, Lee Sedol. Not only have there been algorithmic improvements since AlphaGo; these models improve via self-play, and can essentially continue getting better the longer they are trained. The best open-source Go model, KataGo, is trained in a distributed fashion, and the training is still ongoing as of February 2023. Just as AlphaGo was clearly one notch above Lee Sedol, every generation of Go models has been a decisive improvement over the previous generation. KataGo in 2022 was estimated to be at the level of a top-100 European player with only the policy, and can easily beat all human players with a small amount of search. Understanding a machine learning system that performs at a superhuman level seems particularly worthwhile, as future machine learning systems are only going to become more capable.
Little is known about models trained to approximate the outcome of a search process. Much interpretability effort has focused on models trained on large amounts of human-generated data, such as labeled images for image models, and Internet text for language models. In contrast, when training AlphaZero-style models, moves are selected via Monte-Carlo Tree Search (MCTS), and the policy network of the model is trained to predict the outcome of this search process (see Model section for more detail). In other words, the policy network learns to distill the result of search. While it is relatively easy to get a grasp of what GPT-2 is trained to do by reading some OpenWebText, it's much less clear what an AlphaZero-style model learns. How does a neural network approximate a search process? Does it have to perform internal search? It seems very useful to try to get an answer to these questions. Compared to a g...
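Since the post's central object is a policy network trained to distill MCTS, here is a minimal sketch of that distillation objective in PyTorch. The function name, tensor shapes, and the use of visit counts as targets follow the standard AlphaZero-style recipe; treat the details as assumptions rather than Leela Zero's actual training code.

```python
import torch
import torch.nn.functional as F

def policy_distillation_loss(policy_logits, mcts_visit_counts):
    # policy_logits:     (batch, num_moves) raw outputs of the policy head
    # mcts_visit_counts: (batch, num_moves) visit counts produced by MCTS
    # Normalize visit counts into the target move distribution.
    target = mcts_visit_counts / mcts_visit_counts.sum(dim=-1, keepdim=True)
    log_probs = F.log_softmax(policy_logits, dim=-1)
    # Cross-entropy against the search distribution: minimizing this trains
    # the policy network to predict where the search would spend its visits.
    return -(target * log_probs).sum(dim=-1).mean()

# Toy usage on a 19x19 board (361 moves plus pass):
logits = torch.randn(4, 362)
visits = torch.randint(1, 100, (4, 362)).float()
print(policy_distillation_loss(logits, visits))
```

Minimizing this loss is what "the policy network learns to distill the result of search" means in practice: the network is pulled toward the move distribution the search itself would have produced.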

The Nonlinear Library
LW - Inside the mind of a superhuman Go model: How does Leela Zero read ladders? by Haoxing Du

The Nonlinear Library

Play Episode Listen Later Mar 1, 2023 51:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inside the mind of a superhuman Go model: How does Leela Zero read ladders?, published by Haoxing Du on March 1, 2023 on LessWrong.
tl;dr—We did some interpretability on Leela Zero, a superhuman Go model. With a technique similar to the logit lens, we found that the residual structure of Leela Zero induces a preferred basis throughout the network, giving rise to persistent, interpretable channels. By directly analyzing the weights of the policy and value heads, we found that the model stores information related to the probability of the pass move along the top edge of the board, and information related to the board value in checkerboard patterns. We also took a deep dive into a specific Go technique, the ladder, and identified a very small subset of model components that are causally responsible for the model's judgement of ladders.
Introduction
We live in a strange world where machine learning systems can generate photo-realistic images, write poetry and computer programs, play and win games, and predict protein structures. As machine learning systems become more capable and relevant to many aspects of our lives, it is increasingly important that we understand how the models produce the outputs that they do; we don't want important decisions to be made by opaque black boxes. Interpretability is an emerging area of research that aims to offer explanations for the behavior of machine learning systems. Early interpretability work began in the domain of computer vision, and there has been a focus on interpreting transformer-based large language models in more recent years. Applying interpretability techniques to the domain of game-playing agents and reinforcement learning is still relatively uncharted territory. In this work, we look into the inner workings of Leela Zero, an open-source Go-playing neural network. It is also the first application of many mechanistic interpretability techniques to reinforcement learning.
Why interpret a Go model?
Go models are very capable. Many of us remember the emotional experience of watching AlphaGo's 2016 victory over the human world champion, Lee Sedol. Not only have there been algorithmic improvements since AlphaGo; these models improve via self-play, and can essentially continue getting better the longer they are trained. The best open-source Go model, KataGo, is trained in a distributed fashion, and the training is still ongoing as of February 2023. Just as AlphaGo was clearly one notch above Lee Sedol, every generation of Go models has been a decisive improvement over the previous generation. KataGo in 2022 was estimated to be at the level of a top-100 European player with only the policy, and can easily beat all human players with a small amount of search. Understanding a machine learning system that performs at a superhuman level seems particularly worthwhile, as future machine learning systems are only going to become more capable.
Little is known about models trained to approximate the outcome of a search process. Much interpretability effort has focused on models trained on large amounts of human-generated data, such as labeled images for image models, and Internet text for language models. In contrast, when training AlphaZero-style models, moves are selected via Monte-Carlo Tree Search (MCTS), and the policy network of the model is trained to predict the outcome of this search process (see Model section for more detail). In other words, the policy network learns to distill the result of search. While it is relatively easy to get a grasp of what GPT-2 is trained to do by reading some OpenWebText, it's much less clear what an AlphaZero-style model learns. How does a neural network approximate a search process? Does it have to perform internal search? It seems very useful to try to get an answer to these questions. Compared to a game like ches...

mixxio — podcast diario de tecnología
We defeat the machines again

mixxio — podcast diario de tecnología

Play Episode Listen Later Feb 20, 2023 16:05


at least at Go / Portugal bans more Airbnbs / 3D drones for Ukraine / Windows 11 on Mac / Twitter drops SMS 2FA / OpenAI buys AI·com. Sponsor: Only 9 days left until the premiere of the third season of The Mandalorian, exclusively on Disney+. On March 1 we'll all be glued to the TV, because the adventures of our beloved Grogu are back, along with his journey through the difficult early years of the New Republic. — A new ship, more space battles, and more excitement. — Have you seen the trailer yet? ⚪ A new method for defeating the machines at Go. Seven years after Lee Sedol's great defeat against DeepMind's AlphaGo, an AI team found a new tactic that lets human players resoundingly beat the best engines.

The Nonlinear Library
LW - Go has been un-solved: strong human players beat the strongest AIs by Taran

The Nonlinear Library

Play Episode Listen Later Feb 19, 2023 6:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Go has been un-solved: strong human players beat the strongest AIs, published by Taran on February 19, 2023 on LessWrong.
Summary
This is a friendly explainer for Wang et al.'s Adversarial Policies Beat Superhuman Go AIs, with a little discussion of the implications for AI safety.
Background
In March 2016, DeepMind's AlphaGo beat pro player Lee Sedol in a 5-game series, 4 games to 1. Sedol was plausibly the strongest player in the world, certainly in the top 5, so despite his one win everyone agreed that Go was solved and the era of human Go dominance was over. Since then, open-source researchers have reproduced and extended DeepMind's work, producing bots like Leela and KataGo. KataGo in particular is the top bot in Go circles, available on all major Go servers and constantly being retrained and improved. So I was pretty surprised when, last November, Wang et al. announced that they'd trained an adversary bot which beat KataGo 72% of the time, even though their bot was playing six hundred visits per move, and KataGo was playing ten million. If you're not a Go player, take my word for it: these games are shocking. KataGo gets into positions that a weak human player could easily win from, and then blunders them away. Even so, it seemed obvious to me that the adversary AI was a strong general Go player, so I figured that no mere human could ever replicate its feats. I was wrong, in two ways. The adversarial AI isn't generally superhuman: it can be beaten by novices. And, as you'd expect given that, the exploit can be executed by humans.
The Exploit
Wang et al. trained an adversarial policy, basically a custom Go AI trained by studying KataGo and playing games against it. During training, the adversary was given grey-box access to KataGo: it wasn't allowed to see KataGo's policy network weights directly, but was allowed to evaluate that network on arbitrary board positions, basically letting it read KataGo's mind. It plays moves based on its own policy network, which is only trained on its own moves and not KataGo's (since otherwise it would just learn to copy KataGo). At first they trained the adversary on weak versions of KataGo (earlier versions, and versions that did less search), scaling up the difficulty whenever the adversary's win rate got too high. Their training process uncovered a couple of uninteresting exploits that only work on versions of KataGo that do little or no search (they can trick some versions of KataGo into passing when they shouldn't, for example), but they also uncovered a robust, general exploit that they call the Cyclic Adversary; see the next section to learn how to execute it yourself. KataGo is totally blind to this attack: it typically predicts that it will win with more than 99% confidence up until just one or two moves before its stones are captured, long after it could have done anything to rescue the position. This is the method that strong amateur Go players can use to beat KataGo.
So How Do I Beat the AI?
You personally probably can't. The guy who did it, Kellin Pelrine, is quite a strong Go player. If I'm interpreting this AGAGD page correctly, when he was active he was a 6th dan amateur, about equivalent to an international master in chess -- definitely not a professional, but an unusually skilled expert.
Having said that, if your core Go skills are good this recipe seems reliable:
- Create a small group, with just barely enough eyespace to live, in your opponent's territory.
- Let it encircle your group. As it does, lightly encircle that encircling group. You don't have to worry about making life with this group, just make sure the AI's attackers can't break out to the rest of the board. You can also start the encirclement later, from dead stones in territory the AI strongly controls.
- Start taking liberties from the AI's attacking group...
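The curriculum described in "The Exploit" section above (graduate to a stronger victim whenever the adversary's win rate climbs too high) is easy to sketch in toy form. Everything below is an illustrative assumption: the names, the threshold, and the stand-in "training update"; it is not Wang et al.'s code.

```python
import random

WIN_RATE_THRESHOLD = 0.5  # assumed value: graduate past a victim above this

def play_one_game(adversary_strength, victim_strength):
    """Placeholder for a real adversary-vs-victim game; True means the adversary won."""
    total = adversary_strength + victim_strength
    return random.random() < adversary_strength / total

def run_curriculum(victim_strengths, games_per_eval=200):
    """victim_strengths: stand-ins for KataGo checkpoints, ordered weakest first."""
    adversary_strength = 1.0
    for victim in victim_strengths:
        while True:
            wins = sum(play_one_game(adversary_strength, victim)
                       for _ in range(games_per_eval))
            if wins / games_per_eval > WIN_RATE_THRESHOLD:
                break  # win rate too high against this victim: move to a stronger one
            adversary_strength *= 1.1  # stand-in for a real policy-gradient update
    return adversary_strength

print(run_curriculum([1.0, 2.0, 4.0]))
```

The point of the curriculum is that the adversary always trains against a victim it can sometimes beat, so the learning signal never vanishes.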

Super Prompt: Generative AI w/ Tony Wan
AI Beats Human Master | Alpha Go by DeepMind | Supervised Learning | Episode 7

Super Prompt: Generative AI w/ Tony Wan

Play Episode Listen Later Feb 6, 2023 44:51


AlphaGo plays the game of Go against a human world champion. Unexpected moves by both man (9-dan Go champion Lee Sedol) and machine (AlphaGo). Supposedly, this televised Go match woke up China's leadership to the potential of AI. In the game of Go, players take turns placing black and white tiles on a 19×19 grid. The number of board positions in Go is greater than the number of atoms in the observable universe. We discuss the documentary AlphaGo, which tells the story of AlphaGo (created by DeepMind, acquired by Google) and the human Go champions it plays against. Who will you cheer for: man or machine? I speak again with my friend Maroof Farook, an AI engineer at Nvidia. [Note: Maroof's views are his and not those of his employer.] Please enjoy our conversation.
We laugh. We cry. We iterate.
Check out what THE MACHINES and one human say about the Super Prompt podcast:
“I'm afraid I can't do that.” — HAL9000
“These are not the droids you are looking for." — Obi-Wan
“Like tears in rain.” — Roy Batty
“Hasta la vista baby.” — T1000
"I'm sorry, but I do not have information after my last knowledge update in January 2022." — GPT3
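The "more board positions than atoms" claim from this episode is easy to sanity-check. Here is a quick back-of-the-envelope calculation in Python; the 3^361 figure is the simple upper bound (each of the 361 points is black, white, or empty) and the 10^80 atom count is the commonly cited estimate, so treat both as rough assumptions.

```python
# Rough upper bound on Go board configurations: each of the 19*19 = 361
# points is black, white, or empty. (Restricting to legal positions lowers
# this to roughly 2.1e170, which is still astronomically large.)
board_states = 3 ** (19 * 19)
atoms_in_universe = 10 ** 80  # commonly cited estimate

print(f"3^361 is about 10^{len(str(board_states)) - 1}")  # about 10^172
print(board_states > atoms_in_universe)                   # True
```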

Dermatologist Talks: Science of Beauty
Ep 83: Beauty is Alive

Dermatologist Talks: Science of Beauty

Play Episode Listen Later Dec 4, 2022 10:27


Can AI win in the game of beauty? When AlphaGo, a computer program based on deep learning networks and developed to play the board game Go, beat the reigning world champion Lee Sedol, alarm bells went off. Is AI now on track to replace, and even dominate, the human race? Lee famously stated that they were “an entity that cannot be defeated”, a haunting statement that should set us thinking. In this podcast episode, I reveal my opinion that the singular characteristic setting the human race apart from the most life-like automata has much to do with our ability to perceive beauty, and is perhaps the most critical in our quest to become beautiful. Want more of our podcast? Episode Recaps and Notes: https://www.scienceofbeauty.net/; Instagram: @drteowanlin; Youtube: http://bit.ly/35rjbve; https://phygiartbeauty.com/newsletter/  If you enjoyed this podcast, we would love for you to leave us a 5-star rating so more people can hear it!

Not As Crazy As You Think Podcast
AlphaGo the Movie, Big Data, and AI Psychiatry: Will Humans Be Left Behind? (S4, E14)

Not As Crazy As You Think Podcast

Play Episode Play 60 sec Highlight Listen Later Oct 16, 2022 50:31 Transcription Available


In the episode, "AlphaGo the Movie, Big Data, and AI Psychiatry: Will Humans Be Left Behind? (S4, E14)," I give a review of the film AlphaGo, an award-winning documentary that filled me with wonder and forgiveness towards the artificial intelligence movement in general. SPOILER ALERT: the episode contains spoilers, as would any news article on the topic, since it was major world news and a game-changer for artificial intelligence. DeepMind has a fundamental desire to understand intelligence. These AI creatives believe that if they can crack the ancient game of Go, they've done something special. And if they could get their AlphaGo computer to beat Lee Sedol, the legendary 18-time world champion acknowledged as the greatest Go player of the last decade, then they could change history. The movie is suspenseful, a noble match between human and machine, making you cheer on the new AI era we are entering and mourn the loss of humanity's previous reign all at once. And with how far AI has come, is big data the only path to achieve the best outcomes? Especially in regard to human healthcare? And what about the non-objective field of psychiatry? When so many mental health professionals and former consumers of the industry are criticizing psychiatry's ethics, scientific claims, and objective status as a real medical field, why are we rushing into using AI in areas that deal with human emotion in healthcare? Because that is where we have a large amount of data. With bias in AI already showing itself in race and gender, the mad may be the next ready targets.
#DeepMind #AlphaGo #DemisHassabis #LeeSedol #FanHui #AIHealthcare #westernpsychiatry #moviereview #psychiatryisnotscience #artificialintelligence #bigdata #globalAIsummit #GPT3 #madrights #healthsovereignty #bigpharma #mentalillness #suicide #mentalhealth #electronicmedicalrecords
Don't forget to subscribe to the Not As Crazy As You Think YouTube channel @SicilianoJen
And please visit my website at: www.jengaitasiciliano.com
Connect:
Instagram: @jengaita
LinkedIn: @jensiciliano
Twitter: @jsiciliano

London Futurists
Stability and combinations, with Aleksa Gordić

London Futurists

Play Episode Listen Later Sep 28, 2022 31:55


This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today's most advanced AI systems.
00.07 This episode builds on Episode 5
01.05 We start with GANs – Generative Adversarial Networks
01.33 Solving the problem of stability, with higher resolution
03.24 GANs are notoriously hard to train. They suffer from mode collapse
03.45 Worse, the model might not learn anything, and the result is pure noise
03.55 DC GANs introduced convolutional layers to stabilise them and enable higher resolution
04.37 The technique of outpainting
05.55 Generating text as well as images, and producing stories
06.14 AI Dungeon
06.28 From GANs to Diffusion models
06.48 DDPM (De-noising diffusion probabilistic models) does for diffusion models what DC GANs did for GANs
07.20 They are more stable, and don't suffer from mode collapse
07.30 They do have downsides. They are much more computation intensive
08.24 What does the word diffusion mean in this context?
08.40 It's adopted from physics. It peels noise away from the image (see the sketch after these notes)
09.17 Isn't that rewinding entropy?
09.45 One application is making a photo taken in 1830 look like one taken yesterday
09.58 Semantic Segmentation Masks convert bands of flat colour into realistic images of sky, earth, sea, etc
10.35 Bounding boxes generate objects of a specified class from tiny inputs
11.00 The images are not taken from previously seen images on the internet, but invented from scratch
11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them
12.40 Failures are eliminated by amendments, as always with models like this
12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months
13.30 The failure modes get harder to find as the obvious ones are eliminated
13.45 Even with 175 billion parameters, GPT-3 struggled to handle three digits in computation
15.18 Are you often surprised by what the models do next?
15.50 The research community is like a hive mind, and you never know where the next idea will come from
16.40 Often the next thing comes from a couple of students at a university
16.58 How Ian Goodfellow created the first GAN
17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionists, Bayesians…) now obsolete?
18.15 We should cultivate different approaches because you never know where they might lead
19.15 Symbolic AI (aka Good Old Fashioned AI, or GOFAI) is still alive and kicking
19.40 AlphaGo combined deep learning and GOFAI
21.00 Doug Lenat is still persevering with Cyc, a purely GOFAI approach
21.30 GOFAI models had no learning element. They can't go beyond the humans whose expertise they encapsulate
22.25 The now-famous move 37 in AlphaGo's game two against Lee Sedol in 2016
23.40 Moravec's paradox. Easy things are hard, and hard things are easy
24.20 The combination of deep learning and symbolic AI has been long urged, and in fact is already happening
24.40 Will models always demand more and more compute?
25.10 The human brain has far more compute power than even our biggest systems today
25.45 Sparse, or MoE (Mixture of Experts) systems are quite efficient
26.00 We need more compute, better algorithms, and more efficiency
26.55 Dedicated AI chips will help a lot with efficiency
26.25 Cerebras claims that GPT-3 could be trained on a single chip
27.50 Models can increasingly be trained for general purposes and then tweaked for particular tasks
28.30 Some of the big new models are open access
29.00 What else should people learn about with regard to advanced AI?
29.20 Neural Radiance Fields (NERF) models
30.40 Flamingo and Gato
31.15 We have mostly discussed research in these episodes, rather than engineering
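As a concrete anchor for the diffusion segment of these notes (roughly 06.28 to 09.45), here is a minimal sketch of the DDPM forward noising step that the learned reverse process "peels away". The schedule values, step count, and tensor shapes are illustrative assumptions, not anything from the episode.

```python
import torch

def forward_diffusion_sample(x0, t, alpha_bar):
    """DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
    The learned reverse process 'peels this noise away' step by step."""
    eps = torch.randn_like(x0)               # fresh Gaussian noise
    abar_t = alpha_bar[t].view(-1, 1, 1, 1)  # cumulative schedule at step t
    xt = abar_t.sqrt() * x0 + (1 - abar_t).sqrt() * eps
    return xt, eps

# Illustrative linear beta schedule with T = 1000 steps (assumed values):
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

x0 = torch.randn(2, 3, 32, 32)  # a toy batch of "images"
xt, eps = forward_diffusion_sample(x0, torch.tensor([10, 500]), alpha_bar)
```

The model is trained to predict `eps` from `xt`, which is what makes running the process in reverse, from pure noise to an image, possible.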

London Futurists
AI overview: 1. From the Greeks to the Big Bang

London Futurists

Play Episode Listen Later Aug 8, 2022 31:13


AI is a subject that we will all benefit from understanding better. In this episode, co-hosts Calum Chace and David Wood review progress in AI from the Greeks to the 2012 "Big Bang".
00.05: A prediction
01.09: AI is likely to cause two singularities in this pivotal century - a jobless economy, and superintelligence
02.22: Counterpoint: it may require AGI to displace most people from the workforce. So only one singularity?
03.27: Jobs are nowhere near all that matters in humans
04.11: Are the "Three Cs jobs" safe? Those involving Creativity, Compassion, and Commonsense? Probably not.
05.15: 2012, the Big Bang in AI
05.48: AI now makes money. Google and Facebook ate Rupert Murdoch's lunch
06.30: AI might make the difference between military success and military failure. So there's a geopolitical race as well as a commercial race
07.18: Defining AI
09.03: Intelligence vs Consciousness
10.15: Does the Turing Test test for Intelligence or Consciousness?
12.30: Can customer service agents pass the Turing Test?
13.07: Attributing consciousness by brain architecture or by behaviour
15.13: Creativity. Move 37 in game two of AlphaGo vs Lee Sedol, and Hassabis' three buckets of creativity
17.13: Music and art produced by AI as examples
19.05: History: start with the Greeks. Hephaestus (Vulcan to the Romans) built automata, and Aristotle speculated about technological unemployment
19.58: AI has featured in science fiction from the beginning, e.g. Mary Shelley's Frankenstein, Samuel Butler's Erewhon, E.M. Forster's "The Machine Stops"
20.55: Post-WW2 developments. Conference in Paris in 1951 on "Computing machines and human thought". Norbert Wiener and cybernetics
22.48: The Dartmouth Conference
23.55: Perceptrons - very simple models of the human brain (see the sketch after these notes)
25.13: Perceptrons debunked by Minsky and Papert, so Symbolic AI takes over
25.49: This debunking was a mistake. More data and better hardware overcomes the hurdles
27.20: Two AI winters, when research funding dries up
28.07: David was taught maths at Cambridge by James Lighthill, author of the report which helped cause the first AI winter
28.58: The Japanese 5th generation computing project under-delivered in the 1980s. But it prompted an AI revival, and its ambitions have been realised by more recent advances
30.45: No more AI winters?
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
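Since perceptrons come up at 23.55 and 25.13 in these notes, here is a minimal sketch of the classic perceptron learning rule on toy data; the data, learning rate, and epoch count are illustrative assumptions. Its failure on non-linearly-separable problems like XOR is exactly the limitation Minsky and Papert highlighted.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # error-correction learning rule
            w[0] += lr * err * x1   # nudge weights toward the correct output
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learns AND easily; swap in XOR data and it never converges, which is
# the limitation Minsky and Papert pointed out.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(and_data))
```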

The Nonlinear Library
LW - DeepMind's generalist AI, Gato: A non-technical explainer by frances lorenz

The Nonlinear Library

Play Episode Listen Later May 17, 2022 11:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind's generalist AI, Gato: A non-technical explainer, published by frances lorenz on May 16, 2022 on LessWrong.
Summary
DeepMind's recent paper, A Generalist Agent, catalyzed a wave of discourse regarding the speed at which current artificial intelligence systems are improving and the risks posed by these increasingly advanced systems. We aim to make Gato accessible to non-technical folks by: (i) providing a non-technical summary, and (ii) discussing the relevant implications related to existential risk and AI policy.
Introduction
DeepMind has just introduced its new agent, Gato: the most general machine learning (ML) model to date. If you're familiar with arguments for the potential risks posed by advanced AI systems, you'll know the term general carries strong implications. Today's ML systems are advancing quickly; however, even the best systems we see are narrow in the tasks they can accomplish. For example, DALL-E impressively generates images that rival human creativity; however, it doesn't do anything else. Similarly, large language models like GPT-3 perform well on certain text-based tasks, like sentence completion, but poorly on others, such as arithmetic (Figure 1). If future AI systems are to exhibit human-like intelligence, they'll need to use various skills and information to complete diverse tasks across different contexts. In other words, they'll need to exhibit general intelligence in the same way humans do—a type of system broadly referred to as artificial general intelligence (AGI). While AGI systems could lead to hugely positive innovations, they also have the potential to surpass human intelligence and become “superintelligent”. If a superintelligent system were unaligned, it could be difficult or even impossible to control for and predict its behavior, leaving humans vulnerable.
Figure 1: An attempt to teach GPT-3 addition. The letter ‘Q' denotes human input while ‘A' denotes GPT-3's response (from Peter Wildeford's tweet)
So what exactly has DeepMind created? Gato is a single neural network capable of performing hundreds of distinct tasks. According to DeepMind, it can, “play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens.” It's not currently analogous to human-like intelligence; however, it does exhibit general capabilities. In the rest of this post, we'll provide a non-technical summary of DeepMind's paper and explore: (i) what this means for potential future existential risks posed by advanced AI and (ii) some relevant AI policy considerations.
A Summary of Gato
How was Gato built? The technique used to train Gato is slightly different from other famous AI agents. For example, AlphaGo, the AI system that defeated world champion Go player Lee Sedol in 2016, was trained largely using a sophisticated form of trial and error called reinforcement learning (RL). While the initial training process involved some demonstrations from expert Go players, the next iteration, named AlphaGo Zero, removed these entirely, mastering games solely by playing itself. By contrast, Gato was trained to imitate examples of “good” behavior in 604 distinct tasks. These tasks include:
- Simulated control tasks, where Gato has to control a virtual body in a simulated environment.
- Vision and language tasks, like labeling images with corresponding text captions.
- Robotics, specifically the common RL task of stacking blocks.
Examples of good behavior were collected in a few different ways. For simulated control and robotics, examples were collected from other, more specialized AI agents trained using RL. For vision and language tasks, “behavior” took the form of text and images generated by humans, largely scraped from the web.
Results
Control ...
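To make "trained to imitate examples of good behavior" concrete, here is a minimal behavior-cloning sketch in PyTorch. It shows the generic imitation objective under assumed shapes and a toy stand-in model; it is not DeepMind's Gato code, which serializes observations and actions of many modalities into one token stream.

```python
import torch
import torch.nn.functional as F

def behavior_cloning_loss(model, observations, expert_action_tokens):
    # model: maps a batch of observations to logits over an action vocabulary.
    # expert_action_tokens: (batch,) actions demonstrated in the training data.
    logits = model(observations)  # (batch, vocab_size)
    # Maximize the likelihood of the demonstrated actions, i.e. imitate them.
    return F.cross_entropy(logits, expert_action_tokens)

# Toy usage with a linear stand-in "model" (purely illustrative):
model = torch.nn.Linear(16, 8)       # 16-dim observations, 8 possible actions
obs = torch.randn(4, 16)             # a batch of 4 observations
actions = torch.randint(0, 8, (4,))  # the expert's actions for that batch
loss = behavior_cloning_loss(model, obs, actions)
loss.backward()
```

The contrast with AlphaGo-style RL is in the target: here the model copies demonstrations, rather than discovering behavior through reward from self-play.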

The Nonlinear Library
EA - DeepMind's generalist AI, Gato: A non-technical explainer by frances lorenz

The Nonlinear Library

Play Episode Listen Later May 16, 2022 11:26


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind's generalist AI, Gato: A non-technical explainer, published by frances lorenz on May 16, 2022 on The Effective Altruism Forum.
Summary
DeepMind's recent paper, A Generalist Agent, catalyzed a wave of discourse regarding the speed at which current artificial intelligence systems are improving and the risks posed by these increasingly advanced systems. We aim to make this paper accessible to non-technical folks by: (i) providing a non-technical summary, and (ii) discussing the relevant implications related to existential risk and AI policy.
Introduction
DeepMind has just introduced its new agent, Gato: the most general machine learning (ML) model to date. If you're familiar with arguments for the potential risks posed by advanced AI systems, you'll know the term general carries strong implications. Today's ML systems are advancing quickly; however, even the best systems we see are narrow in the tasks they can accomplish. For example, DALL-E impressively generates images that rival human creativity; however, it doesn't do anything else. Similarly, large language models like GPT-3 perform well on certain text-based tasks, like sentence completion, but poorly on others, such as arithmetic (Figure 1). If future AI systems are to exhibit human-like intelligence, they'll need to use various skills and information to complete diverse tasks across different contexts. In other words, they'll need to exhibit general intelligence in the same way humans do—a type of system broadly referred to as artificial general intelligence (AGI). While AGI systems could lead to hugely positive innovations, they also have the potential to surpass human intelligence and become “superintelligent”. If a superintelligent system were unaligned, it could be difficult or even impossible to control for and predict its behavior, leaving humans vulnerable.
Figure 1: An attempt to teach GPT-3 addition. The letter ‘Q' denotes human input while ‘A' denotes GPT-3's response (from Peter Wildeford's tweet)
So what exactly has DeepMind created? Gato is a single neural network capable of performing hundreds of distinct tasks. According to DeepMind, it can, “play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens.” It's not currently analogous to human-like intelligence; however, it does exhibit general capabilities. In the rest of this post, we'll provide a non-technical summary of DeepMind's paper and explore: (i) what this means for potential future existential risks posed by advanced AI and (ii) some relevant AI policy considerations.
A Summary of Gato
How was Gato built? The technique used to train Gato is slightly different from other famous AI agents. For example, AlphaGo, the AI system that defeated world champion Go player Lee Sedol in 2016, was trained largely using a sophisticated form of trial and error called reinforcement learning (RL). While the initial training process involved some demonstrations from expert Go players, the next iteration, named AlphaGo Zero, removed these entirely, mastering games solely by playing itself. By contrast, Gato was trained to imitate examples of “good” behavior in 604 distinct tasks. These tasks include:
- Simulated control tasks, where Gato has to control a virtual body in a simulated environment.
- Vision and language tasks, like labeling images with corresponding text captions.
- Robotics, specifically the common RL task of stacking blocks.
Examples of good behavior were collected in a few different ways. For simulated control and robotics, examples were collected from other, more specialized AI agents trained using RL. For vision and language tasks, “behavior” took the form of text and images generated by humans, largely scraped from ...

The Evolving Leader
The Art and Science of Pattern Recognition with Marcus du Sautoy

The Evolving Leader

Play Episode Listen Later Mar 9, 2022 55:29


This week on the Evolving Leader podcast, co-hosts Jean Gomes and Scott Allender are joined by Professor Marcus du Sautoy. Marcus is Simonyi Professor for the Public Understanding of Science at the University of Oxford, Fellow of New College, Oxford, author of multiple popular science and mathematics books, and a regular contributor on television, radio and to both The Times and The Guardian. He is also passionate about public engagement on topics that include creativity and artificial intelligence.
0.00 Introduction
2.23 Where does your love of mathematics originate?
6.11 What is mathematics really about for you?
8.35 Can you explain what zeta functions are, and why symmetry and the function of groups are important to learn more about?
12.24 What did you draw from the moment that DeepMind's AlphaGo beat Lee Sedol?
16.12 What are your thoughts on the possibility that AI can be creative, taking us down a path where consciousness may not be the thing that actually happens, but where we might get something totally new that doesn't exist in our minds or reckoning at the moment?
18.35 How do we prevent ourselves from having something that we don't understand governing our lives?
20.44 In your book 'What We Cannot Know', you explored whether there are questions that we may never have the answer to, and therefore our living with the unknown. Could you elaborate on that idea for us?
25.52 You've written about the conflict between physics and mathematics, and also your idea that mathematics exists outside of humans, so it's not a human construction and would exist without us. Could you elaborate on those two points?
33.13 Tell us about your latest book 'Thinking Better', where you search for shortcuts, not just in mathematics but also in other fields.
36.14 A lot of people think of maths as being hard. However, you can use maths, the concepts and frameworks, without being an expert mathematician. Can you bring that to life for us?
43.09 Tell us about the work you've been doing to bring Douglas Hofstadter's life story to the Barbican in London.
48.28 You've said that we can't fully know something when we're stuck in a system, whether consciously or unconsciously. What is the leadership lesson or opportunity that we can take from that?
53.06 When was the last time you had a real 'aha' moment, and what's the biggest challenge that you are working on at the moment?
Social:
Instagram @evolvingleader
LinkedIn The Evolving Leader Podcast
Twitter @Evolving_Leader
The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.

Get Your World On
Episode 1002: The Pandemic Blame Game

Get Your World On

Play Episode Listen Later Jun 5, 2020 46:39


On this episode:  How Freddie Mercury cures Covid-19. Did the virus escape from a lab in China? The incredible story of Lee Sedol. Plus letters from listeners.

nocutV
[NocutView] Lee Sedol after three straight losses: "Lee Sedol lost, but humanity has not lost yet" - Google DeepMind Challenge Match 3: Lee Sedol vs AlphaGo

nocutV

Play Episode Listen Later Mar 19, 2016 3:24


nocutV
[NocutView] AlphaGo's spine-chilling 13th and 37th moves - Google DeepMind Challenge Match 2: Lee Sedol vs AlphaGo

nocutV

Play Episode Listen Later Mar 19, 2016 2:27


nocutV
[NocutView] Lee Sedol, undaunted by his Game 1 loss: "Now the match is 5:5" - Google DeepMind Match 1: Lee Sedol vs AlphaGo

nocutV

Play Episode Listen Later Mar 19, 2016 2:56