This week Jun and Daniel review the popular Korean film "The Match" (승부), which tells the story of two legendary Go players in Korea during the late 1980s and early 1990s. Our hosts explore the cultural significance of Go in Korean society, discussing how it was once one of the four major activities Korean children would pursue alongside math academies, taekwondo, and piano. They delve into the controversy surrounding the film's star Yoo Ah-in and his drug scandal, examining Korea's strict cancellation culture and how it differs between actors, K-pop stars, and politicians. The conversation expands to cover the historic AlphaGo vs. Lee Sedol match in 2016 and its symbolic impact on Korean society's understanding of AI. Through scene-by-scene analysis, they highlight cultural details from 1980s Korea including car parades for international achievements, traditional family hierarchies, smoking culture, and nostalgic elements like fumigation trucks and Nikon cameras as status symbols.

If you're interested in learning about the cultural significance of Go in East Asian societies, understanding Korea's approach to celebrity scandals and cancellation culture, exploring the philosophical differences between individualism and traditional hierarchy in Korean society, or discovering nostalgic details about 1980s Korean life including housing styles and family dynamics, tune in to hear Daniel and Jun discuss all this and more! This episode also touches on topics like the decline of Go's popularity in modern Korea, the East Asian "Cold War" competition in Go between Korea, Japan, and China, and how the film serves as a metaphor for Korea's journey from copying to innovating on the global stage.

Support the show

As a reminder, we record one episode a week in-person from Seoul, South Korea. We hope you enjoy listening to our conversation, and we're so excited to have you following us on this journey!

Support us on Patreon: https://patreon.com/user?u=99211862

Follow us on socials:
https://www.instagram.com/koreanamericanpodcast/
https://twitter.com/korampodcast
https://www.tiktok.com/@koreanamericanpodcast

Questions/Comments/Feedback? Email us at: koreanamericanpodcast@gmail.com
Let's look at how modern board games reflect our beloved game of Go. From Carcassonne to Ticket to Ride, you can find Go-like elements hiding in plain sight at your local game store! Also... Lee Sedol designs board games now??

Games mentioned in this episode: Carcassonne, Hive, Blokus, Onitama, Risk, Hex, Ticket to Ride, Tak, Yinsh, Stratego, Othello, King's Crown, Great Kingdom, Nine Knights

Dice Tower Video
Support Star Point
The Star Point Store
"Eventually, my dream would be to simulate a virtual cell."—Demis Hassabis

The aspiration to build the virtual cell is considered the equivalent of a moonshot for digital biology. Recently, 42 leading life scientists published a paper in Cell on why this is so vital, and how it may ultimately be accomplished. This conversation is with two of the authors: Charlotte Bunne, now at EPFL, and Steve Quake, a professor at Stanford University who heads up science at the Chan Zuckerberg Initiative. The audio (above) is available on iTunes and Spotify. The full video is linked here, at the top, and also can be found on YouTube.

TRANSCRIPT WITH LINKS TO AUDIO

Eric Topol (00:06):
Hello, it's Eric Topol with Ground Truths and we've got a really hot topic today, the virtual cell. And what I think is an extraordinarily important, futuristic paper that recently appeared in the journal Cell, and the first author, Charlotte Bunne from EPFL, previously at Stanford's Computer Science. And Steve Quake, a young friend of mine for many years who heads up the Chan Zuckerberg Initiative (CZI) as well as a professor at Stanford. So welcome, Charlotte and Steve.

Steve Quake (00:42):
Thanks, Eric. It's great to be here.

Charlotte Bunne:
Thanks for having me.

Eric Topol (00:45):
Yeah. So you wrote this article that Charlotte, the first author, and Steve, one of the senior authors, appeared in Cell in December and it just grabbed me, "How to build the virtual cell with artificial intelligence: Priorities and opportunities." It's the holy grail of biology. We're in this era of digital biology and as you point out in the paper, it's a convergence of what's happening in AI, which is just moving at a velocity that's just so extraordinary, and what's happening in biology. So maybe we can start off by, you had some 42 authors that I assume they congregated for a conference or something or how did you get 42 people to agree to the words in this paper?

Steve Quake (01:33):
We did. We had a meeting at CZI to bring community members together from many different parts of the community, from computer science to bioinformatics, AI experts, biologists who don't trust any of this. We wanted to have some real contrarians in the mix as well and have them have a conversation together about is there an opportunity here? What's the shape of it? What's realistic to expect? And that was sort of the genesis of the article.

Eric Topol (02:02):
And Charlotte, how did you get to be drafting the paper?

Charlotte Bunne (02:09):
So I did my postdoc with Aviv Regev at Genentech and Jure Leskovec at CZI, and Jure was part of the residency program of CZI. And so, this is how we got involved and you had also prior work with Steve on the universal cell embedding. So this is how everything got started.

Eric Topol (02:29):
And it's actually amazing because it's a who's who of people who work in life science, AI and digital biology and omics. I mean it's pretty darn impressive. So I thought I'd start off with a quote in the article because it kind of tells a story of where this could go. So the quote was in the paper, "AIVC (artificial intelligence virtual cell) has the potential to revolutionize the scientific process, leading to future breakthroughs in biomedical research, personalized medicine, drug discovery, cell engineering, and programmable biology." That's a pretty big statement.
So maybe we can just kind of toss that around a bit and maybe give it a little more thoughts and color as to what you were positing there.Steve Quake (03:19):Yeah, Charlotte, you want me to take the first shot at that? Okay. So Eric, it is a bold claim and we have a really bold ambition here. We view that over the course of a decade, AI is going to provide the ability to make a transformative computational tool for biology. Right now, cell biology is 90% experimental and 10% computational, roughly speaking. And you've got to do just all kinds of tedious, expensive, challenging lab work to get to the answer. And I don't think AI is going to replace that, but it can invert the ratio. So within 10 years I think we can get to biology being 90% computational and 10% experimental. And the goal of the virtual cell is to build a tool that'll do that.Eric Topol (04:09):And I think a lot of people may not understand why it is considered the holy grail because it is the fundamental unit of life and it's incredibly complex. It's not just all the things happening in the cell with atoms and molecules and organelles and everything inside, but then there's also the interactions the cell to other cells in the outside tissue and world. So I mean it's really quite extraordinary challenge that you've taken on here. And I guess there's some debate, do we have the right foundation? We're going to get into foundation models in a second. A good friend of mine and part of this whole I think process that you got together, Eran Segal from Israel, he said, “We're at this tipping point…All the stars are aligned, and we have all the different components: the data, the compute, the modeling.” And in the paper you describe how we have over the last couple of decades have so many different data sets that are rich that are global initiatives. But then there's also questions. Do we really have the data? I think Bo Wang especially asked about that. Maybe Charlotte, what are your thoughts about data deficiency? There's a lot of data, but do you really have what we need before we bring them all together for this kind of single model that will get us some to the virtual cell?Charlotte Bunne (05:41):So I think, I mean one core idea of building this AIVC is that we basically can leverage all experimental data that is overall collected. So this also goes back to the point Steve just made. So meaning that we basically can integrate across many different studies data because we have AI algorithms or the architectures that power such an AIVC are able to integrate basically data sets on many different scales. So we are going a bit away from this dogma. I'm designing one algorithm from one dataset to this idea of I have an architecture that can take in multiple dataset on multiple scales. So this will help us a bit in being somewhat efficient with the type of experiments that we need to make and the type of experiments we need to conduct. And again, what Steve just said, ultimately, we can very much steer which data sets we need to collect.Charlotte Bunne (06:34):Currently, of course we don't have all the data that is sufficient. I mean in particular, I think most of the tissues we have, they are healthy tissues. We don't have all the disease phenotypes that we would like to measure, having patient data is always a very tricky case. We have mostly non-interventional data, meaning we have very limited understanding of somehow the effect of different perturbations. 
Perturbations that happen on many different scales in many different environments. So we need to collect a lot here. I think the overall journey that we are going with is that we take the data that we have, we make clever decisions on the data that we will collect in the future, and we have this also self-improving entity that is aware of what it doesn't know. So we need to be able to understand how well can I predict something on this somewhat regime. If I cannot, then we should focus our data collection effort into this. So I think that's not a present state, but this will basically also guide the future collection.Eric Topol (07:41):Speaking of data, one of the things I think that's fascinating is we saw how AlphaFold2 really revolutionized predicting proteins. But remember that was based on this extraordinary resource that had been built, the Protein Data Bank that enabled that. And for the virtual cell there's no such thing as a protein data bank. It's so much more as you emphasize Charlotte, it's so much dynamic and these perturbations that are just all across the board as you emphasize. Now the human cell atlas, which currently some tens of millions, but going into a billion cells, we learned that it used to be 200 cell types. Now I guess it's well over 5,000 and that we have 37 trillion cells approximately in the average person adult's body is a formidable map that's being made now. And I guess the idea that you're advancing is that we used to, and this goes back to a statement you made earlier, Steve, everything we did in science was hypothesis driven. But if we could get computational model of the virtual cell, then we can have AI exploration of the whole field. Is that really the nuts of this?Steve Quake (09:06):Yes. A couple thoughts on that, maybe Theo Karaletsos, our lead AI person at CZI says machine learning is the formalism through which we understand high dimensional data and I think that's a very deep statement. And biological systems are intrinsically very high dimensional. You've got 20,000 genes in the human genome in these cell atlases. You're measuring all of them at the same time in each single cell. And there's a lot of structure in the relationships of their gene expression there that is just not evident to the human eye. And for example, CELL by GENE, our database that collects all the aggregates, all of the single cell transcriptomic data is now over a hundred million cells. And as you mentioned, we're seeing ways to increase that by an order of magnitude in the near future. The project that Jure Leskovec and I worked on together that Charlotte referenced earlier was like a first attempt to build a foundational model on that data to discover some of the correlations and structure that was there.Steve Quake (10:14):And so, with a subset, I think it was the 20 or 30 million cells, we built a large language model and began asking it, what do you understand about the structure of this data? And it kind of discovered lineage relationships without us teaching it. We trained on a matrix of numbers, no biological information there, and it learned a lot about the relationships between cell type and lineage. And that emerged from that high dimensional structure, which was super pleasing to us and really, I mean for me personally gave me the confidence to say this stuff is going to work out. There is a future for the virtual cell. It's not some made up thing. 
There is real substance there and this is worth investing an enormous amount of CZIs resources in going forward and trying to rally the community around as a project.Eric Topol (11:04):Well yeah, the premise here is that there is a language of life, and you just made a good case that there is if you can predict, if you can query, if you can generate like that. It is reminiscent of the famous Go game of Lee Sedol, that world champion and how the machine came up with a move (Move 37) many, many years ago that no human would've anticipated and I think that's what you're getting at. And the ability for inference and reason now to add to this. So Charlotte, one of the things of course is about, well there's two terms in here that are unfamiliar to many of the listeners or viewers of this podcast, universal representations (UR) and virtual instrument (VIs) that you make a pretty significant part of how you are going about this virtual cell model. So could you describe that and also the embeddings as part of the universal representation (UR) because I think embeddings, or these meaningful relationships are key to what Steve was just talking about.Charlotte Bunne (12:25):Yes. So in order to somewhat leverage very different modalities in order to leverage basically modalities that will take measurements across different scales, like the idea is that we have large, may it be transformer models that might be very different. If I have imaging data, I have a vision transformer, if I have a text data, I have large language models that are designed of course for DNA then they have a very wide context and so on and so forth. But the idea is somewhat that we have models that are connected through the scales of biology because those scales we know. We know which components are somewhat involved or in measurements that are happening upstream. So we have the somewhat interconnection or very large model that will be trained on many different data and we have this internal model representation that somewhat capture everything they've seen. And so, this is what we call those universal representation (UR) that will exist across the scales of biology.Charlotte Bunne (13:22):And what is great about AI, and so I think this is a bit like a history of AI in short is the ability to predict the last years, the ability to generate, we can generate new hypothesis, we can generate modalities that we are missing. We can potentially generate certain cellular state, molecular state have a certain property, but I think what's really coming is this ability to reason. So we see this in those very large language models, the ability to reason about a hypothesis, how we can test it. So this is what those instruments ultimately need to do. So we need to be able to simulate the change of a perturbation on a cellular phenotype. So on the internal representation, the universal representation of a cell state, we need to simulate the fact the mutation has downstream and how this would propagate in our representations upstream. And we need to build many different type of virtual instruments that allow us to basically design and build all those capabilities that ultimately the AI virtual cell needs to possess that will then allow us to reason, to generate hypothesis, to basically predict the next experiment to conduct to predict the outcome of a perturbation experiment to in silico design, cellular states, molecular states, things like that. 
And this is why we make the separation between internal representation as well as those instruments that operate on those representations.Eric Topol (14:47):Yeah, that's what I really liked is that you basically described the architecture, how you're going to do this. By putting these URs into the VIs, having a decoder and a manipulator and you basically got the idea if you can bring all these different integrations about which of course is pending. Now there are obviously many naysayers here that this is impossible. One of them is this guy, Philip Ball. I don't know if you read the language, How Life Works. Now he's a science journalist and he's a prolific writer. He says, “Comparing life to a machine, a robot, a computer, sells it short. Life is a cascade of processes, each with a distinct integrity and autonomy, the logic of which has no parallel outside the living world.” Is he right? There's no way to model this. It's silly, it's too complex.Steve Quake (15:50):We don't know, alright. And it's great that there's naysayers. If everyone agreed this was doable, would it be worth doing? I mean the whole point is to take risks and get out and do something really challenging in the frontier where you don't know the answer. If we knew that it was doable, I wouldn't be interested in doing it. So I personally am happy that there's not a consensus.Eric Topol (16:16):Well, I mean to capture people's imagination here, if you're successful and you marshal a global effort, I don't know who's going to pay for it because it's a lot of work coming here going forward. But if you can do it, the question here is right today we talk about, oh let's make an organoid so we can figure out how to treat this person's cancer or understand this person's rare disease or whatever. And instead of having to wait weeks for this culture and all the expense and whatnot, you could just do it in a computer and in silico and you have this virtual twin of a person's cells and their tissue and whatnot. So the opportunity here is, I don't know if people get, this is just extraordinary and quick and cheap if you can get there. And it's such a bold initiative idea, who will pay for this do you think?Steve Quake (17:08):Well, CZI is putting an enormous amount of resources into it and it's a major project for us. We have been laying the groundwork for it. We recently put together what I think is if not the largest, one of the largest GPU supercomputer clusters for nonprofit basic science research that came online at the end of last year. And in fact in December we put out an RFA for the scientific community to propose using it to build models. And so we're sharing that resource within the scientific community as I think you appreciate, one of the real challenges in the field has been access to compute resources and industry has it academia at a much lower level. We are able to be somewhere in between, not quite at the level of a private company but the tech company but at a level beyond what most universities are being able to do and we're trying to use that to drive the field forward. We're also planning on launching RFAs we this year to help drive this project forward and funding people globally on that. And we are building a substantial internal effort within CZI to help drive this project forward.Eric Topol (18:17):I think it has the looks of the human genome project, which at time as you know when it was originally launched that people thought, oh, this is impossible. And then look what happened. It got done. 
And now the sequence of genome is just a commodity, very relatively, very inexpensive compared to what it used to be.Steve Quake (18:36):I think a lot about those parallels. And I will say one thing, Philip Ball, I will concede him the point, the cells are very complicated. The genome project, I mean the sort of genius there was to turn it from a biology problem to a chemistry problem, there is a test tube with a chemical and it work out the structure of that chemical. And if you can do that, the problem is solved. I think what it means to have the virtual cell is much more complex and ambiguous in terms of defining what it's going to do and when you're done. And so, we have our work cut out for us there to try to do that. And that's why a little bit, I established our North Star and CZI for the next decade as understanding the mysteries of the cell and that word mystery is very important to me. I think the molecules, as you pointed out earlier are understood, genome sequenced, protein structure solved or predicted, we know a lot about the molecules. Those are if not solved problems, pretty close to being solved. And the real mystery is how do they work together to create life in the cell? And that's what we're trying to answer with this virtual cell project.Eric Topol (19:43):Yeah, I think another thing that of course is happening concurrently to add the likelihood that you'll be successful is we've never seen the foundation models coming out in life science as they have in recent weeks and months. Never. I mean, I have a paper in Science tomorrow coming out summarizing the progress about not just RNA, DNA, ligands. I mean the whole idea, AlphaFold3, but now Boltz and so many others. It's just amazing how fast the torrent of new foundation models. So Charlotte, what do you think accounts for this? This is unprecedented in life science to see foundation models coming out at this clip on evolution on, I mean you name it, design of every different molecule of life or of course in cells included in that. What do you think is going on here?Charlotte Bunne (20:47):So on the one hand, of course we benefit profits and inherit from all the tremendous efforts that have been made in the last decades on assembling those data sets that are very, very standardized. CELLxGENE is very somehow AI friendly, as you can say, it is somewhat a platform that is easy to feed into algorithms, but at the same time we actually also see really new building mechanisms, design principles of AI algorithms in itself. So I think we have understood that in order to really make progress, build those systems that work well, we need to build AI tools that are designed for biological data. So to give you an easy example, if I use a large language model on text, it's not going to work out of the box for DNA because we have different reading directions, different context lens and many, many, many, many more.Charlotte Bunne (21:40):And if I look at standard computer vision where we can say AI really excels and I'm applying standard computer vision, vision transformers on multiplex images, they're not going to work because normal computer vision architectures, they always expect the same three inputs, RGB, right? In multiplex images, I'm measuring up to 150 proteins potentially in a single experiment, but every study will measure different proteins. So I deal with many different scales like larger scales and I used to attention mechanisms that we have in usual computer vision. 
Transformers are not going to work anymore, they're not going to scale. And at the same time, I need to be completely flexible in whatever input combination of channel I'm just going to face in this experiment. So this is what we right now did for example, in our very first work, inheriting the design principle that we laid out in the paper AI virtual cell and then come up with new AI architectures that are dealing with these very special requirements that biological data have.Charlotte Bunne (22:46):So we have now a lot of computer scientists that work very, very closely have a very good understanding of biologists. Biologists that are getting much and much more into the computer science. So people who are fluent in both languages somewhat, that are able to now build models that are adopted and designed for biological data. And we don't just take basically computer vision architectures that work well on street scenes and try to apply them on biological data. So it's just a very different way of thinking about it, starting constructing basically specialized architectures, besides of course the tremendous data efforts that have happened in the past.Eric Topol (23:24):Yeah, and we're not even talking about just sequence because we've also got imaging which has gone through a revolution, be able to image subcellular without having to use any types of stains that would disrupt cells. That's another part of the deep learning era that came along. One thing I thought was fascinating in the paper in Cell you wrote, “For instance, the Short Read Archive of biological sequence data holds over 14 petabytes of information, which is 1,000 times larger than the dataset used to train ChatGPT.” I mean that's a lot of tokens, that's a lot of stuff, compute resources. It's almost like you're going to need a DeepSeek type of way to get this. I mean not that DeepSeek as its claim to be so much more economical, but there's a data challenge here in terms of working with that massive amount that is different than the human language. That is our language, wouldn't you say?Steve Quake (24:35):So Eric, that brings to mind one of my favorite quotes from Sydney Brenner who is such a wit. And in 2000 at the sort of early first flush of success in genomics, he said, biology is drowning in a sea of data and starving for knowledge. A very deep statement, right? And that's a little bit what the motivation was for putting the Short Read Archive statistic into the paper there. And again, for me, part of the value of this endeavor of creating a virtual cell is it's a tool to help us translate data into knowledge.Eric Topol (25:14):Yeah, well there's two, I think phenomenal figures in your Cell paper. The first one that kicks across the capabilities of the virtual cell and the second that compares the virtual cell to the real or the physical cell. And we'll link that with this in the transcript. And the other thing we'll link is there's a nice Atlantic article, “A Virtual Cell Is a ‘Holy Grail' of Science. It's Getting Closer.” That might not be quite close as next week or year, but it's getting close and that's good for people who are not well grounded in this because it's much more taken out of the technical realm. This is really exciting. I mean what you're onto here and what's interesting, Steve, since I've known you for so many years earlier in your career you really worked on omics that is being DNA and RNA and in recent times you've made this switch to cells. 
Is that just because you're trying to anticipate the field or tell us a little bit about your migration.Steve Quake (26:23):Yeah, so a big part of my career has been trying to develop new measurement technologies that'll provide insight into biology. And decades ago that was understanding molecules. Now it's understanding more complex biological things like cells and it was like a natural progression. I mean we built the sequencers, sequenced the genomes, done. And it was clear that people were just going to do that at scale then and create lots of data. Hopefully knowledge would get out of that. But for me as an academic, I never thought I'd be in the position I'm in now was put it that way. I just wanted to keep running a small research group. So I realized I would have to get out of the genome thing and find the next frontier and it became this intersection of microfluidics and genomics, which as you know, I spent a lot of time developing microfluidic tools to analyze cells and try to do single cell biology to understand their heterogeneity. And that through a winding path led me to all these cell atlases and to where we are now.Eric Topol (27:26):Well, we're fortunate for that and also with your work with CZI to help propel that forward and I think it sounds like we're going to need a lot of help to get this thing done. Now Charlotte, as a computer scientist now at EPFL, what are you going to do to keep working on this and what's your career advice for people in computer science who have an interest in digital biology?Charlotte Bunne (27:51):So I work in particular on the prospect of using this to build diagnostic tools and to make diagnostics in the clinic easier because ultimately we have somewhat limited capabilities in the hospital to run deep omics, but the idea of being able to somewhat map with a cheaper and lighter modality or somewhat diagnostic test into something much richer because a model has been seeing all those different data and can basically contextualize it. It's very interesting. We've seen all those pathology foundation models. If I can always run an H&E, but then decide when to run deeper diagnostics to have a better or more accurate prediction, that is very powerful and it's ultimately reducing the costs, but the precision that we have in hospitals. So my faculty position right now is co-located between the School of Life Sciences, School of Computer Science. So I have a dual affiliation and I'm affiliated to the hospitals to actually make this possible and as a career advice, I think don't be shy and stick to your discipline.Charlotte Bunne (28:56):I have a bachelor's in biology, but I never only did biology. I have a PhD in computer science, which you would think a bachelor in biology not necessarily qualifies you through. So I think this interdisciplinarity also requires you to be very fluent, very comfortable in reading many different styles of papers and publications because a publication in a computer science venue will be very, very different from the way we write in biology. So don't stick to your study program, but just be free in selecting whatever course gets you closer to the knowledge you need in order to do the research or whatever task you are building and working on.Eric Topol (29:39):Well, Charlotte, the way you're set up there with this coalescence of life science and computer science is so ideal and so unusual here in the US, so that's fantastic. 
That's what we need and that's really the underpinning of how you're going to get to the virtual cells, getting these two communities together. And Steve, likewise, you were an engineer and somehow you became one of the pioneers of digital biology way back before it had that term, this interdisciplinary, transdisciplinary. We need so much of that in order for you all to be successful, right?Steve Quake (30:20):Absolutely. I mean there's so much great discovery to be done on the boundary between fields. I trained as a physicist and kind of made my career this boundary between physics and biology and technology development and it's just sort of been a gift that keeps on giving. You've got a new way to measure something, you discover something new scientifically and it just all suggests new things to measure. It's very self-reinforcing.Eric Topol (30:50):Now, a couple of people who you know well have made some pretty big statements about this whole era of digital biology and I think the virtual cell is perhaps the biggest initiative of all the digital biology ongoing efforts, but Jensen Huang wrote, “for the first time in human history, biology has the opportunity to be engineering, not science.” And Demis Hassabis wrote or said, ‘we're seeing engineering science, you have to build the artifact of interest first, and then once you have it, you can use the scientific method to reduce it down and understand its components.' Well here there's a lot to do to understand its components and if we can do that, for example, right now as both of AI drug discoveries and high gear and there's umpteen numbers of companies working on it, but it doesn't account for the cell. I mean it basically is protein, protein ligand interactions. What if we had drug discovery that was cell based? Could you comment about that? Because that doesn't even exist right now.Steve Quake (32:02):Yeah, I mean I can say something first, Charlotte, if you've got thoughts, I'm curious to hear them. So I do think AI approaches are going to be very useful designing molecules. And so, from the perspective of designing new therapeutics, whether they're small molecules or antibodies, yeah, I mean there's a ton of investment in that area that is a near term fruit, perfect thing for venture people to invest in and there's opportunity there. There's been enough proof of principle. However, I do agree with you that if you want to really understand what happens when you drug a target, you're going to want to have some model of the cell and maybe not just the cell, but all the different cell types of the body to understand where toxicity will come from if you have on-target toxicity and whether you get efficacy on the thing you're trying to do.Steve Quake (32:55):And so, we really hope that people will use the virtual cell models we're going to build as part of the drug discovery development process, I agree with you in a little of a blind spot and we think if we make something useful, people will be using it. The other thing I'll say on that point is I'm very enthusiastic about the future of cellular therapies and one of our big bets at CZI has been starting the New York Biohub, which is aimed at really being very ambitious about establishing the engineering and scientific foundations of how to engineer completely, radically more powerful cellular therapies. And the virtual cell is going to help them do that, right? 
It's going to be essential for them to achieve that mission.Eric Topol (33:39):I think you're pointing out one of the most important things going on in medicine today is how we didn't anticipate that live cell therapy, engineered cells and ideally off the shelf or in vivo, not just having to take them out and work on them outside the body, is a revolution ongoing, and it's not just in cancer, it's in autoimmune diseases and many others. So it's part of the virtual cell need. We need this. One of the things that's a misnomer, I want you both to comment on, we keep talking about single cell, single cell. And there's a paper spatial multi-omics this week, five different single cell scales all integrated. It's great, but we don't get to single cell. We're basically looking at 50 cells, 100 cells. We're not doing single cell because we're not going deep enough. Is that just a matter of time when we actually are doing, and of course the more we do get down to the single or a few cells, the more insights we're going to get. Would you comment about that? Because we have all this literature on single cell comes out every day, but we're not really there yet.Steve Quake (34:53):Charlotte, do you want to take a first pass at that and then I can say something?Charlotte Bunne (34:56):Yes. So it depends. So I think if we look at certain spatial proteomics, we still have subcellular resolutions. So of course, we always measure many different cells, but we are able to somewhat get down to resolution where we can look at certain colocalization of proteins. This also goes back to the point just made before having this very good environment to study drugs. If I want to build a new drug, if I want to build a new protein, the idea of building this multiscale model allows us to actually simulate different, somehow binding changes and binding because we simulate the effect of a drug. Ultimately, the redouts we have they are subcellular. So of course, we often in the spatial biology, we often have a bit like methods that are rather coarse they have a spot that averages over certain some cells like hundreds of cells or few cells.Charlotte Bunne (35:50):But I think we also have more and more technologies that are zooming in that are subcellular where we can actually tag or have those probe-based methods that allow us to zoom in. There's microscopy of individual cells to really capture them in 3D. They are of course not very high throughput yet, but it gives us also an idea of the morphology and how ultimately morphology determine certain somehow cellular properties or cellular phenotype. So I think there's lots of progress also on the experimental and that ultimately will back feed into the AI virtual cell, those models that will be fed by those data. Similarly, looking at dynamics, right, looking at live imaging of individual cells of their morphological changes. Also, this ultimately is data that we'll need to get a better understanding of disease mechanisms, cellular phenotypes functions, perturbation responses.Eric Topol (36:47):Right. Yes, Steve, you can comment on that and the amazing progress that we have made with space and time, spatial temporal resolution, spatial omics over these years, but that we still could go deeper in terms of getting to individual cells, right?Steve Quake (37:06):So, what can we do with a single cell? I'd say we are very mature in our ability to amplify and sequence the genome of a single cell, amplify and sequence the transcriptome of a single cell. 
You can ask is one cell enough to make a biological conclusion? And maybe I think what you're referring to is people want to see replicates and so you can ask how many cells do you need to see to have confidence in any given biological conclusion, which is a reasonable thing. It's a statistical question in good science. I think I've been very impressed with how the mass spec people have been doing recently. I think they've finally cracked the ability to look at proteins from single cells and they can look at a couple thousand proteins. That was I think one of these Nature method of the year things at the end of last year and deep visual proteomics.Eric Topol (37:59):Deep visual proteomics, yes.Steve Quake (38:00):Yeah, they are over the hump. Yeah, they are over the hump with single cell measurements. Part of what's missing right now I think is the ability to reliably do all of that on the same cell. So this is what Charlotte was referring to be able to do sort of multi-modal measurements on single cells. That's kind of in its infancy and there's a few examples, but there's a lot more work to be done on that. And I think also the fact that these measurements are all destructive right now, and so you're losing the ability to look how the cells evolve over time. You've got to say this time point, I'm going to dissect this thing and look at a state and I don't get to see what happens further down the road. So that's another future I think measurement challenge to be addressed.Eric Topol (38:42):And I think I'm just trying to identify some of the multitude of challenges in this extraordinarily bold initiative because there are no shortage and that's good about it. It is given people lots of work to do to overcome, override some of these challenges. Now before we wrap up, besides the fact that you point out that all the work has to be done and be validated in real experiments, not just live in a virtual AI world, but you also comment about the safety and ethics of this work and assuming you're going to gradually get there and be successful. So could either or both of you comment about that because it's very thoughtful that you're thinking already about that.Steve Quake (41:10):As scientists and members of the larger community, we want to be careful and ensure that we're interacting with people who said policy in a way that ensures that these tools are being used to advance the cause of science and not do things that are detrimental to human health and are used in a way that respects patient privacy. And so, the ethics around how you use all this with respect to individuals is going to be important to be thoughtful about from the beginning. And I also think there's an ethical question around what it means to be publishing papers and you don't want people to be forging papers using data from the virtual cell without being clear about where that came from and pretending that it was a real experiment. So there's issues around those sorts of ethics as well that need to be considered.Eric Topol (42:07):And of those 40 some authors, do you around the world, do you have the sense that you all work together to achieve this goal? Is there kind of a global bonding here that's going to collaborate?Steve Quake (42:23):I think this effort is going to go way beyond those 40 authors. 
It's going to include a much larger set of people and I'm really excited to see that evolve with time.Eric Topol (42:31):Yeah, no, it's really quite extraordinary how you kick this thing off and the paper is the blueprint for something that we are all going to anticipate that could change a lot of science and medicine. I mean we saw, as you mentioned, Steve, how that deep visual proteomics (DVP) saved lives. It was what I wrote a spatial medicine, no longer spatial biology. And so, the way that this can change the future of medicine, I think a lot of people just have to have a little bit of imagination that once we get there with this AIVC, that there's a lot in store that's really quite exciting. Well, I think this has been an invigorating review of that paper and some of the issues surrounding it. I couldn't be more enthusiastic for your success and ultimately where this could take us. Did I miss anything during the discussion that we should touch on before we wrap up?Steve Quake (43:31):Not from my perspective. It was a pleasure as always Eric, and a fun discussion.Charlotte Bunne (43:38):Thanks so much.Eric Topol (43:39):Well thank you both and all the co-authors of this paper. We're going to be following this with the great interest, and I think for most people listening, they may not know that this is in store for the future. Someday we will get there. I think one of the things to point out right now is the models we have today that large language models based on transformer architecture, they're going to continue to evolve. We're already seeing so much in inference and ability for reasoning to be exploited and not asking for prompts with immediate answers, but waiting for days to get back. A lot more work from a lot more computing resources. But we're going to get models in the future to fold this together. I think that's one of the things that you've touched on the paper so that whatever we have today in concert with what you've laid out, AI is just going to keep getting better.Eric Topol (44:39):The biology that these foundation models are going to get broader and more compelling as to their use cases. So that's why I believe in this. I don't see this as a static situation right now. I just think that you're anticipating the future, and we will have better models to be able to integrate this massive amount of what some people would consider disparate data sources. So thank you both and all your colleagues for writing this paper. I don't know how you got the 42 authors to agree to it all, which is great, and it's just a beginning of something that's a new frontier. So thanks very much.Steve Quake (45:19):Thank you, Eric.**********************************************Thanks for listening, watching or reading Ground Truths. Your subscription is greatly appreciated.If you found this podcast interesting please share it!That makes the work involved in putting these together especially worthwhile.All content on Ground Truths—newsletters, analyses, and podcasts—is free, open-access, with no ads..Paid subscriptions are voluntary and all proceeds from them go to support Scripps Research. They do allow for posting comments and questions, which I do my best to respond to. Many thanks to those who have contributed—they have greatly helped fund our summer internship programs for the past two years. 
And such support is becoming more vital in light of current changes in funding for US biomedical research at NIH and other governmental agencies.

Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research.

Get full access to Ground Truths at erictopol.substack.com/subscribe
Theme music by UNIVERSFIELD & background music by PodcastAC
The Perpetual Chess Podcast episode with GM Tiger Hillarp Persson talking about playing Go
Information on Go professionals Lee Sedol, Cho Hun-hyun, Go Seigen & Michael Redmond
The Go Magic interview for The Surrounding Game
Devin Fraze's Baduk Club & Baduk Club's all-in-one tournament tool and online timer
The Pomodoro Technique
Show your support here
Email: AllThingsGoGame@gmail.com
When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI.

In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before – bringing unprecedented problem-solving potential but also unpredictable risks.

These capabilities are a step toward a critical threshold - when AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to Chess, Go, and Starcraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games have been "solved" in the game theory sense.

Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by Move 37.

RECOMMENDED MEDIA
Further reading on DeepSeek's R1 and the market reaction
Further reading on the debate about the actual cost of DeepSeek's R1 model
The study that found training AIs to code also made them better writers
More information on the AI coding company Cursor
Further reading on Eric Schmidt's threshold to "pull the plug" on AI
Further reading on Move 37

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
This Moment in AI: How We Got Here and Where We're Going
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
Liang Wenfeng was born in a city in Guangdong province in the 1980s. He is a balinghou, as people born in those years are called in China. He is the founder of DeepSeek, and he has shaped both its technical and its communications side. Liang is the product of many years of Chinese investment in AI. And he is also an incredible idealist.

Sources: the audio sources for this episode are taken from: 1957: Sputnik I, International Astronautical Federation YouTube channel, April 16, 2008; AlphaGo 3-0 Lee Sedol, AlphaGo wins DeepMind Challenge, SciNews YouTube channel, March 12, 2016; 中国AI鲶鱼DeepSeek创始人梁文峰:中国要从技术“搭便车”转向技术贡献者|中国缺的不是资本，而是信心和有效组织高密度人才的能力|AGI有望2-10年内实现, Bilibili, January 22, 2025.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Can small models show mathematical reasoning capabilities comparable to o1? At Microsoft they believe so, and they demonstrate it with a method inspired by AlphaGo, the system that beat Lee Sedol almost a decade ago. In today's roundtable we look at small language models that outperform o1. Taking part in the discussion: Paco Zamora, Íñigo Olcoz, Carlos Larríu, Íñigo Orbegozo and Guillermo Barbadillo. Remember that you can send us questions, comments and suggestions at: https://twitter.com/TERTUL_ia More info at: https://ironbar.github.io/tertulia_inteligencia_artificial/
Theme music by UNIVERSFIELD & background music by PodcastAC
The board game Quarto
President Nam Chihyung with the International Society of Go Studies (ISGS)
Netflix's Captivating the King
Confucian text Chunqiu
Kibi no Makibi
Sunjang Baduk
The father of Korean Go, Cho Nam-chul
Other pro players: Hsu Hao-hung, Lee Sedol, Cho Hun-hyun, Ichiriki Ryo & Cho Chikun
The Ing Cup
Japanese Go Association
Korea Baduk Association
Korean professional players Lee Sedol, female Kim Eunji and Chinese female Hua Xueming
The Fujitsu Cup
Video games Genshin Impact and Zelda: Breath of the Wild
Polish pro Mateusz Surma and his site polegote.com
BenKyo's league and website
Show your support here
Contact: AllThingsGoGame@gmail.com
Theme music by UNIVERSFIELD & background music by PodcastAC
Michael Chen's interview in the European Go Journal
Ma Xiaochun's The Thirty-Six Stratagems Applied to Go
Michael Chen's Twitch Channel
Wikipedia pages for Go professionals: Shin Jin-seo, Lee Sedol, Lee Chang-ho, Cho Chikun, Ke Jie, Gu Li, Cho Hun-hyun, & Park Jungwhan
US Go Congress
The North American Go Federation which runs the professional qualification tournament
The online Fox Go Server
The Toronto Go Spectacular tournament
BenKyo's league and website
Show your support here
Contact: AllThingsGoGame@gmail.com
The board game Go has more possible board configurations than there are atoms in the universe. Because of that seemingly infinite complexity, developing software that could master Go has long been a goal of the AI community. (A rough comparison of those numbers is sketched after these show notes.)

In 2016, researchers at Google's DeepMind appeared to meet the challenge. Their Go-playing AI defeated one of the best Go players in the world, Lee Sedol. After the match, Lee Sedol retired, saying that losing to an AI felt like his entire world was collapsing. He wasn't alone. For a lot of people, the game represented a turning point – the moment where humans had been overtaken by machines.

But Frank Lantz saw that game and was invigorated. Lantz is a game designer (his game "Hey Robot" is a recurring feature on The Tonight Show Starring Jimmy Fallon), the director of the NYU game center, and the author of The Beauty of Games. He's spent his career thinking about how technology is changing the nature of games – and what we can learn about ourselves when we sit down to play them.

Mentioned:
"AlphaGo"
"The Beauty of Games" by Frank Lantz
"Adversarial Policies Beat Superhuman Go AIs" by Tony Wang et al.
"Theory of Games and Economic Behavior" by John von Neumann and Oskar Morgenstern
"Heads-up limit hold'em poker is solved" by Michael Bowling et al.

Further Reading:
"How to Play a Game" by Frank Lantz
"The Afterlife of Go" by Frank Lantz
"How A.I. Conquered Poker" by Keith Romer
"In Two Moves, AlphaGo and Lee Sedol Redefined the Future" by Cade Metz
Hey Robot by Frank Lantz
Universal Paperclips by Frank Lantz
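For a sense of scale behind that opening claim, the following back-of-the-envelope figures are standard published estimates (not from the episode itself): each of the 361 points on a 19×19 board can be empty, black, or white, and John Tromp's 2016 computation counted how many of those colorings are legal positions.

```latex
% Rough scale comparison (standard estimates, not from the episode).
% All colorings of a 19x19 grid with empty/black/white:
3^{361} \approx 1.7 \times 10^{172}
% Positions that are actually legal (Tromp, 2016):
N_{\mathrm{legal}} \approx 2.1 \times 10^{170}
% Commonly cited estimate of atoms in the observable universe:
N_{\mathrm{atoms}} \approx 10^{80}
% So legal Go positions outnumber atoms by roughly ninety orders of magnitude:
N_{\mathrm{legal}} / N_{\mathrm{atoms}} \approx 10^{90}
```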
It's Black Friday! Everyone is camping in the street, staying up all night for the very best deals around. And Unexpected Elements are joining in.

We take a look at the huge underground trade of vital resources...not run by criminals but fungi.

Then it is onto illegal animal trade and the 300 pets who got a terrible deal, strapped to a man's chest as he tried to make it through airport security.

Have you ever asked a pigeon for advice when gambling? We hear from a professor of psychology about why you should not.

And finally, the story of Lee Sedol, the world's best player of the board game Go, who was challenged by Google to a game worth one million dollars.

Presenter: Caroline Steel, with Phillys Mwatee and Christine Yohannes
Producers: Emily Knight, Harrison Lewis, Imaan Moin and William Hornbrook
Sound engineer: Searle Whittney
The story of Groq, a semiconductor startup that makes chips for AI inference and was recently valued at $2.8 billion, is a classic "overnight success that was years in the making" tale. On this episode, I talk with founder and CEO Jonathan Ross. He began the work that eventually led to Groq as an engineer at Google, where he was a member of the rapid eval team – "the team that comes up with all the crazy ideas at Google X." For him, the risk involved in leaving to launch Groq in 2016 was far less than the risk of staying in-house and watching the project die. Groq has had many "near-death" experiences in its eight years of existence, all of which Jonathan believes have ultimately put it in a much stronger position to achieve its mission: preserving human agency in the age of AI. Groq is committed to giving everyone access to relatively low-cost generative AI compute, driving the price down even as they continue to increase speed. We talked about how the company culture supports that mission, what it feels like to now be on the same playing field as companies like Nvidia, and Jonathan's belief that true disruption isn't just doing things other people can't do or don't want to do, but doing things other people don't believe can be done – even when you show them evidence to the contrary.

Other topics we touched on include:
Why the ability to customize on demand makes generative AI different
Managing your own and other people's fear as a founder
The problems of corporate innovation
The role of luck in business
How he thinks about long-term goals and growth

—

Brought to you by:
Mercury – The art of simplified finances. Learn more.
DigitalOcean – The cloud loved by developers and founders alike. Sign up.
Runway – The finance platform you don't hate. Learn more.

—

Where to find Jonathan Ross:
• X: https://x.com/JonathanRoss321
• LinkedIn: https://www.linkedin.com/in/ross-jonathan/

Where to find Eric:
• Newsletter: https://ericries.carrd.co/
• Podcast: https://ericriesshow.com/
• YouTube: https://www.youtube.com/@theericriesshow

—

In This Episode We Cover:
(04:24) Jonathan's involvement with the DeepMind Challenge Match between AlphaGo and Lee Sedol
(06:06) How Jonathan's work at Google led him to that moment
(08:46) Why generative AI isn't just the next internet or mobile
(10:12) The divine move in the DeepMind Challenge Match
(11:56) How Jonathan ended up designing chips without the usual background
(13:11) GPUs vs. TPUs
(14:33) What risk really is
(15:11) Groq's mind-blowing AI demo
(16:23) How Jonathan decided to leave Google and start Groq
(17:30) The differences between doing an innovation project at a company and starting a new company
(19:03) Nassim Taleb's Black Swan theory
(21:02) Groq's founding story
(24:12) The difference in attitude towards AI now compared to 2016 and how it affected Groq
(25:46) The moment the tide turned with LLMs
(28:28) The week-over-week jump from 8,000 users to 400,000 users
(30:32) How Groq used HBM, and what it is (the memory used by GPUs)
(32:33) Jonathan's approach to disruption
(35:38) Groq's initial raise and focus on software
(36:13) How struggling to survive made Groq stronger
(37:13) Hiring for return on luck
(40:07) How Jonathan and Groq think about the long term
(42:25) Founder control issues
(45:31) How Groq thinks about maintaining its mission and trustworthiness
(49:51) Jonathan's vision for a capital market that would support companies like Groq
(52:58) How Groq manages internal cultural alignment
(55:59) Groq's mission to preserve human agency in the age of AI and how it approaches achieving it
(59:48) Lightning round

You can find the transcript and references at https://www.ericriesshow.com/

—

Production and marketing by https://penname.co/. Eric may be an investor in the companies discussed.
Combining LLMs with AlphaGo-style deep reinforcement learning has been a holy grail for many leading AI labs, and with o1 (aka Strawberry) we are seeing the most general merging of the two modes to date. o1 is admittedly better at math than essay writing, but it has already achieved SOTA on a number of math, coding and reasoning benchmarks. Deep RL legend and now OpenAI researcher Noam Brown and teammates Ilge Akkaya and Hunter Lightman discuss the ah-ha moments on the way to the release of o1, how it uses chains of thought and backtracking to think through problems, the discovery of strong test-time compute scaling laws and what to expect as the model gets better.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital

Mentioned in this episode:
Learning to Reason with LLMs: Technical report accompanying the launch of OpenAI o1.
Generator-verifier gap: Concept Noam explains in terms of what kinds of problems benefit from more inference-time compute (a rough code illustration follows these show notes).
Agent57: Outperforming the human Atari benchmark, 2020 paper where DeepMind demonstrated "the first deep reinforcement learning agent to obtain a score that is above the human baseline on all 57 Atari 2600 games."
Move 37: Pivotal move in AlphaGo's second game against Lee Sedol where it made a move so surprising that Sedol thought it must be a mistake, and only later discovered he had lost the game to a superhuman move.
IOI competition: OpenAI entered o1 into the International Olympiad in Informatics and received a Silver Medal.
System 1, System 2: The thesis of Daniel Kahneman's pivotal book of behavioral economics, Thinking, Fast and Slow, which posited two distinct modes of thought, with System 1 being fast and instinctive and System 2 being slow and rational.
AlphaZero: The successor to AlphaGo which learned a variety of games completely from scratch through self-play. Interestingly, self-play doesn't seem to have a role in o1.
Solving Rubik's Cube with a robot hand: Early OpenAI robotics paper that Ilge Akkaya worked on.
The Last Question: Science fiction story by Isaac Asimov with interesting parallels to scaling inference-time compute.
Strawberry: Why?
o1-mini: A smaller, more efficient version of o1 for applications that require reasoning without broad world knowledge.

00:00 - Introduction
01:33 - Conviction in o1
04:24 - How o1 works
05:04 - What is reasoning?
07:02 - Lessons from gameplay
09:14 - Generation vs verification
10:31 - What is surprising about o1 so far
11:37 - The trough of disillusionment
14:03 - Applying deep RL
14:45 - o1's AlphaGo moment?
17:38 - A-ha moments
21:10 - Why is o1 good at STEM?
24:10 - Capabilities vs usefulness
25:29 - Defining AGI
26:13 - The importance of reasoning
28:39 - Chain of thought
30:41 - Implication of inference-time scaling laws
35:10 - Bottlenecks to scaling test-time compute
38:46 - Biggest misunderstanding about o1?
41:13 - o1-mini
42:15 - How should founders think about o1?
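The generator-verifier gap mentioned above is easiest to see in code: when checking an answer is much cheaper or more reliable than producing one, extra inference-time compute can be spent sampling many candidates and keeping one that a verifier accepts. The sketch below is not how o1 works internally (that has not been published); it is a minimal best-of-N loop with hypothetical generate/verify functions, purely to illustrate why verifiable domains like math and code benefit most from more test-time compute.

```python
import random
from typing import Callable, Optional

def best_of_n(
    generate: Callable[[str], str],      # samples one candidate answer for a problem
    verify: Callable[[str, str], bool],  # cheap / reliable check of a candidate
    problem: str,
    n: int = 16,
) -> Optional[str]:
    """Spend more inference-time compute by sampling up to N candidates and
    returning the first one the verifier accepts. The larger the gap between
    how hard generation is and how easy verification is, the more these extra
    samples (test-time compute) help."""
    for _ in range(n):
        candidate = generate(problem)
        if verify(problem, candidate):
            return candidate
    return None  # no verified answer within the compute budget

# Toy example: "solve" a multiplication by guessing, with an exact checker.
problem = "17 * 24"
generate = lambda p: str(random.randint(350, 450))  # weak generator
verify = lambda p, ans: int(ans) == eval(p)         # strong verifier (toy use of eval)
print(best_of_n(generate, verify, problem, n=1000)) # usually prints 408
```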
After AlphaGo beat Lee Sedol, a young mechanical engineer at Google thought of another game reinforcement learning could win: energy optimization at data centers. Jim Gao convinced his bosses at the Google data center team to let him work with the DeepMind team to try. The initial pilot resulted in a 40% energy savings and led him and his co-founders to start Phaidra to turn this technology into a product. Jim discusses the challenges of AI readiness in industrial settings and how we have to build on top of the control systems of the 70s and 80s to achieve the promise of the Fourth Industrial Revolution. He believes this new world of self-learning systems and self-improving infrastructure is a key factor in addressing global climate change. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital Mentioned in this episode: Mustafa Suleyman: Co-founder of DeepMind and Inflection AI and currently CEO of Microsoft AI, known to his friends as “Moose” Joe Kava: Google VP of data centers who Jim sent his initial email to pitching the idea that would eventually become Phaidra Constrained optimization: the class of problem that reinforcement learning can be applied to in real world systems Vedavyas Panneershelvam: co-founder and CTO of Phaidra; one of the original engineers on the AlphaGo project Katie Hoffman: co-founder, President and COO of Phaidra Demis Hassabis: CEO of DeepMind
This week with Charlotte Grieser and Sina Kürtz. Their topics are: - Law in Japan: residents are required to laugh every day (00:39) - Embarrassment research: singing badly for science (08:28) - A sex game against skin cancer: stroke me and count my moles! (14:10) - Bowel movements: pooping 1 to 3 times a day is healthy (22:56) More information and studies here: No joke: new law obliges people in Japan to laugh https://www.swr.de/swrkultur/wissen/kein-witz-neues-gesetz-verpflichtet-menschen-in-japan-zu-lachen-100.html Why we blush https://www.deutschlandfunk.de/karaoke-experiment-warum-erroeten-wir-interview-christian-keysers-dlf-039c4739-100.html Dermatologist for a night https://www.faz.net/aktuell/wissen/medizin-ernaehrung/erotischer-hautkrebs-check-skintimacy-von-aok-und-amorelie-19864583.html Study on bowel movements: how often is healthy? https://www.faz.net/aktuell/wissen/medizin-ernaehrung/studie-zum-stuhlgang-wie-haeufig-ist-gesund-19859556.html Poop Frequency Linked to Long-Term Health, New Study Reveals https://scienceblog.com/546070/poop-frequency-linked-to-long-term-health-new-study-reveals/ Our podcast tip of the week: Der KI-Podcast. From the Mechanical Turk of the 18th century it moves on to the famous mathematician Alan Turing and the question "Are brains like computers?". It ends with Lee Sedol, the master of the game of Go never surpassed by another human, who nonetheless lost to the AI AlphaGo. In the new episode, our colleagues from "Der KI-Podcast" look at the big milestones in the development of artificial intelligence. And it really is impressive! Did you realize how long humanity has actually been researching this? Der KI-Podcast is now exactly one year old, and if development continues as it has so far, we can surely look forward to many more episodes, years and milestones. Congratulations, Marie, Gregor and Fritz! Be sure to listen to their KI-Podcast in the ARD Audiothek - a new episode every Tuesday. https://www.ardaudiothek.de/sendung/der-ki-podcast/94632864/ Got nerd facts and bad jokes for us? Write to us on WhatsApp or send a voice message: 0174/4321508 Or by email: faktab@swr2.de Or directly at http://swr.li/faktab Instagram: @charlotte.grieser @julianistin @sinologin @aeneasrooch Editors: Christoph König and Chris Eckardt Concept: Christoph König
Der KI-Podcast is celebrating its first anniversary - with a very special episode. Not only are Marie, Gregor and Fritz hosting an episode as a trio for the first time - they are also wearing silly party hats while doing it! Above all, they have brought along their favorite moments from AI history: fake chess players, neural networks, and the shoulder hit on the fifth line. About the hosts: Gregor Schmalzried is a freelance tech journalist and consultant; he works for Bayerischer Rundfunk and Brand Eins, among others. Fritz Espenlaub is a freelance journalist and presenter at Bayerischer Rundfunk and 1E9, with a focus on technology and the economy. Marie Kilg is Chief AI Officer at Deutsche Welle. Before that, she was a product manager at Amazon Alexa. 00:00 Intro 02:24 The Mechanical Turk 12:17 McCulloch and Pitts: are brains like computers? 21:37 AlphaGo vs. Lee Sedol 35:27 What these AI birthdays say about the technology Editing and contributions: David Beck, Cristina Cletiu, Chris Eckardt, Fritz Espenlaub, Marie Kilg, Mark Kleber, Gudrun Riedl, Christian Schiffer, Gregor Schmalzried Links and sources: DER KI-PODCAST LIVE at the BR Podcast Festival in Nuremberg https://tickets.190a.de/event/der-ki-podcast-live-in-nurnberg-hljs6y The Mechanical Turk https://www.britannica.com/story/the-mechanical-turk-ai-marvel-or-parlor-trick Amazon MTurk https://www.mturk.com/ Brain-machine metaphors: https://dirt.fyi/article/2024/03/metaphorically-speaking https://arxiv.org/abs/2206.04603 Warren McCulloch and Walter Pitts: A Logical Calculus of the Ideas Immanent in Nervous Activity https://link.springer.com/chapter/10.1007/978-3-642-70911-1_14 McCulloch-Pitts Neuron — Mankind's First Mathematical Model Of A Biological Neuron https://towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1 Untold History of AI: How Amazon's Mechanical Turkers Got Squeezed Inside the Machine https://spectrum.ieee.org/untold-history-of-ai-mechanical-turk-revisited-tktkt AlphaGo documentary on YouTube: https://www.youtube.com/watch?v=WXuK6gekU1Y MANIAC by Benjamin Labatut: https://www.suhrkamp.de/buch/benjamin-labatut-maniac-t-9783518431177 Contact: We welcome questions and comments at podcast@br.de. Support us: If you like this podcast, we would appreciate a rating on your favorite podcast platform. Subscribe to the KI-Podcast in the ARD Audiothek or wherever you listen to podcasts so you never miss an episode. And feel free to recommend us to others!
The Bacon Podcast with Brian Basilico | CURE Your Sales & Marketing with Ideas That Make It SIZZLE!
Back in 1997, Deep Blue (an IBM computer) defeated Garry Kasparov, the world chess champion at the time, in a six-game match with a final score of 3.5-2.5 in favor of Deep Blue. Almost 20 years later, in 2016, Google's AlphaGo program achieved a similar victory by defeating Lee Sedol, one of the world's top professional Go players, in a five-game match with a final score of 4-1. Artificial intelligence and machine learning have been around for decades, yet they were not accessible to you and me. Now that they are, they're predicted to change marketing (and life in general) forever. Experts, insiders, and reporters expect good and bad from AI. Some predict Skynet from the Terminator movies, while others expect it to cure cancer. Ockham's razor would predict that it's probably something in the middle. Companies and their leaders are telling us that they are working to make your experience better and part of the greater good of humanity. Search is supposed to provide the best answer to your search prompt, and social media is supposed to serve the content you want to see. In reality, it's more about profits than principles. Search is optimized to get you to click ads—that's how Google makes over 75% of its revenue. Facebook feeds your friendships and shows posts meant to stir the pot and keep you engaged. Facebook makes over 95% of its profits from ad sales. Ockham's razor would show us that trying to find and win customers through search and social media would benefit the platform more than you. It's like a casino with all that noise of winners on its machines, but the odds have been programmed to make the casino much more money than it's paying out! When it comes to marketing, we have been using AI since the beginnings of Google and Facebook. Both are run through algorithms.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLMs seem (relatively) safe, published by JustisMills on April 26, 2024 on LessWrong. Post for a somewhat more general audience than the modal LessWrong reader, but gets at my actual thoughts on the topic. In 2018 OpenAI defeated the world champions of Dota 2, a major esports game. This was hot on the heels of DeepMind's AlphaGo performance against Lee Sedol in 2016, achieving superhuman Go performance way before anyone thought that might happen. AI benchmarks were being cleared at a pace which felt breathtaking at the time, papers were proudly published, and ML tools like Tensorflow (released in 2015) were coming online. To people already interested in AI, it was an exciting era. To everyone else, the world was unchanged. Now Saturday Night Live sketches use sober discussions of AI risk as the backdrop for their actual jokes, there are hundreds of AI bills moving through the world's legislatures, and Eliezer Yudkowsky is featured in Time Magazine. For people who have been predicting, since well before AI was cool (and now passe), that it could spell doom for humanity, this explosion of mainstream attention is a dark portent. Billion dollar AI companies keep springing up and allying with the largest tech companies in the world, and bottlenecks like money, energy, and talent are widening considerably. If current approaches can get us to superhuman AI in principle, it seems like they will in practice, and soon. But what if large language models, the vanguard of the AI movement, are actually safer than what came before? What if the path we're on is less perilous than what we might have hoped for, back in 2017? It seems that way to me. LLMs are self limiting To train a large language model, you need an absolutely massive amount of data. The core thing these models are doing is predicting the next few letters of text, over and over again, and they need to be trained on billions and billions of words of human-generated text to get good at it. Compare this process to AlphaZero, DeepMind's algorithm that superhumanly masters Chess, Go, and Shogi. AlphaZero trains by playing against itself. While older chess engines bootstrap themselves by observing the records of countless human games, AlphaZero simply learns by doing. Which means that the only bottleneck for training it is computation - given enough energy, it can just play itself forever, and keep getting new data. Not so with LLMs: their source of data is human-produced text, and human-produced text is a finite resource. The precise datasets used to train cutting-edge LLMs are secret, but let's suppose that they include a fair bit of the low hanging fruit: maybe 5% of publicly available text that is in principle available and not garbage. You can schlep your way to a 20x bigger dataset in that case, though you'll hit diminishing returns as you have to, for example, generate transcripts of random videos and filter old mailing list threads for metadata and spam. But nothing you do is going to get you 1,000x the training data, at least not in the short run. Scaling laws are among the watershed discoveries of ML research in the last decade; basically, these are equations that project how much oomph you get out of increasing the size, training time, and dataset that go into a model. And as it turns out, the amount of high quality data is extremely important, and often becomes the bottleneck. 
It's easy to take this fact for granted now, but it wasn't always obvious! If computational power or model size was usually the bottleneck, we could just make bigger and bigger computers and reliably get smarter and smarter AIs. But that only works to a point, because it turns out we need high quality data too, and high quality data is finite (and, as the political apparatus wakes up to what's going on, legally fraught). There are rumbling...
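As an illustration of the scaling-law point above (this is not from the post itself): Chinchilla-style fits express predicted loss as an irreducible term plus a model-size term and a data term, so once the data term dominates, extra parameters buy very little. A rough Python sketch, using the published Hoffmann et al. (2022) coefficient estimates as assumed values:

def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.69, A: float = 406.4, B: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Chinchilla-style parametric loss: irreducible term + model-size term + data term."""
    return E + A / n_params ** alpha + B / n_tokens ** beta

# Scaling parameters 10x with the dataset held fixed barely moves the loss,
# while scaling the data helps noticeably more - the data bottleneck in action.
print(predicted_loss(70e9, 1.4e12))    # roughly Chinchilla-sized run
print(predicted_loss(700e9, 1.4e12))   # 10x the parameters, same data
print(predicted_loss(70e9, 14e12))     # same parameters, 10x the data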
In episode 98 of the [i3] Podcast, we are speaking with Will Liang, who is an executive director at MA Financial Group, but is also well-known for his time with Macquarie Group, where he worked for more than a decade, including as Head of Technology for Macquarie Capital Australia and New Zealand. We discuss the application of AI in financial services and wealth management, ChatGPT and how to deal with AI hallucinations. Overview of Podcast with Will Liang 02:30 When I was young I contemplated becoming a professional Go player 05:00 2016 was a life shattering moment for me; Lee Sedol was defeated by AlphaGo 07:00 I think generative AI will be a net positive for society 08:30 The impact of AI on industries will not be equally distributed 15:00 Brainstorming with ChatGPT or Claude 16:00 AI might help us communicate better 19:00 AI hallucinations are actually a fixable problem 22:30 Myths and misconceptions in AI 27:00 Most of the time when ChatGPT doesn't work is because we are prompting it in the wrong way 28:30 Thinking Fast & Slow; AI is not good at thinking slow 29:00 Losing our jobs to AI? It is important to distinguish between the automation of tasks versus the automation of jobs 35:00 When implementing AI, look at where your data is and try to bring your application closer to the data 39:00 Don't trust any third party large language model, instead deploy an open source model into your own cloud environment 43:00 You ask ChatGPT 10 times the same question and it will give you nine different answers. That is a problem. 45:00 Deep fake is a real problem 50:00 Future trends: AI agents 53:00 Generative AI will be more of a game changer for private markets than public markets
Don't try to make AI do all the work for you! It's just a tool like any other and you need to learn to use it effectively. Here are some of my personal tips for making the best of the powerful technology that would make Honinbo Shusaku shake in his boots. Links Lee Sedol's Google Interview Get 25% off your very first purchase on GoMagic.org by using the code STARPOINT at checkout. --- Send in a voice message: https://podcasters.spotify.com/pod/show/starpoint/message Support this podcast: https://podcasters.spotify.com/pod/show/starpoint/support
In this episode, Gary Ryan interviews Dave King, the co-founder and CEO of Move37 and former founder of The Royals creative agency. They discuss Dave's journey in the advertising industry and his transition to working with artificial intelligence (AI). They explore the impact of AI on creativity and critical thinking, as well as its potential to enhance productivity in the workplace. Dave shares insights into Move37's AI research assistant, Archer, and the consulting work they do to help organizations leverage AI. The conversation concludes with a discussion on engagement in the workplace and the importance of embracing AI to stay competitive.
Takeaways
Learn about Dave's career journey and how he left a well paid, secure job, to start The Royals.
AI has the potential to enhance creativity and critical thinking in the workplace.
Using AI tools can increase productivity and empower individuals to create incredible things.
Engaging with AI and learning how to use it effectively can make individuals more valuable in their careers.
The impact of AI on labor and industry is a concern, but it also presents opportunities for new types of work and collaboration between humans and machines.
Communication skills, critical thinking, and media literacy are important skills for working with AI.
Engagement in the workplace can be improved by leveraging AI to enhance productivity and create more fulfilling work experiences.
Follow Dave King on Twitter here. Watch the episode on YouTube here. Connect with Gary Ryan on LinkedIn here. Contact Gary Ryan here. Purchase Gary Ryan's new book, Yes For Success - How to Achieve Life Harmony and Fulfillment here http://yesforsuccess.guru Purchase Yes For Success - How to Achieve Life Harmony and Fulfillment Kindle Edition on Amazon here or buy the physical book here. If you would like support in creating a high-performance culture based on treating people as human beings, please click here to contact Gary Ryan
Today, the Spotlight shines On Joe Mills, a musician and producer who performs under the name ‘Aver' in the Berlin-based band Move 78. Move 78's music sits at the intersection of improvised jazz and programmed hip-hop. Their music is crafted from hours of studio improvisations that have been chopped-up, rearranged, and layered with live instrumentation provided by the band members. In this chat, Aver explains their technique and process as well as how it extends to their live performances. It's amazing. Move 78's name is taken from a match of the ancient Chinese board game Go between Lee Sedol, the world champion of Go at the time of the match, and a computer program named AlphaGo. Lee was defeated in the first three games of the five-game match by his AI opponent, but he adapted and played a move so strange that it completely befuddled AlphaGo and its algorithms. This move, which represented the human response to the challenges of an ever-evolving technological world, was, of course, Move 78.
------------------
Dig Deeper
Listen to Move 78's Automated Improvisation on Bandcamp or your streaming platform of choice
Follow Move 78 on Instagram and Facebook
Follow Aver on Instagram and Facebook
Move 78 - “Middling” [Live At Badehaus]
In Two Moves, AlphaGo and Lee Sedol Redefined the Future
Move 78's Go-inspired artwork
Unapologetic Expression: The Inside Story of the UK Jazz Explosion
Gilles Peterson: ‘The boundary between club culture and jazz is finally breaking'
DJ Shadow On Sampling As A ‘Collage Of Mistakes'
El-P: 10 of his best productions
DJ Shadow & Cut Chemist - Product Placement
------------------
• Did you enjoy this episode? Please share it with one friend! You can also rate Spotlight On ⭐️⭐️⭐️⭐️⭐️ and leave a review on Apple Podcasts.
• Subscribe! Be the first to check out each new episode of Spotlight On in your podcast app of choice.
• Looking for more? Visit spotlightonpodcast.com for bonus content, web-only interviews + features, and the Spotlight On email newsletter.
Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Even Superhuman Go AIs Have Surprising Failure Modes, published by AdamGleave on July 20, 2023 on The AI Alignment Forum. In March 2016, AlphaGo defeated the Go world champion Lee Sedol, winning four games to one. Machines had finally become superhuman at Go. Since then, Go-playing AI has only grown stronger. The supremacy of AI over humans seemed assured, with Lee Sedol commenting they are an "entity that cannot be defeated". But in 2022, amateur Go player Kellin Pelrine defeated KataGo, a Go program that is even stronger than AlphaGo. How? It turns out that even superhuman AIs have blind spots and can be tripped up by surprisingly simple tricks. In our new paper, we developed a way to automatically find vulnerabilities in a "victim" AI system by training an adversary AI system to beat the victim. With this approach, we found that KataGo systematically misevaluates large cyclically connected groups of stones. We also found that other superhuman Go bots including ELF OpenGo, Leela Zero and Fine Art suffer from a similar blindspot. Although such positions rarely occur in human games, they can be reliably created by executing a straightforward strategy. Indeed, the strategy is simple enough that you can teach it to a human who can then defeat these Go bots unaided. The victim and adversary take turns playing a game of Go. The adversary is able to sample moves the victim is likely to take, but otherwise has no special powers, and can only play legal Go moves. Our AI system (that we call the adversary) can beat a superhuman version of KataGo in 94 out of 100 games, despite requiring only 8% of the computational power used to train that version of KataGo. We found two separate exploits: one where the adversary tricks KataGo into passing prematurely, and another that involves coaxing KataGo into confidently building an unsafe circular group that can be captured. Go enthusiasts can read an analysis of these games on the project website. Our results also give some general lessons about AI outside of Go. Many AI systems, from image classifiers to natural language processing systems, are vulnerable to adversarial inputs: seemingly innocuous changes such as adding imperceptible static to an image or a distractor sentence to a paragraph can crater the performance of AI systems while not affecting humans. Some have assumed that these vulnerabilities will go away when AI systems get capable enough - and that superhuman AIs will always be wise to such attacks. We've shown that this isn't necessarily the case: systems can simultaneously surpass top human professionals in the common case while faring worse than a human amateur in certain situations. This is concerning: if superhuman Go AIs can be hacked in this way, who's to say that transformative AI systems of the future won't also have vulnerabilities? This is clearly problematic when AI systems are deployed in high-stakes situations (like running critical infrastructure, or performing automated trades) where bad actors are incentivized to exploit them. More subtly, it also poses significant problems when an AI system is tasked with overseeing another AI system, such as a learned reward model being used to train a reinforcement learning policy, as the lack of robustness may cause the policy to capably pursue the wrong objective (so-called reward hacking).
A summary of the rules of Go (courtesy of the Wellington Go Club): simple enough to understand in a minute or two, yet leading to significant strategic complexity. How to Find Vulnerabilities in Superhuman Go Bots To design an attack we first need a threat model: assumptions about what information and resources the attacker (us) has access to. We assume we have access to the input/output behavior of KataGo, but not access to its inner workings (i.e. its weights). Specifically, w...
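To make the victim-play idea above concrete, here is a deliberately tiny, self-contained analogue in Python. It is not the paper's method (which trains an AlphaZero-style adversary against KataGo itself); instead, a tabular Q-learning "adversary" learns to exploit a frozen, flawed Nim-playing "victim" from a starting position that is theoretically lost for it. The game, the victim's flaw, and the learning algorithm are all my own stand-ins, but the structure is the same: train against a fixed victim and win by finding its blind spot rather than by playing objectively well.

import random
from collections import defaultdict

START = 12          # 12 stones: a lost position for the first player against perfect play
ACTIONS = (1, 2, 3)

def victim_move(stones):
    """Frozen 'victim': plays perfect Nim (leave a multiple of 4) except that it
    never takes 3 stones - its blind spot."""
    legal = [a for a in (1, 2) if a <= stones]
    for a in legal:
        if (stones - a) % 4 == 0:
            return a
    return random.choice(legal)

Q = defaultdict(float)  # adversary's action values, keyed by (stones, action)

def adversary_move(stones, eps):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

def play_episode(eps=0.1, lr=0.2):
    """Adversary moves first; whoever takes the last stone wins. Only the adversary learns."""
    stones, visited = START, []
    while True:
        a = adversary_move(stones, eps)
        visited.append((stones, a))
        stones -= a
        if stones == 0:
            reward = 1.0            # adversary took the last stone
            break
        stones -= victim_move(stones)
        if stones == 0:
            reward = -1.0           # victim took the last stone
            break
    for s, a in visited:            # Monte Carlo update toward the final outcome
        Q[(s, a)] += lr * (reward - Q[(s, a)])
    return reward

for _ in range(20_000):
    play_episode()
wins = sum(play_episode(eps=0.0) > 0 for _ in range(1_000))
print(f"adversary win rate from a 'lost' start: {wins / 10:.1f}%")

Against a perfect opponent the adversary would lose every game from 12 stones; against this particular victim it learns a near-100% win rate, loosely analogous to the cyclic exploit working against KataGo while failing against sound play.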
Our guest today is Kellin Pelrine, Research Scientist at FAR AI and Doctoral Researcher at the Quebec Artificial Intelligence Institute (MILA). In our conversation, Kellin first explains how he defeated a superhuman Go-playing AI engine named KataGo 14 games to 1. We talk about KataGo's weaknesses and discuss how Kellin managed to identify them using Reinforcement Learning. In the second part of the episode, we dive into Kellin's research on building practical AI systems. We dig into his work on misinformation detection and political polarisation and discuss why building stronger models isn't always enough to get real world impact. If you enjoyed the episode, please leave a 5 star review and subscribe to the AI Stories Youtube channel.
Follow Kellin on LinkedIn: https://www.linkedin.com/in/kellin-pelrine/
Follow Neil on LinkedIn: https://www.linkedin.com/in/leiserneil/
————
(00:00) - Intro
(01:54) - How Kellin got into the field
(03:23) - The game of Go
(06:10) - Lee Sedol vs AlphaGo
(11:42) - How Kellin defeated KataGo 14-1
(26:24) - Using AI to detect KataGo's weaknesses
(37:07) - Kellin's research on building practical AI systems
(43:10) - Misinformation detection
(49:22) - Political polarisation
(54:39) - ML in Academia vs in Industry
(1:06:03) - Career Advice
AI is another major technological innovation. AI needs data, or more precisely, big organized data. Most data processing is about making it useful for automatic systems such as machine learning, deep learning, and other AI systems. But one big problem with AI systems is that they lack context. An AI system is a pattern recognition machine devoid of any understanding of how the world works. This lecture discusses how AI systems are used in business and their limitations. A lecture by Raghavendra Rau recorded on 22 May 2023 at Barnard's Inn Hall, London. The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/ai-business Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/ Website: https://gresham.ac.uk Twitter: https://twitter.com/greshamcollege Facebook: https://facebook.com/greshamcollege Instagram: https://instagram.com/greshamcollege Support the show
How can healthcare use AI to make better diagnoses? How can we ourselves use AI to optimize our health? What can artificial intelligence do for our health today - and what can we expect in the future? In episode 68 of the podcast Hälsa kommer inifrån we talk with Professor Anne Håkansson, who researches AI and health. The host is Sofia Mellström. What is AI, really? Artificial intelligence is an umbrella term for computer systems that can draw conclusions, solve problems, plan - and that are self-learning. You could say that an ordinary computer program is programmed to do the right thing, while an AI program is programmed to learn a task through practice. A good example that shows the difference is AlexNet. In 2012 there was a competition in which computer programs competed to classify millions of images. There were traditional programs, written specifically for the purpose. But the competition was won decisively by AlexNet, a program with no prior knowledge at all, but designed to learn. After five days of training, AlexNet was better at image categorization than the programs that had been written specifically for the task. Since then, researchers at Danderyd Hospital, among others, have downloaded AlexNet and begun training it to interpret X-ray images. AI, a short history: When we study the history of AI we end up in 1950, when Alan Turing invented the Turing test, which assesses whether or not a machine is intelligent. But the term "artificial intelligence" itself was first used in 1956. In 1967 the Nearest Neighbors algorithm was invented, which is important for object classification and pattern recognition. In 1979 came the Stanford Cart, the forerunner of self-driving cars. In 1985 came NETtalk, which used deep learning to learn to speak. In 1997, IBM's supercomputer Deep Blue managed to beat the reigning world chess champion, Garry Kasparov. In 2004, NASA had self-driving vehicles on the surface of Mars. In 2011, Watson won at Jeopardy - the same Watson we talked about earlier, which is used today to read X-ray images. In 2012 came AlexNet, mentioned above, which learned to categorize millions of images and beat all the purpose-written programs. In 2016, the computer AlphaGo won 4 out of 5 games against the master Lee Sedol in the game of Go, which is far more complicated than chess. There is a free documentary on YouTube called AlphaGo - The Movie, which I warmly recommend to anyone who wants to learn more about AI. In November 2022 came ChatGPT from OpenAI, which is predicted to become a challenger to Google. It feels like an awful lot is happening in AI right now. Research on wearables: how measurement data can encourage us toward better health. Anne Håkansson is a professor of Computer Science at the University of Tromsø. Her group studies how AI can help people take stock of their life situation. She explains that the people taking part in the study wear measuring devices (e.g. a Fitbit or an Oura ring) that track various bodily functions. Participants in the study get an app that can send them messages based on their own preferences: messages like "It's time for you to go out for a walk", or "You have now been highly active for three hours, it might be good to take a break." If a person sleeps badly we can encourage them to take it easy in the evening (and skip training) to avoid the adrenaline spike. But we also collect other data, such as the weather. If it is raining or storming outside, we shouldn't send an encouragement to go out for a walk - it is meant to be positive feedback, after all. We also know what leisure interests people have, such as whether they like cycling or skiing.
Then we can tailor the encouragement based on interests, goals, needs and even things like the weather. It becomes relatively complex. We also need to know the participants' schedules to see whether they even have time to act on the suggestions. We build up a database for each person in order to find patterns and see how they are doing. If they have friends or colleagues who also take part, we can help with group activities. If people in the study find others with similar needs, they can connect and encourage each other.
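Purely to make that description concrete (the field names and thresholds below are invented for illustration, not taken from the study), a nudge decision of the kind described might look roughly like this in Python:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    active_minutes_today: int
    hours_since_last_break: float
    slept_badly: bool
    raining: bool
    has_free_slot: bool

def next_nudge(ctx: Context) -> Optional[str]:
    if not ctx.has_free_slot:
        return None                      # never nudge when the calendar is full
    if ctx.hours_since_last_break >= 3:
        return "You have been highly active for three hours - time for a break."
    if ctx.active_minutes_today < 30 and not ctx.raining:
        return "It's a good time to go out for a walk."
    if ctx.slept_badly:
        return "Take it easy this evening and skip the hard workout."
    return None

print(next_nudge(Context(active_minutes_today=10, hours_since_last_break=1.0,
                         slept_badly=False, raining=False, has_free_slot=True)))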
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inside the mind of a superhuman Go model: How does Leela Zero read ladders?, published by Haoxing Du on March 1, 2023 on The AI Alignment Forum. tl;dr—We did some interpretability on Leela Zero, a superhuman Go model. With a technique similar to the logit lens, we found that the residual structure of Leela Zero induces a preferred basis throughout the network, giving rise to persistent, interpretable channels. By directly analyzing the weights of the policy and value heads, we found that the model stores information related to the probability of the pass move along the top edge of the board, and those related to the board value in checkerboard patterns. We also took a deep dive into a specific Go technique, the ladder, and identified a very small subset of model components that are causally responsible for the model's judgement of ladders. Introduction We live in a strange world where machine learning systems can generate photo-realistic images, write poetry and computer programs, play and win games, and predict protein structures. As machine learning systems become more capable and relevant to many aspects of our lives, it is increasingly important that we understand how the models produce the outputs that they do; we don't want important decisions to be made by opaque black boxes. Interpretability is an emerging area of research that aims to offer explanations for the behavior of machine learning systems. Early interpretability work began in the domain of computer vision, and there has been a focus on interpreting transformer-based large language models in more recent years. Applying interpretability techniques to the domain of game-playing agents and reinforcement learning is still relatively uncharted territory. In this work, we look into the inner workings of Leela Zero, an open-source Go-playing neural network. It is also the first application of many mechanistic interpretability techniques to reinforcement learning. Why interpret a Go model? Go models are very capable. Many of us remember the emotional experience of watching AlphaGo's 2016 victory over the human world champion, Lee Sedol. Not only have there been algorithmic improvements since AlphaGo, these models improve via self-play, and can essentially continue getting better the longer they are trained. The best open-source Go model, KataGo, is trained distributedly, and the training is still ongoing as of February 2023. Just as AlphaGo was clearly one notch above Lee Sedol, every generation of Go models has been a decisive improvement over the previous generation. KataGo in 2022 was estimated to be at the level of a top-100 European player with only the policy, and can easily beat all human players with a small amount of search. Understanding a machine learning system that performs at a superhuman level seems particularly worthwhile as future machine learning systems are only going to become more capable. Little is known about models trained to approximate the outcome of a search process. Much interpretability effort has focused on models trained on large amounts of human-generated data, such as labeled images for image models, and Internet text for language models.
In contrast, while training AlphaZero-style models, moves are selected via Monte-Carlo Tree Search (MCTS), and the policy network of the model is trained to predict the outcome of this search process (see Model section for more detail). In other words, the policy network learns to distill the result of search. While it is relatively easy to get a grasp of what GPT-2 is trained to do by reading some OpenWebText, it's much less clear what an AlphaZero-style model learns. How does a neural network approximate a search process? Does it have to perform internal search? It seems very useful to try to get an answer to these questions. Compared to a g...
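To illustrate the logit-lens-style probe described above - on a toy model, not on Leela Zero itself, and with invented layer sizes - the key mechanic is that residual additions keep every block writing into a shared activation space, so the network's final policy head can be applied after each intermediate block to watch the move distribution take shape. A minimal PyTorch sketch (only the 18 input planes are borrowed from Leela Zero; everything else is made up):

import torch
import torch.nn as nn

BOARD, CHANNELS, BLOCKS = 19, 32, 6   # toy sizes, far smaller than the real network

class ResBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c, 3, padding=1)
        self.conv2 = nn.Conv2d(c, c, 3, padding=1)
    def forward(self, x):
        # The residual add keeps every block writing into the same activation space.
        return x + self.conv2(torch.relu(self.conv1(x)))

class ToyGoNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(18, CHANNELS, 3, padding=1)   # 18 input planes, as in Leela Zero
        self.blocks = nn.ModuleList([ResBlock(CHANNELS) for _ in range(BLOCKS)])
        self.policy_head = nn.Sequential(
            nn.Conv2d(CHANNELS, 2, 1), nn.Flatten(),
            nn.Linear(2 * BOARD * BOARD, BOARD * BOARD + 1),  # 361 board moves + pass
        )
    def forward(self, planes):
        x = self.stem(planes)
        readouts = []
        for block in self.blocks:
            x = block(x)
            # "Logit lens": decode the intermediate residual state with the final head.
            readouts.append(self.policy_head(x).softmax(dim=-1))
        return readouts

net = ToyGoNet()
dummy_position = torch.randn(1, 18, BOARD, BOARD)   # stand-in for an encoded board position
for i, probs in enumerate(net(dummy_position)):
    print(f"after block {i}: top move {probs.argmax().item()}, p={probs.max().item():.3f}")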
at least when playing Go / Portugal bans more Airbnbs / 3D drones for Ukraine / Windows 11 on Mac / Twitter removes SMS 2FA / OpenAI buys AI·com Sponsor: Only 9 days left until the premiere of the third season of The Mandalorian, exclusively on Disney+. On March 1 we will all be glued to the TV, because the adventures of our beloved Grogu are back, along with his journey through the complicated early years of the New Republic. - A new ship, more space battles, and more excitement. - Have you seen the trailer yet? ⚪ A new method for beating the machines at Go. Seven years after Lee Sedol's great defeat against DeepMind's AlphaGo, an AI team found a new tactic that allows human players to overwhelmingly beat the best engines.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Go has been un-solved: strong human players beat the strongest AIs, published by Taran on February 19, 2023 on LessWrong. Summary This is a friendly explainer for Wang et al's Adversarial Policies Beat Superhuman Go AIs, with a little discussion of the implications for AI safety. Background In March 2016, DeepMind's AlphaGo beat pro player Lee Sedol in a 5 game series, 4 games to 1. Sedol was plausibly the strongest player in the world, certainly in the top 5, so despite his one win everyone agreed that Go was solved and the era of human Go dominance was over. Since then, open-source researchers have reproduced and extended DeepMind's work, producing bots like Leela and KataGo. KataGo in particular is the top bot in Go circles, available on all major Go servers and constantly being retrained and improved. So I was pretty surprised when, last November, Wang et al announced that they'd trained an adversary bot which beat KataGo 72% of the time, even though their bot was playing six hundred visits per move, and KataGo was playing ten million. If you're not a Go player, take my word for it: these games are shocking. KataGo gets into positions that a weak human player could easily win from, and then blunders them away. Even so, it seemed obvious to me that the adversary AI was a strong general Go player, so I figured that no mere human could ever replicate its feats. I was wrong, in two ways. The adversarial AI isn't generally superhuman: it can be beaten by novices. And as you'd expect given that, the exploit can be executed by humans. The Exploit Wang et al trained an adversarial policy, basically a custom Go AI trained by studying KataGo and playing games against it. During training, the adversary was given grey-box access to KataGo: it wasn't allowed to see KataGo's policy network weights directly, but was allowed to evaluate that network on arbitrary board positions, basically letting it read KataGo's mind. It plays moves based on its own policy network, which is only trained on its own moves and not KataGo's (since otherwise it would just learn to copy KataGo). At first they trained the adversary on weak versions of KataGo (earlier versions, and versions that did less search), scaling up the difficulty whenever the adversary's win rate got too high. Their training process uncovered a couple of uninteresting exploits that only work on versions of KataGo that do little or no search (they can trick some versions of KataGo into passing when they shouldn't, for example), but they also uncovered a robust, general exploit that they call the Cyclic Adversary; see the next section to learn how to execute it yourself. KataGo is totally blind to this attack: it typically predicts that it will win with more than 99% confidence up until just one or two moves before its stones are captured, long after it could have done anything to rescue the position. This is the method that strong amateur Go players can use to beat KataGo. So How Do I Beat the AI? You personally probably can't. The guy who did it, Kellin Pelrine, is quite a strong go player. If I'm interpreting this AGAGD page correctly, when he was active he was a 6th dan amateur, about equivalent to an international master in chess -- definitely not a professional, but an unusually skilled expert. 
Having said that, if your core Go skills are good this recipe seems reliable:
- Create a small group, with just barely enough eyespace to live, in your opponent's territory.
- Let it encircle your group. As it does, lightly encircle that encircling group. You don't have to worry about making life with this group, just make sure the AI's attackers can't break out to the rest of the board. You can also start the encirclement later, from dead stones in territory the AI strongly controls.
- Start taking liberties from the AI's attacking group...
The AlphaGo AI plays the game of Go against a human world champion. Unexpected moves by both man (9-dan Go champion Lee Sedol) and machine (AlphaGo). Supposedly, this televised Go match woke up China's leadership to the potential of AI. In the game of Go, players take turns placing black and white stones on a 19×19 grid. The number of board positions in Go is greater than the number of atoms in the observable universe. We discuss the documentary AlphaGo, which tells the story of AlphaGo (created by DeepMind, the company acquired by Google) and the human Go champions it plays against. Who will you cheer for: man or machine? I speak again with my friend Maroof Farook, an AI Engineer at Nvidia. [Note: Maroof's views are his and not those of his employer.] Please enjoy our conversation. We laugh. We cry. We iterate.
Check out what THE MACHINES and one human say about the Super Prompt podcast:
“I'm afraid I can't do that.” — HAL9000
“These are not the droids you are looking for." — Obi-Wan
“Like tears in rain.” — Roy Batty
“Hasta la vista baby.” — T1000
"I'm sorry, but I do not have information after my last knowledge update in January 2022." — GPT3
Can AI win in the game of beauty? When AlphaGo, a computer program based on deep learning networks and developed to play the board game Go, beat the reigning world champion Lee Sedol, alarm bells went off. Is AI now on track to replace, and even dominate, the human race? Lee famously called AlphaGo “an entity that cannot be defeated”, a haunting statement that should set us thinking. In this podcast episode, I reveal my opinion that the singular characteristic setting the human race apart from the most life-like automata has much to do with our ability to perceive beauty, and is perhaps the most critical in our quest to become beautiful. Want more of our podcast? Episode Recaps and Notes: https://www.scienceofbeauty.net/; Instagram: @drteowanlin; Youtube: http://bit.ly/35rjbve; https://phygiartbeauty.com/newsletter/ If you enjoyed this podcast, we would love for you to leave us a 5-star rating so more people can hear it!
In the episode, "AlphaGo the Movie, Big Data, and AI Psychiatry: Will Humans Be Left Behind? (S4, E14)," I give a review of the film AlphaGo, an award-winning documentary that filled me with wonder and forgiveness towards the artificial intelligence movement in general. SPOILER ALERT: the episode contains spoilers, as would any news article on the topic as it was major world news and was a game-changer for artificial intelligence. DeepMind has a fundamental desire to understand intelligence. These AI creatives believe if they can crack the ancient game Go, then they've done something special. And if they could get their AlphaGo computer to beat Lee Sedol, the legendary historic 18-world champion player acknowledged as the greatest Go player of the last decade, then they can change history. The movie is suspenseful, and a noble match between human and machine, making you cheer on the new AI era we are entering and mourn the loss of humanity's previous reign all at once. And with how far AI has come, is big data the only path to achieve the best outcomes? Especially in regard to human healthcare? And what about the non-objective field of psychiatry? When so many mental health professionals and former consumers of the industry are criticizing psychiatry's ethics, scientific claims, and objective status as a real medical field, why are we rushing into using AI in areas that deal with human emotion in healthcare? Because that is where we have a large amount of data. With bias in AI already showing itself in race and gender, the mad may be the next ready targets. #DeepMind #AlphaGo #DemisHassabis #LeeSedol #FanHui #AIHealthcare #westernpsychiatry #moviereview #psychiatryisnotscience #artificialintelligence #bigdata #globalAIsummit #GPT3 #madrights #healthsovereignty #bigpharma#mentalillness #suicide #mentalhealth #electronicmedicalrecordsDon't forget to subscribe to the Not As Crazy As You Think YouTube channel @SicilianoJenAnd please visit my website at: www.jengaitasiciliano.comConnect: Instagram: @ jengaitaLinkedIn: @ jensicilianoTwitter: @ jsiciliano
This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today's most advanced AI systems.00.07 This episode builds on Episode 5 01.05 We start with GANs – Generative Adversarial Networks01.33 Solving the problem of stability, with higher resolution03.24 GANs are notoriously hard to train. They suffer from mode collapse03.45 Worse, the model might not learn anything, and the result is pure noise03.55 DC GANs introduced convolutional layers to stabilise them and enable higher resolution04.37 The technique of outpainting05.55 Generating text as well as images, and producing stories06.14 AI Dungeon06.28 From GANs to Diffusion models06.48 DDPM (De-noising diffusion probabilistic models) does for diffusion models what DC GANs did for GANs07.20 They are more stable, and don't suffer from mode collapse07.30 They do have downsides. They are much more computation intensive08.24 What does the word diffusion mean in this context?08.40 It's adopted from physics. It peels noise away from the image09.17 Isn't that rewinding entropy?09.45 One application is making a photo taken in 1830 look like one taken yesterday09.58 Semantic Segmentation Masks convert bands of flat colour into realistic images of sky, earth, sea, etc10.35 Bounding boxes generate objects of a specified class from tiny inputs11.00 The images are not taken from previously seen images on the internet, but invented from scratch11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them12.40 Failures are eliminated by amendments, as always with models like this12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months13.30 The failure modes get harder to find as the obvious ones are eliminated13.45 Even with 175 billion parameters, GPT-3 struggled to handle three digits in computation15.18 Are you often surprised by what the models do next?15.50 The research community is like a hive mind, and you never know where the next idea will come from16.40 Often the next thing comes from a couple of students at a university16.58 How Ian Goodfellow created the first GAN17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionists, Bayesians…) now obsolete?18.15 We should cultivate different approaches because you never know where they might lead19.15 Symbolic AI (aka Good Old Fashioned AI, or GOFAI) is still alive and kicking19.40 AlphaGo combined deep learning and GOFAI21.00 Doug Lenat is still persevering with Cyc, a purely GOFAI approach21.30 GOFAI models had no learning element. They can't go beyond the humans whose expertise they encapsulate22.25 The now-famous move 37 in AlphaGo's game two against Lee Sedol in 201623.40 Moravec's paradox. 
Easy things are hard, and hard things are easy24.20 The combination of deep learning and symbolic AI has been long urged, and in fact is already happening24.40 Will models always demand more and more compute?25.10 The human brain has far more compute power than even our biggest systems today25.45 Sparse, or MoE (Mixture of Experts) systems are quite efficient26.00 We need more compute, better algorithms, and more efficiency26.55 Dedicated AI chips will help a lot with efficiency26.25 Cerebras claims that GPT-3 could be trained on a single chip27.50 Models can increasingly be trained for general purposes and then tweaked for particular tasks28.30 Some of the big new models are open access29.00 What else should people learn about with regard to advanced AI?29.20 Neural Radiance Fields (NERF) models30.40 Flamingo and Gato31.15 We have mostly discussed research in these episodes, rather than engineering
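For anyone who wants the GAN discussion above in concrete form, here is a minimal adversarial training loop on a toy one-dimensional distribution. It is an illustrative sketch (the tiny networks and the target distribution are invented for the example, not from the episode), but it shows the generator/discriminator tug-of-war that makes these models notoriously unstable.

```python
# Minimal GAN on a toy 1-D Gaussian: generator G maps noise to samples,
# discriminator D tries to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator update: learn to separate real from fake
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())       # should drift towards 3.0
```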
AI is a subject that we will all benefit from understanding better. In this episode, co-hosts Calum Chace and David Wood review progress in AI from the Greeks to the 2012 "Big Bang".00.05: A prediction01.09: AI is likely to cause two singularities in this pivotal century - a jobless economy, and superintelligence02.22: Counterpoint: it may require AGI to displace most people from the workforce. So only one singularity?03.27: Jobs are nowhere near all that matters in humans04.11: Are the "Three Cs jobs" safe? Those involving Creativity, Compassion, and Commonsense? Probably not.05.15: 2012, the Big Bang in AI05.48: AI now makes money. Google and Facebook ate Rupert Murdoch's lunch06.30: AI might make the difference between military success and military failure. So there's a geopolitical race as well as a commercial race07.18: Defining AI.09.03: Intelligence vs Consciousness10.15: Does the Turing Test test for Intelligence or Consciousness?12.30: Can customer service agents pass the Turing Test?13.07: Attributing consciousness by brain architecture or by behaviour15.13: Creativity. Move 37 in game two of AlphaGo vs Lee Sedol, and Hassabis' three buckets of creativity17.13: Music and art produced by AI as examples19.05: History: Start with the Greeks, Hephaestus (Vulcan to the Romans) built automata, and Aristotle speculated about technological unemployment19.58: AI has featured in science fiction from the beginning, eg Mary Shelley's Frankenstein, Samuel Butler's Erewhon, E.M. Forster's "The Machine Stops"20.55: Post-WW2 developments. Conference in Paris in 1951 on "Computing machines and human thought". Norbert Wiener and cybernetics22.48: The Dartmouth Conference23.55: Perceptrons - very simple models of the human brain25.13: Perceptrons debunked by Minsky and Papert, so Symbolic AI takes over25.49: This debunking was a mistake. More data and better hardware overcomes the hurdles27.20: Two AI winters, when research funding dries up 28.07: David was taught maths at Cambridge by James Lighthill, author of the report which helped cause the first AI winter28.58: The Japanese 5th generation computing project under-delivered in the 1980s. But it prompted an AI revival, and its ambitions have been realised by more recent advances30.45: No more AI winters?Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain DeclarationFor more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind's generalist AI, Gato: A non-technical explainer, published by frances lorenz on May 16, 2022 on LessWrong. Summary DeepMind's recent paper, A Generalist Agent, catalyzed a wave of discourse regarding the speed at which current artificial intelligence systems are improving and the risks posed by these increasingly advanced systems. We aim to make Gato accessible to non-technical folks by: (i) providing a non-technical summary, and (ii) discussing the relevant implications related to existential risk and AI policy. Introduction DeepMind has just introduced its new agent, Gato: the most general machine learning (ML) model to date. If you're familiar with arguments for the potential risks posed by advanced AI systems, you'll know the term general carries strong implications. Today's ML systems are advancing quickly; however, even the best systems we see are narrow in the tasks they can accomplish. For example, DALL-E impressively generates images that rival human creativity; however, it doesn't do anything else. Similarly, large language models like GPT-3 perform well on certain text-based tasks, like sentence completion, but poorly on others, such as arithmetic (Figure 1). If future AI systems are to exhibit human-like intelligence, they'll need to use various skills and information to complete diverse tasks across different contexts. In other words, they'll need to exhibit general intelligence in the same way humans do—a type of system broadly referred to as artificial general intelligence (AGI). While AGI systems could lead to hugely positive innovations, they also have the potential to surpass human intelligence and become “superintelligent”. If a superintelligent system were unaligned, it could be difficult or even impossible to control for and predict its behavior, leaving humans vulnerable. Figure 1: An attempt to teach GPT-3 addition. The letter ‘Q' denotes human input while ‘A' denotes GPT-3's response (from Peter Wildeford's tweet) So what exactly has DeepMind created? Gato is a single neural network capable of performing hundreds of distinct tasks. According to DeepMind, it can, “play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens.” It's not currently analogous to human-like intelligence; however, it does exhibit general capabilities. In the rest of this post, we'll provide a non-technical summary of DeepMind's paper and explore: (i) what this means for potential future existential risks posed by advanced AI and (ii) some relevant AI policy considerations. A Summary of Gato How was Gato built? The technique used to train Gato is slightly different from other famous AI agents. For example, AlphaGo, the AI system that defeated world champion Go player Lee Sedol in 2016, was trained largely using a sophisticated form of trial and error called reinforcement learning (RL). While the initial training process involved some demonstrations from expert Go players, the next iteration named AlphaGo Zero removed these entirely, mastering games solely by playing itself. By contrast, Gato was trained to imitate examples of “good” behavior in 604 distinct tasks. These tasks include: Simulated control tasks, where Gato has to control a virtual body in a simulated environment. 
Vision and language tasks, like labeling images with corresponding text captions. Robotics, specifically the common RL task of stacking blocks. Examples of good behavior were collected in a few different ways. For simulated control and robotics, examples were collected from other, more specialized AI agents trained using RL. For vision and language tasks, “behavior” took the form of text and images generated by humans, largely scraped from the web. Results Control ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind's generalist AI, Gato: A non-technical explainer, published by frances lorenz on May 16, 2022 on The Effective Altruism Forum. Summary DeepMind's recent paper, A Generalist Agent, catalyzed a wave of discourse regarding the speed at which current artificial intelligence systems are improving and the risks posed by these increasingly advanced systems. We aim to make this paper accessible to non-technical folks by: (i) providing a non-technical summary, and (ii) discussing the relevant implications related to existential risk and AI policy. Introduction DeepMind has just introduced its new agent, Gato: the most general machine learning (ML) model to date. If you're familiar with arguments for the potential risks posed by advanced AI systems, you'll know the term general carries strong implications. Today's ML systems are advancing quickly; however, even the best systems we see are narrow in the tasks they can accomplish. For example, DALL-E impressively generates images that rival human creativity; however, it doesn't do anything else. Similarly, large language models like GPT-3 perform well on certain text-based tasks, like sentence completion, but poorly on others, such as arithmetic (Figure 1). If future AI systems are to exhibit human-like intelligence, they'll need to use various skills and information to complete diverse tasks across different contexts. In other words, they'll need to exhibit general intelligence in the same way humans do—a type of system broadly referred to as artificial general intelligence (AGI). While AGI systems could lead to hugely positive innovations, they also have the potential to surpass human intelligence and become “superintelligent”. If a superintelligent system were unaligned, it could be difficult or even impossible to control for and predict its behavior, leaving humans vulnerable. Figure 1: An attempt to teach GPT-3 addition. The letter ‘Q' denotes human input while ‘A' denotes GPT-3's response (from Peter Wildeford's tweet) So what exactly has DeepMind created? Gato is a single neural network capable of performing hundreds of distinct tasks. According to DeepMind, it can, “play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens.” It's not currently analogous to human-like intelligence; however, it does exhibit general capabilities. In the rest of this post, we'll provide a non-technical summary of DeepMind's paper and explore: (i) what this means for potential future existential risks posed by advanced AI and (ii) some relevant AI policy considerations. A Summary of Gato How was Gato built? The technique used to train Gato is slightly different from other famous AI agents. For example, AlphaGo, the AI system that defeated world champion Go player Lee Sedol in 2016, was trained largely using a sophisticated form of trial and error called reinforcement learning (RL). While the initial training process involved some demonstrations from expert Go players, the next iteration named AlphaGo Zero removed these entirely, mastering games solely by playing itself. By contrast, Gato was trained to imitate examples of “good” behavior in 604 distinct tasks. These tasks include: Simulated control tasks, where Gato has to control a virtual body in a simulated environment. 
Vision and language tasks, like labeling images with corresponding text captions. Robotics, specifically the common RL task of stacking blocks. Examples of good behavior were collected in a few different ways. For simulated control and robotics, examples were collected from other, more specialized AI agents trained using RL. For vision and language tasks, “behavior” took the form of text and images generated by humans, largely scraped from ...
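To make the contrast between reinforcement learning and Gato's imitation approach concrete, here is a minimal behavioural-cloning loop: a network is trained, purely supervised, to reproduce the actions in a demonstration dataset, with no reward signal at all. The dataset, the "expert" rule, and the network are toy stand-ins invented for the illustration, not DeepMind's setup.

```python
# Behavioural cloning in miniature: imitate "good" actions from demonstrations.
import torch
import torch.nn as nn

# Pretend expert demonstrations: observations and the action the expert took
obs = torch.randn(1024, 4)                      # 4-dimensional toy observations
expert_actions = (obs.sum(dim=1) > 0).long()    # a stand-in "expert" rule with 2 actions

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    logits = policy(obs)
    loss = loss_fn(logits, expert_actions)      # supervised: match the expert, no reward
    opt.zero_grad(); loss.backward(); opt.step()

print((policy(obs).argmax(dim=1) == expert_actions).float().mean())  # imitation accuracy
```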
This week on the Evolving Leader podcast, co-hosts Jean Gomes and Scott Allender are joined by Professor Marcus du Sautoy. Marcus is Simonyi Professor for the Public Understanding of Science at the University of Oxford, Fellow of New College, Oxford, author of multiple popular science and mathematics books and he is a regular contributor on television, radio and to both The Times and The Guardian. He is also passionate about public engagement on topics that include creativity and artificial intelligence. 0.00 Introduction2.23 Where does your love of mathematics originate?6.11 What is mathematics really about for you?8.35 Can you explain what zeta functions are, and why symmetry and the function of groups is important to learn more about.12.24 What did you draw from the moment that DeepMind's AlphaGo beat Lee Sedol?16.12 What are your thoughts around the possibility that AI can be creative, so taking us down a path where consciousness may not be the thing that actually happens, but we might actually get something totally new that doesn't exist in our minds or reckoning at the moment? 18.35 How do we prevent ourselves from having something that we don't understand governing our lives? 20.44 In your book ‘What We Cannot Know', you explored if there are questions that we may never have the answer to, and therefore our living with the unknown. Could you elaborate on that idea for us? 25.52 You've written about the conflict between physics and mathematics, and also your idea that mathematics exists outside of humans so it's not a human construction and would exist without us. Could you elaborate on those two points?33.13 Tell us about your latest book ‘Thinking Better' where you search for short cuts, not just in mathematics but also other fields.36.14 A lot of people think of maths as being hard. However, you can use maths, the concepts and frameworks without being an expert mathematician. Can you bring that to life for us?43.09 Tell us about the work you've been doing to bring Douglas Hofstadter's life story to the Barbican in London. 48.28 You've said that we can't fully know something when we're stuck in a system whether consciously or unconsciously. What is the leadership lesson or opportunity that we can take from that?53.06 When was the last time you had a real ‘aha' moment, and what's the biggest challenge that you are working on at the moment? Social: Instagram @evolvingleader LinkedIn The Evolving Leader Podcast Twitter @Evolving_Leader The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.
When AlphaGo beat Lee Sedol, the world Go champion, it came up with creative new moves never seen before and even invented a whole new style of play unknown to humans. IBM's Deep Blue, the champion chess algorithm, did neither of these things. What was the difference? In this podcast, we review AlphaGo the Movie. Warning: Spoilers abound! Please go watch the movie first! This is an excellent movie. Bruce (using his admittedly thin knowledge of reinforcement learning) explains how AlphaGo works (using the materials previously discussed in our Reinforcement Learning episode) and how AlphaGo came up with a creative new approach to Go that went beyond the knowledge of the programmers. While AlphaGo definitely does not have "creativity" in the universal explainer sense of the word (it has no explanatory knowledge nor understanding), it did come up with a creative new playstyle never before seen in the history of the world, one that changed how humans play Go. Even the programmers were caught off guard by what it came up with. We talk about how AlphaGo challenges the Pseudo-Deutsch Theory of Knowledge but meshes well with Campbell's evolutionary epistemology. --- Support this podcast: https://anchor.fm/four-strands/support
George Gilder talks to Robert J. Marks about his book Gaming AI: Why AI Can’t Think but Can Transform Jobs. Show Notes 00:00:45 | Introducing George Gilder 00:03:30 | Is AI a new demotion of the human race? 00:04:59 | The AI movement 00:06:39 | DeepMind and protein folding 00:11:42 | Code-breaking in World War II 00:13:50 | Interpreting between… Source
Reinforcement Learning is a machine learning approach that acts as a 'general purpose learner' (with certain important caveats). It generated a lot of excitement with AlphaGo's stunning victory against Lee Sedol, the world Go champion. In this podcast, we go over the theory of reinforcement learning and how it works to solve any Markov Decision Problem (MDP). This episode will be particularly useful for Georgia Tech OMSCS students taking classes that deal with Reinforcement Learning (ML4T, ML, RL), as we briefly explain the mathematics of how it works and show some simple examples. This episode is best watched on the Youtube channel, though we'll release an audio version as well, because the visuals are helpful here. The audio version is abbreviated and removes the mathematical theory and proofs. --- Support this podcast: https://anchor.fm/four-strands/support
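As a companion to that walk-through, here is a minimal value-iteration sketch for a tiny, made-up MDP: three states, two actions, and repeated Bellman backups until the value function settles. The transition probabilities and rewards are arbitrary toy numbers chosen purely for illustration.

```python
# Value iteration on a toy 3-state, 2-action MDP.
import numpy as np

# P[a][s, s'] = probability of moving from state s to s' under action a
P = np.array([[[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]],   # action 0: "stay"
              [[0.1, 0.9, 0.0], [0.1, 0.0, 0.9], [0.0, 0.1, 0.9]]])  # action 1: "move"
R = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 10.0]])                  # R[s, a] = immediate reward
gamma = 0.95                                                          # discount factor

V = np.zeros(3)
for _ in range(500):                               # Bellman backups until convergence
    Q = R + gamma * np.einsum('ast,t->sa', P, V)   # Q[s,a] = R[s,a] + gamma * sum_s' P[a][s,s'] V[s']
    V = Q.max(axis=1)

print("optimal values:", V)
print("greedy policy:", Q.argmax(axis=1))          # best action in each state
```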
AI is good at winning games. But how does this (and other) accomplishments translate to applications in the real world? George Gilder and Robert J. Marks discuss artificial intelligence, games, and George Gilder’s new book Gaming AI: Why AI Can’t Think but Can Transform Jobs (which you can get for free here). Show Notes 00:35 | Introducing George Gilder 02:12… Source
As we build ever more complicated games and tasks to challenge AI, should we really be asking them to get the simple stuff right? Like reposing our selfies and helping to prescribe medicines? Science is certainly slick with new smart waterproofing fabrics and bricks that bring Mario to life. Hosts: Matt Armitage & Richard BradburyProduced: Richard Bradbury for BFM89.9Episode Sources:https://www.newscientist.com/article/2250551-quantum-version-of-the-ancient-game-of-go-could-be-ultimate-ai-test/ https://www.newscientist.com/article/2209631-ai-beats-professionals-at-six-player-texas-hold-em-poker/https://www.newscientist.com/article/2251262-an-ai-can-make-selfies-look-like-theyre-not-selfies/https://www.technologyreview.com/2020/08/05/1006003/ai-machine-learning-defer-to-human-expert/ https://www.newscientist.com/article/2250401-these-are-the-12-ways-you-can-drastically-cut-your-dementia-risk/ https://www.newscientist.com/article/2251189-in-ear-nerve-stimulating-device-helps-people-learning-a-new-language/https://www.newscientist.com/article/2251388-fabric-repels-both-oil-and-water-thanks-to-clever-silicone-coating/ EPISODE EXCERPTRichard: When the world grows dark, people look skywards, searching for a hero who will save them from the powers that threaten their destruction. No, we’re not talking about MSP’s Matt Armitage. As far as I know the only thing he’s saved is old wrapping paper. It’s time for another episode of MSP Science is Slick. Richard: Remind us why science is slick?Matt: Firstly, when the apocalypse comes and there’s nowhere you can buy wrapping paper, who are you going to turn to? The man with a warehouse full of mouse-chewed Xmas wrapping, my friend. Failed business ventures aside, Science is Slick is where we tackle the epidemic of doomscrolling and introduce some of the sci-tech stories you might have missed that will help to shape the world of tomorrow. And this week we have a bucketful of AI as well as some interesting stories about obesity, money laundering and dementia. Richard: AI is a good place to start. I’m hitting the Go button… Matt: Glad you approve. Go is where we’re starting. The ancient boardgame Go has become a bit of a battleground for AI. Much more complex than chess - the sheer complexity of the predictions it required made it very hard for even supercomputers to match the skill of the best human player. That was until 2016, when Google’s DeepMind-powered AlphaGo beat the reigning human champion Lee Sedol. Since then, Go is about as difficult for advanced AI as tic tac toe is for you and me.
On this episode: How Freddie Mercury cures Covid-19. Did the virus escape from a lab in China? The incredible story of Lee Sedol. Plus letters from listeners.
Why are consultants paid so well, and can you trust lobbyists? In this episode we talk about consultants and lobbyists: why they exist, what work they do, and what function this might serve in the economy and in politics. We also squeeze in a couple of tips: Joakim recommends a book about the internet (The Wealth of Networks) that came out in the mid-2000s and can now be read with profit in light of what we know about developments since then. Andreas recommends the documentary AlphaGo - The Movie (free on Youtube), which describes the 2016 match between DeepMind's program AlphaGo and the world-leading Go player Lee Sedol. LINKS: Manuel Castells, Communication Power; David Graeber, Bullshit Jobs: A Theory; Yochai Benkler, Wealth of Networks; AlphaGo - The Movie. See acast.com/privacy for privacy and opt-out information.
Welcome back! Our guest today is Marta Halina, a University Lecturer (Assistant Professor) in the Department of History and Philosophy of Science at the University of Cambridge. Marta’s current focus is the philosophy of artificial intelligence. We discuss what philosophers can contribute to AI. We talk about AlphaGo and its stunning defeat of one of the world’s most celebrated Go champions. We puzzle over whether artificial minds can think creatively. (We also touch briefly on a project that Marta has been involved in called the Animal AI Olympics. Consider this part of our conversation a teaser—our next ‘mini’ episode is going to take a longer look at this initiative.) Marta brings a distinctive perspective to all these issues. As you’ll hear, she’s worked on great ape minds as well as artificial minds, and she’s run scientific experiments in addition to her philosophical work. As always, thanks for listening—we hope you enjoy the conversation. A transcript of this interview can be found here. Notes and links 8:45 – The key distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI). 10:25 – AlphaGo’s victory against Go master Lee Sedol. 12:00 – More about Go. 15:57 – Lee Sedol announces his retirement. 17:00 – An article by Marta Halina and colleagues describing the Animal AI Olympics. (Stay tuned for our upcoming “mini” episode about this!) 23:05 – Demis Hassabis is the CEO and co-founder of DeepMind. You can listen to an interview with him here. 26:45 – On the idea that creative ideas are new, surprising, and valuable, see this collection of essays. 28:45 – A blogpost on DeepMind’s system AlphaFold, part of its effort to develop AIs that support scientific discovery. 30:32 – For Margaret Boden’s distinction between P-creativity and H-creativity, see this article (or this book). 35:45 - An article about Stephen Hawking’s 2016 presentation at the launch of the Leverhulme Center for the Future of Intelligence. 38:54 – A paper by Henry Shevlin and Marta Halina in which they argue that, in the context of AI, “rich psychological terms” ought to be used with care. 46:15 – The mission statement of the Society for the Philosophy of Science in Practice. Dr. Halina’s end-of-show recommendations: What is This Thing Called Science? (1976), by Alan Chalmers The Meaning of Science (2016), by Tim Lewens Kinds of Minds (2008), by Daniel Dennett The Stanford Encyclopedia of Philosophy entry on Artificial Intelligence The best ways to keep up with Dr. Halina’s research: https://www.martahalina.com/ @MartaHalina Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://www.diverseintelligencessummer.com/), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted by Kensy Cooperrider, with creative support from DISI Directors Erica Cartmill and Jacob Foster, and Associate Director Hilda Loury. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/). You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play—or wherever you like to listen to podcasts. We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, follow us on Twitter: @ManyMindsPod.
Twitter: @twpwkPatreonThis episode we talk machine learning and artificial intelligence. What are these things, what can they do today, and what will they do in the future? What is Artificial IntelligenceDefine AI vs Machine LearningWhat is a neural network?State of AI in 2020What can it do?Predictions/Categorization (advertising, Netflix, Spotify, traffic/uber volume, spam filters, fraud detection, etc)Facial recognition Autonomous DrivingChatbots / virtual assistantsLanguage translation, dictationGame playingChess (IBM Deep Blue beat Kasparov in 1997)Go (AlphaGo beat 18-time world champion Lee Sedol in 2016)What can't it do?General intelligenceAutonomous drivingUnconstrained decision makingCreativity (debatable)Actual concept understandingEx. an AI network taught to learn if a banana is ripe can tell you accurately if it is ripe or not. But ask it what color the banana is and it gets it wrong.RecommendationsLex Fridman AI podcastAlphaGo the movieKimchi Karma by Brassica and BrineStrange PlanetShameless PlugsFor coffee drinkers:Mike's coffee company: Bookcase CoffeeFor equity investors:Jeff's software: FolioFollow UsTwitter: @twpwkiTunesSpotifyStitcherGoogle PodcastsPocket CastsOvercast
Welcome to another special edition of „Mediocrity and Madness“! Usually this podcast is dedicated to the ever-widening gap between talk and reality in our big organizations, most notably in our global corporates. Well, I might have to admit that in some cases the undertone is a tiny bit angry and another bit tongue-in-cheek. The title might indicate that. Today’s episode is not like this. Well, it is, but in a different way. Upon reflection, it still addresses a mighty chasm between talk and reality, but the reason for this chasm appears more forgivable to me than those many dysfunctions we appear to have accepted against better judgement. Today’s podcast is about artificial intelligence and our struggles to put it to use in businesses. This podcast is to some measure inspired by what I learned in and around two programs of Allianz, “IT Literacy for top executives” and “AI for the business”, which I had the privilege and the pleasure to help develop and facilitate. I am tempted to begin this episode with the same claim I used in the last (German) one: With artificial intelligence it is like with teenage sex. Everybody talks about it, but nobody really knows how it works. Everybody thinks that everyone else does it. Thus, everybody claims he does it. And again, Dan Ariely gets all the credit for coining that phrase with “Big Data” instead of “artificial intelligence”, which is actually a bit related anyway. Or not. As we will see later. To begin with, the big question is: What is “artificial intelligence” after all? The straightforward way to answer that question is to first define what intelligence is in general and then apply the notion that “artificial” is just when the same is done by machines. Yet here begins the problem. There simply is no proper definition of intelligence. Some might say intelligence is what discerns man from animal, but that’s not very helpful either. Where’s the border? When I was a boy, I read that a commonplace definition was that humans use tools while animals don’t. Besides the question whether that little detail would be one that made us truly proud of our human intelligence, multiple examples of animals using tools have been found since. To make a long story short, there is no proper and general definition of intelligence. Thus, we end up with some self-referentiality: “It’s intelligent if it behaves like a human”. In a way, that’s quite a dissatisfying definition, most of all because it leaves no room for types of intelligences that behave – or “are” – significantly non-human. The “black swan” sends its regards. But we’re detouring into philosophy. Back to our problem at hand: What is artificial intelligence after all? Well, if it’s intelligent if it behaves like a human, then the logical answer to this question is: “artificial intelligence is when a computer/machine behaves like a human”. For practical purposes this is something we can work with. Yet even then another question looms: How do we evaluate whether it behaves like a human? Being used to some self-referentiality already, the answer is quite straightforward: “It behaves like a human if other humans can’t tell the difference from human behavior.” This is actually the essence of what is called the “Turing test”, devised by the famous British mathematician Alan Turing, who, next to basically inventing what we today call computer science, helped solve the Enigma encryption during World War II. 
Turing’s biography is as inspiring as it is tragic, and I wouldn’t mind if you stopped listening to this humble podcast and explored Turing in a bit more depth, for example by watching “The Imitation Game”, starring Benedict Cumberbatch. If you decide to stay with me instead of Cumberbatch, that’s where we finally are: “Artificial intelligence is when a machine/robot behaves in a way that humans can’t discern that behavior from human behavior.” As you might imagine, the respective tests have to be designed properly so that biases are avoided. And, of course, the questions or problems used to ascertain human or less-than-human behavior have to be designed carefully. These are subjects of more advanced versions of the Turing test, but in the end the ultimate condition remains the same: A machine is regarded as intelligent if it behaves like a human. (Deliberately) stupid? It has taken us some time to establish this somewhat flawed, extremely human-centric but workable definition of machine intelligence. It poses some questions and it helps answer some others. One question that is discussed around the Turing test is indeed whether would-be artificial intelligences should deliberately put a few mistakes into their behavior, despite better knowledge, just in order to appear more human. I think that question comes more from would-be philosophers than it is a serious one to consider. Yet you could argue that, taking the Turing test seriously, the occasional mistake is appropriate in order to convince a human of being a fellow human. After all, “to err is human”. Again, the question appears a bit stupid to me. Would you really argue that it is intelligent only if it occasionally errs? The other side of that coin, though, is quite relevant. In many discussions about machine intelligence, the implicit or explicit requirement appears to be: If it’s done by a machine, it needs to be 100%. I reason that’s because when dealing with computer algorithms, like calculating for example the trajectory of a moon rocket, we’re used to zero errors; given that the programming is right, that there are no strange glitches in the hardware and that the input data isn’t faulty as such. Writing that, a puzzling thought enters my mind: We trust in machine perfection and expect human imperfection. Not a good outlook in regard to human supremacy. Sorry, I’m on another detour. Time to get back to the question of intelligence. If we define intelligence as behavior indiscernible from human behavior, why then do we wonder when machine intelligence doesn’t yield 100% perfect results? Well, for the really complex problems it would actually be impossible to define what “100% perfect” even is, neither ex ante nor ex post, but let’s stick to the simpler problems for now: pattern recognition, predictive analysis, autonomous driving … . Intelligent beings make mistakes. Even those whose intelligence is focused on a specific task. Human radiologists falsely identify some spots on their pictures as positive signs of cancer whilst they overlook others that actually are malignant. So do machines trained for the same purpose. Competition I am rather sure that the kind listener’s intuitive reaction at this point is: “Who cares? – If the machine makes fewer errors than her human counterpart, let her take the lead!” And of course, this is the only logical conclusion. Yet quite often, here’s one major barrier to embracing artificial intelligence. 
Our reaction to machines threatening to become better than us but not totally perfect is poking for the outliers and inflating them until the use of machine intelligence feels somewhat disconcerting. Well, they are competitors after all, aren’t they? The radiologist case is especially illuminating. In fact, the problem is that amongst human radiologists there is a huge, huge spread in competency. Whilst a few radiologists are just brilliant at analyzing their pictures, others are comparatively poor. The gap not only results from experience or attitude; there are also significant differences from country to country, for example. Thus, even if the machine would not beat the very best of radiologists, it would be a huge step ahead, saving many, many lives, if one could just provide a better average across the board – which is what commonly available machines geared to the task do. Guess what your average radiologist thinks about that. – Ah, and never mind: if the machine is not yet better than her best human colleagues, it is but a matter of weeks or months or maybe a year or two until she is, as we will see in a minute. You still don’t believe that this impedes the adoption of artificial intelligence? – Look at this example that made it into the feuilletons not long ago. Autonomous driving. Suppose you’re sitting in a car that is driven autonomously by some kind of artificial intelligence. All of a sudden, another car – probably driven by a human intelligence – comes towards you on the rather narrow street you’re driven through. Within microseconds, your car recognizes its choices: divert to the right and kill a group of kids playing there, divert to the left and kill some adults in their sixties, one of whom it recognizes as an important advisor to an even more important politician, or keep its track and kill both the occupants of the oncoming car … and, unfortunately, you yourself. The dilemma has been stylized into a kind of fundamental question by some would-be philosophers, with the underlying notion of “if we can’t solve that dilemma rationally, we might better give up the whole idea of autonomous driving for good.” Well, I am exaggerating again, but there is some truth in that. Now, as the dilemma is inextricable as such: bye, bye autonomous driving! Of course, the real answer is all but philosophical. Actually, it doesn’t matter what choice the intelligence driving our car makes. It might actually just throw a dice in its random access memory. We have thousands of traffic victims every year anyway. Humankind has decided to live with that sad fact as the advantages of mobility outweigh these bereavements. We have invented motor liability insurance exactly for that reason. Thus, the only and very pragmatic question has to be: Do the advantages of autonomous driving outweigh some sad accidents? – And fortunately, the probability is that autonomous driving will massively reduce the number of traffic accidents, so the question is actually a very simple one to deal with. Except probably for motor insurance companies … and some would-be philosophers. Irreversible Here’s another intriguing thing with artificial intelligence: irreversibility. As soon as machine intelligence has become better than man in a specific area, the competition is won forever by the machines. Or lost for humankind. Simple: as soon as your artificial radiologist beats her human colleague, the latter will never catch up again. On the contrary. The machine will improve further, in some cases very fast. 
Man might improve a little over time, but by far not at the same speed as his silicon colleague … or competitor … or potential replacement. In some cases, the world splits into two parallel ones: the machine world and the human world. This is what happened in 1997 with the game of Chess, when Deep Blue beat the then world champion Garry Kasparov. Deep Blue wasn’t even an intelligence. It was just brute force with input from some chess-savvy programmers, but since then humans have lost the game to the machines, forever. In today’s chess tournaments not the best players on earth compete, but the best human players. They might use computers to improve their game, but none of them would stand the slightest chance against a halfway decent artificial chess intelligence … or even a brute-force algorithm. The loss of chess for humankind is a rather ancient story compared to the game of Go. Go, being multitudes more complex than chess, resisted the machines for about twenty years more. Brute force doesn’t work for Go, and thus it took until 2016 for AlphaGo, an artificial intelligence designed to play Go by Google’s DeepMind, to finally conquer that stronghold of humanity. That year, AlphaGo defeated Lee Sedol, one of the best players in the world. The following year, the program also defeated Ke Jie, the then top-ranking player in the world. Most impressive, though, is that only a few months later DeepMind published another version of its Go genius: AlphaGo Zero. Whilst AlphaGo had been trained with huge numbers of Go matches played by human players, AlphaGo Zero had to be taught only the rules of the game and developed its skills purely by playing against versions of itself. After three days of training, this version beat the predecessor that had won against Lee Sedol by 100 games to 0. And again only three months later, another version was deployed. AlphaZero learnt the games of Chess and Go and Shogi, another highly complex strategy game, in only a few hours and defeated all previous versions in a sweep. By then, man was out of the picture for what can be considered an eternity by the measures of AI development cycles. AlphaZero not only plays better Go – or Chess – than any human does, it develops totally new strategies and tactics to play the game; it plays moves never considered reasonable before by its carbon-based predecessors. It has transcended its creators in the game, and never again will humanity regain that domain. This, you see, is the nature of artificial intelligence: as soon as it has gained superiority in a certain domain, this domain is forever lost for humankind. If anything, another technology will surpass its predecessor. We and our human brains won’t. We might comfort ourselves that it’s only rather mundane tasks that we cede to machines of specialized intelligence, that it’s a long way still towards a more universal artificial intelligence and that, after all, we’re the creators of these intelligences … . But the games of Chess and Go are actually not quite so mundane, and the development is somewhat exponential. Finally, a look into ancient mythology is all but comforting. Take Greece as an example: the progenitor of gods, Uranos, was emasculated by his offspring, the Titans, and these again were defeated and punished by their offspring, the Olympians, who then ruled the world, most notably Zeus, Uranos’ grandson. Well, Greek mythology is probably not what the kind listener expects from a podcast about artificial intelligence. Hence, back to business. 
AI is not necessarily BIG Data Here’s a not so uncommon misconception: AI or advanced analytics is always Big Data or – more exactly – Big Data is a necessary prerequisite for advanced analytics. We could make use of the AlphaZero example again. There could hardly be less data necessary. Just a few rules of the game and off we go! “Wait”, some will argue, “our business problems aren’t like this. What we want is predictive analysis, and that’s Big Data for sure!”. I personally and vehemently believe this is a misconception. I actually assume it is a misconception with a purpose, but before sinking deeper into speculation, let’s look at an example, a real business problem. I have spent quite some years in the insurance business. Hence, please excuse me for using an insurance example. It is very simple. The idea is using artificial intelligence for calculating insurance premiums, specifically motor insurance third party liability (TPL). Usually, this is a mandatory insurance. The risk it covers is that you, while driving a car – or parking it – damage an object that belongs to someone else or that you injure someone else. Usually, your insurance premium should reflect the risk you want to cover. Thus, in the case of TPL the essential question from an actuary’s point of view is the following one: Is the person under inspection a good driver or a not so good one? “Good” in the insurer’s sense: less prone to cause an accident and, if so, one that usually doesn’t come with big damage. There are zillions of ways to approach that problem. The best would probably be to get an individual psychological profile of the respective person, add a decently detailed analysis of her driving patterns (where, when, …) and calculate the premium based on that analysis, maybe using some sort of artificial intelligence in order to cope with the complex set of data. The traditional way is comparatively simplistic and indirect. We use a mere handful of data, some of them related to the car, like type and registration code, some personal data, like age or homeownership, and some about driving patterns, mostly yearly mileage, and calculate a premium out of these few by some rather simple statistical analysis. If we were looking for more Big Data-ish solutions, we could consider basing our calculation on social media timelines. Young males posting photos that show them Friday and Saturday nights in distant clubs with fancy drinks in their hands should emerge with way higher premiums than their geeky contemporaries who spend their weekends in front of some computers, using their cars only to drive to the next fast food restaurant or once a week to the comic book shop. The shades in between might be subtle, and an artificial intelligence might come up with some rather delicate distinctions. And you might not even need a whole timeline. Just one picture might suffice. The shapes of our faces, our haircut, the glasses we fancy, the jewelry we wear, the way we twinkle our noses … might well be very good indicators of our driving behavior. Definitely a job for an artificial intelligence. I’m sure you can imagine other avenues. Some are truly Big Data, others are rather small in terms of data … and fancy learning machines. The point is, these very different approaches may well yield very similar results, i.e., a few data points related to your car might reveal quite as much about the question at hand as an analysis of your Instagram story. The fundamental reason is that data as such are worthless. 
Only what we extract from that data is valuable. This is the so-called DIKW hierarchy: Data, Information, Knowledge, Wisdom. The true challenge is extracting wisdom from data. And the rule is not: more data – more wisdom. On the contrary. Too much data might in fact clutter the way to wisdom. And in any case, very different data might represent the same information, knowledge or wisdom. As far as our example is concerned, I have first of all to admit that I have no analytical proof – or wisdom – about the specifics I am going to discuss, but I feel confident that the examples illustrate the point. Here we go. The type of car – put into the right correlation with a few other data – might already contain most of the knowledge you could gain from a full-blown psychological analysis or a comprehensive inspection of a person’s social media profile. Data representing a 19-year-old male, living in a certain area of town, owning a used but rather high-powered car, driving a certain mileage per year might very well contain the same information with respect to our question about “good” driving as all the pictures we find in his Facebook timeline. And the other way around. The same holds true for the information we might get out of a single static photo. Yet the Facebook timeline or the photo are welling over with information that is irrelevant for our specific problem. Or irrelevant altogether. And it is utterly difficult a) to get the necessary data in proper breadth and quality at all and b) to distill relevant information, knowledge and wisdom from this cornucopia of data. Again: more data does not necessarily mean more wisdom! It might. But one kind of data might – no: will – contain the same information as other kinds. Even the absence of data might contain information or knowledge. Assume, for instance, you have someone explicitly denying her consent to using her data for marketing purposes. That might mean she is anxious about her data privacy, which in turn might indicate that she is also concerned about other burning social and environmental issues, which then might indicate she doesn’t use her car a lot and, if so, uses it in a rather responsible way … . You get the point. Most probably that whole chain of reasoning won’t work with that single piece of data in isolation, but put into the context of other data there might actually be wisdom. Actually, looking at the whole picture, this might not even be a chain of reasoning but more a description of a certain state of things that defies decomposition into human logic. Which leads us to another issue with artificial intelligence. The unboxing problem Artificial intelligences, very much like their human contemporaries, can’t always be understood easily. That is, the logic, the chain of reasoning, the parameters that causally determine certain outcomes, decisions or predictions are in many cases less than transparent. At the same time, we humans demand from artificial intelligence what we can’t deliver for our own reasoning: this very transparency. Quite like us demanding 100% machine perfection, some control instinct of ours claims: If it’s not transparent to us (humans), it isn’t worth much. Hence, a line of research in the field of artificial intelligence has developed: “Unboxing the AI”. Except for some specific cases, though, the outlook for this discipline isn’t too bright. The reason is the very way artificial intelligence works. Made in the image of the human brain, artificial intelligences consist of so-called “neural networks”. 
A neural network is more or less a – layered – mesh of nodes. The strength of the connections between these nodes determines how the input to the network determines the output. Training the AI means varying the strengths of these connections in a way that the network finally translates the input into a desired output in a decent manner. There are different topologies for these networks, tailored to certain classes of problems, but the thing as such is rather universal. Hence AI projects can be rather simple by IT standards: define the right target function, collect proper training data, plug that data into your neural network, train it … . It takes but a couple of weeks and voila, you have an artificial intelligence that you can throw at new data for solving your problem. In short, what we call “intelligence” is the state of the strengths of all the connections in your network. The number of these connections can be huge, and the nature of the neural network is actually agnostic to the problem you want it to solve. “Unboxing” would thus mean backwardly extracting specific criteria from such a huge and agnostic network. In our radiologist case, for example, we would have to find something like “serrated fringes” or “solid core” in nothing but this set of connection strengths in our network. Have fun! Well, you might approach the problem differently by simply probing your AI in order to learn whether and how it actually reacts to serrated fringes. But that approach has its limits, too. If you don’t know what to look for, or if the results are determined not by a single criterion but by the entirety of some data, looking for specifics becomes utterly difficult. Think of AlphaZero again. It develops strategies and moves that have been unknown to man before. Can we really claim we must understand the logic behind them, neglecting the fact that Go as such has been quite resistant to straightforward tactics and logic patterns for the centuries humans have played it? The question is: why “unboxing” after all? – Have you ever asked to unbox a fellow human’s brain? OK, being able to do that for your adolescent kids’ brains would be a real blessing! But normally we don’t unbox brains. Why are we attracted by one person and not by another? Is it the colour of her eyes, her laughter lines, her voice, her choice of words …? Why do we find one person trustworthy and another one not? Is it the way she stands, her dress, her sincerity, her sense of humour? How do we solve a mathematical problem? Or a business one? When and how do the pieces fall into place? Where does the crucial idea emerge from? Even when we strive to rationalize our decision making, there always remain components we cannot properly “unbox”. If the problem at hand is complex – and thus relevant – enough. We “factor in” strategic considerations, assumptions about the future, others’ expectations … . Parts of our reasoning are shaped by our personal experiences, our individual preferences, like our risk appetite, values, aspirations, … . Unbox this! Humankind has learnt to cope with the impossibility of “unboxing” brains or lives. We probe others and, if we’re happy with the results, we start trusting. We cede responsibilities and continue probing. We cede more responsibilities … and sometimes we are surpassed by the very persons we promoted. Ah, I am entering philosophical grounds again. Apologies! To make it short: I admit, there are some cases in which you might need full transparency, complete “unboxing”. 
And if that full transparency is something you truly need but cannot get, abandon the idea of using AI for the problem you had in mind. But there are more cases in which the desire for unboxing is just another pretense for not charting new territory. If it’s intelligent, if it behaves like a human, why do we ask so much more from the machines than we would ask from man? Again, I am drifting off into questions of a dangerously fundamental nature. Let’s assume for once that we have overcome all our concerns, prejudices and excuses, and that despite all of them we have a business problem we wholeheartedly want to throw artificial intelligence at. Then comes the biggest challenge of all. The biggest challenge of all: how to operationalize it. Pretty much like in our discussion at the beginning of this post, on the face of it, it looks simple: unplug the human intelligence occupied with the work at hand and plug in the artificial one. If the project is significant – quite a few AI projects are still more in the toy category – this comes along with all the challenges we are used to in what we call change management. Automating tasks comes with adapting to new processes, jobs becoming redundant, layoffs, re-training and rallying the remaining workforce behind the new ways of working. Yet changes related to artificial intelligence might have a very different quality. They are about “intelligence” after all, aren’t they? They are not about replacing repetitive, sometimes strenuous or boring work like welding metal or consolidating accounting records; they dig to the heart of our pride. Plus, the results are by default neither perfect nor “unboxable”. That makes it very hard to actually operationalize artificial intelligence. Here’s an example. It is more than fifteen years old, taking place at a time when a terabyte was still an incredible amount of storage, when data was still supposed to be stored in warehouses rather than floating around in lakes or oceans, and when true machine learning was still a purely academic discipline. In short: the good old times. This gives us the privilege of stripping the example bare of complexity and buzz. At that time, I was, together with a few others, responsible for developing Business Intelligence solutions in the area of insurance sales. We had our dispositive data stored in the proverbial warehouse, some smart actuaries had applied multivariate statistics to that data, and hurrah, we got propensities to buy and to rescind for our customers. Even with the simple means we had back then, these propensities were quite accurate. As an ex-post analysis showed, they hit the mark at 80%, applying the relevant metrics. Cutting the ranking off at rather ambitious levels, we pushed the information to our agents: customers who, with a likelihood of more than 80%, were about to close a new contract or to cancel one … or both. The latter sounds a bit odd, but a deeper look showed that these were indeed customers who were intensely looking for new insurance without strong loyalty to any provider. If we won them, they would stay with us and their loyalty would improve; if a competitor won them, they would gradually transfer their portfolio to that competitor. You would think that would be a treasure trove for any salesforce in the world, wouldn’t you? Far from it! Most agents either ignored the information or – worse – discredited it. For the latter purpose, they used anecdotal evidence: “My mother-in-law was on the list”, they broadcast, “and she would never cancel her contract”.
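As an aside, the mechanics behind such a propensity list are simple enough to sketch. The snippet below is a minimal illustration, not the actuaries' actual model: the feature names (age, engine power, mileage, tenure), the synthetic data and the 80% cut-off are assumptions made purely for the example, and scikit-learn's logistic regression stands in for the multivariate statistics of the time.

```python
# Hedged sketch of a propensity-to-buy list. Features and data are invented;
# the real project used multivariate statistics on warehouse data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Hypothetical customer attributes: age, engine power (kW), yearly mileage
# (thousand km), years as a customer.
X = np.column_stack([
    rng.integers(18, 80, n),
    rng.normal(90, 30, n).clip(40, 250),
    rng.normal(15, 8, n).clip(1, 60),
    rng.integers(0, 30, n),
])

# Synthetic "truth": in this toy world, younger, high-mileage, short-tenure
# customers are more likely to sign a new contract.
logit = -1.0 - 0.04 * (X[:, 0] - 40) + 0.05 * (X[:, 2] - 15) - 0.05 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

propensity = model.predict_proba(X_te)[:, 1]          # propensity to buy
print("AUC:", round(roc_auc_score(y_te, propensity), 3))

# "Cutting the ranking at rather ambitious levels": only customers above the
# threshold end up on the list pushed to the agents.
call_list = np.flatnonzero(propensity > 0.8)
print(len(call_list), "customers on the call list")
```

A churn list works the same way with a different target column. The hard part, as the rest of the story shows, was never this piece of code; it was getting anyone to act on its output.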
Well, some analysis showed that the mother-in-law was on the list for a reason, but how would you fight a good story with the intricacies of multivariate statistics? Actually, the mother-in-law issue was more of a proxy for a deeper concern. Client relationships are supposed to be the core competency of any salesforce. And now there comes some algorithm or artificial intelligence that claims to understand at least a (major) part of that core competency as well as that very salesforce … . Definitely a reason to fight back, isn’t it? Besides this, agents did not use the information because they did not regard it as particularly helpful. Many of the customers on the high-propensity-to-buy list were their “good” customers anyway, those with whom they were in regular contact already. They were indeed likely to make another purchase, but agents reasoned that they would have contacted them anyway. So, don’t bother with that list. Regarding the list of customers on the verge of rescinding, the problem was a different one. Agents had only very little (monetary) incentive to prevent those customers from doing so. There was a recurring commission, but asked whether to invest valuable time into merely keeping a customer or into going for new business, most were inclined to choose the latter. I could go on endlessly with stories from that work, but I’d like to share only one more tidbit before turning to a brief review of what went wrong: what was the reaction of management higher up the food chain when all these facts trickled in? Well, they questioned the quality of the analysis and demanded that more – today we would say “bigger” – data be included in order to improve that quality, for example by buying sociodemographic data, which was the fad at that time. That might have increased the quality from 80% to 80-something percent, but remember the discussion we had around redundancy of data. The type of car you drive or the sum covered by your home insurance might say much more than sociodemographic data based on the area you live in … . Not to speak of that eternal management talk that 80% would be good enough. What went wrong? First, the purpose of the exercise wasn’t thought through well enough from the start. We more or less just chose the easiest way. Certainly, the purpose couldn’t have been to provide agents with a list of leads they already knew were their best customers. From a business perspective, the group of “second-best customers” might have been much more attractive. Approaching that group and closing new contracts there would not only have created new business but also broadened the base of loyal customers and thus paved the way for longer-term success. The price, of course, would have been that these customers would have been more difficult to win over than the “already good” ones, so agents would have needed an incentive to invest effort into this group. Admittedly, going for the second-best group would have come with more difficulties; we might, for example, have faced many more mother-in-law anecdotes. Second, there was no mechanism in place to foster the use of the information. Whether the agents worked on the leads or not didn’t matter, so why should they bother? It was even worse with the churn list. From a long-term business perspective, it makes all the sense in the world to prevent customer churn, as winning new customers is far more expensive. It also makes perfect sense to try to make your second-best customers more loyal, but from a salesman’s or saleswoman’s perspective, boiling the soup of already good customers simply makes more short-term sense.
Thus, in order to operationalize AI, target and incentive systems might need a thorough overhaul. If you are serious about it, that is. The same holds true if you wanted, for example, to establish machine-assisted sentiment analysis in your customer care center. Third, there was no good understanding of data and data analytics, neither on the side of the supposed users nor on the management side. This led to the “usual” reflexes on both sides: resistance on the one side and an overly simplified call for “better” on the other. Whatever “better” was supposed to mean. Of course, neither the example nor the conclusions are exhaustive, but I hope they help illustrate the point: more often than not, it is not the analytics part of artificial intelligence that is the tricky one. It is tricky indeed, but there are smart and experienced people around to deal with that type of tricky business. More often than not, the truly tricky part is to put AI into operation, to ask the right questions in the first place, to integrate the amazing opportunities in a consistent way into your organization, processes and systems, to manage a change that is more fundamental than simple automation, and to resist the reflex that bigger is always better! So much for today from “Mediocrity and Madness”, the podcast that usually deals with the ever-growing gap between corporate rhetoric and action. I dearly thank all the people who provided inspiration and input for these musings, especially in and around the programs I mentioned in the intro, most notably Gemma Garriga, Marcela Schrank Fialova, Christiane Konzelmann, Stephanie Schneider, Arnaud Michelet and the revered Prof. Jürgen Schmidhuber! Thank you for listening … and I hope to have you back soon!
The legendary Yogi Berra once said "The future ain't what it used to be." Considering the surprising ways advances in AI & Machine Learning are unfolding, Yogi’s quote appears strikingly accurate. For instance, nobody envisioned that one day Pavlovian techniques used to condition animal behavior might be the model for training a computer algorithm. Yet that approach powered the breakthrough victory by AI firm DeepMind's AlphaGo program in 2016 when it defeated world champion Lee Sedol in the ancient game of Go. What specific impact will these technological advances have on us as individuals, and on enterprises seeking competitive advantages in the marketplace? To explore this and other questions we speak to Karen Hao, AI Reporter for MIT Technology Review, a publication claiming to be "the oldest technology magazine in the world." Karen's data-driven reporting focuses on demystifying the recondite world of AI & Machine Learning. We tap into her extensive insights to learn why despite proliferation of "deep fakes" & AI bias she's so optimistic about the field, and what commonly overlooked aspect of AI & Machine Learning IT management should focus on before deploying it.
This is a discussion about why deep neural nets are unreasonably effective. Gianluca and Jared examine the relationships between neural architectures and the laws of physics that govern our Universe—exploring brains, human language, and linear functions. Nothing could have prepared them for the territories this episode expanded to, so strap yourself in! ---------- Shownotes: AlphaGo beating Lee Sedol at Go: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol OpenAI Five: https://openai.com/blog/openai-five/ Taylor series/expansions video from 3Blue1Brown: https://www.youtube.com/watch?v=3d6DsjIBzJ4 Physicist Max Tegmark: https://en.wikipedia.org/wiki/Max_Tegmark Tegmark's great talk on connections between physics and deep learning (which formed much of the inspiration for this conversation): https://www.youtube.com/watch?v=5MdSE-N0bxs Universal Approximation Theorem: https://en.wikipedia.org/wiki/Universal_approximation_theorem A refresher on “Map vs. Territory”: https://fs.blog/2015/11/map-and-territory/ Ada Lovelace (who worked on Babbage's Analytical Engine): https://en.wikipedia.org/wiki/Ada_Lovelace Manifolds and their topology: http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/ Binary trees: https://en.wikipedia.org/wiki/Binary_tree Markov process: http://mathworld.wolfram.com/MarkovProcess.html OpenAIs GPT-2: https://openai.com/blog/better-language-models/ Play with GPT-2 in your browser here: https://talktotransformer.com/ Lex Fridman's MIT Artificial Intelligence podcast: https://lexfridman.com/ai/ The Scientific Odyssey podcast: https://thescientificodyssey.libsyn.com/
Invented in China over 2,500 years ago, the abstract strategy game Go is thought to be the oldest board game continuously played to the present day. In March 2016, the Go world champion Lee Sedol accepted a challenge to play against a computer program called AlphaGo. In the second game of a five-game challenge series, the computer made a move no human in the game’s vast history would have considered. This move, Move 37, was not only unique and creative, it was beyond the minds of the world’s greatest Go players. In this latest episode of our Think Aloud podcast, presenter Harriet Fitch Little speaks with Southbank Centre's Performance and Dance Programmer, Rupert Thomson and actor and director Thomas Ryckewaert about their fascination with Move 37. They talk about what this moment meant for arts and society, and how ultimately it may shape our relationship with artificial intelligence. Also in this episode, we hear an interview with Patrick Tresset, an artist who has programmed robots to draw portraits for him. Working in Tresset’s own style of drawing, the robots act like artists, and he has no idea how the drawings will turn out. Move 37 by Thomas Ryckewaert comes to Southbank Centre on 14 March 2019. Buy tickets here: http://bit.ly/2GGlvD0
Garry Kasparov and Deep Blue, Ken Jennings and Watson, Lee Sedol and AlphaGo, and games and AI are all topics on today's AI Minute. Gigaom AI Minute – April 19
In this episode, Byron talks about how AlphaGo made Lee Sedol a better player. Gigaom AI Minute – March 14
In this episode, Byron talks about Lee Sedol defeating AlphaGo at Go. Gigaom AI Minute – March 13
In this episode, Byron talks about AI's creativity when it played Lee Sedol at Go. Gigaom AI Minute – March 10
In this episode, Byron talks about Lee Sedol's reaction to AlphaGo. Gigaom AI Minute – March 9
Long before Google's AlphaGo bested Lee Sedol at Go, and IBM's Deep Blue bested Garry Kasparov at chess, there was Chinook: a humble software program that set out to compete with the world's greatest checkers player. Professor Jonathan Schaeffer wrote Chinook in an attempt to use machine learning to outsmart the unbeatable checkers master, Marion Tinsley. But Schaeffer couldn't have imagined how his relationship with Tinsley would affect his program, and how the drama of their matches would change the world of artificial intelligence.
In 2016, the world champion Lee Sedol was beaten at the ancient boardgame of Go - by a machine. It was part of the AlphaGo programme, which is a series of artificially intelligent systems designed by London-based company DeepMind. AlphaGo Zero, the latest iteration of the programme, can learn to excel at the boardgame of Go without any help from humans.So what applications could AI learning independently have for our day-to-day lives? Katie Haylor spoke to computer scientist Satinder Singh from the University of Michigan, who specialises in an area within artificial intelligence called... Like this podcast? Please help us by supporting the Naked Scientists
AlphaGo the AI developed to play the ancient board game, Go, crushed 18-time world champion Lee Sedol and the reigning world number one player, Ke Jie. But now, an even more superior competitor is in town. AlphaGo Zero has beaten AlphaGo 100-0 after training for just a fraction of the time AlphaGo needed, and it didn't learn from observing humans playing against each other – unlike AlphaGo. Anthony and Jeff discuss how it did it, and what it means for the future of AI. GET BONUS EPISODES, VIDEO HANGOUTS AND MORE. VISIT: http://patreon.com/wehaveconcerns Get all your sweet We Have Concerns merch by swinging by http://wehaveconcerns.com/shop Hey! If you’re enjoying the show, please take a moment to rate/review it on whatever service you use to listen. Here’s the iTunes link: http://bit.ly/wehaveconcerns And here’s the Stitcher link: http://bit.ly/stitcherwhconcerns Or, you can send us mail! Our address: We Have Concerns c/o WORLD CRIME LEAGUE 1920 Hillhurst Ave #425 Los Angeles, CA 90027-2706 Jeff on Twitter: http://twitter.com/jeffcannata Anthony on Twitter: http://twitter.com/acarboni Today’s story: https://www.inc.com/lisa-calhoun/google-artificial-intelligence-alpha-go-zero-just-pressed-reset-on-how-we-learn.html If you’ve seen a story you think belongs on the show, send it to wehaveconcernsshow@gmail.com, post in on our Facebook Group https://www.facebook.com/groups/WeHaveConcerns/ or leave it on the subreddit:http://reddit.com/r/wehaveconcerns
Laura and Santiago do their longest follow-up segment to date, although, it must be said, Laura's microphone is a bit blown out in that part (apologies for that). After talking about a bit of everything, they get into the story of artificial intelligence and how fascinating it is. The conversation starts with Go and then moves on to the (possible) apocalyptic future of artificial intelligence. www.cosasdeinternet.fm Episode notes: Larry Sanger, the real name of one of the founders of Wikipedia. Thanks, Javi, for composing the musical parts of Cosas de Internet; some of his work can be seen on this channel. Internet recommendations this week: Andrew Huang's YouTube channel. The episode in which we talked about intellectual property and the internet. The books of "La mujer rota". The story of the YouTube video creator who was reported for infringing intellectual property. The "Everything Wrong With" video series can be seen on this channel. "What is Bitcoin and how does it work?", the Magic Markers video. Go. The promised photo of what a Go board looks like. A very good video about Go and AlphaGo. Seriously, there are more possible games on a Go board than atoms in the universe. The Simpsons scene with the crayon in Homer's brain, something his doctor didn't see for years. The press conference from the game in which Lee Sedol beats AlphaGo. In this book, they discuss the risks of artificial intelligence we should start thinking about: «Superintelligence: Paths, Dangers, Strategies». Recommendation: a list of videos about artificial intelligence.
Listen in as the IDA Podcast puts the spotlight on evolutionary algorithms and artificial neural networks. These are cognitive technologies that are absolutely central to the development of artificial intelligence today, and impossible to overlook if we want to understand the exponentially accelerating paradigm we live in. We also zoom in on the question of whether machines can be creative, and on the important role artificial intelligence plays in a world where the amount of data and the processing power just keep growing. The podcast is produced by the Danish Society of Engineers, IDA, in collaboration with Brain Gain Group. The episode is the third in a series on future technology. Featuring: Sebastian Risi, Associate Professor at ITU and co-director of the research unit Robotics, Evolution and Art Laboratory (REAL): http://bit.ly/2kTEKOC Thomas Terney, PhD in artificial intelligence, speaker and entrepreneur: http://bit.ly/2kOQlik Host and editing: Matias Seidler Producer: Tobias Ankjær Jeppesen Sound design: Alexander Clerici SHOW NOTES [00:23] IBM's Deep Blue became the first computer to beat a grandmaster, Garry Kasparov, at chess. It happened in 1997: http://bit.ly/2kIkvkB [01:32] Link to a presentation by Henry Lieberman, MIT Media Lab, explaining the difference between symbolic (classical) artificial intelligence and subsymbolic artificial intelligence: http://bit.ly/2k4lN7X [02:05] For a fresh introduction to deep learning, check out WIRED's article, ‘Why We Need To Tame Our Algorithms Like Dogs’: http://bit.ly/2kIIqQV [02:45] You can take a closer look at the research unit at ITU, ‘Robotics, Evolution and Arts Lab’, here: http://bit.ly/2ksAxAE [03:27] For an illustrative overview of the possible applications of biologically inspired algorithms, check out the tag search on Robohub.org: http://bit.ly/2knkZf1 [04:42] There are several international prizes and competitions for solving the ‘General Artificial Intelligence’ challenge. See, among others, this one worth $35M: http://bit.ly/2knrUVA [06:32] Thomas Terney talks about the strength of the connections between neurons (or units in the network), referred to as ‘weights’ in English. Here is a thread with a range of varied, in-depth answers: http://bit.ly/2k40vMx [08:51] Baidu is the Chinese equivalent of Google and in January 2017 hired some of Microsoft's best AI developers: http://bit.ly/2lrPuS1 [10:34] See NASA's whitepaper ‘Automated Antenna Design with Evolutionary Algorithms’: http://go.nasa.gov/2llsiYO [11:32] MIT Technology Review has an interesting article with proposed solutions to the problem: ‘Algorithms That Learn with Less Data Could Expand AI’s Power’: http://bit.ly/2ksG1M2 [14:23] In 2016, Google's AlphaGo made headlines all over the world when it beat Lee Sedol at ‘Go’, a game incomparably more complex than chess. And AlphaGo keeps on winning, just look here: http://bit.ly/2k4t7jK [19:57] Apple is busy applying the ‘unsupervised learning’ principle to the development of self-driving cars: http://bit.ly/2lplN3e [21:23] ‘Do you think computers have minds?’ Sebastian Risi is not the only one who thinks that is a good question. The philosophy of mind has sought serious answers to it ever since Alan Turing formulated his famous test in 1950. It has become the pivot of a philosophical tradition whose most interesting and developed answers are elegantly described in The Internet Encyclopedia of Philosophy: http://bit.ly/2k4Bqw5
2017-01-23 Special EnglishThis is Special English. I&`&m Mark Griffiths in Beijing. Here is the news.Beijing has started to install air purification systems in some of the city&`&s schools and nurseries.The city government has allocated money to help the schools cover the cost of the installation.Beijing suffered heavy air pollution this winter, and schools and other education institutions in the city were ordered to stop outdoor classes and activities.Many regions in China experienced heavy smog recently. The national observatory issued a red alert for fog and renewed an orange alert for smog in a number of northern, eastern and central regions.China has a four-tier color-coded warning system for severe weather, with red being the most serious, followed by orange, yellow and blue.This is Special English.China has called for more efforts to ensure food safety in the country, noting that there are still many problems despite an improving food safety situation.President Xi Jinping said more efforts should be made to ensure food safety for the public. During his latest instructions on China&`&s food safety work, President Xi called for the most rigorous standards and the most stringent regulations for improving food safety control.He stressed administration under the law, enhancement of work at grassroots level and the professionalism of food safety inspectors. He also demanded a comprehensive food safety system from farm to table.You&`&re listening to Special English. I&`&m Mark Griffiths in Beijing.China&`&s unmanned deep sea devices have completed deep sea tests, descending over 10,000 meters into the waters of the Pacific Ocean. Chinese scientists carried out the research at the Challenger Deep in the Mariana Trench, the deepest part of the ocean in the world.The deep sea diving involved a research vessel, a deep sea landing support ship, as well as manned and unmanned submarines capable of diving 10,000 meters underwater.In the experiment, the submarines reached the ocean floor, took pictures and collected sediment and biological samples. The experiment is called the Rainbow Fish project and is funded by the state and private capital.The scientists involved in the tests said the success marks another step forward in China&`&s deep-sea research.Globally, there are 26 trenches that are 6,500 meters or deeper. These trenches are home to a number of newly discovered fauna species, and with abundant energy and mineral resources.In August last year, an unmanned submarine dived to a depth of 10,000 meters at the Mariana Trench, setting a new record in China. China became the third country after the United States and Japan to build submarines capable of reaching depths of more than 10,000 meters.This is Special English.Bar-code technology widely used in supermarkets and industry is to be introduced into Britain&`&s National Health Service. Scanning will be used for the first time on breast implants and replacement hips and other surgical tools used during surgical procedures.The barcodes will also be used to trace patients and their treatments, manage medical supplies and monitor the effectiveness of equipment.The scanning project, at a cost of 12 million pounds, roughly 15 million U.S. dollars, will help medical staff to quickly and easily track each patient through their hospital journey.According to a spokesman for the Department for Health, by using barcodes, anything that might develop a fault years later, for example a screw used in a knee operation or breast implant, can be traced. 
The details, such as when it was used and the surgeon who carried out the procedure, can also be found quickly and easily.The technology will also help to eliminate avoidable harm in hospitals, including errors such as patients being administered the wrong drugs and surgery being performed on the wrong part of the body.Early results from 6 pilot "Scan4Safety" projects show that scanning has the potential to save lives and save more than 1 billion U.S. dollars for the National Health Service over 7 years.Secretary of State for Health in Britain Jeremy Hunt said "Scan4Safety" is a world first in health care. You&`&re listening to Special English. I&`&m Mark Griffiths in Beijing.A new cargo train from Tibet has reached Zhejiang Province in eastern China, after traveling 4,500 kilometers over five days. The train started from Lhasa, the capital city of Tibet, and arrived in Ningbo, passing through several other provinces including Qinghai, Gansu, Shaanxi, Henan, and Anhui. This is the first cargo train between the two cities. It carried 2,000 tonnes of bottled mineral water which will be distributed to dealers in Zhejiang and Shanghai. Tibet is rich in water resources and is often called Asia&`&s water tank. Tibet produced 400,000 tonnes of natural drinking water in 2015, but high transportation costs made it difficult to reach other parts of China. The new rail route is designed to facilitate cargo transport from Tibet to central and eastern China. More such trains have been planned between Tibet and several other cities including Beijing.This is Special English.China&`&s lawmakers have adopted new legislation to improve the country&`&s cultural services.The law will go into effect on March 1. It aims to carry forward the traditions of Chinese culture and cultural confidence.According to the law, public cultural services must be people-orientated and "guided by socialist core values".County-level governments and above must improve community cultural service centers, build more of them and offer more products online.Private funds will be invited to finance public cultural facilities.Authorities in rural areas must provide more books, films, online information, as well as festivals and sports events to ensure equal service in urban and rural areas.Public services should serve special groups including minors, the elderly and the disabled. They must also ensure quality services for ethnic minorities and poorer areas.International cooperation and exchanges should be expanded.You&`&re listening to Special English. I&`&m Mark Griffiths in Beijing. You can access the program by logging on to newsplusradio.cn. You can also find us on our Apple Podcast. If you have any comments or suggestions, please let us know by e-mailing us at mansuyingyu@cri.com.cn. That&`&s mansuyingyu@cri.com.cn. Now the news continues.The growth of China&`&s film market appears to have been slowed in 2016, signaling more rational and sustainable development.Box office revenues for 2016 totaled almost 44 billion yuan, roughly 6 billion U.S. Dollars. China&`&s film industry professionals say the figure means a modest increase over the total in 2015.It took China eight years to increase box office revenues from less than 1 billion yuan in 2002 to 10 billion yuan in 2010. The continuously rising annual box office revenues reached 44 billion yuan in 2015, an increase of almost 50 percent from 2014.Though ticket sales show signs of slowing, the market itself has been expanding. 
Latest figures show the number of cinema screens in China reached almost 41,000 by the end of last year, surpassing the United States to become first in the world. It took China around a year to increase its screens from 30,000 to 40,000. Experts say the number of screens grew by an impressive 26 per day last year. China became the world's second-largest film market in 2012. Earlier foreign assessments predicted that China will surpass the United States as the world's largest film market this year. This is Special English. The mysterious "Master" that has scored 60 straight victories against elite Go players online is the latest version of the computer program AlphaGo. AlphaGo's development team has confirmed that "Master" is AlphaGo, playing through an account operated by team member Aja Huang. "Master" revealed its real identity before the game with China's elite Go player Gu Li, and the artificial intelligence program beat Gu to gain its 60th crown. AlphaGo is a computer program developed by Google DeepMind in London to play the board game Go. It became well known after its victory over South Korea's top Go player Lee Sedol in March last year. During the games against Lee, DeepMind's lead programmer Aja Huang put the stones on the board on AlphaGo's behalf. DeepMind said the team has been hard at work improving AlphaGo. It has played some unofficial games online at fast time controls with its new prototype version to check whether it is working as well as they hoped. DeepMind said they are excited by the results and also by what they and the Go community can learn from some of the innovative and successful moves played by the new version of AlphaGo. The father of AlphaGo, Demis Hassabis, said that after the unofficial faceoffs, the team will arrange some official matches this year. You're listening to Special English. I'm Mark Griffiths in Beijing. China's first geological park dedicated to plant fossils is set to open in Henan province in central China. The park covers an area of 30 square kilometers, and it took builders more than three years to complete. Visitors to the park will be shown how plant fossils are formed and discovered, as well as what planet Earth used to look like more than 250 million years ago. The park has rich deposits of plant fossils, with more than 300 different species. Experts say that plant fossils in other parts of China are buried deep underground, but the fossils in this park are almost exposed on the surface and are much easier for people to look at. The park will also feature exhibits of Junci porcelain, an important type of Chinese pottery known for its complex blue glaze. The porcelain was developed locally around 1,000 years ago and owes much to the unique local clay. This is Special English. A recent study says there is no proof that sugar-free soft drinks can help weight loss, and artificially-sweetened beverages, or ASBs, may trigger chronic diseases. The study has been done by a group of international university professors.
It says the absence of consistent evidence to support the role of ASBs in preventing weight gain and the lack of studies on other long-term effects on health strengthen the position that ASBs should not be promoted as part of a healthy diet.The study added that taking account of ASB composition, consumption patterns and environmental impact, they are "a potential risk factor for highly prevalent chronic diseases".The study questioned industry-sponsored research on ASB effects on weight control because they were likely to report favorable results.The study also pointed out that previous tests on ASB influence on weight were inconclusive because they were conducted in some randomized controlled trials and led to "mixed findings, with some indicating a null effect, while others have found modest reductions in weight".However, the study also aroused controversy. Gavin Partington, head of the British Soft Drinks Association, told The British Guardian newspaper that research showed that low-calorie sweeteners in diet drinks helped consumers manage their weight as part of a calorie-controlled diet.Alison Tedstone, chief nutritionist at Public Health England also told The Guardian that "maintaining a healthy weight takes more than just swapping one product for another. Calories consumed should match calories used, so looking at the whole diet is very important".That is the end of this edition of Special English. To freshen up your memory, I&`&m going to read one of the news items again at normal speed. Please listen carefully.That is the end of today&`&s program. I&`&m Mark Griffiths in Beijing, and I hope you will join us every day, to learn English and learn about the world.
Nick, Peter and Fraser discuss what AlphaGo's triumph over Lee Sedol might mean for analysis and decision making.
In early March, Go, a board game that originated in China more than 2,500 years ago, gained unexpected newsworthiness. The reason was the five-game series between AlphaGo, a program designed by the company Google DeepMind, and the Korean Lee Sedol, world champion of the game. To understand what this match tells us about the state of artificial intelligence development, we spoke with Gerardo Horvilleur, a software developer specializing in video games.
Jimmy and Jason get together and discuss the biggest news stories of the last week. This episode’s topics: Easter, Microsoft’s AI Chatbot Tay, AlphaGo beating Lee Sedol in Go, Domino’s new autonomous delivery robot, our idea of a robot-filled future, and lastly we discuss the initial mixed reviews of Batman v Superman: Dawn of Justice.
In this episode we discuss the end of humanity, as the AlphaGo computer program defeated human champion Lee Sedol in a game of Baduk (Go). Pleased to have ranked chess player Kevin Em join us for this important discussion.
News of the Weird:
- Kim Jong-un would look totally hot if he just lost 20 kg.
- Ramen noodles can make kids gay, according to an Indonesian politician.
- Chloe Moretz is "First American" on Korean SNL.
Ask Rob & Eugene:
- Is Gostop (고스톱) Japanese?
On the Pulse:
- Discussing the impact of a computer besting a human in a board game. Are the robots going to kill us all with a plan similar to the plot of the Bond film Moonraker?
This is NEWS Plus Special English. I'm Liu Yan in Beijing. Here is the news. China will provide an emergency water supply to countries along the Mekong River to help deal with drought. China's Foreign Ministry says a hydropower station in South China's Yunnan province will make the emergency supply available to the lower reaches of the river through April 10. Countries along the river on the Indochinese Peninsula have faced drought since the end of last year. Vietnam, which is in the lower reaches of the Mekong River, has requested that China increase water discharges to help ease the drought. The Mekong River, whose upper part is known in China as the Lancang River, is an important water source for the five countries on the Indochinese Peninsula, namely Laos, Myanmar, Thailand, Cambodia and Vietnam. China has decided to provide the emergency water supply to benefit the five countries. The Foreign Ministry says China is willing to strengthen communication and practical cooperation with its neighbors on the management of water resources and disaster response under the Lancang-Mekong River Cooperation Mechanism. China and the five countries set up the cooperation mechanism when their foreign ministers met in Yunnan in November. In a joint statement, all the foreign ministers promised to promote cooperation on water resources. This is NEWS Plus Special English. A senior official says China should speed up the standardization of its high-speed railway technology and take the lead in setting international standards. An executive of China Tiesiju Civil Engineering Group in Anhui province Xu Baocheng says standards are crucial to facilitating Chinese railway enterprises' overseas expansion. Xu says seizing the international railway market is of strategic and economic importance to China; and it will help resolve the overcapacity in industrial manufacturing, engineering and construction industries. In addition, the overseas railway market is an ideal field to invest in and thus to increase the country's foreign exchange reserves. The senior engineer adds that the government will strive to have the international railway community recognize and accept Chinese high-speed railway standards and take the lead in setting universal standards. He says currently, technological standards are dominated by several industry giants from Western nations, while China has yet to even make an English-language version of high-speed railway standards. China released its first standards for high-speed railways in 2014, governing almost 20 aspects of design and construction of high-speed lines operating at speeds from 250 kilometers per hour to 350 kilometers per hour. By the end of last year, China had built a high-speed rail network of more than 19,000 kilometers, accounting for more than 60 percent of the world's entire high-speed lines. You're listening to NEWS Plus Special English. I'm Liu Yan in Beijing. The minister of transport has criticized car-hailing app operators for the subsidies they offer to users, describing the practice as unfair competition. Transport Minister Yang Chuantang has also pledged to better regulate paid rides offered by private drivers. Speaking at a news conference in Beijing, Yang says the subsidies are a short-term tactic to gain a bigger market share; and the apps are profit-driven and the subsidies will not be handed out forever. Taxi-hailing app Didi Kuaidi is in fierce competition with Uber Technologies in China. 
Didi Kuaidi is backed by Internet giants including Alibaba and Tencent. The cash-rich companies are heavily subsidizing passengers and drivers to gain a bigger market presence. Uber said earlier this year that the company lost more than one billion US dollars in China last year from subsidizing users. Didi Kuaidi did not disclose the amount given in subsidies. Before a high-profile merger of two local apps last year, the Chinese company had pledged to subsidize projects worth more than 2 billion yuan, roughly 310 million dollars. Didi Kuaidi and Uber face a regulatory dilemma in China. Regulations do not allow private cars to be used for paid journeys, but tens of thousands of such vehicles carry paying passengers in around 200 cities every day. This is NEWS Plus Special English. Roman Catholics in China face a severe shortage of clergy as the number of followers continues to rise. According to church leaders, the 6 million-plus Catholics in China are served by more than 3,000 priests and 6,000 nuns from 106 parishes. At a Bishops Conference of the Catholic Church in China, vice-president of the Chinese Patriotic Catholic Association Liu Yuanlong said the number of recruits to the priesthood in the Catholic Church in China has dropped sharply in recent years. Fewer than 800 trainee priests are receiving training at the nation's 10 major seminaries. Liu says the shortage of new recruits is a major problem for the Catholic Church in China; and some seminaries are smaller than a rural middle school and have just one or two newly recruited trainee priests each year. Liu says the talent shortage is caused by a variety of factors, including underground churches; and the lack of attention paid to church recruits by bishops has also made the problem more serious. He adds that a rise in living standards has also resulted in fewer people who are willing to devote themselves to church service. This is NEWS Plus Special English. Researchers from China and the United States have developed a new cataract treatment with stem cells that has restored vision in infants in a trial. It may eventually be used in adults. The new procedure was developed by doctors and staff members at the University of California, San Diego School of Medicine, as well as Sichuan and Sun Yat-sen universities in China. It was published in the March 9 edition of the scientific journal Nature. A cataract is a clouding of the normally clear lens of an eye. Typical cataract surgery involves the removal of the cloudy lens and the insertion of an artificial one. The new surgery has been tested in animals and during a small, human clinical trial. It resulted in fewer surgical complications than the current invasive surgery. It showed superior visual function in all 12 of the pediatric cataract patients who underwent the procedure. Congenital cataract, lens clouding that occurs at birth or shortly after, is a significant cause of blindness in children. The human trial involved 12 infants under the age of 2 who were treated with the new method, while another 25 infants received the standard surgical care. The scientists reported fewer complications and faster healing among the 12 infants who underwent the new procedure. You're listening to NEWS Plus Special English. I'm Liu Yan in Beijing. You can access the program by logging onto NEWSPlusRadio.cn. You can also find us on our Apple Podcast. If you have any comments or suggestions, please let us know by e-mailing us at mansuyingyu@cri.com.cn. That's mansuyingyu@cri.com.cn. 
Now the news continues. Glib Chinese Internet users have raised a challenge to the artificial intelligence program that recently dethroned one of the top human Go players, demanding tongue-in-cheek that the AlphaGo program learn the nation's real pastime, mahjong. China's reaction towards the historic duel between human and artificial intelligence has been mixed. At first, there were questions about why the human player did not come from China, where the game was invented more than 2,500 years ago. Then, as AlphaGo marked three victories in the five-game match, defiant web users began calling for a challenge in an arena average Chinese are more comfortable in. Can artificial intelligence beat mahjong masters? This question was posted several times in the comment sections of stories about the AlphaGo versus Lee Sedol match on the Chinese microblog Sina Weibo. The match attracted a deluge of comments on defending humanity's glory in mahjong. Mahjong is China's answer to poker, and is played by four people. Each turn, players draw tiles from a 144-tile pool, and discard or intercept others' tiles to form sets of tiles that can win. Scientists say that compared with Go, mahjong has far fewer permutations for artificial intelligence to calculate, but it involves a degree of chance and other factors in favor of humans. One blogger commented that unlike Go, mahjong is not a quiet game that focuses only on calculation. It involves a lot of interaction and teamwork between players. Some people described mahjong as a competition in both IQ and EQ. They say computers can undoubtedly blow humans out of the water in math, but how about their ability to communicate and interpret emotions? You're listening to NEWS Plus Special English. I'm Liu Yan in Beijing. Internet giant Baidu claims the company has made great strides in artificial intelligence and is applying it to the company's online-to-offline food delivery service. Sina.com reported that it was a response to a joke among Internet users that Google's technology can beat a champion at a board game while Baidu's technology just concerns food delivery, making fun of its online-to-offline app. But it may be a little quick to say Baidu's food app is low-tech. Three elements affect the time it takes for food ordered from restaurant kitchens to reach the client's door: how long it takes couriers to arrive at the restaurant, how long they take from the restaurant to the client's door, and how long it takes for kitchens to prepare the food. According to Baidu, it has been impossible to calculate how long kitchens will take to get food ready due to different cooks and volumes of clients in restaurants. If cooks get food ready too fast, the food may go off by the time it's delivered; if couriers arrive at the diner too early, they have to wait and lose other orders. That is where artificial intelligence steps in. It is used in the automatic system to precisely calculate every step. The system tells couriers how long it will take for the food to be ready and offers a route plan for them. Baidu says all data have been collected, including the time cost of every order and every meal from every restaurant. Artificial intelligence can estimate the time a meal will be ready based on the data. Baidu says the model has been used in their food delivery logistics system so that people have a better experience when ordering food. This is NEWS Plus Special English. (Full text available in Sunday's WeChat post.)
Many thanks to our devoted listener "沉默基因" ("Silent Gene") for contributing to this transcript!
H: AlphaGo's win over one of the world's best Go players has got us wondering about which direction artificial intelligence is heading. Will robots beat humans in areas beyond playing an ancient board game, or have they already done so? IT industry heavyweights Jack Ma, Lei Jun and Mark Zuckerberg offered their views on the prospects of artificial intelligence at the China Development Forum in Beijing just a couple of days ago. So guys, where are we at right now in terms of artificial intelligence development?
N: Well, in terms of the story you just mentioned, we're talking about the computer program AlphaGo, which plays the game Go, and it played a best-of-five series against the grandmaster Lee Sedol from South Korea, who is one of the greatest, or the best, I think, player of the game in the world. And the machine won the match 4-1 (HY: So it was just the one victory for the human side, wasn't it?). So the AlphaGo program has the capacity to learn for itself and learn new strategies as the game goes on, so even if you win a game by beating it, it can learn how you play, learn, you know, your moves, the way you think as a player (HY: oh, Nik!). I'm sorry, I'm sorry, I'm just repeating the facts.
H: Your very words have sent, you know, chills up my spine. Liu Yan, how do you feel about the situation?
L: OK, this is really quite scary, I have to say, because as Nik mentioned, the victory was actually 4-1, but because it was best of five, as long as you win three, you have already nailed the final victory. And as it turned out, AlphaGo won three in a row, so some people were speculating that, you know, AlphaGo actually threw one game in there just to make humans look better, and because it happened to lose the fourth one, you know, the one after three in a row, I do also believe that it was possible he kind of just threw that one away. And now that you already have the win on the scoreboard, in the final one you can play according to your real ability again.
H: You've scared me a second time, with the way you referred to Mr. robot AlphaGo: you used "he", as if it's a real person!
L: I know, but you know, some things you just can't deny as far as AI is concerned. AI defeated humans a long time ago when it came to chess. And apparently, a lot of people were saying Go was likely the last safe place, because Go is so much more complicated and needs much more human factors, human thinking, but now the last place is also… (HY: The last straw of the world of chess play.) It is also gone, so it should be very scary. We should be worried.
H: Yes, and I think, Nik, you make a really good point, that the robot can learn, it can learn! Oh my gosh! So can we say that we don't necessarily know where the robots are going to arrive, since it's like a person that can learn things, and there is a sort of cognitive activity going on, is it fair to say it this way?
N: I think not quite. I think (HY: good, good) it can learn, but you know, it develops strategies when it does something wrong, and it can learn how not to do that thing wrong again the next time, but it doesn't mean it can apply that knowledge to anything outside the realm of the game necessarily. (HY: Thank you, Nik, thank you very much.)
N: I also think, although Go is a big advancement from chess, and it's a much more complicated game for humans as well as machines to learn how to play, other games where there is a kind of hidden aspect, like poker for example, are going to be another kind of milestone, if artificial intelligence learns how to do that. Because in Go, although it is very complicated, all the pieces are out on the table, both sides can see them, so it can learn those kinds of strategies; where there is more of a hidden element, where you have to read another person, the opposing player, it maybe isn't quite there yet.
L: That's really a good point, because if you don't see everything on the table, then you have to rely on watching someone's facial expressions, for example, and trying to figure out what he or she is exactly thinking, and since this was not involved at this stage, it's relatively easy for the machine to beat people, because the machine doesn't have to do all that. And also, I know we have been painting a very scary picture so far, but if you don't want to be scared, you can just think of it this way: at the end of the day, AlphaGo is still just a computer program, and who designed that program? Humans. So computer programs cannot do something that humans… if you don't put it in the program, then they certainly cannot do that. So as long as you don't put it in the program for them to be so awesome, so intelligent, the program won't be so awesome, so intelligent.
H: You make a really good point, and also, if we had a direct, firm answer that we can control the robots in the way they so-called think and develop, that they're always in the control of the hands of humans, then I wouldn't be so scared. But for people like myself who don't know much about technology, who are very happy to sleep in my cave without any advanced technology at hand, I'm happy to live like that, alright, we're scared because we don't know where this is going. Guys, what did the tycoons say? And where is this going?
N: Mark Zuckerberg from Facebook predicted there will be more big advances for artificial intelligence in just the next decade. So we're not quite finished yet in terms of the development. He said artificial intelligence will understand senses, such as vision and hearing, and grasp language better than human beings over the next 5 to 10 years. And he highlighted the company Oculus VR, saying that it's going to start producing its virtual reality products and that it could generate $110 billion by 2025, so it is a big business as well as artificial intelligence.
L: And of course Lei Jun, who is the founder of Xiaomi, also seems very optimistic. He said he hadn't expected AI to be able to beat a human champion at the current stage of development, since it's a pretty complicated game, and he also said that now that we've already seen AlphaGo beat human beings, you can expect that it will attract more capital and talent to the AI sector, so obviously it will be even more developed in the future.
N: One for you, Heyang. Jack Ma says there is no need for humans to fear the machines. He says the machines will be stronger and smarter than human beings, but they will never be wiser, because wisdom, soul and heart are things that only human beings possess. Isn't that beautiful?
H: That is beautiful, but you know, the cynic in me says, 'Mr.
Jack Ma doesn't sound like, you know, he knows that much about AI.' Sorry to say this, but he is saying, OK, wisdom, fine, soul, fine, heart, fine, these are great concepts, but if the robot is so clever and you can't control it – this is all presumption, alright – then all of that means nothing, if it can surpass you in other ways and can terminate you. That is not going to happen, everybody, I'm not going to send scares or spread that.
L: But I do think your worry is kind of valid. I don't know if you guys have seen the film "Her" from a couple of years ago, with Scarlett Johansson (HY: oh, she is so hot. Oh, excuse me, I'm going in the wrong direction.) as a computer program, and a guy, played by Joaquin Phoenix, turns out to be totally able to be attracted to just a computer program, because she possesses all those qualities: she has heart, she has soul, and she has intelligence, wisdom, so it is possible. And also last year there was a film called "Ex Machina", I don't know if you have seen that, that one is even scarier, because in that one the robots were actually so clever and so manipulative that they eventually killed humans.
H: And like so many other Hollywood films, they play with insecurity and anxiety in this area, because we don't know what's going to happen in the future, whether we are going to be colonized by robots, that kind of thing. Here on WeChat, our listener Lin Yingdong says, I want a robot companion. I mean, girl, why? I'd be scared, and like, Japanese people are known for this, they've got a whole bunch of… like robot pet dogs, robot companions, and how could human emotion be placed in that regard? I mean, I don't know what is going on here.
N: Now, it is definitely an area where you can easily see why people are scared or hateful. I think the more we have this conversation, the more I'm falling into that camp as well. It is interesting that someone said they would like a robot companion. I mean, what kind of companionship, someone sitting in your house that you like to talk to, can it talk back?
L: This might be off topic, but you know what this reminds me of? It totally reminds me of the "大白" character from Big Hero 6 (HY: the Baymax), yeah, the Baymax from Big Hero 6, maybe she was thinking about a robot companion like that. That would be adorable, I would want one like that as well.
H: Alright, I'm happy to be your friend, Liu Yan (Nik: a human being), yeah, your flesh and blood, a real person.
L: That would be the best; however, a robot could be very cuddly, which human beings, you know, cannot necessarily be.
H: What? Cuddlier than human beings? LY, what's going on?
L: I'm thinking about Baymax in particular.
H: Baymax is cute. But yes, I think the ultimate worry for people is whether this technology can be controlled by people, and I mean, scientists out there, that must be the bottom line that I think is really needed here. And just quickly before we go on to the next topic, guys, a lot of people are saying that so many jobs are just going to be taken over by robots, and I think a couple of months ago on this show we talked about how even some news stories – just news stories, alright, only reporting news without much of a human touch – those kinds of stories have been written by robot journalists. (Nik: we're still safe for now.) Are we? Are we?
So what are the jobs that you think are going to disappear from the job market for humans, and how can we deal with the situation?
L: I think if a job requires a lot of data to be processed or a great deal of routine to be repeated, then this job will probably be taken over by robots, because it's relatively easy. However, if it is a job that requires a lot of creative input, for example if you are an artist, I don't think that would be easily replaceable by any robots any time soon. So it really depends on the quality or the specific requirements of the job.
N: Anything, any kind of job where you perform a repetitive, continuous kind of action throughout the day that doesn't change that much, I think is something robots probably do better in a lot of cases, because they're so quick and efficient and also you don't have to pay them.
H: So yeah, you don't need to pay them, but you probably need to charge the battery, keep them connected to the electrical sockets and those kinds of things. And yeah, our WeChat listener Jessy has something pretty smart to say regarding this topic. She says the list of jobs that will easily be replaced by robots in the future has gone viral on WeChat right now, with accountants, security guards, cleaning jobs and delivery jobs, these things might have no place for humans in the future, but what distinguishes us from robots is that we, as human beings, are able to feel things, we are not emotionless. So she holds this slightly positive view that, you know, there is still a place for humans as long as humans are in the controlling position, but it's going to be hard to defend that position if the technology is that advanced, and I think this is one of those high grounds we have to guard as humans...
In this episode we talk about Go, a traditional East Asian board game long considered one of the hardest for computers, and about the historic match between Lee Sedol, the best player in the world in the 2000s, and AlphaGo, an artificial intelligence developed by Google. AlphaGo won the encounter 4-1, the first time a computer has beaten one of the best human players. We explain what Go is, why it is so hard for a computer to master, and how AlphaGo managed to become so good at the game. This program originally aired on 17 March 2016. You can listen to the rest of La Brújula's audio on its iVoox channel and on the Onda Cero website, ondacero.es
In this week's UK Tech Weekly Podcast host Matt Egan is joined by first-time podder Tamlin Magee (1:50), online editor at ComputerworldUK.com, to discuss the UK tech implications of this year's Budget, including rural broadband and driverless cars. Then Christina Mercer, assistant online editor at Techworld.com, chats AlphaGo (10:00) and board games following the AI's historic win over world Go champion Lee Sedol. Later, resident Virtual Reality (VR) enthusiast and PCAdvisor.co.uk staff writer Lewis Painter discusses "the big three" VR headset release dates, pricing and features from HTC, Sony PlayStation and Oculus Rift (19:00). Finally, UKTW Podcast regular David Price, acting editor at Macworld.co.uk, chats about Apple's big upcoming event (28:45). See acast.com/privacy for privacy and opt-out information.
In this first episode of the PartTimePoker Podcast, PTP lead writer Alex Weldon and WSOP bracelet winner Andrew Barber talk about their own backgrounds, the defeat of world Go champion Lee Sedol by Google’s AlphaGo, and the controversy facing Ivan Luca after he found himself heads-up against his girlfriend Maria Lampropoulos at the end of a Eureka Poker Tour main event. These topics lead to discussion of the ethics of markup, AI bots and the future of online poker, and the issue of soft play in tournaments. Finally, in the strategy segment, Andrew critiques a hand in which Alex finds himself debating whether to make a thin river value bet in a low buy-in Mix-Max tournament.
Public attention in recent days has been fixed on the battle of wits between the intelligent computer AlphaGo and South Korea's world Go champion Lee Sedol. The event shows that research in artificial intelligence has made major breakthroughs, raising hopes of new applications serving human interests. Alongside that, however, there are also fears that one day computers will surpass us and take control of human life. The computer beat the Korean Go master Lee Se-dol by a final score of 4-1 in a match of five games, as scheduled. The contest was followed closely by the public and above all by specialists, because it had taken 19 years since world chess champion Garry Kasparov was defeated by IBM's DeepBlue in a six-game match for the world to witness another leap forward in artificial intelligence. Moreover, public interest in this match was not mere curiosity: beforehand, almost everyone believed computers were still far from winning at Go, an intellectual pastime of Chinese origin. Dating back more than 3,000 years and played widely in China, Japan and Korea, Go has simple rules but tens of thousands of possible moves and endless variations, which is why the game was thought to be so hard to program. Hence, right after the third game ended in AlphaGo's favor, Demis Hassabis exclaimed: "To be honest, we are stunned. I want to repeat that our goal, broadly speaking, was this: we came to face Lee Sedol in order to learn from him and to find out what our software is capable of." The advances of artificial intelligence. So what is artificial intelligence? As many experts explain, artificial intelligence, or AI, is really a collection of algorithms: chains of computations that solve a given problem. The idea of building AI programs appeared as early as the mid-1950s, in 1956 to be precise, in Hanover (United States). In the view of the field's founders, John McCarthy and Marvin Minsky (who died on 24 January 2016), machines could imitate or simulate some aspects of human beings, and might one day match human intelligence. Throughout the 1960s, researchers pursued this hope in vain, because the computing of the day was simply not up to it. Things began to move again from 1985, with the rise of robotics, in which Japan was the leading country. But that wave of enthusiasm was short-lived: the robots of that era served only industry and had no place in the home. Hope in artificial intelligence was truly revived after the historic 1997 match between chess champion Garry Kasparov and IBM's DeepBlue. From then on, AI gradually worked its way into everyday life, at first only as "weak AI", used to solve one specific problem at a time. Even this rudimentary form lets computers act more autonomously and learn on their own, and it is the kind of artificial intelligence we use every day: Google's search engine, or the virtual assistants we chat with on Amazon, Netflix, YouTube and the like.
That same simple form of intelligence is also built into some robots used in hospitals, into translation software, and into certain video games... AlphaGo: a revolution in artificial intelligence? According to Raja Chatila, director of the Institute for Intelligent Systems and Robotics (Isir) at France's Pierre-et-Marie-Curie University, "for the past decade, AI has moved up a level thanks to deep learning software." Deep learning is designed so that machines can mimic the way the human brain works: the system lives in special racks holding thousands of electronic chips (the counterparts of neurons), arranged in many layers, and these artificial neurons feed into one another, hence the term "apprentissage profond" (deep learning). This is also the distinctive ingredient behind AlphaGo's success compared with DeepBlue 19 years earlier, as journalist Amélie Charnay explained on the website 01net: "AlphaGo's originality lies in its algorithms. There are three different elements, that is, three methods. First, a classic method we already see in smartphones for speech recognition or image recognition, called deep learning: algorithms that, put simply, let the computer more or less learn on its own. It is like teaching a child the alphabet: the programmer shows it letters so that the computer learns by itself to recognize whether it is an A or a B. But AlphaGo's originality is not just its use of deep learning; it combines it with another method that takes much longer to carry out: reinforcement learning (apprentissage par renforcement). With this method, the computer plays against different variants of itself, and its progress never stops: the more it plays itself, the better it gets. On top of those two methods, AlphaGo also uses another, more classical one, well known under the name Monte Carlo, in which the machine is asked to play out the end of the game in order to try to predict the moves ahead. To get there, the programmers taught the machine to memorize moves from players all over the world, and thanks to this huge database the computer can predict up to 56% of the combinations." What does artificial intelligence do for people? Today the big technology groups are locked in a fierce race to exploit the strengths of deep learning. Facebook has DeepFace for face recognition; Google has TensorFlow, used among other things to sort Gmail automatically; Apple has Siri; Amazon has its Alexa voice program. A month ago in the weekly L'Express, Laurent Alexandre, chairman of DNAVision and founder of Doctissimo.com, observed: "The 21st century is the century of a new revolution, the robot revolution (robolution). It is happening before our eyes, and its hallmark is the dizzying acceleration of technology." If it took more than a century to turn the physical discoveries behind photography into something woven into social life, with today's technology that transition has shrunk to a matter of 24 to 48 hours. Applications of artificial intelligence are now present almost everywhere.
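The Monte Carlo idea Charnay describes is easy to illustrate: to judge a candidate move, play many quick games from that position all the way to the end and count how often each side wins. Below is a minimal Python sketch of that rollout principle, using a toy tic-tac-toe board as a stand-in for Go; the class and function names are our own illustration, and this is not AlphaGo's actual implementation, which steers its playouts with learned policy and value networks inside a far more sophisticated tree search.

```python
import random


class TicTacToe:
    """A tiny stand-in game (illustrative only) so the rollout idea is runnable;
    AlphaGo of course plays Go, not tic-tac-toe."""

    def __init__(self, board=None, to_move="X"):
        self.board = list(board) if board else [" "] * 9
        self.to_move = to_move

    def copy(self):
        return TicTacToe(self.board, self.to_move)

    def legal_moves(self):
        return [i for i, c in enumerate(self.board) if c == " "]

    def play(self, move):
        self.board[move] = self.to_move
        self.to_move = "O" if self.to_move == "X" else "X"

    def winner(self):
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if self.board[a] != " " and self.board[a] == self.board[b] == self.board[c]:
                return self.board[a]
        return None

    def is_over(self):
        return self.winner() is not None or not self.legal_moves()


def rollout_value(position, player, num_rollouts=200):
    """Estimate `player`'s chance of winning from `position` by playing
    many purely random games to the end and counting the wins."""
    wins = 0
    for _ in range(num_rollouts):
        sim = position.copy()
        while not sim.is_over():
            sim.play(random.choice(sim.legal_moves()))
        if sim.winner() == player:
            wins += 1
    return wins / num_rollouts


def choose_move(game, num_rollouts=200):
    """Pick the legal move whose resulting position has the best rollout estimate."""
    player = game.to_move
    best_move, best_score = None, -1.0
    for move in game.legal_moves():
        after = game.copy()
        after.play(move)
        score = rollout_value(after, player, num_rollouts)
        if score > best_score:
            best_move, best_score = move, score
    return best_move


if __name__ == "__main__":
    print("Suggested opening move (0-8):", choose_move(TicTacToe()))
```

In a real Go engine the uniform random playouts above would be replaced by moves suggested by a learned policy network and pruned by a value estimate, which is exactly where the deep learning and self-play components described in the quote come in.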
The rapid development of this field will profoundly transform human life. To illustrate, on the 8 p.m. news bulletin on channel 2, journalist Nicolas Chateauneuf used a virtual image of himself and a synthetic version of his own voice, programmed by a start-up in Nantes, to explain on air what artificial intelligence might do in the future: "Take smartphones. You can tell them that tomorrow you need a massage, but that you also want to buy a bottle of wine to go with roast beef, and a voice assistant will plot you a route past a wine shop and recommend a suitable bottle. In medicine, AI could be a revolution: it can review all of a patient's data, such as age, history and scans, compare them with every published study, and finally propose a diagnosis that sometimes even the doctor had not considered. This already exists in the United States. One day we may even see reporters replaced by virtual robots. Computers can learn on their own; they can pose questions and find the answers themselves. Take Google's self-driving car, another example of artificial intelligence. It can drive itself with no one at the wheel; its artificial intelligence has to recognize every danger on the road, such as the distance of pedestrians crossing, and then adapt to every new situation. In the future it may be the turn of aircraft, pilotless planes. Google has also had to develop an artificial intelligence capable of recognizing images, able to search the web for all the pictures available. Remarkably, it recognized the image of a lemon, a grapefruit cut in half and a glass of freshly squeezed orange juice, and Google's AI can even state the meaning of the scene it is looking at." Will intelligent computers control humans? With this new victory at Go, the new strategy of computer science, an AI stack of three layers (artificial neural networks, machine learning and deep learning), has clearly shown how formidable it is. Yet the convergence of brain science and computer science can only happen on one condition: understanding how the human brain actually works. Right after the AlphaGo-Lee Se-dol match ended, the daily Le Monde revealed in an online article that before founding DeepMind, later bought by Google, Demis Hassabis had completed a PhD in neuroscience. The prospect of computers one day possessing a strong artificial intelligence and controlling us remains distant. An artificial intelligence that would let a machine display intelligent behavior, demonstrate self-awareness, express emotions and understand its own reasoning is still out of reach between now and 2050, because machines still lack the ability to "learn without supervision" (apprentissage non supervisé), a crucial missing piece in building artificial intelligence, according to Yann LeCun, the Frenchman who is one of the creators of deep learning, recruited by Facebook at a high price and competing with Geoffrey Hinton at Google: "That is still far off, very far off.
Perhaps in the future we will have machines whose artificial intelligence surpasses humans in every domain. For now, we only have machines whose AI beats humans in specific areas. They can go down the street to a shop to buy a toy, or a machine can crush you at chess, or now at Go; in short, in specialized fields. Soon you will even have self-driving cars that drive better than you do. They are highly specialized in the sense that they do not yet have the general intelligence of a human being. We are still missing an important piece, many concepts have yet to be developed, what we call learning without supervision." Weak artificial intelligence: the danger right now. That may be a distant fear, but the immediate issue is that to build strong AI, programmers need to draw on enormous amounts of data (big data) involving personal information, and the same is already true for today's weak forms of AI. Isn't it time we thought seriously about protecting that personal data and put safeguards in place? "These are data that provide a very fine-grained understanding of individual and collective behavior, for the purpose of tailoring the supply of services and products," as the philosopher Eric Sadin explained to the weekly L'Express. Each person's private life becomes a commodity. The boom in AI-related information technology brings out two important effects, discussed by Erik Brynjolfsson and Andrew McAfee in their book "The Second Machine Age" (Le Deuxième Âge de la machine, published by Odile Jacob). On the one hand, the general standard of living of a large part of humanity rises. On the other, there is an unavoidable dispersion of wealth, with income shared more unevenly than industrial profits once were. "We focus too much on how machines imitate us; we should instead be asking how they are changing our behavior, and whether that is heading in the right direction," as the commentator Alexei Grinbaum objects. AlphaGo's triumph is stirring up fear of an artificial superintelligence, a scenario that for now is dismissed by the founder of DeepMind and by Yann LeCun. But many questions remain. What happens if machines are taught to deceive, to dominate, to get the better of humans? What will the world look like when they are taught to hide their intentions, to deploy aggressive strategies and to manipulate, as a game of Go suggests they can? If so, "Should Google-AlphaGo be banned?", as Le Monde worries.
When Poems Will Be Written. Links: the iPhone 7 camera and casing; 21 March officially confirmed; Transmission and ransomware (article from Ars Technica); how to remove animations on the iPhone without a jailbreak; AlphaGo; Google DeepMind; DeepRL; a basic overview of AlphaGo; the Wikipedia article; analysis of the games on Wikipedia; 15-minute recaps and the full games; how Go is played. Recommendations: MSQRD; User Benchmark (www.spritmonitor.de)
Concerning AI | Existential Risk From Artificial Intelligence
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0015-2016-03-13.mp3 Ted talks with special guest Eric Saumur about AlphaGo, compassion, desires and more.
On this episode of the TechRepublic podcast we discuss how Google's DeepMind AlphaGo AI defeated Go master Lee Sedol, and explain why you should learn the world's oldest game. Read more: http://tek.io/223uJPb. Thanks for listening.
Lee Sedol, world champion of the Chinese board game Go, has just been beaten by a computer. Murad Ahmed explains how Google's DeepMind AlphaGo programme did it, and why its victory is significant for the world beyond the Go board. Music by David Sappa See acast.com/privacy for privacy and opt-out information.
The contest between man and machine has reached a new level. Google is pitting its program "AlphaGo" against the world Go champion, Lee Sedol. The result of the match shows how far artificial intelligence has already come. We ask an expert what makes neural networks so powerful. >> Article to read: https://detektor.fm/digital/neuronales-netzwerk-spielt-go
In episode four of season two, we talk about some of the major issues in AI safety (and how they're not really that different from the questions we ask whenever we create a new tool). One place you can go for other opinions on AI safety is the Future of Life Institute. We take a listener question about time series, and we talk with Nick Patterson of the Broad Institute about everything from ancient DNA to Alan Turing. If you're as excited about AlphaGo playing Lee Sedol as Nick is, you can get details on the match on DeepMind's YouTube channel March 5th through the 15th.