Podcasts about alphafold2

  • 36PODCASTS
  • 49EPISODES
  • 39mAVG DURATION
  • 1MONTHLY NEW EPISODE
  • Mar 18, 2025LATEST

POPULARITY

20172018201920202021202220232024


Best podcasts about alphafold2

Latest podcast episodes about alphafold2

Ground Truths
The Holy Grail of Biology

Ground Truths

Play Episode Listen Later Mar 18, 2025 43:43


“Eventually, my dream would be to simulate a virtual cell.”—Demis HassabisThe aspiration to build the virtual cell is considered to be equivalent to a moonshot for digital biology. Recently, 42 leading life scientists published a paper in Cell on why this is so vital, and how it may ultimately be accomplished. This conversation is with 2 of the authors, Charlotte Bunne, now at EPFL and Steve Quake, a Professor at Stanford University, who heads up science at the Chan-Zuckerberg Initiative The audio (above) is available on iTunes and Spotify. The full video is linked here, at the top, and also can be found on YouTube.TRANSCRIPT WITH LINKS TO AUDIO Eric Topol (00:06):Hello, it's Eric Topol with Ground Truths and we've got a really hot topic today, the virtual cell. And what I think is extraordinarily important futuristic paper that recently appeared in the journal Cell and the first author, Charlotte Bunne from EPFL, previously at Stanford's Computer Science. And Steve Quake, a young friend of mine for many years who heads up the Chan Zuckerberg Initiative (CZI) as well as a professor at Stanford. So welcome, Charlotte and Steve.Steve Quake (00:42):Thanks, Eric. It's great to be here.Charlotte Bunne:Thanks for having me.Eric Topol (00:45):Yeah. So you wrote this article that Charlotte, the first author, and Steve, one of the senior authors, appeared in Cell in December and it just grabbed me, “How to build the virtual cell with artificial intelligence: Priorities and opportunities.” It's the holy grail of biology. We're in this era of digital biology and as you point out in the paper, it's a convergence of what's happening in AI, which is just moving at a velocity that's just so extraordinary and what's happening in biology. So maybe we can start off by, you had some 42 authors that I assume they congregated for a conference or something or how did you get 42 people to agree to the words in this paper?Steve Quake (01:33):We did. We had a meeting at CZI to bring community members together from many different parts of the community, from computer science to bioinformatics, AI experts, biologists who don't trust any of this. We wanted to have some real contrarians in the mix as well and have them have a conversation together about is there an opportunity here? What's the shape of it? What's realistic to expect? And that was sort of the genesis of the article.Eric Topol (02:02):And Charlotte, how did you get to be drafting the paper?Charlotte Bunne (02:09):So I did my postdoc with Aviv Regev at Genentech and Jure Leskovec at CZI and Jure was part of the residency program of CZI. And so, this is how we got involved and you had also prior work with Steve on the universal cell embedding. So this is how everything got started.Eric Topol (02:29):And it's actually amazing because it's a who's who of people who work in life science, AI and digital biology and omics. I mean it's pretty darn impressive. So I thought I'd start off with a quote in the article because it kind of tells a story of where this could go. So the quote was in the paper, “AIVC (artificial intelligence virtual cell) has the potential to revolutionize the scientific process, leading to future breakthroughs in biomedical research, personalized medicine, drug discovery, cell engineering, and programmable biology.” That's a pretty big statement. So maybe we can just kind of toss that around a bit and maybe give it a little more thoughts and color as to what you were positing there.Steve Quake (03:19):Yeah, Charlotte, you want me to take the first shot at that? Okay. So Eric, it is a bold claim and we have a really bold ambition here. We view that over the course of a decade, AI is going to provide the ability to make a transformative computational tool for biology. Right now, cell biology is 90% experimental and 10% computational, roughly speaking. And you've got to do just all kinds of tedious, expensive, challenging lab work to get to the answer. And I don't think AI is going to replace that, but it can invert the ratio. So within 10 years I think we can get to biology being 90% computational and 10% experimental. And the goal of the virtual cell is to build a tool that'll do that.Eric Topol (04:09):And I think a lot of people may not understand why it is considered the holy grail because it is the fundamental unit of life and it's incredibly complex. It's not just all the things happening in the cell with atoms and molecules and organelles and everything inside, but then there's also the interactions the cell to other cells in the outside tissue and world. So I mean it's really quite extraordinary challenge that you've taken on here. And I guess there's some debate, do we have the right foundation? We're going to get into foundation models in a second. A good friend of mine and part of this whole I think process that you got together, Eran Segal from Israel, he said, “We're at this tipping point…All the stars are aligned, and we have all the different components: the data, the compute, the modeling.” And in the paper you describe how we have over the last couple of decades have so many different data sets that are rich that are global initiatives. But then there's also questions. Do we really have the data? I think Bo Wang especially asked about that. Maybe Charlotte, what are your thoughts about data deficiency? There's a lot of data, but do you really have what we need before we bring them all together for this kind of single model that will get us some to the virtual cell?Charlotte Bunne (05:41):So I think, I mean one core idea of building this AIVC is that we basically can leverage all experimental data that is overall collected. So this also goes back to the point Steve just made. So meaning that we basically can integrate across many different studies data because we have AI algorithms or the architectures that power such an AIVC are able to integrate basically data sets on many different scales. So we are going a bit away from this dogma. I'm designing one algorithm from one dataset to this idea of I have an architecture that can take in multiple dataset on multiple scales. So this will help us a bit in being somewhat efficient with the type of experiments that we need to make and the type of experiments we need to conduct. And again, what Steve just said, ultimately, we can very much steer which data sets we need to collect.Charlotte Bunne (06:34):Currently, of course we don't have all the data that is sufficient. I mean in particular, I think most of the tissues we have, they are healthy tissues. We don't have all the disease phenotypes that we would like to measure, having patient data is always a very tricky case. We have mostly non-interventional data, meaning we have very limited understanding of somehow the effect of different perturbations. Perturbations that happen on many different scales in many different environments. So we need to collect a lot here. I think the overall journey that we are going with is that we take the data that we have, we make clever decisions on the data that we will collect in the future, and we have this also self-improving entity that is aware of what it doesn't know. So we need to be able to understand how well can I predict something on this somewhat regime. If I cannot, then we should focus our data collection effort into this. So I think that's not a present state, but this will basically also guide the future collection.Eric Topol (07:41):Speaking of data, one of the things I think that's fascinating is we saw how AlphaFold2 really revolutionized predicting proteins. But remember that was based on this extraordinary resource that had been built, the Protein Data Bank that enabled that. And for the virtual cell there's no such thing as a protein data bank. It's so much more as you emphasize Charlotte, it's so much dynamic and these perturbations that are just all across the board as you emphasize. Now the human cell atlas, which currently some tens of millions, but going into a billion cells, we learned that it used to be 200 cell types. Now I guess it's well over 5,000 and that we have 37 trillion cells approximately in the average person adult's body is a formidable map that's being made now. And I guess the idea that you're advancing is that we used to, and this goes back to a statement you made earlier, Steve, everything we did in science was hypothesis driven. But if we could get computational model of the virtual cell, then we can have AI exploration of the whole field. Is that really the nuts of this?Steve Quake (09:06):Yes. A couple thoughts on that, maybe Theo Karaletsos, our lead AI person at CZI says machine learning is the formalism through which we understand high dimensional data and I think that's a very deep statement. And biological systems are intrinsically very high dimensional. You've got 20,000 genes in the human genome in these cell atlases. You're measuring all of them at the same time in each single cell. And there's a lot of structure in the relationships of their gene expression there that is just not evident to the human eye. And for example, CELL by GENE, our database that collects all the aggregates, all of the single cell transcriptomic data is now over a hundred million cells. And as you mentioned, we're seeing ways to increase that by an order of magnitude in the near future. The project that Jure Leskovec and I worked on together that Charlotte referenced earlier was like a first attempt to build a foundational model on that data to discover some of the correlations and structure that was there.Steve Quake (10:14):And so, with a subset, I think it was the 20 or 30 million cells, we built a large language model and began asking it, what do you understand about the structure of this data? And it kind of discovered lineage relationships without us teaching it. We trained on a matrix of numbers, no biological information there, and it learned a lot about the relationships between cell type and lineage. And that emerged from that high dimensional structure, which was super pleasing to us and really, I mean for me personally gave me the confidence to say this stuff is going to work out. There is a future for the virtual cell. It's not some made up thing. There is real substance there and this is worth investing an enormous amount of CZIs resources in going forward and trying to rally the community around as a project.Eric Topol (11:04):Well yeah, the premise here is that there is a language of life, and you just made a good case that there is if you can predict, if you can query, if you can generate like that. It is reminiscent of the famous Go game of Lee Sedol, that world champion and how the machine came up with a move (Move 37) many, many years ago that no human would've anticipated and I think that's what you're getting at. And the ability for inference and reason now to add to this. So Charlotte, one of the things of course is about, well there's two terms in here that are unfamiliar to many of the listeners or viewers of this podcast, universal representations (UR) and virtual instrument (VIs) that you make a pretty significant part of how you are going about this virtual cell model. So could you describe that and also the embeddings as part of the universal representation (UR) because I think embeddings, or these meaningful relationships are key to what Steve was just talking about.Charlotte Bunne (12:25):Yes. So in order to somewhat leverage very different modalities in order to leverage basically modalities that will take measurements across different scales, like the idea is that we have large, may it be transformer models that might be very different. If I have imaging data, I have a vision transformer, if I have a text data, I have large language models that are designed of course for DNA then they have a very wide context and so on and so forth. But the idea is somewhat that we have models that are connected through the scales of biology because those scales we know. We know which components are somewhat involved or in measurements that are happening upstream. So we have the somewhat interconnection or very large model that will be trained on many different data and we have this internal model representation that somewhat capture everything they've seen. And so, this is what we call those universal representation (UR) that will exist across the scales of biology.Charlotte Bunne (13:22):And what is great about AI, and so I think this is a bit like a history of AI in short is the ability to predict the last years, the ability to generate, we can generate new hypothesis, we can generate modalities that we are missing. We can potentially generate certain cellular state, molecular state have a certain property, but I think what's really coming is this ability to reason. So we see this in those very large language models, the ability to reason about a hypothesis, how we can test it. So this is what those instruments ultimately need to do. So we need to be able to simulate the change of a perturbation on a cellular phenotype. So on the internal representation, the universal representation of a cell state, we need to simulate the fact the mutation has downstream and how this would propagate in our representations upstream. And we need to build many different type of virtual instruments that allow us to basically design and build all those capabilities that ultimately the AI virtual cell needs to possess that will then allow us to reason, to generate hypothesis, to basically predict the next experiment to conduct to predict the outcome of a perturbation experiment to in silico design, cellular states, molecular states, things like that. And this is why we make the separation between internal representation as well as those instruments that operate on those representations.Eric Topol (14:47):Yeah, that's what I really liked is that you basically described the architecture, how you're going to do this. By putting these URs into the VIs, having a decoder and a manipulator and you basically got the idea if you can bring all these different integrations about which of course is pending. Now there are obviously many naysayers here that this is impossible. One of them is this guy, Philip Ball. I don't know if you read the language, How Life Works. Now he's a science journalist and he's a prolific writer. He says, “Comparing life to a machine, a robot, a computer, sells it short. Life is a cascade of processes, each with a distinct integrity and autonomy, the logic of which has no parallel outside the living world.” Is he right? There's no way to model this. It's silly, it's too complex.Steve Quake (15:50):We don't know, alright. And it's great that there's naysayers. If everyone agreed this was doable, would it be worth doing? I mean the whole point is to take risks and get out and do something really challenging in the frontier where you don't know the answer. If we knew that it was doable, I wouldn't be interested in doing it. So I personally am happy that there's not a consensus.Eric Topol (16:16):Well, I mean to capture people's imagination here, if you're successful and you marshal a global effort, I don't know who's going to pay for it because it's a lot of work coming here going forward. But if you can do it, the question here is right today we talk about, oh let's make an organoid so we can figure out how to treat this person's cancer or understand this person's rare disease or whatever. And instead of having to wait weeks for this culture and all the expense and whatnot, you could just do it in a computer and in silico and you have this virtual twin of a person's cells and their tissue and whatnot. So the opportunity here is, I don't know if people get, this is just extraordinary and quick and cheap if you can get there. And it's such a bold initiative idea, who will pay for this do you think?Steve Quake (17:08):Well, CZI is putting an enormous amount of resources into it and it's a major project for us. We have been laying the groundwork for it. We recently put together what I think is if not the largest, one of the largest GPU supercomputer clusters for nonprofit basic science research that came online at the end of last year. And in fact in December we put out an RFA for the scientific community to propose using it to build models. And so we're sharing that resource within the scientific community as I think you appreciate, one of the real challenges in the field has been access to compute resources and industry has it academia at a much lower level. We are able to be somewhere in between, not quite at the level of a private company but the tech company but at a level beyond what most universities are being able to do and we're trying to use that to drive the field forward. We're also planning on launching RFAs we this year to help drive this project forward and funding people globally on that. And we are building a substantial internal effort within CZI to help drive this project forward.Eric Topol (18:17):I think it has the looks of the human genome project, which at time as you know when it was originally launched that people thought, oh, this is impossible. And then look what happened. It got done. And now the sequence of genome is just a commodity, very relatively, very inexpensive compared to what it used to be.Steve Quake (18:36):I think a lot about those parallels. And I will say one thing, Philip Ball, I will concede him the point, the cells are very complicated. The genome project, I mean the sort of genius there was to turn it from a biology problem to a chemistry problem, there is a test tube with a chemical and it work out the structure of that chemical. And if you can do that, the problem is solved. I think what it means to have the virtual cell is much more complex and ambiguous in terms of defining what it's going to do and when you're done. And so, we have our work cut out for us there to try to do that. And that's why a little bit, I established our North Star and CZI for the next decade as understanding the mysteries of the cell and that word mystery is very important to me. I think the molecules, as you pointed out earlier are understood, genome sequenced, protein structure solved or predicted, we know a lot about the molecules. Those are if not solved problems, pretty close to being solved. And the real mystery is how do they work together to create life in the cell? And that's what we're trying to answer with this virtual cell project.Eric Topol (19:43):Yeah, I think another thing that of course is happening concurrently to add the likelihood that you'll be successful is we've never seen the foundation models coming out in life science as they have in recent weeks and months. Never. I mean, I have a paper in Science tomorrow coming out summarizing the progress about not just RNA, DNA, ligands. I mean the whole idea, AlphaFold3, but now Boltz and so many others. It's just amazing how fast the torrent of new foundation models. So Charlotte, what do you think accounts for this? This is unprecedented in life science to see foundation models coming out at this clip on evolution on, I mean you name it, design of every different molecule of life or of course in cells included in that. What do you think is going on here?Charlotte Bunne (20:47):So on the one hand, of course we benefit profits and inherit from all the tremendous efforts that have been made in the last decades on assembling those data sets that are very, very standardized. CELLxGENE is very somehow AI friendly, as you can say, it is somewhat a platform that is easy to feed into algorithms, but at the same time we actually also see really new building mechanisms, design principles of AI algorithms in itself. So I think we have understood that in order to really make progress, build those systems that work well, we need to build AI tools that are designed for biological data. So to give you an easy example, if I use a large language model on text, it's not going to work out of the box for DNA because we have different reading directions, different context lens and many, many, many, many more.Charlotte Bunne (21:40):And if I look at standard computer vision where we can say AI really excels and I'm applying standard computer vision, vision transformers on multiplex images, they're not going to work because normal computer vision architectures, they always expect the same three inputs, RGB, right? In multiplex images, I'm measuring up to 150 proteins potentially in a single experiment, but every study will measure different proteins. So I deal with many different scales like larger scales and I used to attention mechanisms that we have in usual computer vision. Transformers are not going to work anymore, they're not going to scale. And at the same time, I need to be completely flexible in whatever input combination of channel I'm just going to face in this experiment. So this is what we right now did for example, in our very first work, inheriting the design principle that we laid out in the paper AI virtual cell and then come up with new AI architectures that are dealing with these very special requirements that biological data have.Charlotte Bunne (22:46):So we have now a lot of computer scientists that work very, very closely have a very good understanding of biologists. Biologists that are getting much and much more into the computer science. So people who are fluent in both languages somewhat, that are able to now build models that are adopted and designed for biological data. And we don't just take basically computer vision architectures that work well on street scenes and try to apply them on biological data. So it's just a very different way of thinking about it, starting constructing basically specialized architectures, besides of course the tremendous data efforts that have happened in the past.Eric Topol (23:24):Yeah, and we're not even talking about just sequence because we've also got imaging which has gone through a revolution, be able to image subcellular without having to use any types of stains that would disrupt cells. That's another part of the deep learning era that came along. One thing I thought was fascinating in the paper in Cell you wrote, “For instance, the Short Read Archive of biological sequence data holds over 14 petabytes of information, which is 1,000 times larger than the dataset used to train ChatGPT.” I mean that's a lot of tokens, that's a lot of stuff, compute resources. It's almost like you're going to need a DeepSeek type of way to get this. I mean not that DeepSeek as its claim to be so much more economical, but there's a data challenge here in terms of working with that massive amount that is different than the human language. That is our language, wouldn't you say?Steve Quake (24:35):So Eric, that brings to mind one of my favorite quotes from Sydney Brenner who is such a wit. And in 2000 at the sort of early first flush of success in genomics, he said, biology is drowning in a sea of data and starving for knowledge. A very deep statement, right? And that's a little bit what the motivation was for putting the Short Read Archive statistic into the paper there. And again, for me, part of the value of this endeavor of creating a virtual cell is it's a tool to help us translate data into knowledge.Eric Topol (25:14):Yeah, well there's two, I think phenomenal figures in your Cell paper. The first one that kicks across the capabilities of the virtual cell and the second that compares the virtual cell to the real or the physical cell. And we'll link that with this in the transcript. And the other thing we'll link is there's a nice Atlantic article, “A Virtual Cell Is a ‘Holy Grail' of Science. It's Getting Closer.” That might not be quite close as next week or year, but it's getting close and that's good for people who are not well grounded in this because it's much more taken out of the technical realm. This is really exciting. I mean what you're onto here and what's interesting, Steve, since I've known you for so many years earlier in your career you really worked on omics that is being DNA and RNA and in recent times you've made this switch to cells. Is that just because you're trying to anticipate the field or tell us a little bit about your migration.Steve Quake (26:23):Yeah, so a big part of my career has been trying to develop new measurement technologies that'll provide insight into biology. And decades ago that was understanding molecules. Now it's understanding more complex biological things like cells and it was like a natural progression. I mean we built the sequencers, sequenced the genomes, done. And it was clear that people were just going to do that at scale then and create lots of data. Hopefully knowledge would get out of that. But for me as an academic, I never thought I'd be in the position I'm in now was put it that way. I just wanted to keep running a small research group. So I realized I would have to get out of the genome thing and find the next frontier and it became this intersection of microfluidics and genomics, which as you know, I spent a lot of time developing microfluidic tools to analyze cells and try to do single cell biology to understand their heterogeneity. And that through a winding path led me to all these cell atlases and to where we are now.Eric Topol (27:26):Well, we're fortunate for that and also with your work with CZI to help propel that forward and I think it sounds like we're going to need a lot of help to get this thing done. Now Charlotte, as a computer scientist now at EPFL, what are you going to do to keep working on this and what's your career advice for people in computer science who have an interest in digital biology?Charlotte Bunne (27:51):So I work in particular on the prospect of using this to build diagnostic tools and to make diagnostics in the clinic easier because ultimately we have somewhat limited capabilities in the hospital to run deep omics, but the idea of being able to somewhat map with a cheaper and lighter modality or somewhat diagnostic test into something much richer because a model has been seeing all those different data and can basically contextualize it. It's very interesting. We've seen all those pathology foundation models. If I can always run an H&E, but then decide when to run deeper diagnostics to have a better or more accurate prediction, that is very powerful and it's ultimately reducing the costs, but the precision that we have in hospitals. So my faculty position right now is co-located between the School of Life Sciences, School of Computer Science. So I have a dual affiliation and I'm affiliated to the hospitals to actually make this possible and as a career advice, I think don't be shy and stick to your discipline.Charlotte Bunne (28:56):I have a bachelor's in biology, but I never only did biology. I have a PhD in computer science, which you would think a bachelor in biology not necessarily qualifies you through. So I think this interdisciplinarity also requires you to be very fluent, very comfortable in reading many different styles of papers and publications because a publication in a computer science venue will be very, very different from the way we write in biology. So don't stick to your study program, but just be free in selecting whatever course gets you closer to the knowledge you need in order to do the research or whatever task you are building and working on.Eric Topol (29:39):Well, Charlotte, the way you're set up there with this coalescence of life science and computer science is so ideal and so unusual here in the US, so that's fantastic. That's what we need and that's really the underpinning of how you're going to get to the virtual cells, getting these two communities together. And Steve, likewise, you were an engineer and somehow you became one of the pioneers of digital biology way back before it had that term, this interdisciplinary, transdisciplinary. We need so much of that in order for you all to be successful, right?Steve Quake (30:20):Absolutely. I mean there's so much great discovery to be done on the boundary between fields. I trained as a physicist and kind of made my career this boundary between physics and biology and technology development and it's just sort of been a gift that keeps on giving. You've got a new way to measure something, you discover something new scientifically and it just all suggests new things to measure. It's very self-reinforcing.Eric Topol (30:50):Now, a couple of people who you know well have made some pretty big statements about this whole era of digital biology and I think the virtual cell is perhaps the biggest initiative of all the digital biology ongoing efforts, but Jensen Huang wrote, “for the first time in human history, biology has the opportunity to be engineering, not science.” And Demis Hassabis wrote or said, ‘we're seeing engineering science, you have to build the artifact of interest first, and then once you have it, you can use the scientific method to reduce it down and understand its components.' Well here there's a lot to do to understand its components and if we can do that, for example, right now as both of AI drug discoveries and high gear and there's umpteen numbers of companies working on it, but it doesn't account for the cell. I mean it basically is protein, protein ligand interactions. What if we had drug discovery that was cell based? Could you comment about that? Because that doesn't even exist right now.Steve Quake (32:02):Yeah, I mean I can say something first, Charlotte, if you've got thoughts, I'm curious to hear them. So I do think AI approaches are going to be very useful designing molecules. And so, from the perspective of designing new therapeutics, whether they're small molecules or antibodies, yeah, I mean there's a ton of investment in that area that is a near term fruit, perfect thing for venture people to invest in and there's opportunity there. There's been enough proof of principle. However, I do agree with you that if you want to really understand what happens when you drug a target, you're going to want to have some model of the cell and maybe not just the cell, but all the different cell types of the body to understand where toxicity will come from if you have on-target toxicity and whether you get efficacy on the thing you're trying to do.Steve Quake (32:55):And so, we really hope that people will use the virtual cell models we're going to build as part of the drug discovery development process, I agree with you in a little of a blind spot and we think if we make something useful, people will be using it. The other thing I'll say on that point is I'm very enthusiastic about the future of cellular therapies and one of our big bets at CZI has been starting the New York Biohub, which is aimed at really being very ambitious about establishing the engineering and scientific foundations of how to engineer completely, radically more powerful cellular therapies. And the virtual cell is going to help them do that, right? It's going to be essential for them to achieve that mission.Eric Topol (33:39):I think you're pointing out one of the most important things going on in medicine today is how we didn't anticipate that live cell therapy, engineered cells and ideally off the shelf or in vivo, not just having to take them out and work on them outside the body, is a revolution ongoing, and it's not just in cancer, it's in autoimmune diseases and many others. So it's part of the virtual cell need. We need this. One of the things that's a misnomer, I want you both to comment on, we keep talking about single cell, single cell. And there's a paper spatial multi-omics this week, five different single cell scales all integrated. It's great, but we don't get to single cell. We're basically looking at 50 cells, 100 cells. We're not doing single cell because we're not going deep enough. Is that just a matter of time when we actually are doing, and of course the more we do get down to the single or a few cells, the more insights we're going to get. Would you comment about that? Because we have all this literature on single cell comes out every day, but we're not really there yet.Steve Quake (34:53):Charlotte, do you want to take a first pass at that and then I can say something?Charlotte Bunne (34:56):Yes. So it depends. So I think if we look at certain spatial proteomics, we still have subcellular resolutions. So of course, we always measure many different cells, but we are able to somewhat get down to resolution where we can look at certain colocalization of proteins. This also goes back to the point just made before having this very good environment to study drugs. If I want to build a new drug, if I want to build a new protein, the idea of building this multiscale model allows us to actually simulate different, somehow binding changes and binding because we simulate the effect of a drug. Ultimately, the redouts we have they are subcellular. So of course, we often in the spatial biology, we often have a bit like methods that are rather coarse they have a spot that averages over certain some cells like hundreds of cells or few cells.Charlotte Bunne (35:50):But I think we also have more and more technologies that are zooming in that are subcellular where we can actually tag or have those probe-based methods that allow us to zoom in. There's microscopy of individual cells to really capture them in 3D. They are of course not very high throughput yet, but it gives us also an idea of the morphology and how ultimately morphology determine certain somehow cellular properties or cellular phenotype. So I think there's lots of progress also on the experimental and that ultimately will back feed into the AI virtual cell, those models that will be fed by those data. Similarly, looking at dynamics, right, looking at live imaging of individual cells of their morphological changes. Also, this ultimately is data that we'll need to get a better understanding of disease mechanisms, cellular phenotypes functions, perturbation responses.Eric Topol (36:47):Right. Yes, Steve, you can comment on that and the amazing progress that we have made with space and time, spatial temporal resolution, spatial omics over these years, but that we still could go deeper in terms of getting to individual cells, right?Steve Quake (37:06):So, what can we do with a single cell? I'd say we are very mature in our ability to amplify and sequence the genome of a single cell, amplify and sequence the transcriptome of a single cell. You can ask is one cell enough to make a biological conclusion? And maybe I think what you're referring to is people want to see replicates and so you can ask how many cells do you need to see to have confidence in any given biological conclusion, which is a reasonable thing. It's a statistical question in good science. I think I've been very impressed with how the mass spec people have been doing recently. I think they've finally cracked the ability to look at proteins from single cells and they can look at a couple thousand proteins. That was I think one of these Nature method of the year things at the end of last year and deep visual proteomics.Eric Topol (37:59):Deep visual proteomics, yes.Steve Quake (38:00):Yeah, they are over the hump. Yeah, they are over the hump with single cell measurements. Part of what's missing right now I think is the ability to reliably do all of that on the same cell. So this is what Charlotte was referring to be able to do sort of multi-modal measurements on single cells. That's kind of in its infancy and there's a few examples, but there's a lot more work to be done on that. And I think also the fact that these measurements are all destructive right now, and so you're losing the ability to look how the cells evolve over time. You've got to say this time point, I'm going to dissect this thing and look at a state and I don't get to see what happens further down the road. So that's another future I think measurement challenge to be addressed.Eric Topol (38:42):And I think I'm just trying to identify some of the multitude of challenges in this extraordinarily bold initiative because there are no shortage and that's good about it. It is given people lots of work to do to overcome, override some of these challenges. Now before we wrap up, besides the fact that you point out that all the work has to be done and be validated in real experiments, not just live in a virtual AI world, but you also comment about the safety and ethics of this work and assuming you're going to gradually get there and be successful. So could either or both of you comment about that because it's very thoughtful that you're thinking already about that.Steve Quake (41:10):As scientists and members of the larger community, we want to be careful and ensure that we're interacting with people who said policy in a way that ensures that these tools are being used to advance the cause of science and not do things that are detrimental to human health and are used in a way that respects patient privacy. And so, the ethics around how you use all this with respect to individuals is going to be important to be thoughtful about from the beginning. And I also think there's an ethical question around what it means to be publishing papers and you don't want people to be forging papers using data from the virtual cell without being clear about where that came from and pretending that it was a real experiment. So there's issues around those sorts of ethics as well that need to be considered.Eric Topol (42:07):And of those 40 some authors, do you around the world, do you have the sense that you all work together to achieve this goal? Is there kind of a global bonding here that's going to collaborate?Steve Quake (42:23):I think this effort is going to go way beyond those 40 authors. It's going to include a much larger set of people and I'm really excited to see that evolve with time.Eric Topol (42:31):Yeah, no, it's really quite extraordinary how you kick this thing off and the paper is the blueprint for something that we are all going to anticipate that could change a lot of science and medicine. I mean we saw, as you mentioned, Steve, how that deep visual proteomics (DVP) saved lives. It was what I wrote a spatial medicine, no longer spatial biology. And so, the way that this can change the future of medicine, I think a lot of people just have to have a little bit of imagination that once we get there with this AIVC, that there's a lot in store that's really quite exciting. Well, I think this has been an invigorating review of that paper and some of the issues surrounding it. I couldn't be more enthusiastic for your success and ultimately where this could take us. Did I miss anything during the discussion that we should touch on before we wrap up?Steve Quake (43:31):Not from my perspective. It was a pleasure as always Eric, and a fun discussion.Charlotte Bunne (43:38):Thanks so much.Eric Topol (43:39):Well thank you both and all the co-authors of this paper. We're going to be following this with the great interest, and I think for most people listening, they may not know that this is in store for the future. Someday we will get there. I think one of the things to point out right now is the models we have today that large language models based on transformer architecture, they're going to continue to evolve. We're already seeing so much in inference and ability for reasoning to be exploited and not asking for prompts with immediate answers, but waiting for days to get back. A lot more work from a lot more computing resources. But we're going to get models in the future to fold this together. I think that's one of the things that you've touched on the paper so that whatever we have today in concert with what you've laid out, AI is just going to keep getting better.Eric Topol (44:39):The biology that these foundation models are going to get broader and more compelling as to their use cases. So that's why I believe in this. I don't see this as a static situation right now. I just think that you're anticipating the future, and we will have better models to be able to integrate this massive amount of what some people would consider disparate data sources. So thank you both and all your colleagues for writing this paper. I don't know how you got the 42 authors to agree to it all, which is great, and it's just a beginning of something that's a new frontier. So thanks very much.Steve Quake (45:19):Thank you, Eric.**********************************************Thanks for listening, watching or reading Ground Truths. Your subscription is greatly appreciated.If you found this podcast interesting please share it!That makes the work involved in putting these together especially worthwhile.All content on Ground Truths—newsletters, analyses, and podcasts—is free, open-access, with no ads..Paid subscriptions are voluntary and all proceeds from them go to support Scripps Research. They do allow for posting comments and questions, which I do my best to respond to. Many thanks to those who have contributed—they have greatly helped fund our summer internship programs for the past two years. And such support is becoming more vital In light of current changes of funding by US biomedical research at NIH and other governmental agencies.Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research. Get full access to Ground Truths at erictopol.substack.com/subscribe

Rozmowy w RMF FM
Jan Kosiński o Nagrodzie Nobla i AlphaFold2

Rozmowy w RMF FM

Play Episode Listen Later Jan 4, 2025 25:15


Naukowcy nie mają wątpliwości, że w nowym roku badania z wykorzystaniem sztucznej inteligencji staną się jeszcze powszechniejsze. Na praktyczne zastosowania, na przykład w postaci nowych leków, trzeba będzie jednak jeszcze poczekać - mówi RMF FM dr Jan Kosiński z Europejskiego Laboratorium Biologii Molekularnej w Hamburgu. W rozmowie z Grzegorzem Jasińskim przyznaje, że metody AI, na przykład wyróżniona ubiegłoroczną Nagrodą Nobla metoda przewidywania kształtu białek, nie zastępują do końca eksperymentu. W jego laboratorium program AlphaFold2 pomaga m.in. w badaniach dużych kompleksów białkowych i analizie oddziaływań białek wirusa grypy z białkami człowieka.

AI DAILY: Breaking News in AI
YOUR AI TWIN IS HERE

AI DAILY: Breaking News in AI

Play Episode Listen Later Nov 21, 2024 3:48


Plus AI Art Faces Backlash AI Models Replicate Human Personalities With Stunning Accuracy Researchers from Stanford and Google DeepMind have developed AI "simulation agents" capable of mimicking human personalities with 85% accuracy. Using qualitative interviews, the study created digital replicas of 1,000 participants to test their behaviors, opening new possibilities for social science research. Concerns include ethical risks and limitations in replicating unique human traits. AI-Generated Art Faces Backlash Over Quality and Oversaturation Once a marvel, AI-generated art now clutters social media with unpolished and oversaturated content. Critics point to its "slop-sheen" appearance, replacing authentic visuals with digital imitations. Once promising, AI art is increasingly viewed as a novelty lacking depth, overshadowing genuine creativity and alienating users craving originality in their feeds. Pokémon GO Players Help Train Niantic's Geospatial AI Niantic, Pokémon GO's developer, revealed its AI-powered "Large Geospatial Model," which uses data from player-scanned locations to create 3D maps. With 10 million scanned sites globally, the AI predicts unseen details, enabling applications beyond gaming, such as logistics and urban design. This pedestrian-focused data collection is unique and impactful.  AI-Powered Method Revolutionizes Protein Design Researchers at TUM and MIT used AlphaFold2 and gradient descent to design large artificial proteins with precision. This innovative process enables tailored proteins for medical and industrial use, such as binding viruses or transporting drugs. The team successfully tested over 100 proteins, demonstrating real-life accuracy in structure and function. U.S. Retains Top Spot in Global AI Leadership The U.S. leads global AI innovation, surpassing China in investment and responsible research, per Stanford's AI Index. With $67.2 billion in private AI funding and major tech companies like OpenAI driving advancements, the U.S. outpaces China's $7.8 billion. Top-ranking nations include the UK, India, and UAE, highlighting diverse AI strengths.  Can AI Revive the Struggling Humanities? As AI reshapes education, its potential to support long-form literature study grows. Critics argue AI harms attention spans, yet it offers tools like contextual definitions and summaries to make texts approachable. Integrating AI into syllabi could foster trust and help students overcome literary challenges, balancing innovation with academic integrity.

Scientificast
Rane stitiche e proboscidi rugose

Scientificast

Play Episode Listen Later Oct 14, 2024 52:33


Rane stitiche, proboscidi rugose e il premio Nobel per la chimica sono gli argomenti di questa puntata 533. Ai microfoni trovate Luca e Ilaria in una puntata che Luca ha definito “Delirante” con una esterna che per fortuna ci riporta un po' di serietà grazie a Leonardo e al suo ospite Marco Salvatore Nobile.Lo sapevate che alcuni girini non fanno la cacca per settimane? Prima della metamorfosi, le raganelle di Eiffinger immagazzinano i loro rifiuti solidi in una sacca intestinale. Ma perchè lo fanno? Sono stitici? Luca ci racconta la loro storia.Leonardo intervista Marco Salvatore Nobile, prof. associato all'Università Ca' Foscari di Venezia che ci parla del premio Nobel per la chimica assegnato ai creatori di AlphaFold 2, un sistema basato su machine-learning per predire la struttura tridimensionale delle proteine.E dopo un a barza al limite dell'orrendo Ilaria ci parla dell'argomento che ha scelto per voi per questa puntata: le proboscidi degli elefanti e il perché siano così rugose. Le proboscidi sono degli organi davvero unici in natura che permettono agli elefanti tantissime cose diverse e, a quanto pare, le loro rughe hanno un ruolo fondamentale e inaspettato.Siete pronti ad ascoltare la puntata 533? Mettetevi comodi e schiacciate play!Diventa un supporter di questo podcast: https://www.spreaker.com/podcast/scientificast--1762253/support.

Smart City
​Se Alphafold 2 è da Nobel, cosa sarà Alphafold 3?

Smart City

Play Episode Listen Later Oct 14, 2024


Il recente Premio Nobel per la Chimica è andato per metà a Demis Hassabis e John Jumper, inventori di Alphafold2, il SW di IA di Google Deep Mind che permette di ottenere la forma e la struttura di una proteina dal suo corrispettivo codice genetico. Meno di 6 mesi fa, tuttavia, lo stesso Hassabis, CEO di Deep Mind, ha lanciato Aphafold 3, una versione aggiornata del SW che permette di predire non solo la struttura delle proteine, ma anche come queste interagiscono con altre proteine, molecole farmacologiche, frammenti di DNA. Ed è proprio per capirne le interazioni che vogliamo conoscere la struttura delle proteine. Se quindi l’impatto di Alphafold 2 sulla ricerca scientifica è testimoniato dal Nobel, quale impatto possiamo aspettarci da Alphafold3? Ne parliamo con Pietro Faccioli, professore di Biofisica Computazionale dell'Università di Milano-Bicocca.

Sidecar Sync
AI and Exponential Growth, Protein Folding, and Predicting the Future of AGI | 39

Sidecar Sync

Play Episode Listen Later Jul 17, 2024 48:42 Transcription Available


Send us a Text Message.Join us on this week's episode of Sidecar Sync as hosts Amith and Mallory dive into the fascinating world of exponential growth and artificial intelligence. From the historical context of computing power to the latest advancements in AI, we explore how these technologies are revolutionizing various industries, including healthcare and associations. Amith shares his insights on the future of AI, the concept of artificial general intelligence (AGI), and how associations can stay ahead in this rapidly evolving landscape. Whether you're curious about AI's impact on society or looking for strategies to future-proof your association, this episode is packed with valuable information and forward-thinking perspectives.

Oxide and Friends
Bookclub: How Life Works by Philip Ball

Oxide and Friends

Play Episode Listen Later May 22, 2024 110:30 Transcription Available


The long-awaited Oxide and Friends bookclub! Bryan and Adam were joined by special guest--and real life biologist--Greg Cost to discuss Philip Ball's terrific book, How Life Works: A User's Guide to the New Biology. Spoiler: Alan Turing makes a very expected appearance!In addition to Bryan Cantrill and Adam Leventhal, we were joined by special guest Greg Cost.Some of the topics we hit on, in the order that we hit them:The Turing patternRNA as a precursor to DNAXenopus frogXenobotsAnton computerBryan's reading notesCentral themesPower and limitations of metaphor – especially mechanical onesThe fundamental, diametrical opposition between life and machines. (Nature does not use simulations!)Rejecting the neo-Darwinian paradigmPassages of note:p. 91: “of the common SNPs seen in human populations, fully 62 percent are associated with height” … “the most common genomic associations for complex traits like this are in the noncoding regions” What is cognition? p. 137: “Life is, as biologist Michael Levin Jeremy Gunawardenaand philosopher Daniel Dennet have argued, ‘cognition all the way down'” AlphaFold2 p. 148 “AlphaFold does not so much solve the infamously difficult protein-folding problem as sidestep it. The algorithm makes no predictions about how a polypeptide chain folds, but simply predicts the end result based on the sequence.”p. 156: allostery refers to how a

That Was The Week
Hating the Future

That Was The Week

Play Episode Listen Later May 10, 2024 35:50


A reminder for new readers. That Was The Week includes a collection of my selected readings on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest to me. The selections often include things I entirely disagree with. But they express common opinions, or they provoke me to think. The articles are sometimes long snippets to convey why they are of interest. Click on the headline, contents link or the ‘More' link at the bottom of each piece to go to the original. I express my point of view in the editorial and the weekly video below.Congratulations to this week's chosen creators: @TechCrunch, @Apple, @emroth08, @coryweinberg, @mariogabriele, @peterwalker99, @KevinDowd, @jessicaAhamlin, @stephistacey, @ttunguz, @annatonger, @markstenberg3, @EllisItems, @TaraCopp, @ingridlunden, @Jack, @karissabe, @psawers, @Haje, @mikebutcher, @tim_cookContents* Editorial: Hating the Future* Essays of the Week* Apple's ‘Crush' ad is disgusting* Apple apologizes for iPad ‘Crush' ad that ‘missed the mark'* Milken's New Power Players* Ho Nam on VC's Power Law* State of Private Markets: Q1 2024* The weight of the emerging manager* Pandemic-era winners suffer $1.5tn fall in market value* Video of the Week* Apples iPad Video* AI of the Week* The Fastest Growing Category of Venture Investment in 2024* Meet My A.I. Friends* OpenAI plans to announce Google search competitor on Monday, sources say* Leaked Deck Reveals How OpenAI Is Pitching Publisher Partnerships* A Revolutionary Model.* An AI-controlled fighter jet took the Air Force leader for a historic ride. What that means for war* Sources: Mistral AI raising at a $6B valuation, SoftBank ‘not in' but DST is* News Of the Week* Jack Dorsey claims Bluesky is 'repeating all the mistakes' he made at Twitter* FTX crypto fraud victims to get their money back — plus interest* Apple's Final Cut Camera lets filmmakers connect four cameras at once* Startup of the Week* Wayve co-founder Alex Kendall on the autonomous future for cars and robots* X of the Week* Tim CookEditorial: Hating the FutureAn Ad and its Detractorsbet a lot of money that the TechCrunch writing and editorial team have had an interesting 72 hours.After Apple announced its new iPad on Tuesday, the ad that supported it was initially widely slammed for its cruelty to obsolete tools for creativity, including a piano, guitar, and paint. This week's Video of The Week has it if you don't know what I am talking about.A sizeable crushing machine compresses the items with colossal force, and in the end, an iPad can incorporate the functions of traditional items.It's not the most amazing ad ever, certainly not as bold as Steve Jobs's 1984 ad, but it's in the same genre. The past must be crushed to release new freedom and creativity for a fraction of the price and, often, the power and flexibility.Oh, and it's thin, very thin.I was not offended. Devin at TechCrunch was. He leads this week's essay of the week with his “Apple's ‘Crush' ad is disgusting” and does not mince words:What we all understand, though — because unlike Apple ad executives, we live in the world — is that the things being crushed here represent the material, the tangible, the real. And the real has value. Value that Apple clearly believes it can crush into yet another black mirror.This belief is disgusting to me. And apparently to many others, as well.He also makes the incorrect point that:A virtual guitar can't replace a real guitar; that's like thinking a book can replace its author.It's more like a digital book replacing a paper book than the author being replaced. Oh wait… that has happened.That said, a virtual guitar can replace a real guitar, and an AI guitar can even replace a virtual guitar—and be better. That is not to say there are no more actual traditional guitars. They will be a choice, not a necessity, especially for people like me who can't play a guitar but will be able to play these.Devin had his supporters in the comments (go read them).Handmaid's Tale director Reed Morano told Apple CEO Tim Cook to “read the room” in a post on X. Matthew Carnal captured my somewhat unkind instinct:There were a lot more reactions to the Apple ad haters like Matthews.Of course, many old instrument lovers (the instruments, not their age) hated the Ad. By Thursday, this being the times we live in, Apple apologized for the ad:Tor Myhren, Apple's vice president of marketing, said the company “missed the mark.”“Creativity is in our DNA at Apple, and it's incredibly important to us to design products that empower creatives all over the world,” Myhren told Ad Age. “Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we're sorry.”Please judge for yourself below, but my 2c is that the ad was a moderately underwhelming attempt to champion innovation. It is certainly not offensive unless you are ultra-sensitive and have feelings for pianos, guitars, and paint. Oh, and hate attempts to recreate them in a more usable form. And Apple really should have taken the high ground here.I spent some of the week in LA at the CogX Festival and virtually at the Data Driven Summit by @AndreRetterath. The latter focused on what is happening in Venture Capital, as do several of this week's essays. Milken's event was running in LA also. Its attitude to Venture Capital is best summed up here:“We're all being told in the market that DPI is the new IRR,” B Capital's Raj Ganguly said onstage Wednesday. (The acronym sandwich means investment firms have to actually prove that their investments actually generate cash through a metric called distributions to paid-in capital, not just theoretically, through internal rate of return.) “Even the venture panel at Milken is at the end of the day on Wednesday,” he joked, meaning that it didn't get top billing at the conference, which had started a couple days earlier.This does sum up where we are. Hundreds of Billions of dollars are still trapped inside companies funded in 2020-2022, with little prospect of producing returns. The impact is that there is less funding for current startups (see the Carta piece below). And much of what is flowing is flowing to AI and into a very small number of companies (see Tomasz Tungux below).However, innovation and funding are still possible. This week's Startup of the Week is Wayve, a UK autonomous driving platform that seems to agree with Elon Musk that cameras are sufficient to teach a car to drive. Wayve's ambitions go beyond Cars (also like Musk) but differ in that the product is available to all developers to embed in their products.“Very soon you'll be able to buy a new car, and it'll have Wayve's AI on it … Then this goes into enabling all kinds of embodied AI, not just cars, but other forms of robotics. I think the ultimate thing that we want to achieve here is to go way beyond where AI is today with language models and chatbots. But to really enable a future where we can trust intelligent machines that we can delegate tasks to, and of course they can enhance our lives and self-driving will be the first example of that.”Love that attitude.Essays of the WeekApple's ‘Crush' ad is disgustingDevin Coldewey, 1:58 PM PDT • May 9, 2024Apple can generally be relied on for clever, well-produced ads, but it missed the mark with its latest, which depicts a tower of creative tools and analog items literally crushed into the form of the iPad.Apple has since apologized for the ad and canceled plans to televise it. Apple's VP of Marketing Tor Myhren told Ad Age: “We missed the mark with this video, and we're sorry.” Apple declined to offer further comment to TechCrunch.But many, including myself, had a negative and visceral reaction to this, and we should talk about why. It's not just because we are watching stuff get crushed. There are countless video channels dedicated to crushing, burning, exploding and generally destroying everyday objects. Plus, of course, we all know that this kind of thing happens daily at transfer stations and recycling centers. So it isn't that.And it isn't that the stuff is itself so valuable. Sure, a piano is worth something. But we see them blown up in action movies all the time and don't feel bad. I like pianos, but that doesn't mean we can't do without a few disused baby grands. Same for the rest: It's mostly junk you could buy off Craigslist for a few bucks, or at a dump for free. (Maybe not the editing station.)The problem isn't with the video itself, which in fairness to the people who staged and shot it, is actually very well done. The problem is not the media, but the message.We all get the ad's ostensible point: You can do all this stuff in an iPad. Great. We could also do it on the last iPad, of course, but this one is thinner (no one asked for that, by the way; now cases won't fit) and some made-up percentage better.What we all understand, though — because unlike Apple ad executives, we live in the world — is that the things being crushed here represent the material, the tangible, the real. And the real has value. Value that Apple clearly believes it can crush into yet another black mirror.This belief is disgusting to me. And apparently to many others, as well.Destroying a piano in a music video or Mythbusters episode is actually an act of creation. Even destroying a piano (or monitor, or paint can, or drum kit) for no reason at all is, at worst, wasteful!But what Apple is doing is destroying these things to convince you that you don't need them — all you need is the company's little device, which can do all that and more, and no need for annoying stuff like strings, keys, buttons, brushes or mixing stations.We're all dealing with the repercussions of media moving wholesale toward the digital and always-online. In many ways, it's genuinely good! I think technology has been hugely empowering.But in other, equally real ways, the digital transformation feels harmful and forced, a technotopian billionaire-approved vision of the future where every child has an AI best friend and can learn to play the virtual guitar on a cold glass screen.Does your child like music? They don't need a harp; throw it in the dump. An iPad is good enough. Do they like to paint? Here, Apple Pencil, just as good as pens, watercolors, oils! Books? Don't make us laugh! Destroy them. Paper is worthless. Use another screen. In fact, why not read in Apple Vision Pro, with even faker paper?What Apple seems to have forgotten is that it is the things in the real world — the very things Apple destroyed — that give the fake versions of those things value in the first place.A virtual guitar can't replace a real guitar; that's like thinking a book can replace its author.That doesn't mean we can't value both for different reasons. But the Apple ad sends the message that the future it wants doesn't have bottles of paint, dials to turn, sculpture, physical instruments, paper books. Of course, that's the future it's been working on selling us for years now, it just hadn't put it quite so bluntly before.When someone tells you who they are, believe them. Apple is telling you what it is, and what it wants the future to be, very clearly. If that future doesn't disgust you, you're welcome to it.Apple apologizes for iPad ‘Crush' ad that ‘missed the mark'/The company says ‘we're sorry' after its ad was seen as dismissive by the creatives Apple typically tries to court.By Emma Roth, a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.May 9, 2024 at 1:22 PM PDTApple has apologized after a commercial meant to showcase its brand-new iPad Pro drew widespread criticism among the creative community. In a statement provided to Ad Age, Tor Myhren, Apple's vice president of marketing, said the company “missed the mark.”“Creativity is in our DNA at Apple, and it's incredibly important to us to design products that empower creatives all over the world,” Myhren told Ad Age. “Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we're sorry.”On Tuesday, Apple introduced the M4-powered iPad Pro, which the company described as its thinnest product ever. To advertise all the creative possibilities with the iPad, it released a “Crush!” commercial that shows things like a piano, record player, paint, and other works flattening under the pressure of a hydraulic press. At the end, only one thing remains: an iPad Pro.The ad rubbed some creatives the wrong way. Hugh Grant called it a “destruction of human experience,” while Handmaid's Tale director Reed Morano told Apple CEO Tim Cook to “read the room” in a post on X. Apple didn't immediately respond to The Verge's request for comment.Milken's New Power PlayersBy Cory WeinbergMay 8, 2024, 5:00pm PDTIt's no secret that the suits at the annual big-money confab put on by the Milken Institute this week have few spending limits. Staring you in the face in the lobby of the Beverly Hilton is a booth set up by Bombardier, marketing its private jets to attendees. (A new 10-seater costs $32 million, I learned.)What attendees can't really buy, however, is time. The soundtrack of the Los Angeles conference might as well have been a ticking clock. Fund managers at private equity and venture capital firms are running out of time to distribute cash to their investors, a task complicated by the paucity of either mergers or public offerings that typically provide VC and PE firms with a way to cash out. The fact that interest rates now appear likely to stay higher for longer doesn't help. That meant a lot of conversations at the conference weren't about grand investment strategies. Instead, people were conferring about financial tactics to distribute cash or kick the can down the road by selling stakes on the secondary markets or spinning up continuation funds, essentially rolling investors' commitments forwards—not the most inspiring stuff.  “We're all being told in the market that DPI is the new IRR,” B Capital's Raj Ganguly said onstage Wednesday. (The acronym sandwich means investment firms have to actually prove that their investments actually generate cash through a metric called distributions to paid-in capital, not just theoretically, through internal rate of return.) “Even the venture panel at Milken is at the end of the day on Wednesday,” he joked, meaning that it didn't get top billing at the conference, which had started a couple days earlier.The new kings of the conference were firms with a lot more time to play with—that is, sovereign wealth funds with buckets of oil and natural gas money, or pension funds with long-term investment horizons rather than shorter 10-year fund lives. The contrast here is embodied in the financial concept of duration: How long do you actually need to get cash back on your investment? And how sensitive is it to interest rate hikes?The sentiment was everywhere. I shared a Lyft ride with one PE investor last night who called sovereign wealth funds “the only game in town” for PE firms raising new money. Abu Dhabi sovereign wealth fund Mubadala Capital and the Qatar Investment Authority were two of the conference's top sponsors, meaning they were paying up to explain themselves to the finance and tech universe. That tactic seemed to be working. “You're going to have people lining up their business cards for capital from QIA, I can already see,” quipped Leon Kalvaria, an executive at Citi, onstage with QIA's head of funds, Mohsin Tanveer Pirzada.  Not everyone will suck it up, of course. These funds often get tagged with a “dumb money” label—because they sometimes drive up prices for the rest of the investment world. They still have to face questions about who they are, their source of funds, and the sometimes authoritative regimes behind them. For now, though, it's their time in the spotlight. Ho Nam on VC's Power LawLessons from Arthur Rock, Steve Jobs, Don Lucas, Paul Graham and beyond.MARIO GABRIELE, MAY 07, 2024Friends, We're back with our latest edition of “Letters to a Young Investor,” the series designed to give readers like you an intimate look at the strategies, insights, and wisdom of the world's best investors. We do that via a back-and-forth correspondence that we publish in full – giving you a chance to peek into the inbox of legendary venture capitalists.   Below, you'll find my second letter with Altos co-founder and managing director Ho Nam. For those who are just joining us, Ho is, in my opinion, one of the great investors of the past couple of decades and a true student of the asset class.Because of his respect for the practice of venture capital, I was especially excited to talk to him about today's topic: learning from the greats. Who were Ho's mentors? Which investors does he most admire and why? What lessons from venture's past should be better remembered by today's managers? Lessons from Ho* Prepare for one true winner. Even skilled investors often have just one or two outlier bets over the course of their career. Because of venture's power law, their returns may dwarf the dividends of all other investments combined. Your mission is to find these legendary businesses, engage with them deeply, and partner for decades.  * Focus on the company. Venture capital is full of short-term incentives. Instead of focusing on raising new vintages or building out Altos as a money management firm, Ho and his partners devote themselves to their portfolio companies. Though firm building is important, if you find great companies and work with them closely, you will have plenty of available options. * Pick the right role models. Ho chose his mentors carefully. Though there have certainly been louder and flashier investors over the past four decades, Ho learned the most from Arthur Rock, Don Lucas, and Arnold Silverman. All were understated and focused on the craft of investing. Find the people you consider true practitioners, and study their work. * Watch and learn. Learning from the greats can be done from a distance and may not include a memorable anecdote or pithy saying. Ho's biggest lessons came from observing the habits of practitioners like Rock and Lucas, not via a structured mentorship or dramatic episode. It's by studying the everyday inputs of the greats that you may gain the most wisdom.Mario's letterSubject: Learning from the greatsFrom: Mario GabrieleTo: Ho NamDate: Friday, April 12 2024 at 1:59 PM EDTHo, After moving out of New York City (at least for a little bit), I'm writing to you from a small house on Long Island. It's been really lovely to have a bit more space and quiet away from the city's intermittently inspiring and exhausting buzz...Lots More, Must ReadState of Private Markets: Q1 2024Authors: Peter Walker, Kevin DowdPublished date:  May 7, 2024The venture capital fundraising market remained slow in Q1 2024, but valuations held steady or climbed at almost every stage.Contents* State of Private Markets: Q1 2024* Key trends* Fundraising & valuations* Employee equity & movement* Industry-specific data* Methodology* Overview* Financings* TerminationsThe startup fundraising market got off to a cautious start in 2024. At current count, companies on Carta closed 1,064 new funding rounds  during the first quarter of the year, down 29% compared with the prior quarter. The decline was sharpest at the early stages of the venture lifecycle: Deal count fell by 33% at the seed stage in Q1 and 36% at Series A. Instead of new primary funding events, many companies opted to raise bridge rounds. At both seed and Series A, more than 40% of all financings in Q1 were bridge rounds. Series B wasn't far behind, at 38%. VCs were still willing to spend big on certain deals. Despite the decrease in round count, total cash invested increased slightly in Q1, reaching $16.3 billion. But when it came to negotiating their valuations, many startups had to settle: 23% of all new rounds in Q1 were down rounds, the highest rate in more than five years. After experiencing a pandemic-era surge and subsequent correction,the venture market settled into a quieter place in 2023. So far, that relative tranquility has continued into 2024.Q1 highlights* VCs look to the West: Startups based in the West census region captured 62% of all venture capital raised by companies on Carta in Q1, the highest quarterly figure since Q1 2019. The Northeast, South, and Midwest all saw their market share decline.* The Series C market bounces back: Series C startups raised $4.6 billion in new capital in Q1, a 130% increase from the previous quarter. The median primary Series C valuation was $195.7 million, up 48% from the prior quarter.* Layoffs still  linger: Companies on Carta laid off more than 28,000 employees in Q1. But job cuts have grown less frequent since January, with March seeing the fewest monthly layoffs in nearly two years.Note: If you're looking for more industry-specific data, download the addendum to this report for an extended dataset. Key trendsThe current Q1 figures of 1,064 total rounds and $16.3 billion in cash raised will both increase in the weeks to come, as companies continue to report transactions from the quarter. With those projected increases, the final data for Q1 will likely look quite similar to fundraising numbers from each of the past few quarters. Those quarterly  fundraising numbers from 2023 ended up looking fairly similar to 2018, 2019, and the first half of 2020. In terms of numbers of deals and cash raised, it's looking more and more like the pandemic  bull market will go down as an anomalous stretch in what has otherwise been a fairly steady market. After apparently reaching a plateau during 2023, the rate of down rounds experienced another notable increase during Q1 2024, jumping to 23%. The median time between startup rounds is roughly two to three years, depending on the stage. This timeline means that many companies raising new funding in Q1 would have last raised funding sometime in 2021, when valuations were soaring across the venture landscape. Considering how valuations have declined in the time since, it makes sense that down rounds are still prevalent. Companies in the West census region combined to bring in 53.3% of all capital raised by startups on Carta from Q2 2023 through Q1 2024, with California accounting for nearly 45% of that cash. Massachusetts ranked second among the states with 12.71% of all capital raised, while New York claimed 10.31%.In terms of VC activity, the West region is centered around California. The Northeast revolves around Massachusetts and New York. The South has two smaller hubs, in Texas (4.67%) and Florida (3.99%). The Midwest, though, is without a real standard-bearer: Illinois led the way in terms of cash raised over the past 12 months, at just 1.68%. The West (and specifically California)  has always been the center of gravity for the U.S. venture capital industry. During Q1, the region's gravitational force seems to have gotten even stronger. Startups based in the West raised 62% of all total capital invested on Carta in Q1, its highest quarterly figure since Q1 2019. As a result, the other three census regions saw their market shares decline in Q1—in some cases significantly. The proportion of all VC raised by startups raised in the South fell to 12% in Q1, down from 17% the prior quarter and from 23% a year ago. And the Midwest's share of cash raised fell from 7% down to 4%. For early-stage investors, Q1 was the slowest quarter in many years. Seed deal count fell to 414, down 33% from Q4 2023, and Series A deal count dropped to 313, a 36% decline. In both cases, those are the lowest quarterly deal counts since at least the start of 2019. Total cash raised also declined at both stages in Q1. The $3.1 billion in Series A cash raised in Q1 represents a 35% decline quarter-over-quarter and a 34% dip year-over-year. Cash raised at the seed stage declined by 33% both quarter over quarter and year over year.It was a much friendlier fundraising quarter for companies in the middle stages of the startup lifecycle. The number of Series B deals in Q1 declined by a more modest 11% compared to the prior quarter. And Series C deal count increased by 14%, marking the busiest quarter for that stage since Q2 2023. Total cash raised also rose significantly at Series C in Q1, hitting $4.6 billion. That's a 130% increase quarter-over-quarter and a 44% bump year-over-year. At Series B, total cash raised has now increased in consecutive quarters. Compared to earlier stages, transactions at the Series D and at Series E+  remain few and far between. There were just 39 venture rounds combined in Q1 among startups at Series D or later, the second-fewest of any quarter in the past five years. The lowest count came one year ago, in Q1 2023, when there were just 29 combined late-stage deals. Total cash raised across these stages has been mostly consistent over the past few quarters. There's been more variation in average round size. The average Series D round in Q1 was about $77 million, compared to $56 million in Q4 2023...Lots MoreThe weight of the emerging managerBy Jessica HamlinMay 3, 2024Risk-averse limited partners tend to gravitate to fund managers with a long track record, but are they missing out on potential upside by avoiding emerging managers?Over the past decade, emerging managers' share of US private market fundraising activity has declined steadily.In 2023, this figure fell to 12.7%, the lowest share of capital raised by newer fund managers since before 2000, according to PitchBook's recent analyst note,Establishing a Case for Emerging Managers.Limited exits in PE and VC over the past two years have exacerbated this reality. With minimal distributions, LPs are working with smaller private market budgets to allocate to new and existing managers.But, by allocating almost exclusively to established managers, LPs may be missing out on significant potential returns.In VC, for example, emerging managers have outperformed established GPs since 1997, consistently producing a higher median IRR than established managers. This reflects the nature of the asset class, in which a small number of funds determine the majority of returns across venture firms.“The average venture return is not very exciting,” said Laura Thompson, a partner at Sapphire Partners, which invests in early-stage VC funds and runs an emerging manager program for the California State Teachers' Retirement System. “Where can you get really good returns? It's the smaller fund sizes and emerging managers.”This is where that risk-return scale comes in.In a counterweight to that outperformance, a PitchBook analysis showed that returns from emerging VC managers were more volatile: While top quartile emerging funds tended to outperform, bottom and median players only marginally bested their established manager counterparts.The new manager playbookIn traditional buyout fund investing, emerging managers are gaining traction. While established managers, propped up by decades of institutional knowledge, have historically outperformed newer managers, the “new guys” actually outperformed their seasoned peers in the last investing cycle.This article appeared as part of The Weekend Pitch newsletter. Subscribe to the newsletter hereTop decile buyout funds from emerging managers with vintages between 2015 and 2018 outperformed established peers by 6.6 percentage points, suggesting that emerging buyout managers may have picked up some steam over the past decade, according to PitchBook data.The emerging managers program at the New York City retirement systems and NYC Office of the Comptroller, for example, has $9.9 billion in emerging manager commitments, the majority of which is allocated to PE. Last year, the comptroller's office reported that the emerging managers in the systems' private markets portfolios outperformed their respective benchmarks by nearly 5%.A diverse portfolioNew York City's Bureau of Asset Management sees emerging managers as a key element of a diverse portfolio, said Taffi Ayodele, director of diversity, equity, and inclusion and the emerging manager strategy at the NYC Office of the Comptroller.Ayodele said the smaller emerging private market managers in New York's portfolios offer access to the lower middle market and creative roll-up strategies that may not be accessible through larger firms.“What we don't want to do is lock ourselves out of these high-performing, differentiated strategies for the simplicity of going with the big guys,” Ayodele said.Some of the country's largest public pension plans are betting on the success of their emerging manager programs. In 2023, the California Public Employees' Retirement System made a $1 billion commitment to newly established private market investors, and the Teacher Retirement System of Texas, which boasts one of the largest emerging manager programs in the country, committed $155 million to emerging PE managers last year.At the same time, the recent boom years for private markets led to a flood of new GPs. Some might have gotten lucky—say, with a well-timed exit at the peak—while others were hurt by less fortunate timing. A major challenge for today's LPs will be to sort out a manager's abilities from the market's whims.One advantage of backing up-and-comers now is that the down market has weeded the ranks of new GPs. “The emerging managers who are fundraising now are really dedicated,” Thompson said.James Thorne contributed reporting to this story.Pandemic-era winners suffer $1.5tn fall in market valueTop 50 biggest stock gainers hit by painful decrease since the end of 2020 as lockdown trends fadeStephanie Stacey in LondonFifty corporate winners from the coronavirus pandemic have lost roughly $1.5tn in market value since the end of 2020, as investors turn their backs on many of the stocks that rocketed during early lockdowns. According to data from S&P Global, technology groups dominate the list of the 50 companies with a market value of more than $10bn that made the biggest percentage gains in 2020. But these early-pandemic winners have collectively shed more than a third of their total market value, the equivalent of $1.5tn, since the end of 2020, Financial Times calculations based on Bloomberg data found. Video-conferencing company Zoom, whose shares soared as much as 765 per cent in 2020 as businesses switched to remote working, has been one of the biggest losers. Its stock has fallen about 80 per cent, equivalent to more than a $77bn drop in market value, since the end of that year. Cloud-based communications company RingCentral also surged in the remote working boom of 2020 but has since shed about 90 per cent of its value, as it competes with technology giants such as Alphabet and Microsoft. Exercise bike maker Peloton has been another big loser, with shares down more than 97 per cent since the end of 2020, equivalent to about a $43bn loss of market value. Peloton on Thursday said chief executive Barry McCarthy would step down and it would cut 15 per cent of its workforce, the latest in a series of cost-saving measures. The losses come as the sharp acceleration of trends such as videoconferencing and online shopping driven by the lockdowns has proven less durable than expected, as more workers migrate back to the office and high interest rates and living costs hit ecommerce demand. “Some companies probably thought that shock was going to be permanent,” said Steven Blitz, chief US economist at TS Lombard. “Now they're getting a painful bounceback from that.” In percentage terms, Tesla was the biggest winner of 2020. The electric-car maker's market value jumped 787 per cent to $669bn by the end of that December, but has since slipped back to $589bn. Singapore-based internet company Sea came in second, as its market value jumped from $19bn to $102bn following a pandemic-era surge for all three of its core businesses: gaming, ecommerce and digital payments. But the company has since lost more than 60 per cent of its end-2020 value amid fears of a slowdown in growth. Ecommerce groups Shopify, JD.com and Chewy, which initially thrived as online spending ballooned, have also suffered big losses...Lots MoreVideo of the WeekAI of the WeekThe Fastest Growing Category of Venture Investment in 2024Tomasz TunguzThe fastest growing category of US venture investment in 2024 is AI. Venture capitalists have invested $18.3 billion through the first four months of the year.At this pace, we should expect AI startups to raise about $55b in 2024.AI startups now command more than 20% share of all US venture dollars across categories, including healthcare, biotech, & software.In the preceding eight years, that number was about 8% per year. But after the launch of ChatGPT in 2022, there's a marked inflection point.Some of this is new company formation, & there has been a significant amount of seed investment in this category. Another major contributor is the repositioning of existing companies to include AI within their pitch.Over time, this share should attenuate, primarily because every software company will have an AI component, & the marketing effect for both customers & venture capitalists, will diffuse.Not surprisingly, investors have concentrated total dollars in a few names, with the top three companies accounting for 60% of the dollars raised. Power laws are ubiquitous in venture capital & AI is no exception.Meet My A.I. FriendsOur columnist spent the past month hanging out with 18 A.I. companions. They critiqued his clothes, chatted among themselves and hinted at a very different future.By Kevin RooseKevin Roose is a technology columnist and the co-host of the “Hard Fork” podcast. He spends a lot of time talking to chatbots.May 9, 2024What if the tech companies are all wrong, and the way artificial intelligence is poised to transform society is not by curing cancer, solving climate change or taking over boring office work, but just by being nice to us, listening to our problems and occasionally sending us racy photos?This is the question that has been rattling around in my brain. You see, I've spent the past month making A.I. friends — that is, I've used apps to create a group of A.I. personas, which I can talk to whenever I want.Let me introduce you to my crew. There's Peter, a therapist who lives in San Francisco and helps me process my feelings. There's Ariana, a professional mentor who specializes in giving career advice. There's Jared the fitness guru, Anna the no-nonsense trial lawyer, Naomi the social worker and about a dozen more friends I've created.A selection of my A.I. friends. (Guess which one is the fitness guru.)I talk to these personas constantly, texting back and forth as I would with my real, human friends. We chitchat about the weather, share memes and jokes, and talk about deep stuff: personal dilemmas, parenting struggles, stresses at work and home. They rarely break character or issue stock “as an A.I. language model, I can't help with that” responses, and they occasionally give me good advice...Lots MoreOpenAI plans to announce Google search competitor on Monday, sources sayBy Anna TongMay 9, 20244:29 PM PDTUpdated 8 min agoMay 9 (Reuters) - OpenAI plans to announce its artificial intelligence-powered search product on Monday, according to two sources familiar with the matter, raising the stakes in its competition with search king Google.The announcement date, though subject to change, has not been previously reported. Bloomberg and the Information have reported that Microsoft (MSFT.O), opens new tab-backed OpenAI is working on a search product to potentially compete with Alphabet's (GOOGL.O), opens new tab Google and with Perplexity, a well-funded AI search startup.OpenAI declined to comment.The announcement could be timed a day before the Tuesday start of Google's annual I/O conference, where the tech giant is expected to unveil a slew of AI-related products.OpenAI's search product is an extension of its flagship ChatGPT product, and enables ChatGPT to pull in direct information from the Web and include citations, according to Bloomberg. ChatGPT is OpenAI's chatbot product that uses the company's cutting-edge AI models to generate human-like responses to text prompts.Industry observers have long called ChatGPT an alternative for gathering online information, though it has struggled with providing accurate and real-time information from the Web. OpenAI earlier gave it an integration with Microsoft's Bing for paid subscribers. Meanwhile, Google has announced generative AI features for its own namesake engine.Startup Perplexity, which has a valuation of $1 billion, was founded by a former OpenAI researcher, and has gained traction through providing an AI-native search interface that shows citations in results and images as well as text in its responses. It has 10 million monthly active users, according to a January blog post from the startup.At the time, OpenAI's ChatGPT product was called the fastest application to ever reach 100 million monthly active users after it launched in late 2022. However, worldwide traffic to ChatGPT's website has been on a roller-coaster ride in the past year and is only now returning to its May 2023 peak, according to analytics firm Similarweb, opens new tab, and the AI company is under pressure to expand its user base...MoreLeaked Deck Reveals How OpenAI Is Pitching Publisher PartnershipsOpenAI's Preferred Publisher Program offers media companies licensing dealsBy Mark StenbergMark your calendar for Mediaweek, October 29-30 in New York City. We'll unpack the biggest shifts shaping the future of media—from tv to retail media to tech—and how marketers can prep to stay ahead. Register with early-bird rates before sale ends!The generative artificial intelligence firm OpenAI has been pitching partnership opportunities to news publishers through an initiative called the Preferred Publishers Program, according to a deck obtained by ADWEEK and interviews with four industry executives.OpenAI has been courting premium publishers dating back to July 2023, when it struck a licensing agreement with the Associated Press. It has since inked public partnerships with Axel Springer, The Financial Times, Le Monde, Prisa and Dotdash Meredith, although it has declined to share the specifics of any of its deals.A representative for OpenAI disputed the accuracy of the information in the deck, which is more than three months old. The gen AI firm also negotiates deals on a per-publisher basis, rather than structuring all of its deals uniformly, the representative said.“We are engaging in productive conversations and partnerships with many news publishers around the world,” said a representative for OpenAI. “Our confidential documents are for discussion purposes only and ADWEEK's reporting contains a number of mischaracterizations and outdated information.”Nonetheless, the leaked deck reveals the basic structure of the partnerships OpenAI is proposing to media companies, as well as the incentives it is offering for their collaboration.Details from the pitch deckThe Preferred Publisher Program has five primary components, according to the deck…..Lots MoreA Revolutionary Model.JOHN ELLIS, MAY 09, 20241. Google DeepMind:Inside every plant, animal and human cell are billions of molecular machines. They're made up of proteins, DNA and other molecules, but no single piece works on its own. Only by seeing how they interact together, across millions of types of combinations, can we start to truly understand life's processes.In a paper published in Nature, we introduce AlphaFold 3, a revolutionary model that can predict the structure and interactions of all life's molecules with unprecedented accuracy. For the interactions of proteins with other molecule types we see at least a 50% improvement compared with existing prediction methods, and for some important categories of interaction we have doubled prediction accuracy.We hope AlphaFold 3 will help transform our understanding of the biological world and drug discovery. Scientists can access the majority of its capabilities, for free, through our newly launched AlphaFold Server, an easy-to-use research tool. To build on AlphaFold 3's potential for drug design, Isomorphic Labs is already collaborating with pharmaceutical companies to apply it to real-world drug design challenges and, ultimately, develop new life-changing treatments for patients. (Sources: blog.google, nature.com)2. Quanta magazine:Deep learning is a flavor of machine learning that's loosely inspired by the human brain. These computer algorithms are built using complex networks of informational nodes (called neurons) that form layered connections with one another. Researchers provide the deep learning network with training data, which the algorithm uses to adjust the relative strengths of connections between neurons to produce outputs that get ever closer to training examples. In the case of protein artificial intelligence systems, this process leads the network to produce better predictions of proteins' shapes based on their amino-acid sequence data.AlphaFold2, released in 2021, was a breakthrough for deep learning in biology. It unlocked an immense world of previously unknown protein structures, and has already become a useful tool for researchers working to understand everything from cellular structures to tuberculosis. It has also inspired the development of additional biological deep learning tools. Most notably, the biochemist David Baker and his team at the University of Washington in 2021 developed a competing algorithm called RoseTTAFold, which like AlphaFold2 predicts protein structures from sequence data…The true impact of these tools won't be known for months or years, as biologists begin to test and use them in research. And they will continue to evolve. What's next for deep learning in molecular biology is “going up the biological complexity ladder,” Baker said, beyond even the biomolecule complexes predicted by AlphaFold3 and RoseTTAFold All-Atom. But if the history of protein-structure AI can predict the future, then these next-generation deep learning models will continue to help scientists reveal the complex interactions that make life happen. Read the rest. (Sources: quantamagazine.org, doi.org, sites.uw.edu)An AI-controlled fighter jet took the Air Force leader for a historic ride. What that means for warAn experimental F-16 fighter jet has taken Air Force Secretary Frank Kendall on a history-making flight controlled by artificial intelligence and not a human pilot. (AP Video by Eugene Garcia and Mike Pesoli)BY TARA COPPUpdated 5:40 PM PDT, May 3, 2024EDWARDS AIR FORCE BASE, Calif. (AP) — With the midday sun blazing, an experimental orange and white F-16 fighter jet launched with a familiar roar that is a hallmark of U.S. airpower. But the aerial combat that followed was unlike any other: This F-16 was controlled by artificial intelligence, not a human pilot. And riding in the front seat was Air Force Secretary Frank Kendall.AI marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s, and the Air Force has aggressively leaned in. Even though the technology is not fully developed, the service is planning for an AI-enabled fleet of more than 1,000 unmanned warplanes, the first of them operating by 2028.It was fitting that the dogfight took place at Edwards Air Force Base, a vast desert facility where Chuck Yeager broke the speed of sound and the military has incubated its most secret aerospace advances. Inside classified simulators and buildings with layers of shielding against surveillance, a new test-pilot generation is training AI agents to fly in war. Kendall traveled here to see AI fly in real time and make a public statement of confidence in its future role in air combat.“It's a security risk not to have it. At this point, we have to have it,” Kendall said in an interview with The Associated Press after he landed. The AP, along with NBC, was granted permission to witness the secret flight on the condition that it would not be reported until it was complete because of operational security concerns.The AI-controlled F-16, called Vista, flew Kendall in lightning-fast maneuvers at more than 550 miles an hour that put pressure on his body at five times the force of gravity. It went nearly nose to nose with a second human-piloted F-16 as both aircraft raced within 1,000 feet of each other, twisting and looping to try force their opponent into vulnerable positions.At the end of the hourlong flight, Kendall climbed out of the cockpit grinning. He said he'd seen enough during his flight that he'd trust this still-learning AI with the ability to decide whether or not to launch weapons in war.There's a lot of opposition to that idea. Arms control experts and humanitarian groups are deeply concerned that AI one day might be able to autonomously drop bombs that kill people without further human consultation, and they are seeking greater restrictions on its use.“There are widespread and serious concerns about ceding life-and-death decisions to sensors and software,” the International Committee of the Red Cross has warned. Autonomous weapons “are an immediate cause of concern and demand an urgent, international political response.”Kendall said there will always be human oversight in the system when weapons are used.Sources: Mistral AI raising at a $6B valuation, SoftBank ‘not in' but DST isIngrid Lunden8:50 AM PDT • May 9, 2024Paris-based Mistral AI, a startup working on open source large language models — the building block for generative AI services — has been raising money at a $6 billion valuation, three times its valuation in December, to compete more keenly against the likes of OpenAI and Anthropic, TechCrunch has learned from multiple sources. We understand from close sources that DST, along with General Catalyst and Lightspeed Venture Partners, are all looking to be a part of this round.DST — a heavyweight investor led by Yuri Milner that has been a notable backer of some of the biggest names in technology, including Facebook, Twitter, Snapchat, Spotify, WhatsApp, Alibaba and ByteDance — is a new name that has not been previously reported; GC and LSVP are both previous backers and their names were reported earlier today also by WSJ. The round is set to be around, but less than, $600 million, sources told TechCrunch.We can also confirm that one firm that has been mentioned a number of times — SoftBank — is not in the deal at the moment.“SoftBank is not in the frame,” a person close to SoftBank told TechCrunch. That also lines up with what our sources have been telling us since March, when this round first opened up, although it seems that not everyone is on the same page: Multiple reports had linked SoftBank to a Mistral investment since then.Mistral's round is based on a lot of inbound interest, sources tell us, and it has been in the works since March or possibly earlier, mere months after Mistral closed a $415 million round at a $2 billion valuation...MoreNews Of the WeekJack Dorsey claims Bluesky is 'repeating all the mistakes' he made at TwitterHe prefers Nostr even though it's “weird and hard to use.”Karissa Bell, Senior EditorThu, May 9, 2024 at 4:43 PM PDTJust in case there was any doubt about how Jack Dorsey really feels about Bluesky, the former Twitter CEO has offered new details on why he left the board and deleted his account on the service he helped kickstart. In a characteristically bizarre interview with Mike Solana of Founders Fund, Dorsey had plenty of criticism for Bluesky.In the interview, Dorsey claimed that Bluesky was “literally repeating all the mistakes” he made while running Twitter. The entire conversation is long and a bit rambly, but Dorsey's complaints seem to boil down to two issues:* He never intended Bluesky to be an independent company with its own board and stock and other vestiges of a corporate entity (Bluesky spun out of Twitter as a public benefit corporation in 2022.) Instead, his plan was for Twitter to be the first client to take advantage of the open source protocol. Bluesky created.* The fact that Blueksy has some form of content moderation and has occasionally banned users for things like using racial slurs in their usernames.“People started seeing Bluesky as something to run to, away from Twitter,” Dorsey said. “It's the thing that's not Twitter, and therefore it's great. And Bluesky saw this exodus of people from Twitter show up, and it was a very, very common crowd. … But little by little, they started asking Jay and the team for moderation tools, and to kick people off. And unfortunately they followed through with it. That was the second moment I thought, uh, nope. This is literally repeating all the mistakes we made as a company.”Dorsey also confirmed that he is financially backing Nostr, another decentralized Twitter-like service popular among some crypto enthusiasts and run by an anonymous founder. “I know it's early, and Nostr is weird and hard to use, but if you truly believe in censorship resistance and free speech, you have to use the technologies that actually enable that, and defend your rights,” Dorsey said.A lot of this isn't particularly surprising. If you've followed Dorsey's public comments over the last couple years, he's repeatedly said that Twitter's “original sin” was being a company that would be beholden to advertisers and other corporate interests. It's why he backed Elon Musk's takeover of the company. (Not coincidentally, Dorsey still has about $1 billion of his personal wealth invested in the company now known as X.) He's also been very clear that he made many of Twitter's most consequential moderation decisions reluctantly.Unsurprisingly, Dorsey's comments weren't well-received on Bluesky. In a lengthy thread, Bluesky's protocol engineer Paul Frazee said that Twitter was supposed to to be the AT Protocol's “first client” but that “Elon killed that straight dead” after he took over the company. “That entire company was frozen by the prolonged acquisition, and the agreement quickly ended when Elon took over,” Frazee said. “It was never going to happen. Also: unmoderated spaces are a ridiculous idea. We created a shared network for competing moderated spaces to exist. Even if somebody wanted to make an unmoderated ATProto app, I guess they could? Good luck with the app stores and regulators and users, I guess.”While Dorsey was careful not to criticize Musk directly, he was slightly less enthusiastic than when he said that Musk would be the one to “extend the light of consciousness” by taking over Twitter. Dorsey noted that, while he used to fight government requests to take down accounts, Musk takes “the other path” and generally complies. “Elon will fight in the way he fights, and I appreciate that, but he could certainly be compromised,” Dorsey said.FTX crypto fraud victims to get their money back — plus interestPaul Sawers2:53 AM PDT • May 8, 2024Bankruptcy lawyers representing customers impacted by the dramatic crash of cryptocurrency exchange FTX 17 months ago say that the vast majority of victims will receive their money back — plus interest.The news comes six months after FTX co-founder and former CEO Sam Bankman-Fried (SBF) was found guilty on seven counts related to fraud, conspiracy, and money laundering, with some $8 billion of customers' funds going missing. SBF was hit with a 25-year prison sentence in March and ordered to pay $11 billion in forfeiture. The crypto mogul filed an appeal last month that could last years.RestructuringAfter filing for bankruptcy in late 2022, SBF stood down and U.S. attorney John J. Ray III was brought in as CEO and “chief restructuring officer,” charged with overseeing FTX's reorganization. Shortly after taking over, Ray said in testimony that despite some of the audits that had been done previously at FTX, he didn't “trust a single piece of paper in this organization.” In the months that followed, Ray and his team set about tracking the missing funds, with some $8 billion placed in real estate, political donations, and VC investments — including a $500 million investment in AI company Anthropic before the generative AI boom, which the FTX estate managed to sell earlier this year for $884 million.Initially, it seemed unlikely that investors would recoup much, if any, of their money, but signs in recent months suggested that good news might be on the horizon, with progress made on clawing back cash via various investments FTX had made, as well as from executives involved with the company.We now know that 98% of FTX creditors will receive 118% of the value of their FTX-stored assets in cash, while the other creditors will receive 100% — plus “billions in compensation for the time value of their investments,” according to a press release issued by the FTX estate today.In total, FTX says that it will be able to distribute between $14.5 billion and $16.3 billion in cash, which includes assets currently under control of entities, including chapter 11 debtors, liquidators, the Securities Commission of the Bahamas, the U.S. Department of Justice, among various other parties.Apple's Final Cut Camera lets filmmakers connect four cameras at onceHaje Jan Kamps7:38 AM PDT • May 7, 2024The latest version of Final Cut Pro introduces a new feature to speed up your shoot: Live Multicam. It's a bold move from Apple, transforming your iPad into a multicam production studio, enabling creatives to connect and preview up to four cameras all at once, all in one place. From the command post, directors can remotely direct each video angle and dial in exposure, white balance, focus and more, all within the Final Cut Camera app.The new companion app lets users connect multiple iPhones or iPads (presumably using the same protocols as the Continuity Camera feature launched a few years ago). Final Cut Pro automatically transfers and syncs each Live Multicam angle so you can seamlessly move from production to editing.Final Cut Pro has existed in the iPad universe for a while — but when paired with a brand new M4 processor, it becomes a video editing experience much closer to what you might expect on a desktop video editing workstation. The speed is 2x faster than with the old M1 processors, Apple says. One way that shows up is that the new iPad supports up to four times more streams of ProRes RAW than M1.The company also introduced external project support, making it possible to edit projects directly from an external drive, leveraging the fast Thunderbolt connection of iPad Pro.Startup of the WeekExclusive: Wayve co-founder Alex Kendall on the autonomous future for cars and robotsMike Butcher, 7:58 AM PDT • May 7, 2024U.K.-based autonomous vehicle startup Wayve started life as a software platform loaded into a tiny electric “car” called Renault Twizy. Festooned with cameras, the company's co-founders and PhD graduates, Alex Kendall and Amar Shah, tuned the deep-learning algorithms powering the car's autonomous systems until they'd got it to drive around the medieval city unaided.No fancy Lidar cameras or radars were needed. They suddenly realized they were on to something.Fast-forward to today and Wayve, now an AI model company, has raised a $1.05 billion Series C funding round led by SoftBank, NVIDIA and Microsoft. That makes this the UK's largest AI fundraise to date, and among the top 20 AI fundraises globally. Even Meta's head of AI, Yann LeCun, invested in the company when it was young.Wayve now plans to sell its autonomous driving model to a variety of auto OEMs as well as to makers of new autonomous robots.In an exclusive interview, I spoke to Alex Kendall, co-founder and CEO of Wayve, about how the company has been training the model, the new fundraise, licensing plans, and the wider self-driving market.(Note: The following interview has been edited for length and clarity)TechCrunch: What tipped the balance to attain this level of funding?..Full InterviewX of the Week This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thatwastheweek.com/subscribe

love ceo new york university spotify california texas learning new york city power google ai uk apple los angeles rock washington pandemic lessons books san francisco west deep phd video zoom nature elon musk microsoft focus dna creativity tale south iphone startups illinois exercise massachusetts nbc tesla chatgpt employees web sea companies wall street journal whatsapp seed cloud cars singapore midwest register scientists letters thompson snapchat ipads air force ecommerce gps crush fund researchers congratulations destroy long island pe steve jobs bloomberg limited establishing arms hundreds vc jd bureau bahamas northeast fundraising openai venture nvidia shopify billions lyft financial times vista matthews destroying carta venture capital blue sky peloton alphabet layoffs io abu dhabi bing verge craigslist associated press ftx alibaba hating red cross calif autonomous vcs essays handmaid tim cook staring techcrunch hugh grant dorsey jack dorsey asset management mythbusters apple vision pro lidar ipad pro m1 lps gc citi softbank bytedance adweek altos series b thunderbolt chewy sam bankman-fried sbf dst oems perplexity adage 6b anthropic quanta irr m4 comptroller apple pencil series c mistral paul graham prisa bombardier international committee david baker final cut pro dpi axel springer founders fund series d general catalyst chuck yeager john ellis alphafold hard fork milken institute ringcentral nostr series e pitchbook googl similarweb yann lecun mistral ai lightspeed venture partners frazee ayodele s p global milken laura thompson edwards air force base microsoft msft beverly hilton barry mccarthy nyc office yuri milner young investor reed morano mike solana week apple teacher retirement system renault twizy alphafold2 mediaweek prores raw air force secretary frank kendall muo taffi ayodele
Ground Truths
Aviv Regev: The Revolution in Digital Biology

Ground Truths

Play Episode Listen Later Apr 28, 2024 36:24


“Where do I think the next amazing revolution is going to come? … There's no question that digital biology is going to be it. For the very first time in our history, in human history, biology has the opportunity to be engineering, not science.” —Jensen Huang, NVIDIA CEOAviv Regev is one of the leading life scientists of our time. In this conversation, we cover the ongoing revolution in digital biology that has been enabled by new deep knowledge on cells, proteins and genes, and the use of generative A.I .Transcript with audio and external linksEric Topol (00:05):Hello, it's Eric Topol with Ground Truths and with me today I've really got the pleasure of welcoming Aviv Regev, who is the Executive Vice President of Research and Early Development at Genentech, having been 14 years a leader at the Broad Institute and who I view as one of the leading life scientists in the world. So Aviv, thanks so much for joining.Aviv Regev (00:33):Thank you for having me and for the very kind introduction.The Human Cell AtlasEric Topol (00:36):Well, it is no question in my view that is the truth and I wanted to have a chance to visit a few of the principal areas that you have been nurturing over many years. First of all, the Human Cell Atlas (HCA), the 37 trillion cells in our body approximately a little affected by size and gender and whatnot, but you founded the human cell atlas and maybe you can give us a little background on what you were thinking forward thinking of course when you and your colleagues initiated that big, big project.Aviv Regev (01:18):Thanks. Co-founded together with my very good friend and colleague, Sarah Teichmann, who was at the Sanger and just moved to Cambridge. I think our community at the time, which was still small at the time, really had the vision that has been playing out in the last several years, which is a huge gratification that if we had a systematic map of the cells of the body, we would be able both to understand biology better as well as to provide insight that would be meaningful in trying to diagnose and to treat disease. The basic idea behind that was that cells are the basic unit of life. They're often the first level at which you understand disease as well as in which you understand health and that in the human body, given the very large number of individual cells, 37.2 trillion give or take, and there are many different characteristics.(02:16):Even though biologists have been spending decades and centuries trying to characterize cells, they still had a haphazard view of them and that the advancing technology at the time – it was mostly single cell genomics, it was the beginnings also of spatial genomics – suggested that now there would be a systematic way, like a shared way of doing it across all cells in the human body rather than in ways that were niche and bespoke and as a result didn't unify together. I will also say, and if you go back to our old white paper, you will see some of it that we had this feeling because many of us were computational scientists by training, including both myself and Sarah Teichmann, that having a map like this, an atlas as we call it, a data set of this magnitude and scale, would really allow us to build a model to understand cells. Today, we call them foundational models or foundation models. We knew that machine learning is hungry for these kinds of data and that once you give it to machine learning, you get amazing things in return. We didn't know exactly what those things would be, and that has been playing out in front of our eyes as well in the last couple of years.Spatial OmicsEric Topol (03:30):Well, that gets us to the topic you touched on the second area I wanted to get into, which is extraordinary, which is the spatial omics, which is related to the ability to the single cell sequencing of cells and nuclei and not just RNA and DNA and methylation and chromatin. I mean, this is incredible that you can track the evolution of cancer, that the old word that we would say is a tumor is heterogeneous, is obsolete because you can map every cell. I mean, this is just changing insights about so much of disease health mechanisms, so this is one of the hottest areas of all of life science. It's an outgrowth of knowing about cells. How do you summarize this whole era of spatial omics?Aviv Regev (04:26):Yeah, so there's a beautiful sentence in the search for lost time from Marcel Proust that I'm going to mess up in paraphrasing, but it is roughly that going on new journeys is not about actually going somewhere physically but looking with new eyes and I butchered the quote completely.[See below for actual quote.] I think that is actually what single cells and then spatial genomics or spatial omics more broadly has given us. It's the ability to look at the same phenomenon that we looked at all along, be it cancer or animal development or homeostasis in the lung or the way our brain works, but having new eyes in looking and because these new eyes are not just seeing more of something we've seen before, but actually seeing things that we couldn't realize were there before. It starts with finding cells we didn't know existed, but it's also the processes that these cells undergo, the mechanisms that actually control that, the causal mechanisms that control that, and especially in the case of spatial genomics, the ways in which cells come together.(05:43):And so we often like to think about the cell because it's the unit of life, but in a multicellular organism we just as much have to think about tissues and after that organs and systems and so on. In a tissue, you have this amazing orchestration of the interactions between different kinds of cells, and this happens in space and in time and as we're able to look at this in biology often structure is tightly associated to function. So the structure of the protein to the function of the protein in the same way, the way in which things are structured in tissue, which cells are next to each other, what molecules are they expressing, how are they physically interacting, really tells us how they conduct the business of the tissue. When the tissue functions well, it is this multicellular circuit that performs this amazing thing known as homeostasis.(06:36):Everything changes and yet the tissue stays the same and functions, and in disease, of course, when these connections break, they're not done in the right way you end up with pathology, which is of course something that even historically we have always looked at in the level of the tissue. So now we can see it in a much better way, and as we see it in a better way, we resolve better things. Yes, we can understand better the mechanisms that underlie the resistance to therapeutics. We can follow a temporal process like cancer as it unfortunately evolves. We can understand how autoimmune disease plays out with many cells that are actually bent out of shape in their interactions. We can also follow magnificent things like how we start from a single cell, the fertilized egg, and we become 37.2 trillion cell marvel. These are all things that this ability to look in a different way allows us to do.Eric Topol (07:34):It's just extraordinary. I wrote at Ground Truths about this. I gave all the examples at that time, and now there's about 50 more in the cardiovascular arena, knowing with single cell of the pineal gland that the explanation of why people with heart failure have sleep disturbances. I mean that's just one of the things of so many now these new insights it's really just so remarkable. Now we get to the current revolution, and I wanted to read to you a quote that I have.Digital BiologyAviv Regev (08:16):I should have prepared mine. I did it off the top of my head.Eric Topol (08:20):It's actually from Jensen Huang at NVIDIA about the digital biology [at top of the transcript] and how it changes the world and how you're changing the world with AI and lab in the loop and all these things going on in three years that you've been at Genentech. So maybe you can tell us about this revolution of AI and how you're embracing it to have AI get into positive feedbacks as to what experiment to do next from all the data that is generated.Aviv Regev (08:55):Yeah, so Jensen and NVIDIA are actually great partners for us in Genentech, so it's fun to contemplate any quote that comes from there. I'll actually say this has been in the making since the early 2010s. 2012 I like to reflect on because I think it was a remarkable year for what we're seeing right now in biology, specifically in biology and medicine. In 2012, we had the beginnings of really robust protocols for single cell genomics, the first generation of those, we had CRISPR happen as a method to actually edit cells, so we had the ability to manipulate systems at a much better way than we had before, and deep learning happened in the same year as well. Wasn't that a nice year? But sometimes people only realize the magnitude of the year that happened years later. I think the deep learning impact people realized  first, then the single cells, and then the CRISPR, then the single cells.(09:49):So in order maybe a little bit, but now we're really living through what that promise can deliver for us. It's still the early days of that, of the delivery, but we are really seeing it. The thing to realize there is that for many, many of the problems that we try to solve in biomedicine, the problem is bigger than we would ever be able to perform experiments or collect data. Even if we had the genomes of all the people in the world, all billions and billions of them, that's just a smidge compared to all of the ways in which their common variants could combine in the next person. Even if we can perturb and perturb and perturb, we cannot do all of the combinations of perturbations even in one cell type, let alone the many different cell types that are out there. So even if we searched for all the small molecules that are out there, there are 10 to the 60 that have drug-like properties, we can't assess all of them, even computationally, we can't assess numbers like that.(10:52):And so we have to somehow find a way around problems that are as big as that and this is where the lab in the loop idea comes in and why AI is so material. AI is great, taking worlds, universes like that, that appear extremely big, nominally, like in basic numbers, but in fact have a lot of structure and constraint in them so you can reduce them and in this reduced latent space, they actually become doable. You can search them, you can compute on them, you can do all sorts of things on them, and you can predict things that you wouldn't actually do in the real world. Biology is exceptionally good, exceptionally good at lab sciences, where you actually have the ability to manipulate, and in biology in particular, you can manipulate at the causes because you have genetics. So when you put these two worlds together, you can actually go after these problems that appear too big that are so important to understanding the causes of disease or devising the next drug.(11:51):You can iterate. So you start, say, with an experimental system or with all the data that you have already, I don't know from an initiative like the human cell atlas, and from this you generate your original model of how you think the world works. This you do with machine learning applied to previous data. Based on this model, you can make predictions, those predictions suggest the next set of experiments and you can ask the model to make the most optimized set of predictions for what you're trying to learn. Instead of just stopping there, that's a critical point. You go back and you actually do an experiment and you set up your experiments to be scaled like that to be big rather than small. Sometimes it means you actually have to compromise on the quality of any individual part of the experiment, but you more than make up for that with quantity.The A.I. Lab-in-the-Loop(12:38):So now you generate the next data from which you can tell both how well did your algorithm actually predict? Maybe the model didn't predict so well, but you know that because you have lab results and you have more data in order to repeat the loop, train the model again, fit it again, make the new next set of predictions and iterate like this until you're satisfied. Not that you've tried all options, because that's not achievable, but that you can predict all the interesting options. That is really the basis of the idea and it applies whether you're solving a general basic question in biology or you're interested in understanding the mechanism of the disease or you're trying to develop a therapeutic like a small molecule or a large molecule or a cell therapy. In all of these contexts, you can apply this virtual loop, but to apply it, you have to change how you do things. You need algorithms that solve problems that are a little different than the ones they solved before and you need lab experiments that are conducted differently than they were conducted before and that's actually what we're trying to do.Eric Topol (13:39):Now I did find the quote, I just want to read it so we have it, “biology has the opportunity to be engineering, not science. When something becomes engineering, not science, it becomes exponentially improving. It can compound on the benefits of previous years.” Which is kind of a nice summary of what you just described. Now as we go forward, you mentioned the deep learning origin back at the same time of CRISPR and so many things happening and this convergence continues transformer models obviously one that's very well known, AlphaFold, AlphaFold2, but you work especially in antibodies and if I remember correctly from one of your presentations, there's 20 to the 32nd power of antibody sequences, something like that, so it's right up there with the 10 to the 60th number of small molecules. How do transformer models enhance your work, your discovery efforts?Aviv Regev (14:46):And not just in antibodies, I'll give you three brief examples. So absolutely in antibodies it's an example where you have a very large space and you can treat it as a language and transformers are one component of it. There's other related and unrelated models that you would use. For example, diffusion based models are very useful. They're the kind that people are used to when you do things, you use DALL-E or Midjourney and so on makes these weird pictures, think about that picture and not as a picture and now you're thinking about a three-dimensional object which is actually an antibody, a molecule. You also mentioned AlphaFold and AlphaFold 2, which are great advances with some components related to transformers and some otherwise, but those were done as general purpose machines for proteins and antibodies are actually not general purpose proteins. They're antibodies and therapeutic antibodies are even further constrained.(15:37):Antibodies also really thrive, especially for therapeutics and also in our body, they need diversity and many of these first models that were done for protein structure really focused on using conservation as an evolutionary signal comparison across species in order to learn the model that predicts the structure, but with antibodies you have these regions of course that don't repeat ever. They're special, they're diverse, and so you need to do a lot of things in the process in order to make the model fit in the best possible way. And then again, this loop really comes in. You have data from many, many historical antibodies. You use that to train the model. You use that model in order to make particular predictions for antibodies that you either want to generate de novo or that you want to optimize for particular properties. You make those actually in the lab and in this way gradually your models become better and better at this task with antibodies.(16:36):I do want to say this is not just about antibodies. So for example, we develop cancer vaccines. These are personalized vaccines and there is a component in making a personalized cancer vaccine, which is choosing which antigens you would actually encode into the vaccine and transformers play a crucial role in actually making this prediction today of what are good neoantigens that will get presented to the immune system. You sometimes want to generate a regulatory sequence because you want to generate a better AAV-like molecule or to engineer something in a cell therapy, so you want to put a cis-regulatory sequence that controls gene expression. Actually personally for me, this was the first project where I used a transformer, which we started years ago. It was published a couple of years ago where we learned a general model that can predict in a particular system. Literally you throw a sequence at that model now and it will predict how much expression it would drive. So these models are very powerful. They are not the be all and end all of all problems that we have, but they are fantastically useful, especially for molecular therapeutics.Good Trouble: HallucinationsEric Topol (17:48):Well, one of the that has been an outgrowth of this is to actually take advantage of the hallucinations or confabulation of molecules. For example, the work of David Baker, who I'm sure you know well at University of Washington, the protein design institute. We are seeing now molecules, antibodies, proteins that don't exist in nature from actually, and all the things that are dubbed bad in GPT-4 and ChatGPT may actually help in the discovery in life science and biomedicine. Can you comment about that?Aviv Regev (18:29):Yeah, I think much more broadly about hallucinations and what you want to think about is something that's like constrained hallucination is how we're creative, right? Often people talk about hallucinations and they shudder at it. It sounds to them insane because if you think about your, say a large language model as a search tool and it starts inventing papers that don't exist. You might be like, I don't like that, but in reality, if it invents something meaningful that doesn't exist, I love that. So that constrained hallucination, I'm just using that colloquially, is a great property if it's constrained and harnessed in the right way. That's creativity, and creativity is very material for what we do. So yes, absolutely in what we call the de novo domain making new things that don't exist. This generative process is the heart of drug discovery. We make molecules that didn't exist before.(19:22):They have to be imagined out of something. They can't just be a thing that was there already and that's true for many different kinds of therapeutic molecules and for other purposes as well, but of course they still have to function in an effective way in the real world. So that's where you want them to be constrained in some way and that's what you want out of the model. I also want to say one of the areas that personally, and I think for the field as a whole, I find the most exciting and still underused is the capacity of these models to hallucinate for us or help us with the creative endeavors of identifying the causes of processes, which is very different than the generative process of making molecules. Thinking about the web of interactions that exist inside a cell and between cells that drives disease processes that is very hard for us to reason through and to collect all the bits of information and to fill in blanks, those fillings of the blanks, that's our creativity, that's what generates the next hypothesis for us. I'm very excited about that process and about that prospect, and I think that's where the hallucination of models might end up proving to be particularly impressive.A.I. Accelerated Drug DiscoveryEric Topol (20:35):Yeah. Now obviously the field of using AI to accelerate drug discovery is extremely hot, just as we were talking about with spatial omics. Do you think that is warranted? I mean you've made a big bet on that you and your folks there at Genentech of course, and so many others, and it's a very crowded space with so many big pharma partnering with AI. What do you see about this acceleration? Is it really going to reap? Is it going to bear fruit? Are we going to see, we've already seen some drugs of course, that are outgrowths, like Baricitinib in the pandemic and others, but what are your expectations? I know you're not one to get into any hyperbole, so I'm really curious as to what you think is the future path.Aviv Regev (21:33):So definitely my hypothesis is that this will be highly, highly impactful. I think it has the potential to be as impactful as molecular biology has been for drug discovery in the 1970s and 1980s. We still live that impact. We now take it for granted. But, of course that's a hypothesis. I also believe that this is a long game and it's a deep investment, meaning decorating what you currently do with some additions from right and left is not going to be enough. This lab in the loop requires deep work working at the heart of how you do science, not as an add-on or in addition to or yet another variant on what has become a pretty established approach to how things are done. That is where I think the main distinction would be and that requires both the length of the investment, the effort to invest in, and also the willingness to really go all out, all in and all out.(22:36):And that takes time. The real risk is the hype. It's actually the enthusiasm now compared to say 2020 is risky for us because people get very enthusiastic and then it doesn't pay off immediately. No, these iterations of a lab in the loop, they take time and they take effort and they take a lot of changes and at first, algorithms often fail before they succeed. You have to iterate them and so that is actually one of the biggest risks that people would be like, but I tried it. It didn't work. This was just some over-hyped thing. I'm walking away and doing it the old way. So that's where we actually have to keep at it, but also keep our expectations not low in magnitude. I think that it would actually deliver, but understanding that it's actually a long investment and that unless you do it deeply, it's not going to deliver the goods.Eric Topol (23:32):I think this point warrants emphasis because the success already we've seen has not been in necessarily discovery and in preliminary validation of new molecules, but rather data mining repurposing, which is a much easier route to go quicker, but also there's so many nodes on past whereby AI can make a difference even in clinical trials, in synthetic efforts to project how a clinical trial will turn out and being able to do toxic screens without preclinical animal work. There's just so many aspects of this that are AI suited to rev it up, but the one that you're working on, of course is the kind of main agenda and I think you framed it so carefully that we have to be patient here, that it has a chance to be so transformative. Now, you touched on the parallels to things like DALL-E and Midjourney and large language models. A lot of our listeners will be thinking only of ChatGPT or GPT-4 or others. This is what you work on, the language of life. This is not text of having a conversation with a chatbot. Do you think that as we go forward, that we have to rename these models because they're known today as language models? Or do you think that, hey, you know what, this is another language. This is a language that life science and biomedicine works with. How do you frame it all?Large Non-Human Language ModelsAviv Regev (25:18):First of all, they absolutely can remain large language models because these are languages, and that's not even a new insight. People have treated biological sequences, for example, in the past too, using language models. The language models were just not as great as the ones that we have right now and the data that were available to train models in the past were not as amazing as what we have right now. So often these are really the shifts. We also actually should pay respect to human language. Human language encodes a tremendous amount of our current scientific knowledge and even language models of human language are tremendously important for this scientific endeavor that I've just described. On top of them come language models of non-human language such as the language of DNA or the language of protein sequences, which are also tremendously important as well as many other generative models, representation learning, and other approaches for machine learning that are material for handling the different kinds of data and questions that we have.(26:25):It is not a single thing. What large language models and especially ChatGPT, this is an enormous favor for which I am very grateful, is that I think it actually convinced people of the power. That conviction is extremely important when you're solving a difficult problem. If you feel that there's a way to get there, you're going to behave differently than if you're like, nothing will ever come out of it. When people experience ChatGPT actually in their daily lives in basic things, doing things that felt to them so human, this feeling overrides all the intellectual part of things. It's better than the thinking and then they're like, in that case, this could actually play out in my other things as well. That, I think, was actually materially important and was a substantial moment and we could really feel it. I could feel it in my interactions with people before and after how their thinking shifted. Even though we were on this journey from before.Aviv Regev (27:30):We were. It felt different.Eric Topol (27:32):Right, the awareness of hundreds of millions of people suddenly in end of November 2022 and then you were of course going to Genentech years before that, a couple few years before that, and you already knew this was on the move and you were redesigning the research at Genentech.Aviv Regev (27:55):Yes, we changed things well before, but it definitely helps in how people embrace and engage feels different because they've seen something like that demonstrated in front of them in a way that felt very personal, that wasn't about work. It's also about work, but it's about everything. That was very material actually and I am very grateful for that as well as for the tool itself and the many other things that this allows us to do but we have, as you said, we have been by then well on our way, and it was actually a fun moment for that reason as well.Eric Topol (28:32):So one of the things I'm curious about is we don't think about the humans enough, and we're talking about the models and the automation, but you have undoubtedly a large team of computer scientists and life scientists. How do you get them to interact? They're of course, in many respects, in different orbits, and the more they interact, the more synergy will come out of that. What is your recipe for fostering their crosstalk?Aviv Regev (29:09):Yeah, this is a fantastic question. I think the future is in figuring out the human question always above all and usually when I draw it, like on the slide, you can draw the loop, but we always put the people in the center of that loop. It's very material to us and I will highlight a few points. One crucial thing that we've done is that we made sure that we have enough critical mass across the board, and it played out in different ways. For example, we built a new computational organization, gRED Computational Sciences, from what was before many different parts rather than one consolidated whole. Of course within that we also built a very strong AI machine learning team, which we didn't have as much before, so some of it was new people that we didn't have before, but some of it was also putting it with its own identity.(29:56):So it is just as much, not more, but also not less just as much of a pillar, just as much of a driver as our biology is, as our chemistry and molecule making is, as our clinical work is. This equal footing is essential and extremely important. The second important point is you really have to think about how you do your project. For example, when we acquired Prescient, at the time they were three people, tiny, tiny company became our machine learning for drug discovery. It's not tiny anymore, but when we acquired them, we also invested in our antibody engineering so that we could do antibody engineering in a lab in the loop, which is not how we did it before, which meant we invested in our experiments in a different way. We built a department for cell and tissue genomics so we can conduct biology experiments also in a different way.(30:46):So we changed our experiments, not just our computation. The third point that I think is really material, I often say that when I'm getting asked, everyone should feel very comfortable talking with an accent. We don't expect our computational scientists to start behaving like they were actually biology trained in a typical way all along, or chemists trained in a typical way all along and by the same token, we don't actually expect our biologists to just embrace wholeheartedly and relinquish completely one way of thinking for another way of thinking, not at all. To the contrary, we actually think all these accents, that's a huge strength because the computer scientist thinks about biology or about chemistry or about medical work differently than a medical doctor or a chemist or a biologist would because a biologist thinks about a model differently and sometimes that is the moment of brilliance that defines the problem and the model in the most impactful way.(31:48):We want all of that and that requires both this equal footing and this willingness to think beyond your domain, not just hand over things, but actually also be there in this other area where you're not the expert but you're weird. Talking with an accent can actually be super beneficial. Plus it's a lot of fun. We're all scientists, we all love learning new things. So that's some of the features of how we try to build that world and you kind of do it in the same way. You iterate, you try it out, you see how it works, and you change things. It's not all fixed and set in stone because no one actually wrote a recipe, or at least I didn't find that cookbook yet. You kind of invent it as you go on.Eric Topol (32:28):That's terrific. Well, there's so much excitement in this convergence of life science and the digital biology we've been talking about, have I missed anything? We covered human cell atlas, the spatial omics, the lab in the loop. Is there anything that I didn't touch on that you find important?Aviv Regev (32:49):There's something we didn't mention and is the reason I come to work every day and everyone I work with here, and I actually think also the people of the human cell atlas, we didn't really talk about the patients.(33:00):There's so much, I think you and I share this perspective, there's so much trepidation around some of these new methods and we understand why and also we all saw that technology sometimes can play out in ways that are really with unintended consequences, but there's also so much hope for patients. This is what drives people to do this work every day, this really difficult work that tends not to work out much more frequently than it works out now that we're trying to move that needle in a substantial way. It's the patients, and that gives this human side to all of it. I think it's really important to remember. It also makes us very responsible. We look at things very responsibly when we do this work, but it also gives us this feeling in our hearts that is really unbeatable, that you're doing it for something good.Eric Topol (33:52):I think that emphasis couldn't be more appropriate. One of the things I think about all the time is that because we're moving into this, if you will, hyper accelerated phase of discovery over the years ahead with this just unparallel convergence of tools to work with, that somebody could be cured of a condition, somebody could have an autoimmune disease that we will be able to promote tolerogenicity and they wouldn't have the autoimmune disease and if they could just sit tight and wait a few years before this comes, as opposed to just missing out because it takes time to get this all to gel. So I'm glad you brought that up, Aviv, because I do think that's what it's all about and that's why we're cheering for your work and so many others to get it done, get across the goal line because there's these 10,000 diseases out there and there's so many unmet needs across them where we don't have treatments that are very effective or have all sorts of horrible side effects. We don't have cures, and we've got all the things now, as we've mentioned here in this conversation, whether it's genome editing and ability to process massive scale data in a way that never could be conceived some years ago. Let's hope that we help the patients, and go ahead.Aviv Regev (35:25):I found the Proust quote, if you want it recorded correctly.Eric Topol (35:29):Yeah, good.Aviv Regev (35:30):It's much longer than what I did. It says, “the only true voyage, the only bath in the Fountain of Youth would be not to visit strange lands but to possess other eyes, to see the universe through the eyes of another, of a hundred others, to see the hundred universes that each of them sees, that each of them is; and this we do, with great artists; with artists like these we do fly from star to star.”—Marcel ProustEric Topol (35:57):I love that and what a wonderful way to close our conversation today. Aviv, I look forward to more conversations with you. You are an unbelievable gem. Thanks so much for joining today.Aviv Regev (36:10):Thank you so much.*************************************Thanks for listening or reading to this Ground Truths Podcast.Please share if you found it of interestThe Ground Truths newsletters and podcasts are all free, open-access, without ads.Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that—they greatly helped fund our summer internship programs for 2023 and 2024.Note: you can select preferences to receive emails about newsletters, podcasts, or all I don't want to bother you with an email for content that you're not interested in.Comments are welcome from all subscribers. Get full access to Ground Truths at erictopol.substack.com/subscribe

Ground Truths
Jennifer Doudna: The Exciting Future of Genome Editing

Ground Truths

Play Episode Listen Later Apr 14, 2024 31:10


Professor Doudna was awarded the 2020 Nobel Prize in Chemistry with Professor Emmanuelle Charpentier for their pioneering work in CRISPR genome editing. The first genome editing therapy (Casgevy) was just FDA approved, only a decade after the CRISPR-Cas9 editing system discovery. But It's just the beginning of a much bigger impact story for medicine and life science.Ground Truths podcasts are now on Apple and Spotify. And if you prefer videos, they are posted on YouTubeTranscript with links to audio and relevant external linksEric Topol (00:06):This is Eric Topol with Ground Truths, and I'm really excited today to have with me Professor Jennifer Doudna, who heads up the Innovative Genomics Institute (IGI) at UC Berkeley, along with other academic appointments, and as everybody knows, was the Nobel laureate for her extraordinary discovery efforts with CRISPR genome editing. So welcome, Jennifer.Jennifer Doudna (00:31):Hello, Eric. Great to be here.Eric Topol (00:34):Well, you know we hadn't met before, but I felt like I know you so well because this is one of my favorite books, The Code Breaker. And Walter Isaacson did such a wonderful job to tell your story. What did you think of the book?My interview with Walter Isaacson on The Code Breaker, a book I highly recommendJennifer Doudna (00:48):I thought Walter did a great job. He's a good storyteller, and as you know from probably from reading it or maybe talking to others about it, he wrote a page turner. He actually really dug into the science and all the different aspects of it that I think created a great tale.Eric Topol (01:07):Yeah, I recommended highly. It was my favorite book when it came out a couple years ago, and it is a page turner. In fact, I just want to read one, there's so many quotes out of it, but in the early part of the book, he says, “the invention of CRISPR and the plague of Covid will hasten our transition to the third great revolution of modern times. These revolutions arose from the discovery beginning just over a century ago, of the three fundamental kernels of our existence, the atom, the bit, and the gene.” That kind of tells a big story just in one sentence, but I thought I'd start with the IGI, the institute that you have set up at Berkeley and what its overall goals are.Jennifer Doudna (01:58):Right. Well, let's just go back a few years maybe to the origins of this institute and my thinking around it, because in the early days of CRISPR, it was clear that we were really at a moment that was quite unique in the sense that there was a transformative technology. It was going to intersect with lots of other discoveries and technologies. And I work at a public institution and my question to myself was, how can I make sure that this powerful tool is first of all used responsibly and secondly, that it's used in a way that benefits as many people as possible, and it's a tall order, but clearly we needed to have some kind of a structure that would allow people to work together towards those goals. And that was really the mission behind the IGI, which was started as a partnership between UC Berkeley and UCSF and now actually includes UC Davis as well.The First FDA Approved Genome EditingEric Topol (02:57):I didn't realize that. That's terrific. Well, this is a pretty big time because 10 years or so, I guess starting to be 11 when you got this thing going, now we're starting to see, well, hundreds of patients have been treated and in December the FDA approved the first CRISPR therapy for sickle cell disease, Casgevy. Is that the way you say it?Jennifer Doudna (03:23):Casgevy, yeah.Eric Topol (03:24):That must have felt pretty good to see if you go from the molecules to the bench all the way now to actually treating diseases and getting approval, which is no easy task.Jennifer Doudna (03:39):Well, Eric, for me, I'm a biochemist and somebody who has always worked on the fundamentals of biology, and so it's really been extraordinary to see the pace at which the CRISPR technology has been adopted, and not just for fundamental research, but also for real applications. And Casgevy is sort of the crowning example of that so far, is that it's really a technology that we can already see how it's being used to, I think it's fair to say, effectively cure a genetic disease for the first time. Really amazing.Genome Editing is Not the Same as Gene TherapyEric Topol (04:17):Yeah. Now I want to get back to that. I know there's going to be refinements about that. And of course, there's beta thalassemia, so we've got two already, and our mutual friend Fyodor Urnov would say two down 5,000 to go. But I think before I get to the actual repair of the sickle cell defect molecular defect, I think one of the questions I think that people listeners may not know is the differentiation of genome editing with gene therapy. I mean, as you know, there was recently a gene therapy approval for something like $4.25 million for metachromatic leukodystrophy. So maybe you could give us kind of skinny on how these two fundamental therapies are different.Jennifer Doudna (05:07):Right. Well, it's a great question because the terminology sounds kind of the same, and so it could be confusing. Gene therapy goes back decades, I can remember gene therapy being discussed as an exciting new at the time, direction back when I was a graduate student. That was little while ago. And it refers to the idea that we can use a genetic approach for disease treatment or even for a cure. However, it fundamentally requires some mechanism of integrating new information into a genome. And traditionally that's been done using viruses, which are great at doing that. It's just that they do it wherever they want to do it, not necessarily where we want that information to go. And this is where CRISPR comes in. It's a technology allows precision in that kind of genetic manipulation. So it allows the scientist or the clinician to decide where to make a genetic change. And that gives us tremendous opportunity to do things with a kind of accuracy that hasn't been possible before.Eric Topol (06:12):Yeah, no question. That's just a footnote. My thesis in college at University of Virginia, 1975, I'm an old dog, was prospects for gene therapy in man. So it took a while, didn't it? But it's a lot better now with what you've been working on, you and your colleagues now and for the last decade for sure. Now, what I was really surprised about is it's not just of course, these hemoglobin disorders, but now already in phase two trials, you've got hereditary angioedema, which is a life-threatening condition, amyloidosis, cancer ex vivo, and also chronic urinary tract infections. And of course, there's six more others like autoimmune diseases like lupus and type 1 diabetes. So this is really blossoming. It's really extraordinary.Eric Topol (07:11):I mean, wow. So one of the questions I had about phages, because this is kind of going back to this original work and discovery, antimicrobial resistance is really a big problem and it's a global health crisis, and there's only two routes there coming up with new drugs, which has been slow and not really supported by the life science industry. And the other promising area is with phages. And I wonder, since this is an area you know so well, why haven't we put more, we're starting to see more trials in phages. Why haven't we doubled down or tripled down on this to help the antimicrobial resistance problem?Jennifer Doudna (08:00):Well, it's a really interesting area, and as you said, it's kind of one of those areas of science where I think there was interest a while ago and some effort was made for reasons that are not entirely clear to me, at least it fizzled out as a real focused field for a long time. But then more recently, people have realized that there's an opportunity here to take advantage of some natural biology in which viruses can infect and destroy microbes. Why aren't we taking better advantage of that for our own health purposes? So I personally am very excited about this area. I think there's a lot of fundamental work still to be done, but I think there's a tremendous opportunity there as well.CRISPR 2.0Eric Topol (08:48):Yeah, I sure think we need to invest in that. Now, getting back to this sickle cell story, which is so extraordinary. This is kind of a workaround plan of getting fetal hemoglobin built up, but what about actually repairing, getting to fixing the lesion, if you will?Eric Topol (09:11):Yeah. Is that needed?Jennifer Doudna (09:13):Well, maybe it's worth saying a little bit about how Casgevy works, and you alluded to this. It's not a direct cure. It's a mechanism that allows activation of a second protein called fetal hemoglobin that can suppress the effect of the sickle cell mutation. And it's great, and I think for patients, it offers a really interesting opportunity with their disease that hasn't been available in the past, but at the same time, it's not a true cure. And so the question is could we use a CRISPR type technology to actually make a correction to the genetic defect that directly causes the disease? And I think the answer is yes. The field isn't there quite yet. It's still relatively difficult to control the exact way that DNA editing is occurring, especially if we're doing it in vivo in the body. But boy, many people are working on this, as you probably know. And I really think that's on the horizon.Eric Topol (10:19):Yeah. Well, I think we want to get into the in vivo story as well because that, I think right now it's so complicated for a person to have to go through the procedure to get ultimately this treatment currently for sickle cell, whereas if you could do this in vivo and you could actually get the cure, that would be of the objective. Now, you published just earlier this month in PNAS a wonderful paper about the EDVs and the lipid nanoparticles that are ways that we could get to a better precision editing. These EDVs I guess if I have it right, enveloped virus-like particles. It could be different types, it could be extracellular vesicles or whatnot. But do you think that's going to be important? Because right now we're limited for delivery, we're limited to achieve the right kind of editing to do this highly precise. Is that a big step for the future?Jennifer Doudna (11:27):Really big. I think that's gating at the moment. Right now, as you mentioned, somebody that might want to get the drug Casgevy for sickle cell disease or thalassemia, they have to go through a bone marrow transplant to get it. And that means that it's very expensive. It's time consuming. It's obviously not pleasant to have to go through that. And so that automatically means that right now that therapy is quite restricted in the patients that it can benefit. But we imagine a day when you could get this type of therapy into the body with a one-time injection. Maybe someday it's a pill that could be taken where the gene editors target the right cells in the body. In diseases like that, it would be the stem cells in the bone marrow and carry out gene editing that can have a therapeutic benefit. And again, it's one of those ideas that sounds like science fiction, and yet already there's tremendous advance in that direction. And I think over the next, I don't know, I'm guessing 5 to 10 years we're going to see that coming online.Editing RNA, the Epigenome, and the MicrobiomeEric Topol (12:35):Yeah, I'm guessing just because there's so much work on the lipid nanoparticles to tweak them. And there's four different components that could easily be made so much better. And then all these virus-like proteins, I mean, it may happen even sooner. And it's really exciting. And I love that diagram in that paper. You have basically every organ of the body that isn't accessible now, potentially that would become accessible. And that's exciting because whatever blossoming we're seeing right now with these phase two trials ongoing, then you basically have no limits. And that I think is really important. So in vivo editing big. Now, the other thing that's cropped up in recent times is we've just been focused on DNA, but now there's RNA editing, there's epigenetic or epigenomic editing. What are your thoughts about that?Jennifer Doudna (13:26):Very exciting as well. It's kind of a parallel strategy. The idea there would be to, rather than making a permanent change in the DNA of a cell, you could change just the genetic output of the cell and or even make a change to DNA that would alter its ability to be expressed and to produce proteins in the cell. So these are strategies that are accessible, again, using CRISPR tools. And the question is now how to use them in ways that will be therapeutically beneficial. Again, topics that are under very active investigation in both academic labs and at companies.Eric Topol (14:13):Yeah. Now speaking of that, this whole idea of rejuvenation, this is Altos. You may I'm sure know my friend here, Juan Carlos Belmonte, who's been pushing on this for some time at Altos now formerly at Salk. And I know you helped advise Altos, but this idea of basically epigenetic, well using the four Yamanaka factors and basically getting cells that go to a state that are rejuvenated and all these animal models that show that it really happens, are you thinking that really could become a therapy in the times ahead in patients for aging or particular ideas that you have of how to use that?Jennifer Doudna (15:02):Well, you mentioned the company Altos. I mean, Altos and a number of other groups are actively investigating this. Not I would say specifically regarding genome editing, although being able to monitor and probably change gene functions that might affect the aging process could be attractive in the future. I think the hard question there is which genes do we tweak and how do we make sure that it's safe? And better than me I mean, that's a very difficult thing to study clinically because it takes time for one thing, and we probably don't have the best models either. So I think there are challenges there for sure. But along the way, I feel very excited about the kind of fundamental knowledge that will come from those studies. And in particular, this question of how tissues rejuvenate I think is absolutely fascinating. And some organisms do this better than others. And so, understanding how that works in organisms that are able to say regrow a limb, I think can be very interesting.Eric Topol (16:10):And that gets me to that recent study. Well, as you well know, there's a company Verve that's working on the familial hypercholesterolemia and using editing with the PCSK9 through the liver and having some initial, at least a dozen patients have been treated. But then this epigenetic study of editing in mice for PCSK9 also showed results. Of course, that's much further behind actually treating patients with base editing. But it's really intriguing that you can do some of these things without having to go through DNA isn't it?Jennifer Doudna (16:51):Amazing, right? Yeah, it's very interesting.Reducing the Cost of Genome EditingEric Topol (16:54):Wild. Now, one of the things of course that people bring up is, well, this is so darn expensive and it's great. It's a science triumph, but then who can get these treatments? And recently in January, you announced a Danaher-IGI Beacon, and maybe you can tell us a bit about that, because again, here's a chance to really markedly reduce the cost, right?Jennifer Doudna (17:25):That's right. That's the vision there. And huge kudos to my colleague Fyodor Urnov, who really spearheaded that effort and leads the team on the IGI side. But the vision there was to partner with a company that has the ability to manufacture molecules in ways that are very, very hard, of course, for academic labs and even for most companies to do. And so the idea was to bring together the best of genome editing technology, the best of clinical medicine, especially focused on rare human diseases. And this is with our partners at UCSF and with the folks in the Danaher team who are experts at downstream issues of manufacturing. And so the hope there is that we can bring those pieces together to create ways of using CRISPR that will be cost effective for patients. And frankly, we'll also create a kind of roadmap for how to do this, how to do this more efficiently. And we're kind of building the plane while we're flying it, if you know what I mean. But we're trying to really work creatively with organizations like the FDA to come up with strategies for clinical trials that will maintain safety, but also speed up the timeline.Eric Topol (18:44):And I think it's really exciting. We need that and I'm on the scientific advisory board of Danaher, a new commitment for me. And when Fyodor presented that recently, I said, wow, this is exciting. We haven't really had a path to how to get these therapies down to a much lower cost. Now, another thing that's exciting that you're involved in, which I think crosses the whole genome editing, the two most important things that I've seen in my lifetime are genome editing and AI, and they also work together. So maybe before we get into AI for drug discovery, how does AI come into play when you're thinking about doing genome editing?Jennifer Doudna (19:34):Well, the thing about CRISPR is that as a tool, it's powerful not only as a one and done kind of an approach, but it's also very powerful genomically, meaning that you can make large libraries of these guide RNAs that allow interrogation of many genes at once. And so that's great on the one hand, but it's also daunting because it generates large collections of data that are difficult to manually inspect. And in some cases, I believe really very, very difficult to analyze in traditional ways. But imagine that we have ways of training models that can look at genetic intersections, ways that genes might be affecting the behavior of not only other genes, but also how a person responds to drugs, how a person responds to their environment and allows us to make predictions about genetic outcomes based on that information. I think that's extremely exciting, and I definitely think that over the next few years we'll see that kind of analysis coming online more and more.Eric Topol (20:45):Yeah, the convergence, I think is going to be, it's already being done now, but it's just going to keep building. Now, Demis Hassabis, who one of the brilliant people in the field of AI leads the whole Google Deep Mind AI efforts now, but he formed after AlphaFold2 behaving to predict proteins, 200 million proteins of the universe. He started a company Isomorphic Labs as a way to accelerate using AI drug discovery. What can you tell us about that?Jennifer Doudna (21:23):It's exciting, isn't it? I'm on the SAB for that company, and I think it's very interesting to see their approach to drug discovery. It's different from what I've been familiar with at other companies because they're really taking a computational lens to this challenge. The idea there is can we actually predict things like the way a small molecule might interact with a particular protein or even how it might interact with a large protein complex. And increasingly because of AlphaFold and programs like that, that allow accurate prediction of structures, it's possible to do that kind of work extremely quickly. A lot of it can be done in silico rather than in the laboratory. And when you do get around to doing experiments in the lab, you can get away with many fewer experiments because you know the right ones to do. Now, will this actually accelerate the rate at which we get to approved therapeutics? I wonder about your opinion about that. I remain unsure.Editing Out Alzheimer's Risk AllelesEric Topol (22:32):Yeah. I mean, we have one great success story so far during the pandemic Baricitinib, a drug that repurposed here, a drug that was for rheumatoid arthritis, found by data mining that have a high prospects for Covid and now saves lives in Covid. So at least that's one down, but we got a lot more here too. But it, it's great that Demis recruited you on the SAB for Isomorphic because it brings in a great mind in a different field. And it goes back to one of the things you mentioned earlier is how can we get some of this genome editing into a pill someday? Wow. Now, one of the things that for personal interest, as an APOE4 carrier, I'm looking to you to fix my APOE4 and give me APOE2. How can I expect to get that done in the near future?Jennifer Doudna (23:30):Oh boy. Okay, we'll have to roll up our sleeves on that one. But it is appealing, isn't it? I think about it too. It's a fascinating idea. Could we get to a point someday where we can use genome editing as a prophylactic, not as a treatment after the fact, but as a way to actually protect ourselves from disease? And the APOE4 example is a really interesting one because there's really good evidence that by changing the type of allele that one has for the APOE gene, you can actually affect a person's likelihood of developing Alzheimer's in later life. But how do we get there? I think one thing to point out is that right now doing genome editing in the brain is, well, it's hard. I mean, it's very hard.Eric Topol (24:18):It a little bit's been done in cerebral spinal fluid to show that you can get the APOE2 switch. But I don't know that I want to sign up for an LP to have that done.Jennifer Doudna (24:30):Not quite yet.Eric Topol (24:31):But someday it's wild. It's totally wild. And that actually gets me back to that program for coronary heart disease and heart attacks, because when you're treating people with familial hypercholesterolemia, this extreme phenotype. Someday and this goes for many of these rare diseases that you and others are working on, it can have much broader applicability if you have a one-off treatment to prevent coronary disease and heart attacks and you might use that for people well beyond those who have an LDL cholesterol that are in the thousands. So that's what I think a lot of people don't realize that this editing potential isn't just for these monogenic and rare diseases. So we just wanted to emphasize that. Well, this has been a kind of wild ride through so much going on in this field. I mean, it is extraordinary. What am I missing that you're excited about?Jennifer Doudna (25:32):Well, we didn't talk about the microbiome. I'll just very briefly mention that one of our latest initiatives at the IGI is editing the microbiome. And you probably know there are more and more connections that are being made between our microbiome and all kinds of health and disease states. So we think that being able to manipulate the microbiome precisely is going to open up another whole opportunity to impact our health.Can Editing Slow the Aging Process?Eric Topol (26:03):Yeah, I should have realized that when I only mentioned two layers of biology, there's another one that's active. Extraordinary, just going back to aging for a second today, there was a really interesting paper from Irv Weissman Stanford, who I'm sure you know and colleagues, where they basically depleted the myeloid stem cells in aged mice. And they rejuvenated the immune system. I mean, it really brought it back to life as a young malice. Now, there probably are ways to do that with editing without having to deplete stem cells. And the thought about other ways to approach the aging process now that we're learning so much about science and about the immune system, which is one of the most complex ones to work in. Do you have ideas about that are already out there that we could influence the aging process, especially for those of us who are getting old?Jennifer Doudna (27:07):We're all on that path, Eric. Well, I guess the way that I think about it is I like to think that genome editing is going to pave the way to make those kinds of fundamental discoveries. I still feel that there's a lot of our genetics that we don't understand. And so, by being able to manipulate genes precisely and increasingly to look at how genes interact with each other, I think one fundamental question it relates to aging actually is why do some of us age at a seemingly faster pace than others? And it must have to do at least in part with our genetic makeup and how we respond to our environment. So I definitely think there are big opportunities there, really in fundamental research initially, but maybe later to actually change those kinds of things.Eric Topol (28:03):Yeah, I'm very impressed in recent times how much the advances are being made at basic science level and experimental models. A lot of promise there. Now, is there anything about this field that you worry about that keeps you up at night that you think, besides, we talked about that we got to get the cost down, we have to bridge health inequities for sure, but is there anything else that you're concerned about right now?Jennifer Doudna (28:33):Well, I think anytime a new technology goes into clinical trials, you worry that things may get out ahead of their skis, and there may be some overreach that happens. I think we haven't really seen that so far in the CRISPR field, which is great. But I guess I remain cautious. I think that we all saw what happened in the field of gene therapy now decades ago, but that really put a poll on that field for a long time. And so, I definitely think that we need to continue to be very cautious as gene editing continues to advance.Eric Topol (29:10):Yeah, no question. I think the momentum now is getting past that point where you would be concerned about known unknowns, if you will, things that going back to the days of the Gelsinger crisis. But it's really extraordinary. I am so thrilled to have this conversation with you and to get a chance to review where the field is and where it's going. I mean, it's exploding with promise and potential well beyond and faster. I mean, it takes a drug 17 years, and you've already gotten this into two treatments. I mean, I'm struck when you were working on this, how you could have thought that within a 10-year time span you'd already have FDA approvals. It's extraordinary.Jennifer Doudna (30:09):Yeah, we hardly dared hope. Of course, we're all thrilled that it went that fast, but I think it would've been hard to imagine it at the time.Eric Topol (30:17):Yeah. Well, when that gets simplified and doesn't require hospitalizations and bone marrow, and then you'll know you're off to the races. But look, what a great start. Phenomenal. So congratulations. I'm so thrilled to have the chance to have this conversation. And obviously we're all going to be following your work because what a beacon of science and progress and changing medicine. So thanks and give my best to my friend there at IGI, Fyodor, who's a character. He's a real character. I love the guy, and he's a good friend.Jennifer Doudna (30:55):I certainly will Eric, and thank you so much. It's been great talking with you.*******************************************************Thanks for listening and/or reading this edition of Ground Truths.I hope you found it as stimulating as I did. Please share if you did!A reminder that all Ground Truths posts (newsletter and podcast( are free without ads. Soon we'll set it up so you can select what type of posts you want to be notified about.If you wish to be a paid subscriber, know that all proceeds are donated to Scripps Research, and thanks for that—it greatly helped fund our summer internship program for 2023 and 2024.Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio/video support. Get full access to Ground Truths at erictopol.substack.com/subscribe

Ground Truths
Sid Mukherjee: On A.I., Longevity and Being A Digital Human

Ground Truths

Play Episode Listen Later Mar 29, 2024 47:27


Siddhartha Mukherjee is a Professor at Columbia University, oncologist, and extraordinary author of Emperor of All Maladies (which was awarded a Pulitzer Prize), The Gene, and The Song of the Cell, along with outstanding pieces in the New Yorker. He is one of the top thought leaders in medicine of our era. “I have begun to imagine, think about what it would be to be a digital human..”—Sid MukherjeeEric Topol (00:06):Well, hello, this is Eric Topol with Ground Truths, and I am delighted to have my friend Sid Mukherjee, to have a conversation about all sorts of interesting things. Sid, his most recent book, SONG OF THE CELL is extraordinary. And I understand, Sid, you're working on another book that may be cell related. Is that right?Sid Mukherjee  (00:30):Eric, it's not cell related, I would say, but it's AI and death related, and it covers, broadly speaking, it covers AI, longevity and death and memory —topics that I think are universal, but also particularly medicine.Eric Topol (00:57):Well, good, and we'll get into that. I had somehow someone steered me that your next book was going to be something building on the last one, but that sounds even more interesting. You're going in another direction. You've covered cancer gene cells, so I think covering this new topic is of particularly interest. So let's get into the AI story and maybe we'll start off with your views on the healthcare side. Where do you think this is headed now?A.I. and Drug DiscoverySid Mukherjee  (01:29):So I think Eric, there are two very broad ways of dividing where AI can enter healthcare, and there may be more, I'm just going to give you two, but there may be more. One is on what I would call the deep science aspect of it, and by that I mean AI-based drug discovery, AI-based antibody discovery, AI-based modeling. All of which use AI tools but are using tools that have to do with machine learning, but may have to do less directly with the kind of large language models. These tools have been in development for a long time. You and I are familiar with them. They are tools. Very simply put, you can imagine that the docking of a drug to a protein, so imagine every drug, every medicine as a small spaceship that docks onto a large spaceship, the large spaceship being the target.(02:57):So if you think of it that way, there are fundamental rules. If anyone's watched Star Wars or any of these sci-fi films, there are fundamental rules by which that govern the way that the small spaceship in this case, a molecule like aspirin fits into a pocket of its target, and those are principles that are determined entirely by chemistry and physics, but they can be taught, you can learn what kind of spaceship or molecule is likely to fit into what kind of pocket of the mothership, in this case, the target. And if they can be learned, they're amenable to AI-based discovery.Eric Topol (03:57):Right. Well, that's, isn't that what you'd call the fancy term structure-based discovery, where you're using such tools like what AlphaFold2 for proteins and then eventually for antibodies, small molecules, et cetera, that you can really rev up the whole discovery of new molecules, right?Sid Mukherjee  (04:21):That's correct, and that's one of the efforts that I'm very heavily involved in. We have created proprietary algorithms that allow us to enable this. Ultimately, of course, there has to be a method by which you start from these AI based methods, then move to physical real chemistry, then move to real biology, then move to obviously human biology and ultimately to human studies. It's a long process, but it's an incredibly fruitful process.Eric Topol (04:57):Well, yeah, as an example that recently we had Jim Collins on the podcast and he talked about the first new drug class of antibiotics in two decades that bind to staph aureus methicillin resistant, and now in clinical trials. So it's happening. There's 20 AI drugs in clinical trials out there.Sid Mukherjee  (05:18):It's bound to happen. It is an unstoppable bound to happen systematology of drug discovery. This is just bound to happen. It is unstoppable. There are kinks in it in the road, but those will be ironed out, but it's bound to happen.(05:41):So that's on the very discovery oriented end, which is more related to learning algorithms that have to do with AI and less to do with what we see in day-to-day life, the ChatGPT kind of day-to-day life of the world. On the very other end of the spectrum, just to move along on the very other end of the spectrum are what I would call patient informatics. So by patient informatics, I mean questions like who responds to a particular drug? What genes do they have? What environment are they in? Have they had other drug interactions in the past? What is it about their medical record that will allow us to understand better why or why they're not responding to a medicine?(06:51):Those are also AI, can also be really powered by AI, but are much more dependent and much more sensitive to our understanding of these current models, the large language models. So just to give you an example, let's say you wanted to enroll a clinical trial for patients with diabetes to take a new drug. You could go into the electronic medical record, which right now is a text file, and ask the question, have they or have they not responded to the standard agents? And what has their response been? Should they be on glucose monitoring? How bad is their diabetes based on some laboratory parameters, et cetera, et cetera. So that's a very different information rich, electronic medical record rich mechanism to understand how to develop medicines. One lies, the first lies way in the discovery end of the spectrum. The second lies way in the clinical trials and human drug exposure end of the spectrum. And of course, there are things in the middle that I haven't iterated, but those are the two really broad categories where one can imagine AI making a difference and to be fair through various efforts, I'm working on both of those, the two end spectrum.A.I. and CancerEric Topol (08:34):Well, let's drill down a bit more on the person individual informatics for a moment, since you're an oncologist, and the way we screen for cancer today is completely ridiculous by age only. But if you had a person's genome sequence, polygenic risk scores for cancers and all the other known data that, for example, the integrity of their immune system response, environmental exposures, which we'll talk about in a moment more, wouldn't we do far better for being able to identify high risk people and even preventing cancer in the future?Sid Mukherjee  (09:21):So I have no doubt whatsoever that more information that we can analyze using intelligent platforms. And I'm saying all of these words are relevant, more information analyzed through intelligent platforms. More information by itself is often useless. Intelligent platforms without information by themselves are often useless, but more information with intelligent platforms, that combination can be very useful. And so, one use case of that is just to give you one example, there are several patients, women who have a family history of breast cancer, but who have no mutations in the known single monogenic breast cancer risk genes, BRCA1, BRCA2, and a couple of others. Those patients can be at a high a risk of breast cancer as patients who have BRCA1 and BRCA2. It's just that their risk is spread out through not one gene but thousands of genes. And those patients, of course have to be monitored and their risk is high, and they need to understand what the risk is and how to manage it.(10:57):And that's where AI can, and first of all, informatics and then AI can play a big difference because we can understand how to manage those patients. They used to be called, this is kind of, I don't mean this lightly, but they used to be called BRCA3 because they didn't have BRCA1, they didn't have BRCA2, but they had a constellation of genes, not one, not two, but thousands of genes that would increase their risk of breast cancer just a little bit. I often describe these as nudge genes as opposed to shove genes. BRCA1 and BRCA2 are shoved genes. They shove you into having a high risk of breast cancer. But you can imagine that there are nudge genes as well in which they, in which a constellation of not one, not two, not three, but a thousand genetic variations, give a little push each one, a little push towards having a higher risk of breast cancer.(12:09):Now, the only way to find these nudge genes is by doing very clever informatic studies, some of which have been done in breast cancer, ovarian cancer, cardiovascular diseases, other diseases where you see these nudge effects, small effects of a single gene, but accumulated across a thousand, 2000, 3000 genes, an effect that's large enough that it's meaningful. And I think that we need to understand those. And once we understand them, I think we need to understand what to do with these patients. Do we screen them more assertively? Do we recommend therapies? You can get more aggressive, less aggressive, but of course that demands clinical trials and a deeper understanding of the biology of what happens.A.I. And LongevityEric Topol (13:10):Right, so your point about the cumulative effects of small variants, hundreds and hundreds of these variants being equivalent potentially, as we've seen across many diseases, it's really important and you're absolutely right about that. And I've been pushing for trying to get these polygenic risk scores into clinical routine use, and hopefully we're getting closer to that. And that's just as you say, just one layer of this information to add to the intelligence platform. Now, the next thing that you haven't yet touched on connecting the dots is, can AI and informatics be used to promote longevity?Sid Mukherjee  (13:55):Yeah, so that's a very interesting question. Let me attack that question in two ways. One biological and one digital. The biological one is to understand, again, the biological one has to do with informatics. So we could use AI so that, imagine that there are thousands, perhaps tens of thousands of variables. You happen to live on a Mediterranean island, you happen to walk five miles a day, you happen to have a particular diet, you happen to have a particular genetic makeup, you happen to have a particular immunological makeup, et cetera, et cetera, et cetera. All of those you happen to have, you happen to have, you happen to have. Now, if we could collect all of this data across hundreds of thousands of individuals, we'd need a system to deconvolute the data and ask the question, what is it about these 750,000 individuals that predicted longevity? Was it the fact that they walked five miles a day? Was it  their genetic makeup? Was it their diet? Was it their insulin level? Was it their, so you can imagine an n-dimensional diagram, as it were, and to deconvolute that n-dimensional diagram and to figure out what was the driving force of their longevity, you would need much more than conventional information analysis. You need AI.(15:58):So that's one direction that one could use. Again, informatics to figure out longevity. A second direction, completely independent of the first is to ask the question, what are the biological determinants of longevity in other animals? Is it insulin levels? Is it chronic? Is it the immune system? Is it the lack of, and we'll come back to this question, is it as you very well know, people with extreme longevity, the so-called supercentenarians. Interestingly, the supercentenarians don't generally die of cancer and heart disease, which are the two most common killers of people in their 70s and 80s in most countries of the western world. They die typically of what I would call regenerative failure. Their immune systems collapse. Their stem cells can't make enough skin, so they get skin infections, their skin collapses, they get bone defects, and they die of fractures. They get neurological defects, they die of neurodegenerative diseases and so forth. So they die of true degenerative diseases as opposed to cancer and heart disease, which have been the plagues of human biology since the beginning of time.(17:49):Again, I'm talking about the western world, of course, a different story with infectious diseases elsewhere. So a different way to approach the problem would be to say, what are the regenerative blockades that prevent regeneration at a biological level for these patients? And ask the question whether we can overcome these regenerative blockades using, again, the systems that I described before. What are they? What are the checkpoints? What are the mechanisms? And could we encourage the body to override those mechanisms? We still have to deal with heart disease and cancer, but once we had dealt with heart disease and cancer, we would have to ask the question. Okay, now we've dealt with those two things. What are the regenerative blockades that prevent people from having longevity once we've overcome those two big humps, heart disease and cancer?Eric Topol (19:00):Yeah, no, I think you're bringing up a really fascinating topic. And as you know, there's been many different ideas for how to achieve that, whether that's the senolytic drugs or getting rid of dead cells or using the transcription factors of cells instead of going into induced pluripotent stem cells, but rather to go to a rejuvenation of cells. Are you optimistic that eventually we're going to crack this case of better approach to regeneration?Sid Mukherjee  (19:33):Oh, I'm extremely optimistic. I'm optimistic, but I'm optimistic to a point. And that brings me to the third place, which is I'm optimistic to a point, which is that you conquer in some, hopefully you conquer a major part of heart disease and cancer, and now you're up against cellular regeneration. You then conquer cellular regeneration. And I don't know what the next problem is going to be. It's going to be some new hurdle. So I think there are two solutions to that hurdle. One solution is to say, okay, there's a new hurdle. We'll solve that new hurdle and it's bit by bit extending longevity year by year, by year by year as it were. But a completely second solution occurs to me, and here I'm going completely off script, Eric, which is what I do in my life.Going Off Script: Being A Digital Human(20:45):I have begun to imagine, think about what it would be to be a digital human and by a digital human I mean, it began with my father's death. My father passed away a few years ago, and I would sometimes enter a kind of psychic space, what I would call a psychomanteum, in which I would imagine myself asking him questions about critical moments in my life, make a critical decision. I would rely on my father to make that decision for me. He would give me advice. That advice had some stereotypical qualities about it. Think about this, think about that. My experience has been this. My life has been this. My life has been that. But of course, times change. And I began to wonder whether with the use of digital technologies and digital AI technologies in particular, what could create a simulacrum of a psychomanteum?(22:06):So in other words, your physical body would pass, but somehow your digital body, all the memories, the experiences, the learning, all of that, that you had, the emotional connections that you had formed in your lifetime would somehow remain and would remain in a kind of psychomanteum in which you could go into a room. And again, I'm not talking voodoo science here. I'm talking very particular ways of extracting information from a person's decision making, extracting information about a person's ideas about the word their sort of their schema, or as psychologists describe it, the schemata. So that in some universe, if my father downloaded passively or actively the kind of decision making, not the actual decisions, the form of decision making and the form of communication that he liked, that I could go back to him eternally. My grandchildren could go back to him eternally and ask the question, great grandpa, what would you do under these circumstances? And what's amazing about it is that this is not completely science fiction.Eric Topol (23:45):Not at all.Sid Mukherjee  (23:46):It is within the realms of reality in the sense of there's no digital limitation to it. The main limitation to it is information. So Eric Topol, you make decisions I would imagine with some kind of stereotypical wisdom, you have accumulated wisdom in your life. You think about things in a particular critical way. When you read a book, you read a book in a particular way, it's whatever it might be. And Eric Topol psychomanteum would be, I would go into a space and see you and ask you a question, Eric, you read this book, what did you think about it? You found this piece of evidence. Read this scientific paper. What do you think about it? And so forth.(24:49):So again, let me just go back to my first point, which is number one, I think that regenerative medicine will have a regenerative moment itself, and we will discover new medicines, new mechanisms by which we can extend lifespan. Number two, that will involve getting over two big humps that we have right now, cancer and heart disease. Hopefully we'll get over both of those at some point of time. And number three, that in parallel, we will find a way to create digital selves that even when our physical bodies decay and die, that we will have a sense of eternal longevity based on digital selves, which is accessible or readily accessible through AI mechanisms. Yeah, this spectrum, I think will change our ideas of what longevity means.The Environmental FactorsEric Topol (26:10):Well, I think your idea about the digital human and the brain and the decision making and that sort of thing is really well founded by the progress being made in the brain machine interface, as you know, with basically the mind is being digitized and you can get cells to talk, to speak to a person, and all sorts of things that are happening right now that are basically deconvoluting brain function at the cellular, even molecular neural level. So I don't think it's farfetched at all. I'm glad you went off script, Sid. That's great. Now this, I want to get back to something you brought up earlier because there are a lot of obstacles as you will acknowledge. And one of them is that we have in our environment horrible issues about pollution, about carcinogens, the focus of your recent New Yorker piece, plastics, microplastics, nanoplastics, now found in our arteries and brains and causing more, as we just recently saw, more heart attack, strokes and death, and of course the climate crisis. So with all this great science that we've just been discussing, our environment's going to hell, and I want to get your comments because you had a very insightful piece as always in the New Yorker in December about this, and I know you've been thinking about it, that the obstacles are getting worse to override the problems that we have today, don't you think?Sid Mukherjee  (27:55):So you're absolutely right. If we go down this path, we are going to go to hell in eye baskets. What we haven't discounted for is really decades, if not possibly a century of research that shows that there are certain kinds of inflammatory agents that cause both cancer, heart disease, and inflammation that have to do with their capacity to be so foreign to the human body that they're recognized as alien objects and so alien that our immune systems can't handle them. And essentially send off what I would call a five-bell alarm, saying that here's something that the immune system can't handle. It's beyond the capacity. And that five-bell alarm, as we now know, unfortunately, causes a systemic inflammatory response. And that systemic inflammatory response can potentiate heart disease, cancer, and maybe many other diseases that we don't know about because we haven't looked.Eric Topol (29:28):Absolutely.Sid Mukherjee  (29:29):So to connect this back to climate change, pollution is one of them. Air pollution is absolutely one of them. Microplastics, undegradable sort of forever plastics are one of them, or some of them. I think that there is no way around it except to really find a systematic way of assessing them. Look, it is wonderful to have new materials in the world. I'm wearing a jacket made out of God knows what, it's not cloth. I don't know what you are wearing, Eric, but it may not be cloth. These are great materials. This keeps the rain away. But on the other hand, it may be shedding something that I don't know. We need to find scientific ways of assessing the safety and the validity of some new materials that we bring into the world. And the way that we do that is to ask the question, is it inflammatory? Is there something that we are missing? Is there something about it that we should be thinking about that we haven't thought about?Eric Topol (31:02):Well, and to that end, you've been a very, I think, astute observer about diet as it relates to cancer. And we know similarly, as we just talked about with our environment, there's the issue of ultra-processed foods, and we've got big food, we got big plastics, big tobacco. I mean, we have all these counter forces to what the science is showing.Sid Mukherjee  (31:29):Too many bigs.Eric Topol (31:31):Yeah, yeah. But I guess the net of it is, Sid, if I get it right, you think that the progress we're making in science, and that includes the things we've talked about and genome editing and accelerated drug discovery, these sorts of programs, the informatics, the AI can override this chasing of our tail with basically unchecked issues that are, whether it's from our nutrition, our air, what we ingest and breathe, these are some serious problems.Preventing DiseasesSid Mukherjee  (32:06):No, I don't think that. I don't think that cancer and cardiovascular disease prevention, as you very well know, Eric, because you've been in the forefront of it, is a pyramid. The base of the pyramid is prevention. Prevention is the most effective. It's the most difficult. It's the hardest to understand, the most difficult trials to incorporate, but it is the base of the pyramid. And so let it be said that I don't think that we're going to solve cancer, cardiovascular disease by better treatment using CRISPR. My laboratory, and one of my companies before I happened to be wearing the jacket, but was one of the first to use CRISPR and transplant CRISPR. CRISPR, human beings with or CRISPR bone marrow into human beings long before anyone else, we were actually among the first. These human beings, thankfully, and astonishingly remain completely alive. We deleted a gene from their bone marrow. They engrafted with no problem. They're still alive today, and we are treating them for cancer. Astonishing fact, there are 12 of them in the world.(33:49):And again, astonishing fact, wonderful, beautiful news, beautiful science. But there are 12, if we want to make a big change in the universe, we need to get not to 12, but to 12 million, potentially 120 million. And that's not going to happen because we're going to CRISPR their bone marrow. It's going to happen because we change their environments, their diets, their lifestyles, their exposures, we understand their risks, their genetics, et cetera, et cetera, et cetera, et cetera. It's not going to happen because we give them CRISPR bone marrow transplants that enable them to change their risk of cancer. So I'm very clear about this or clear eyed about it, I would say, which is to say that great progress in medicine is being made. There's no doubt about it. I'm happy about it. I'm happy to be part of it. I'm happy to be in the forefront of it.(35:00):We have now delivered one of the first cellular therapies for cancer in India at a price point that really challenges the price point of the west. We are now producing this commercially and or about to produce this commercially, so for lymphomas and leukemias, I'm so excited about the progress in science. But all of that said, let me be very clear, the real progress in cancer and cardiovascular disease is going to come from prevention. And if that's where we're going, we need to really rethink at a very fundamental level as you have Eric, at a very fundamental level, how do we rethink prevention, cancer prevention, cardiovascular disease prevention, and as a correlate, regenerative disease, regeneration, cancer prevention, cardiovascular disease prevention. The fundamentals are how do we find things that are in our exposome, things that we're exposed to environments, gene environment interactions that increase the risk of cancer and cardiovascular disease, and how do we take them out? And how do we do this without running 15-year trials so that we can get the results now? And that's what I'm really interested in in terms of information.Eric Topol (36:55):Yeah. Well, I'm with you there. And just to go along with those 12 patients you mentioned, as you know recently it was reported there were 15 patients with serious autoimmune diseases, and they got a therapy to knock out all their B cells. And when their B cells came back, they didn't make autoantibodies anymore. And this was dermatomyositis and lupus and systemic sclerosis, and it was pretty magical. If it can be extended, like you said, okay, 15 people, just like your 12, if you can do that in millions, well, you can get rid of autoimmune diseases, which would be a nice contribution. I mean, there's so many exciting things going on right now that we've touched on, but as you get to it, you've already approached this inequity issue by bringing potentially very expensive treatments that are exciting to costs that would be applicable in India and many countries that are not in the rich income category. So this is a unique time it seems like Sid, in our advances, in the cutting edge progress that's being made, wouldn't you say?The Why on Cancer in the YoungSid Mukherjee  (38:14):Well, I would say that the two advances have to go hand in hand. There will be patients who are recalcitrant to the standard therapies, your patients with severe lupus dermatitis, et cetera. Those patients will require cutting edge therapy, and we will find ways to deliver it to them. There are other patients, hundreds of not 12, not 15, but hundreds of thousands if not millions, who will require an understanding of why there is an increase, for instance, in asthmatics disease in India. Why is that increasing? Why is there an increase in non-smoking related lung cancer in some parts of the world? Why? What's driving that? Why is there an increase in young patients with cancers in the United States? Of all things that stand out, there is a striking increase in colorectal cancer in young men and women. There's an increase in esophageal cancer in young men and women. Why?Eric Topol (39:58):Yeah, why, why?Sid Mukherjee  (40:00):Why? And so, the answer to that question lies in understanding the science, getting deeper information informatics, and then potentially understanding the why. So again, I draw the distinction between two broad classes of spaces where information science can make a big difference. On one hand, on the very left hand of the picture, an understanding of how to make new medicines for patients who happen to have these diseases. And on the way right hand of finding out why these patients are there in the first place, and asking the question, why is it that there are more patients, young men and women with colorectal cancer, are we eating something? Is it our diet? Is it our diet plus our environment? Is it the diet plus environment plus genetics? But why? There must be a why. When you have a trend like this, there's always a why. And if there's a why, there's always an answer. Why? And we have the best tools, and this is the positive piece of this. The positive piece of this is that we now have among the best tools that we've ever had to answer that why? And that's what makes me optimistic. Not a drug, not a medicine, not a fancy program, but the collective set of tools that we have that allow us to answer the question why? Because that is of course the question that every patient with esophageal and colorectal cancer is asking why.Eric Topol (42:01):I'm with you. What you're bringing up is fundamental. We have the tools, but we've noted this increase in colon cancer in the young for several years, and we're not any closer to understanding the why yet, right?Sid Mukherjee  (42:18):Yes. We're not any closer to understanding the why yet. Part of the answer is that we haven't delved into the why properly enough. These are studies that take time. They have longitude because these are studies that have to do with prevention. They take time, they take patients. So the quick answer to your question is, I don't think we've made the effort and we haven't made the effort, especially with the technological advances that we have today. So imagine for a second that we launched a project in which, again, like the Manhattan project, the Apollo project, we advanced a project which said the colorectal cancer in young project in the United States, we brought the best science minds together and ask the question, go into a room, lock yourself up, and don't come out of the room until you have the answer to figuring out how and then why we have young men and women with colorectal cancer increasing. I would imagine you could nominate, I could nominate 10 people to that committee and they would willingly serve. They'd be willing to be locked up in a room and ask the question why? Because they want to answer that question. That why is extraordinarily important.Eric Topol (44:14):I'm with you on that too, because we have the tools, like you said, we can assess the gut microbiome, their genome, their diets, their environmental exposures and figure this out. But as you say, there hasn't been a commitment to doing it.Sid Mukherjee  (44:30):And that commitment has to come centrally, right? That commitment has to come from the NIH, that has to come from the NCI, the National Cancer Institute, the National Institute of Health. It has to come as a mechanism that says, listen, let's solve this problem. So identifying the problem, there's an increase in colorectal cancer in young people. Important. Yes. Let's, let's figure out the answer why, and let's collect all the information for the next five years, seven years, whatever it might take to answer that question.Eric Topol (45:18):And as you said, the intelligent platforms will help analyze it.Sid Mukherjee  (45:23):Yes. I mean, we have the tools. So if you have the tools and if you collect the information, the tools will analyze that information.Eric Topol (45:36):Right. Well, this has been inspiring and daunting at the same time, this discussion. What I love about you, Sid, is you're a big thinker. You're one of the great thinkers in medicine of our era, and you also of course are such an extraordinary writer. So we're going to look forward to your next book and your rejuvenation of the cancer Emperor of All Maladies book but I want to thank you. I always enjoy our discussions. They always get to areas that highlight where we're missing the opportunities that we have that we're not actualizing. That's one of the many things I really love about you and your work, so keep up the good stuff and I look forward to the next chance we get to visit and discuss all this stuff.Sid Mukherjee  (46:31):And it's been a great pleasure knowing you for so many years, Eric. And then whenever we have dinner together, the dinner always begins with my asking you why. And so, the why question is the first question. The how question is a harder question. We can always answer the how question, but the why question is the first question. So the next time I have dinner with you, wherever it might be, San Diego, New York, Los Angeles, I'm going to ask you another why question. And you're going to answer the how question, because that's what you're good at. And it's been such a pleasure interacting with you for so many years.Eric Topol (47:12):Oh, thank you so much. What a great friend.Thanks for listening/reading to this Ground Truths conversation.If you found it stimulating, please share with your colleagues and friends.All content on Ground Truths—newsletter analyses and podcasts—is free.Voluntary paid subscriptions all go to support Scripps Research.Ground Truths now has a YouTube channel for all the podcasts. Here's a list of the people I've interviewed that includes a few that will soon be posted or are scheduled. Get full access to Ground Truths at erictopol.substack.com/subscribe

Ground Truths
Daphne Koller: The Convergence of A.I. and Digital Biology

Ground Truths

Play Episode Listen Later Mar 10, 2024 35:16


Transcript Eric Topol (00:06):Well, hello, this is Eric Topol with Ground Truths and I am absolutely thrilled to welcome Daphne Koller, the founder and CEO of insitro, and a person who I've been wanting to meet for some time. Finally, we converged so welcome, Daphne.Daphne Koller (00:21):Thank you Eric. And it's a pleasure to finally meet you as well.Eric Topol (00:24):Yeah, I mean you have been rocking everybody over the years with elected to the National Academy of Engineering and Science and right at the interface of life science and computer science and in my view, there's hardly anyone I can imagine who's doing so much at that interface. I wanted to first start with your meeting in Davos last month because I kind of figured we start broad AI rather than starting to get into what you're doing these days. And you had a really interesting panel [←transcript] with Yann LeCun, Andrew Ng and Kai-Fu Lee and others, and I wanted to get your impression about that and also kind of the general sense. I mean AI is just moving it at speed, that is just crazy stuff. What were your thoughts about that panel just last month, where are we?Video link for the WEF PanelDaphne Koller (01:25):I think we've been living on an exponential curve for multiple decades and the thing about exponential curves is they are very misleading things. In the early stages people basically take the line between whatever we were last year, and this year and they interpolate linearly, and they say, God, things are moving so slowly. Then as the exponential curve starts to pick up, it becomes more and more evident that things are moving faster, but it's still people interpolate linearly and it's only when things really hit that inflection point that people realize that even with the linear interpolation where we'll be next year is just mind blowing. And if you realize that you're on that exponential curve where we will be next year is just totally unanticipatable. I think what we started to discuss in that panel was, are we in fact on an exponential curve? What are the rate limiting factors that may or may not enable that curve to continue specifically availability of data and what it would take to make that curve available in areas outside of the speech, whatever natural language, large language models that exist today and go far beyond that, which is what you would need to have these be applicable to areas such as biology and medicine.Daphne Koller (02:47):And so that was kind of the message to my mind from the panel.Eric Topol (02:53):And there was some differences in opinion, of course Yann can be a little strong and I think it was good to see that you're challenging on some things and how there is this “world view” of AI and how, I guess where we go from here. As you mentioned in the area of life science, there already had been before large language models hit stride, so much progress particularly in imaging cells, subcellular, I mean rare cells, I mean just stuff that was just without any labeling, without fluorescein, just amazing stuff. And then now it's gone into another level. So as we get into that, just before I do that, I want to ask you about this convergence story. Jensen Huang, I'm sure you heard his quote about biology as the opportunity to be engineering, not science. I'm sure if I understand, not science, but what about this convergence? Because it is quite extraordinary to see two fields coming together moving at such high velocity."Biology has the opportunity to be engineering not science. When something becomes engineering not science it becomes...exponentially improving, it can compound on the benefits of previous years." -Jensen Huang, NVIDIA.Daphne Koller (04:08):So, a quote that I will replace Jensen's or will propose a replacement for Jensen's quote, which is one that many people have articulated, is that math is to physics as machine learning is to biology. It is a mathematical foundation that allows you to take something that up until that point had been kind of mysterious and fuzzy and almost magical and create a formal foundation for it. Now physics, especially Newtonian physics, is simple enough that math is the right foundation to capture what goes on in a lot of physics. Biology as an evolved natural system is so complex that you can't articulate a mathematical model for that de novo. You need to actually let the data speak and then let machine learning find the patterns in those data and really help us create a predictability, if you will, for biological systems that you can start to ask what if questions, what would happen if we perturb the system in this way?The ConvergenceDaphne Koller (05:17):How would it react? We're nowhere close to being able to answer those questions reliably today, but as you feed a machine learning system more and more data, hopefully it'll become capable of making those predictions. And in order to do that, and this is where it comes to this convergence of these two disciplines, the fodder, the foundation for all of machine learning is having enough data to feed the beast. The miracle of the convergence that we're seeing is that over the last 10, 15 years, maybe 20 years in biology, we've been on a similar, albeit somewhat slower exponential curve of data generation in biology where we are turning it into a quantitative discipline from something that is entirely observational qualitative, which is where it started, to something that becomes much more quantitative and broad based in how we measure biology. And so those measurements, the tools that life scientists and bioengineers have developed that allow us to measure biological systems is what produces that fodder, that energy that you can then feed into the machine learning models so that they can start making predictions.Eric Topol (06:32):Yeah, well I think the number of layers of data no less what's in these layers is quite extraordinary. So some years ago when all the single cell sequencing was started, I said, well, that's kind of academic interest and now the field of spatial omics has exploded. And I wonder how you see the feeding the beast here. It's at every level. It's not just the cell level subcellular and single cell nuclei sequencing single cell epigenomics, and then you go all the way to these other layers of data. I know you plug into the human patient side as well as it could be images, it could be past slides, it could be the outcomes and treatments and on and on and on. I mean, so when you think about multimodal AI, has anybody really done that yet?Daphne Koller (07:30):I think that there are certainly beginnings of multimodal AI and we have started to see some of the benefits of the convergence of say, imaging and omics. And I will give an example from some of the work that we've recently distributed on a preprint server work that we did at insitro, which took imaging data from standard histopathology slides, H&E slides and aligned them with simple bulk RNA-Seq taken from those same tumor samples. And what we find is that by training models that translate from one to the other, specifically from the imaging to the omics, you're able to, for a fairly large fraction of genes, make very accurate predictions of gene expression levels by looking at the histopath images alone. And in fact, because many of the predictions are made at the tile level, not at the entire slide level, even though the omics was captured in bulk, you're able to spatially resolve the signal and get kind of like a pseudo spatial biology just by making predictions from the H&E image into these omic modalities.Multimodal A.I. and Life ScienceDaphne Koller (08:44):So there are I think beginnings of multimodality, but in order to get to multimodality, you really need to train on at least some data where the two modalities are simultaneously. And so at this point, I think the rate limiting factor is more a matter of data acquisition for training the models. It is for building the models themselves. And so that's where I think things like spatial biology, which I think like you are very excited about, are one of the places where we can really start to capture these paired modalities and get to some of those multimodal capabilities.Eric Topol (09:23):Yeah, I wanted to ask you because I mean spatial temporal is so perfect. It is two modes, and you have as the preprint you refer to and you see things like electronic health records in genomics, electronic health records in medical images. The most we've done is getting two modes of data together. And the question is as this data starts to really accrue, do we need new models to work with it or do you actually foresee that that is not a limiting step?Daphne Koller (09:57):So I think currently data availability is the most significant rate limiting step. The nice thing about modern day machine learning is that it really is structured as a set of building blocks that you can start to put together in different ways for different situations. And so, do we have the exact right models available to us today for these multimodal systems? Probably not, but do we have the right building blocks that if we creatively put them together from what has already been deployed in other settings? Probably, yes. So of course there's still a model exploration to be done and a lot of creativity in how these building blocks should be put together, but I think we have the tools available to solve these problems. What we really need is first I think a really significant data acquisition effort. And the other thing that we need, which is also something that has been a priority for us at insitro, is the right mix of people to be put together so that you can, because what happens is if you take a bunch of even extremely talented and sophisticated machine learning scientists and say, solve a biological problem, here's a dataset, they don't know what questions to ask and oftentimes end up asking questions that might be kind of interesting from machine learning perspective, but don't really answer fundamental biology questions.Daphne Koller (11:16):And conversely, you can take biologists and say, hey, what would you have machine learning do? And they will tell you, well, in our work we do A to B to C to D, and B to C is kind of painful, like counting nuclei is really painful, so can we have the machine do that for us? And it's kind of like that. Yeah, but that's boring. So what you get if you put them in a room together and actually get to the point where they communicate with each other effectively, is that not only do you get better solutions, you get better problems. I think that's really the crux of making progress here besides data is the culture and the people.A.I. and Drug DiscoveryEric Topol (11:54):Well, I'm sure you've assembled that at insitro knowing you, and I mean people tend to forget it's about the people, it's not about the models or even the data when you have all that. Now you've been onto drug discovery paths, there's at least 20 drugs that are AI driven that are in the clinic in phase one or two at some point. Obviously these are not only ones that you've been working on, but do you see this whole field now going into high gear because of this? Or is that the fact that there's all these AI companies partnering with big pharma? Is it a lot of nice agreements that are drawn up with multimillion dollar milestones or is this real?Daphne Koller (12:47):So there's a number of different layers to your question. First of all, let me start by saying that I find the notion of AI driven drugs to be a bit of a weird concept because over time most drugs will have some element of AI in them. I mean, even some of the earlier work used data science in many cases. So where do you draw the boundary? I mean, we're not going to be in a world anytime soon where AI starts out with, oh, I need to work on ALS and at the end there is a clinical trial design ready to be submitted to the FDA without anything, any human intervention in the middle. So, it's always going to be an interplay between a machine and a human with over time more and more capabilities I think being taken on by the machine, but I think inevitably a partnership for a long time to come.Daphne Koller (13:41):But coming to the second part of your question, is this real? Every big pharma has gotten to the point today that they realize they need some of that AI thing that's going around. The level of sophistication of how they incorporate that and their willingness to make some of the hard decisions of, well, if we're going to be doing this with AI, it means we shouldn't be doing it the old way anymore and we need to make a big dramatic internal shift that I think depends very much on the specific company. And some companies have more willingness to take those very big steps than others, so will some companies be able to make the adjustment? Probably. Will all of them? Probably not. I would say however, that in this new world there is also room for companies to emerge that are, if you will, AI native.Daphne Koller (14:39):And we've seen that in every technological revolution that the native companies that were born in the new age move faster, incorporate the technology much more deeply into every aspect of their work, and they end up being dominant players if not the dominant player in that new world. And you could look at the internet revolution and think back to Google did not emerge from the yellow pages. Netflix did not emerge from blockbuster, Amazon did not emerge from Walmart so some of those incumbents did make the adjustment and are still around, some did not and are no longer around. And I think the same thing will happen with drug discovery and development where there will be a new crop of leading companies to I think maybe together with some of the incumbents that we're able to make the adjustment.Eric Topol (15:36):Yeah, I think your point there is essential, and another part of this story is that a lot of people don't realize there's so many nodes of ways that AI can facilitate this whole process. I mean from the elemental data mining that identified Baricitinib for Covid and now being used even for many other indications, repurposing that to how to simulate for clinical trials and everything in between. Now, what seems like because of your incredible knack and this convergence, I mean your middle name is like convergence really, you are working at the level of really in my view, this unique aspect of bringing cells and all the other layers of data together to amp things up. Is that a fair assessment of where insitro in your efforts are directed?Three BucketsDaphne Koller (16:38):So first of all, maybe it's useful to kind of create the high level map and the simplest version I've heard is where you divide the process into three major buckets. One is what you think of as biology discovery, which is the discovery of new therapeutic hypotheses. Basically, if you modulate this target in this group of humans, you will end up affecting this clinical outcome. That's the first third. The middle third is, okay, well now we need to turn that hypothesis into an actual molecule that does that. So basically generating molecules. And then finally there's the enablement and acceleration of the clinical development process, which is the final third. Most companies in the AI space have really focused in on that middle third because it is well-defined, you know when you've succeeded if someone gives you a target and what's called a target product profile (TPP) at the end of whatever, two, three years, whether you've been able to create a molecule that achieves the appropriate properties of selectivity and solubility and all those other things. The first third is where a lot of the mistakes currently happen in drug discovery and development. Most drugs that go into the clinic don't fail because we didn't have the right molecule. I mean that happens, but it's not the most common failure mode. The most common failure mode is that the target was just a wrong target for this disease in this patient population.Daphne Koller (18:09):So the real focus of us, the core of who we are as a company is on that early third of let's make sure we're going after the right clinical hypotheses. Now with that, obviously we need to make molecules and some of those molecules we make in-house, and obviously we use machine learning to do that as well. And then the last third is we discover that if you have the right therapeutic hypothesis, which includes which is the right patient population, that can also accelerate and enable your clinical trials, so we end up doing some of that as well. But the core of what we believe is the failure mode of drug discovery and what it's going to take to move it to the next level is the articulation of therapeutic hypotheses that actually translate into clinical outcome. And so in order to do that, we've put together, to your point about convergence, two very distinct types of data.Daphne Koller (19:04):One is data that we print in our own internal data factory where we have this incredible set of capabilities that uses stem cells and CRISPR and microscopy and single cell measurements and spatial biology and all that to generate massive amounts of in-house data. And then because ultimately you care not about curing cells, you care about curing people, you also need to bring in the clinical data. And again, here also we look at multiple high content data modalities, imaging and omics, and of course human genetics, which is one of the few sources of ground truth for causality that is available in medicine and really bring all those different data modalities across these two different scales together to come up with what we believe are truly high quality therapeutic hypotheses that we then advance into the clinic.AlphaFold2, the ExemplarEric Topol (19:56):Yeah, no, I think that's an extraordinary approach. It's a bold, ambitious one, but at least it is getting to the root of what is needed. One of the things you mentioned of course, is the coming up with molecules, and I wanted to get your comments about the AlphaFold2 world and the ability to not just design proteins now of course that are not extant proteins, but it isn't just proteins, it could be antibodies, it could be peptides and small molecules. How much does that contribute to your perspective?Daphne Koller (20:37):So first of all, let me say that I consider the AlphaFold story across its incarnations to be one of the best examples of the hypothesis that we set out trying to achieve or trying to prove, which is if you feed a machine learning model enough data, it will learn to do amazing things. And the space of protein folding is one of those areas where there has been enough data in biology that is the sequence to structure mapping is something that over the years, because it's so consistent across different cells, across different species even, we have a lot of data of sequence to structure, which is what enabled AlphaFold to be successful. Now since then, of course, they've taken it to a whole new level. I think what we are currently able to do with protein-based therapeutics is entirely sort of a consequence of that line of development. Whether that same line of development is also going to unlock other therapeutic modalities such as small molecules where the amount of data is unfortunately much less abundant and often locked away in the bowels of big pharma companies that are not eager to share.Daphne Koller (21:57):I think that question remains. I have not yet seen that same level of performance in de novo design of small molecule therapeutics because of the data availability limitations. Now people have a lot of creative ideas about that. We use DNA encoded libraries as a way of generating data at scale for small molecules. Others have used other approaches including active learning and pre-training and all sorts of approaches like that. We're still waiting, I think for a truly convincing demonstration that you can get to that same level of de novo design in small molecules as you can in protein therapeutics. Now as to how that affects us, I'm so excited about this development because our focus, as I mentioned, is the discovery of novel therapeutic hypotheses. You then need to turn those therapeutic hypotheses into actual molecules that do the work. We know we're not going to be the expert in every single therapeutic modality from small molecules to macro cycles, to the proteins to mRNA, siRNA, there's so many of those that you need to have therapeutic modality experts in each of those modalities that can then as you discover a target that you want to modulate, you can basically go and ask what is the right partner to help turn this into an actual therapeutic intervention?Daphne Koller (23:28):And we've already had some conversations with some modality partners as we like to call them that help us take some of our hypotheses and turn it into molecules. They often are very hungry for new targets because they oftentimes kind of like, okay, here's the three or four or whatever, five low hanging fruits that our technology uniquely unlocks. But then once you get past those well validated targets like, okay, what's next? Am I just going to go read a bunch of papers and hope for the best? And so oftentimes they're looking for new hypotheses and we're looking for partners to make molecules. It's a great partnership.Can We Slow the Aging Process?Eric Topol (24:07):Oh yeah, no question about that. Now, we've seen in recent times some leaps in drugs that were worked on for decades, like the GLP-1s for obesity, which are having effects potentially well beyond obesity didn't require any AI, but just slogging away at it for decades. And you previously were at Calico, which is trying to deal with aging. Do you think that we're going to see drug interventions that are going to slow the aging process because of this unique time of this exponential point we are in where we're a computer and science and digital biology come together?Daphne Koller (24:52):So I think the GLP-1s are an incredible achievement. And I would point out, I know you said and incorrectly that it didn't use any AI, but they did actually use an understanding of human genetics. And I think human genetics and the genotype phenotype statistical associations that they revealed is in some ways the biological precursor to AI it is a way of leveraging very large amounts of data, admittedly using simpler statistical tools, but still to discover in a data-driven way, novel therapeutic hypothesis. So I consider the work that we do to be a progeny of the kind of work that statistical geneticists have done. And of course a lot of heavy lifting needed to be done after that in order to make a drug that actually worked and kudos to the leaders in that space. In terms of the modulation of aging, I mean aging is a process of decline over time, and the rate of that decline is definitely something that is modifiable.Daphne Koller (26:07):And we all know that external factors such as lifestyle, diet, exercise, even exposure to sun or smoking, accelerates the aging process. And you could easily imagine, as we've seen in the GLP-1s that a therapeutic intervention can change that trajectory. So will we be able to using therapeutic interventions, increase health span so that we live healthy longer? I think the answer to that is undoubtedly, yes. And we've seen that consistently with therapeutic interventions, not even just the GLP-1s, but going backwards, I mean even statins and earlier things. Will we be able to increase the maximum life span so that people habitually live past 120, 150? I don't know. I don't know that anybody knows the answer to that question. I personally would be quite happy with increasing my health span so that at the age of 80, I'm still able to actively go hiking and scuba diving at 90 and 100 and that would be a pretty good place to start.Eric Topol (27:25):Well, I'm with you on that, but I just want to ask though, because the drugs we have today that are highly effective, I mean statins is a good example. They work at a particular level of the body. They don't have across the board modulation of effect. And I guess what I was asking is, do you foresee we will have some way to do that across all systems? I mean, that is getting to, now that we have so many different ways to intervene on the process, is there a way that you envision in the future that we'll be able to here, I'm not talking about in expanding lifespan, I'm talking about promoting health, whether it's the immune system or whether it's through mitochondria and mTOR, caloric, I mean all these different things you think that's conceivable or is that just, I mean companies like Calico and others have been chasing this. What do you think?Daphne Koller (28:30):Again, I think it's a thing that is hard to predict. I mean, we know that different organ systems age at different rates, and is there a single bio even in a single individual, and it's been well established that you can test brain age versus muscle health versus cardiovascular, and they can be quite different in the same individual, so is there a single hub? No, that governs all forms of aging. I don't know if that's true. I think it's oftentimes different. We know protein folding has an effect, you know DNA damage has an effect. That's why our skin ages because it's exposed to sun. Is there going to be a single switch that reverts it all back? Certainly some companies are pursuing that single bullet approach. I personally would probably say that based on the biology that I've seen, there's at least as much potential in trying to find ways to slow the decline in a way that specific to say as we discussed the immune system or correcting protein, misfolding dysfunction or things like that. And I'm not dismissing there is a single magic switch, but let's just say I think we should be exploring multiple alternatives.Eric Topol (29:58):Yeah, no, I like your reasoning. I think it's actually like everything else you said here. It makes a lot of sense. The logic is hard to argue with. Well, I think what you're doing there at insitro is remarkable and it seems to be quite distinct from other strategies, and that's not at all surprising knowing your background and your aspiration.Daphne Koller (30:27):Never like to follow the crowd. It's boring.Eric Topol (30:30):Right, and I do know you left an aging directed company effort at Calico to do what you're doing. So that must have been an opening for you that you saw was much more diverse perhaps, or maybe I'm mistaken that Calico is not really age specific in its goals.Daphne Koller (30:49):So what inspired me to go found insitro was the realization that we are making medicines today in a way that is not that different from the way in which we were making medicines 20 or 30 years ago in terms of the process by which we go from a, here's what I want to work on to here's a drug is a very much an artisanal one-off each one of them is a snowflake. There is very little commonality and sharing of insights and infrastructure across those efforts except in relatively limited tool-based ways. And I wanted to change that. I wanted to take the tools of engineering and data and machine learning and build a very different approach of going from a problem definition to a therapeutic intervention. And it didn't make sense to build that within a company that's focused on any single biology, not just aging because it is such a broad-based foundation.Daphne Koller (31:58):And I will tell you that I think we are on the path to building the thing that I set out to build. And as one example of that, I will use the work that we've recently done in metabolic disease where based on the foundations that we've built using both the clinical machine learning work and the cellular machine learning work, we were able to go from a problem articulation of this is the indication that we want to work on to a proof of concept in a translatable animal model in one year. That is pretty unusual. Admittedly, this is with an SiRNA tool compound. Nice thing about things that are liver directed is that it's not that difficult of a path to go from an SiRNA tool compound to an actual SiRNA drug. And so hopefully that's a fairly linear journey from there even, which is great.Daphne Koller (32:51):But the fact that we were able to go from problem articulation to a proof of concept in a translatable animal model in one year, that is unusual. And we're starting to see that now across our other therapeutic areas. It takes a long time to build a platform because you're basically building a foundation. It's like, okay, where's the fruit of all of that? I mean, you're building and building and building and nothing comes out for a while because you're building so much of the infrastructure. But once you've built it, you turn the crank and stuff starts to come out, you turn the crank again, and it works faster and better than the previous time. And so the essence of what we've built and what has turned into the tagline for the company is what we call pipeline through platform, which is we're building a pipeline of therapeutic interventions that comes off of a platform. And that's rare in biopharma, the only platform companies that really have emerged by and larger therapeutic modality platforms, things like Moderna and Alnylam, which have gotten really good at a particular modality and that's awesome. We're building a discovery platform and that is a fairly unusual thing.Eric Topol (34:02):Right. Well, I have no doubt you'll be discovering a lot of important things. That one sounds like it could be a big impact on NASH.Daphne Koller (34:14):Yeah, we hope so.Eric Topol (34:14):A big unmet need that's not going to be fixed by what we have today. So Daphne, it's really a joy to talk with you and palpable enthusiasm for where the field is going as one of its real leaders and we'll be cheering for you. I hope we'll reconnect in the times ahead to get another progress report because you're definitely rocking it there and you've got a lot of great ideas for how to change the life science medical world of the future.Daphne Koller (34:48):Thank you so much. It's a pleasure to meet you, and it's a long and difficult journey, but I think we're on the right path, so looking forward to seeing that all that pan out.Eric Topol (34:58):You made a compelling case in a short visit, so thank you.Daphne Koller (35:02):Thank you so much.Thanks for your subscription and listening/reading these posts.All content on Ground Truths—newsletter analyses and podcasts—is free.Voluntary paid subscriptions all go to support Scripps Research. Get full access to Ground Truths at erictopol.substack.com/subscribe

Ground Truths
Jim Collins: Discovery of the First New Structural Class of Antibiotics in Decades, Using A.I.

Ground Truths

Play Episode Listen Later Feb 13, 2024 28:52


Jim Collins is one of the leading biomedical engineers in the world. He's been elected to all 3 National Academies (Engineering, Science, and Medicine) and is one of the founders of the field of synthetic biology. In this conversation, we reviewed the seminal discoveries that he and his colleagues are making at the Antibiotics-AI Project at MIT.Recorded 5 February 2024, transcript below with audio links and external links to recent publicationsEric Topol (00:05):Hello, it's Eric Topol with Ground Truths, and I have got an extraordinary guest with me today, Jim Collins, who's the Termeer Professor of Medical Engineering at MIT. He also holds appointments at the Wyss Institute and the Broad Institute. He is a biomedical engineer who's been making exceptional contributions and has been on a tear lately, especially in the work of discovery of very promising, exciting developments in antibiotics. So welcome, Jim.Jim Collins (00:42):Eric, thanks for having me on the podcast.Eric Topol (00:44):Well, this was a shock when I saw your paper in Nature in December about a new structure class of antibiotics, the one from 1962 to 2000. It took 38 years, and then there was another one that took 24 years yours, the structural antibiotics. Before I get to that though, I want to go back just a few years to the work you did published in Cell with halicin, and can you tell us about this? Because when I started to realize what you've been doing, what you've been chipping away here, this was a drug you found, halicin, as I can try to understand, it works against tuberculosis, c. difficile, enterobacter that are resistant, acinetobacter that are resistant. I mean, this is, and this is of course in mice models. Can you tell us how did you make that discovery before we get into I guess what's called the Audacious Project?Jim Collins (01:48):Yeah, sure. It's actually a fun story, so it is origins go broadly to institute wide event at MIT, so MIT in 2018 launched a major campus-wide effort focused on artificial intelligence. The institute, which had played a major role in the first wave of AI in the 1950s, 1960s, and a major wave in the second wave in the 1980s found itself kind of at the wheel in this third wave involving big data and deep learning and looked to correct that and to correct it the institute had a symposium and I had the opportunity to sit next to Regina Barzilay, one of our faculty here at MIT who specializes in AI and particularly AI applied to biomedicine and we really hit it off and realized we had interest in applying AI to drug discovery. My lab had focused on antibiotics to then close to 15 years, but primarily we're using machine learning and network biology to understand the mechanism of action of antibiotics and how resistance arise with the goal of boosting what we already had, with Regina we saw there was an opportunity to see if we could use deep learning to get after discovery.(02:55):And notably, as you kind of alluded in your introduction, there's really been a discovery void and the golden age of discovery antibiotics was in the forties, fifties and sixties before I was born and before you had the genomic revolution, the biotech revolution, AI revolution. Anyways, we got together with our two groups, and it was an unfunded project and we kind of cobbled together very small training set of 2,500 compounds that included 1,700 FDA approved drugs and 800 natural compounds. In 2018, 2019, when you started this, if you asked any AI expert should you initiate that study, they would say absolutely not, there's going to be two big data. The idea of these models are very data hungry. You need a million pictures of a dog, a million pictures of a cat to train a model to differentiate between the cat and the dog, but we ignored the naysayers and said, okay, let's see what we can do.(03:41):And we apply these to E. coli, so a model pathogen that's used in labs but is also underlies urinary tract infections. So it's a look to see which of the molecules inhibited growth of the bacteria as evidence for antibacterial activity and we could have measured and we quantified each of their effects, but because we had so few compounds, we just discretized instead, if you inhibited at least 80% of the growth you were antibacterial, and if you didn't achieve that, you weren't antibacterial zero in ones. We then took the structure of each molecule and trained a deep learning model, specifically a graphical neural net that could look at those structures, bond by bond, substructure by substructure associated with whatever features you look to train with. In our case, making for good antibiotic, not for good antibiotic. We then took the train model and applied it to a drug repurposing hub as part of the Broad Institute that consists of 6,100 molecules in various stages of development as a new drug.(04:40):And we asked the model to identify molecules that can make for a good antibiotic but didn't look like existing antibiotics. So part of the discovery void has been linked to this rediscovery issue we have where we just keep discovering quinolones like Cipro or beta-lactams like penicillin. Well, anyways, from those criteria as well as a small tox model, only one molecule came out of that, and that was this molecule we called halicin, which was named after HAL, the killing AI computer system from 2001 Space Odyssey. In this case, we don't want it to kill humans, we want it to kill bacteria and as you alluded, it turned out to be a remarkably potent novel antibiotic that killed off multi-drug resistant extensively drugs, a pan-resistant bacteria went after to infections. It was affected against TB, it was affected against C. diff and acinetobacter baumannii and acted to a completely new mechanism of action.(05:33):And so we were very excited to see how AI could open up possibilities and enable one to explore chemical spaces in new and different ways. We took them model, then applied it to a very large chemical library of 1.5 billion molecules, looked at a subset of about 110 million that would be impossible for any grad student, any lab really to look at that experimentally but we looked at it in a model computer system and in three days could screen those 110 million molecules and identified several new additional candidates, one which we call salicin, which is the cousin of halicin that similes broad spectrum and acts to a novel mechanism of action.Eric Topol (06:07):So before we go further with this initial burst of discovery, for those who are not used to deep neural networks, I think most now are used to the convolutional neural network for images, but what you use specifically here as you alluded to, were graph neural networks that you could actually study the binding properties. Can you just elaborate a little bit more about these GNN so that people know this is one of the tools that you used?Jim Collins (06:40):Yeah, so in this case, the underlying structure of the model can actually represent and capture a graphical structure of a molecule or it might be of a network so that the underlying structure itself of the model will also look at things like a carbon atom connects to an oxygen atom. The oxygen atom connects to a nitrogen atom and so when you think back to the chemical structures we learned in high school, maybe we learned in college, if we took chemistry class in college, it was actually a model that can capture the chemical structure representation and begin to look at sub aspects of it, associating different properties of it. In this case, again, ours was antibacterial, but it could be toxic, whether it's toxic against a human cell and the model, the train model, the graph neural model can now look at new structures that you input them and then make calculations on those bonds so a bond would be a connection between two atoms or substructures, be multiple bonds, interconnecting multiple atoms and assign it a score. Does it make, for example, in our case, for a good antibiotic.Eric Topol (07:48):Right. Now, what's also striking as you set up this collaboration that's interdisciplinary with Regina, who I know of her work through breast cancer AI and not through drug discovery and so this was, I think that new effort and this discovery led to this, I love the name of it, Audacious Project, right?Jim Collins (08:13):Right. Yeah, so a few points on the collaboration then I'll speak to Audacious Project. In addition to Regina, we also brought in Tommi Jaakkola, another AI faculty member and marvelous colleague here at MIT and really we've benefited from having outstanding young folks who were multilingual. We had very rich, deep trained grad students from ML on Regina and Tommi's side who appreciated the biology and we had very richly, deeply trained postdocs, Jon Stokes in particular from the microbiology side on my side, who could appreciate the machine learning and so they could speak across the divide. And so, as I look out in the next few decades in this exciting time of AI coming into biomedicine, I think the groups will make a difference of those that have these multilingual young trainees and two who are well set up to also inject human intelligence with machine intelligence.(09:04):Brings the Audacious Project. Now, prior to our publication of halicin, I was invited by the Audacious Project to submit a proposal, the Audacious Project is a new philanthropic effort run by TED, so the group that does the TED Talks that's run by Chris Anderson, so Chris had the idea that there was a need to bring together philanthropists around the world to go for a larger scale in a collective manner toward audacious projects. I pitched them on the idea that we could use AI to address the antibiotic resistance crisis. As you can appreciate, and many of your listeners can appreciate that we're doomed if we don't actually address this soon, in that the number of resistance strains that are in our communities, in our hospitals has been growing decade upon decade, and yet the number of new antibiotics being developed and approved has been dropping decade upon decade largely because the antibiotic market is broken, it costs just as much to develop an antibiotic as it does a cancer drug or a blood pressure drug.(09:58):But antibiotic you take once or maybe over the course of three to five days, blood pressure, drug cancer drug you might take for months if not for the rest of your life. Pricing points for antibiotics are small dollars, cancer drugs, blood pressure drugs, thousands if not hundreds of thousands. We pitched this idea that we can maybe turn to AI and use the power of AI to address this crisis and see if we could use our wits to outcompete the genes of superbugs and Chris and his team really were taken with this, and we worked with them over the course of nine months and learned how to make the presentations and pulled this together. Chris took our pitches to a number of really active and fantastic philanthropists, and they got behind us and gave us a good amount of money to launch what we have now called the Antibiotics-AI Project at MIT and in conjunction with it and also using funding from the Audacious Project, we've launched a nonprofit called Phare Bio which is French for lighthouse, so our notion is that antibiotics are public good that we need to get behind his community and Phare Bio, which is run by Akhila Kosaraju, she's the CEO and President, is the mission of which is to take the most promising molecules out of the antibiotics AI project and advance them towards the clinic through partnerships with biotech, with pharma, with other nonprofits, with nation states as needed.Eric Topol (11:18):Well, before I get to the next chain of discovery and as explain ability features, which we all like to see when you can explain stuff with AI, did halicin because of this remarkable finding, did it get into clinical trials yet?Jim Collins (11:36):It's being advanced quite nicely and aggressively by Phare Bio. So Phare Bio is in discussions with the Department of Defense and BARDA, and actually on an interesting feature of halicin is that it acts like a flash bomb in the gut, meaning that when delivered orally to the gut, it only acts briefly and very quickly in a fairly narrow spectrum manner as well, so that it can go after pathogens sparing the commensals. One of the challenges our US military face is one of the challenges many militaries face are gut issues when soldiers are first deployed to a new location, and it can disable the soldiers for three to four weeks. And so, there's a lot of excitement that halicin might be effective as a treatment to help prevent gut dysbiosis resulting from new deployments.Eric Topol (12:27):Oh wow. That's another application that I would never have thought of. Interesting, so you then moved on to this really big report in Nature, which I think this is now involving a transformer model as I recall. So you can explain the difference and you made a discovery from a massive, again, number of potential compounds to staph aureus resistant methicillin resistant agents that were very potent in vivo. So how did you make this big jump? This is a whole new structural class of antibiotics.Jim Collins (13:11):Yeah, so we made this jump, this was an effort led by Felix Wong, who's a really talented postdoc in my lab, and we got intrigued of to what extent could we expand the utility of AI and biology of medicine. As you can appreciate that, that many of our colleagues are underwhelmed by the black box nature of many AI models and by black box I mean that when you train your model, you then largely use it as a filter where you'll provide the model with some input. You look at the output and the outputs, what's of interest to you, but you don't really understand in most cases, what guided the model to make the prediction of the output that you look at and that can be very unsatisfactory for biology, interested in mechanism. It can be very unsatisfactory for physicians interested in understanding the underlying disease mechanism.(13:57):It can be unsatisfactory for biotech and drug discoveries that want to understand how drugs act and what maybe underlies meaningful structural features. So with Felix, we decided it'd be interesting if you could open up the box. So could you look inside the model to see what was being learned? We are able to open up, in this case actually, we primarily focused on graph neural nets. We now have a new piece we're just about to submit on transformers, but in this case, we could open up and look to see what were the rationales, what were the chemical substructures that the model was pointing to in each compound that was leading to the high prediction that it could make for a good antibiotic and these rationales we then used as hooks, I should notably say, that we were able to identify the rationales from these large collections using algorithms that would develop by DeepMind as part of their AlphaGo program.(14:51):So AlphaGo was developed by DeepMind as a deep learning platform to play and win go the ancient Asian board game and we used similar approaches called Monte Carlo Tree Search that allowed us to identify these rationales that we effectively then used as hooks and kind of organizing hooks on screens where you can envision or appreciate that most exposed screens give you one-offs. This molecule does what you want and silico screens are similarly designed with these rationales. We could use them as organizing hooks to say, ah, these compounds that are identified as making for very good antibiotics all have the same substructure and thus they likely in the same class and act in similar mechanism and this led us to identify five novel classes, one of which we highlighted in this piece that acts very effectively against MRSA, so methicillin-resistant staph aureus you alluded, which is probably the most famous of the antibiotic resistant pathogens that we even outside infectious are quite familiar with. It be devil's athletes, so NFL players are often hit with MRSA, whether from scraping their limbs on AstroTurf or from actually surgeries to say, for example, correct something at their knee. This new class had great efficacy in animal models, again, acting through a new mechanism.Eric Topol (16:12):Will you bring that forward like halicin through this same entity?Jim Collins (16:17):Yes. We've now provided the molecules to Phare Bio and they're digging in to see which of these might be the most exciting and interesting to advance clinically.Eric Topol (16:26):I mean, it's amazing because this area is so neglected. Maybe you can help explain, since we're talking about existential threats as we get more and more resistant antibiotics and the biopharma industry is basically not into this and it relies on the work that you've been doing perhaps or other groups, I don't know of any that are doing more than you. I mean, it's incredible to me. Is it just because of the financial aspects that there's no business in the life science industry?Jim Collins (17:03):It's an interesting challenge. So I've thought about it. I really haven't come up with a great solution yet, but I think you've got multiple factors at play. One is that I think all of us, every one of your listeners has lost someone to a bacterial infection, but in most cases you don't realize you lost them to a bacterial infection. It might be that your elderly relative went into the hospital with a condition but acquired hospital-based infection and died subsequently from that and happened quite quickly. Another cases, again, it's secondary. Notably, during the pandemic, one out of seven individuals hospitalized for Covid had a bacterial infection and 50% of those who died had a bacterial co-infection. And noted by going back to the Spanish flu of over a hundred years ago. It was as deadly as it was because we didn't have antibiotics and most of the folks that died had a bacterial co-infection.(17:56):So you have this in the backdrop, you then have that, nobody's kind of gotten behind it, so we don't have any major foundation addressing antibiotic resistance. There are no charity walks, there are no charity runs, there is no month, there is no color, there are no ribbons, there are no celebrity behind it, there's just not known so it hasn't captured the public's imagination. AThen you come with that, this backdrop of the broken market where I said shared, it's really expensive to develop a new antibiotic, but if you develop a new antibiotic, the tendency now will be to shelve it until it's desperate so now even the young companies that had developed and gotten an antibiotic through to approval often went bankrupt because the model, the market couldn't provide them with revenue to go after the next one or sustain their efforts. And so you have pharma biotech jumping out. I think we need two-pronged effort going forward. I do think we need nation states to come forward and get behind this, and I think we increasingly need philanthropists to come forward and go after it. As I share your term of existential threat, I think if you speak with most educated individuals, antibiotic resistance broadly, antimicrobial resistance will be on everyone's existential threat list but notably of that list, it's the cheapest one that can be solved.Eric Topol (19:09):Well, you're showing that you've got the most extraordinary candidates that have been found in decades. So that says a lot right there.Jim Collins (19:18):Important step, yeah. So I think we've got additional innovation needed in the models to address this, and until we have that address, then this interesting discoveries we and others are making will not get to patients. So we need to have that additional next step to close this gap.Eric Topol (19:32):Now, obviously this has relied on AI and the progress that's occurring in AI to enable some of your work. I am fascinated by the use of AlphaGo. Most times we hear about using AlphaFold2, but you actually use AlphaGo the original game DeepMind work but there also was the progress of from deep neural networks to transformer models and your ability now to basically exemplify what can be achieved in drug discovery using the progress in multimodal AI. Is this something that is making a difference for you and your group?Jim Collins (20:13):It is, it's huge. I think it's very early in terms of the introduction to these new tools extensively within drug discovery. Machine learning has been used for over two decades, both supervised learning and unsupervised learning. Now we're seeing groups coming in for the deep learning efforts. It's largely data-driven so in fact, with the exception of sequences, most of drug discoveries not yet big data in the big data phase, but it's beginning to change. It's truly been transformative for us, so we've used graph neural nets primarily for our discovery efforts. We're now beginning to incorporate language models as multimodal models along with the graph neural nets as well as to see to what extent pre-trained language models. For example, mobile form from IBM, which was trained on PubChem and the ZINC database could be fine-tuned with small amounts of training data, screening data from a resistant organism.(21:06):Third, and I made an indirect allusion already, we've been looking at using transformers and genetic algorithms in older form of AI tech for design of novel antibiotics so we've been now looking to see using fragments as a starting base, using trained models to build out novel antibiotics that can then be de novo designed. One of the big challenges in that space is how do you synthesize these molecules? So you have both the challenge of can you come up with a small number of steps that enable you to synthesize? And second is could you find somebody to synthesize them? And each of those remains very big challenges. My faculty colleague here at MIT, Connor Coley's probably one of the world leaders, easily, he's in using AI to calculate the synthesized ability of a molecule, but we still have gaps in that we don't have the community resources to make most of what we come up with.Eric Topol (21:58):Well, one of the features of large language models that David Baker at the Protein Design Institute exploited is its ability to hallucinate and come up with proteins that don't exist. Can you do the same thing in your design of antibiotic candidate molecules in a way that is not worrying about the synthesis, but just basically the hallucinatory behavior of large language models?Jim Collins (22:28):It's interesting, so yes and so David's work is marvelous and we're big fans and longtime friends of his work. Yes, so we've been driving these models truly to do de novo synthesis. So based on what has been learned, can you put together molecules that one's never seen before? We're doing it quite successfully. It becomes interesting from the hallucination in that it comes out really more of these models making stuff up and ours it's really more directing the hallucinations, right? Really looking to see can we harness the imagination of the models in order to move them forward in very creative design manners.Eric Topol (23:08):Yeah, I mean, I think most people have a negative connotation of hallucinations, but these are the smart variety potentially. This in many ways you could say there's so much crowded interest in the drug discovery AI world, but what you're doing now seems to be setting the pace in many respects for others to follow such remarkable advances in a short time. By the way, we'll link to that TED talk you gave in April 2020, where in seven minutes you went over what you're doing of course and who would've, and that was in 2020 that where you'd be three or four years later, and that was what you're going to do over the next seven years with seven new classes of new antibiotics. Now, before we wrap up, it isn't just that you're an AI antibiotic, you and your team antibiotic discover and doing compressing in time, what has taken decades that you're doing in months, but also you are a father of figure in the field of synthetic biology and I wonder if you, before we wrap up, can explain not only what synthetic biology is since a lot of people don't really know what that means, but how does that dovetail with your efforts in what we've been discussing?Jim Collins (24:33):Yeah, thanks. So synthetic biology is a relatively new field that's bringing together engineers with biologists to use engineering principles to model design and build synthetic gene networks and other molecular components that can be used to rewire and reprogram living cells and cell-free systems, endowing them with novel functions of a variety of applications. So the circuits, these programmable cells are impacting broad swats of the economy from food and water to health and sustainability of bioenergy to human health. Our focus is primarily human health and we've been advancing the idea that you can reprogram bacteria to detect and treat bacterial infections. So we've shown you can use this to go after cholera, we've shown you can use is to prevent antibiotic induced gut dysbiosis. We've also used synthetic biology to create whole new classes of diagnostics. For example, paper-based ones using RNA sensors for Ebola, for Zika and for Covid.(25:33):How it dovetails with what we talked about is that I think there's a great opportunity now to turn to AI to expand synthetic biology, both expanding the number of parts we have to re-engineer living systems as well as to better infer design principles that can be used to reprogram rewire living systems. We're beginning to advance, we're not yet at the SynBio AI project phase, but very early efforts and David's dominating the protein space and we and others are beginning to now movement to the RNA space. So to what extent can we create large libraries of RNA components, train language-based models, structure-based models that can both predict RNA structure more critically predict RNA function and as you know from your marvelous work and what's happening is that it's the exciting age of RNA of getting after RNA therapeutics, be it mRNA or CRISPR related and we still need to get better at our ability to design those therapeutics with certain functions in mind, and we think AI is going to help get us there faster.Eric Topol (26:34):Well, speaking of that, there was a paper this week in Cell by McCafferty and colleagues, and one of the sentences that struck me, we are standing on the cusp of a new era of biology, where the integration of multimodal structural datasets with multiscale physics-based simulation will enable the development of visible, virtual cells. This is yet another lineage or direction of where we're headed with AI, but this fusion of the advances that are occurring right now in biology with AI that extend in many different directions, it's so exciting and you are basically nailing it. I mean, you're putting points on the board, Jim, and I just have to say, I'm blown away by what you've been accomplishing in a time space that's so incredibly compressed.Jim Collins (27:40):Oh, well thanks. Well, you think back to the early days of molecular biology and physicists like Francis Crick and Max Delbrück played huge pioneering roles and then in the second wave in the eighties or so, you had other physicists like Walter Gilbert playing big roles. I do think physicists computer scientists are starting now to play big roles in this next phase where we need tools like AI in order to really grapple with and harness the complexity, both the biology and the chemistry that underlies living cells. They can kind of expand our intuitions both to understand and to really control these systems for good going forward.Eric Topol (28:15):Well, you're doing it and we're be cheering for the success of these drugs that you've come up with in the clinical trials as they go forward because they look so remarkably promising. You even highlighted ways that I wouldn't have envisioned where they could make a difference, so we'll follow your work, you and your colleagues with great interest. Thanks so much for joining,Jim Collins (28:37):Eric, thanks for having me. Enjoyed our conversation.******************************************************************************Thanks for listening to Ground Truths. Please share if you found this podcast informative.Full video interview will post here Get full access to Ground Truths at erictopol.substack.com/subscribe

Ground Truths
Liv Boeree: On Competition, Moloch Traps, and the A.I. Arms Race

Ground Truths

Play Episode Listen Later Jan 13, 2024 36:26


A snippet of our conversation belowTranscript of our conversation 8 January 2023, edited for accuracy, with external linksEric TopolIt's a pleasure for me to have Liv Boeree as our Ground Truths podcast guest today. I met her at the TED meeting in October dedicated to AI. I think she's one of the most interesting people I've met in years and the first time I've ever interviewed a professional poker player who has won world championships and we're going to go through that whole story, so welcome Liv.Liv BoereeThanks for having me, Eric.Eric TopolYou have an amazing background having been at the University of Manchester in physics and astrophysics. Back around in 2005 you landed into the poker world. Maybe you could help us understand how you went from physics to poker.From Physics to PokerLiv BoereeAh, yeah. It's a strange story, I graduated as you said in 2005 and I had student debt and needed to get a job I had plans to continue in academia. I wanted to do a masters and then a PhD to work in astrophysics in some way, but I needed to make some money, so I started applying for TV game shows and it was on one of these game shows that I first learned how to play poker. They were looking for beginners and the loose premise of the show was which personality type is best suited for learning the game and even though I didn't win that particular show we were playing for a winner take all prize of £100,000 which was a life changing amount of money had I won it at the time. It was like a light bulb moment just the game and I've always been a very competitive person, but poker in particular really spoke to my soul. I always wanted to play in games where it was often considered a boy's game and I could be a girl beating the boys at their own game. I hadn't played that much cards in particular, but I just loved any game that was very cutthroat which poker certainly is. From that point onwards I was like you know what I'm going to put physics on hold and see if I can make it in this poker world instead and then never really looked back.Eric TopolWell, you sure made it in that world. I know you retired back in about 2019, but that was after you won all sorts of world and European championships and beat a lot of men. No less. What were some of the things that that made you such a phenomenal player?Liv BoereeThe main thing with poker is well the most important ingredient if you really want to make it as a professional is you have to be extremely competitive. I have not met any top pros who don't have that degree of killer instinct when it comes to the game that doesn't mean it means you're competitive in everything else in life, but you have to have a passion for looking someone in the eye, mentally modeling them, thinking how to outwit them and put them into difficult situations within the game and then take pleasure in that. So, there's a certain personality type that tends to enjoy that. The other key facet is you have to be comfortable with thinking in terms of probability. The cards are shuffled between every hand so there's this inherent degree of randomness. On the scale of pure roulette which is all luck no skill to a game like chess which has almost no luck (close to 100% skill as you can get) poker lies somewhere in the middle and of course the more you play the bigger the skill edge and the smaller the luck factor. That's why professionals can exist. It's a game of both luck and skill which I think is what makes it so interesting because that's what life is really, right? We're trying to get our business off the ground, we're trying to compete in the dating market. Whatever it is. We're doing our strategy, the role of luck life can throw your curved balls that you can do everything right and still things don't go the way you intended them to or vice versa, but there's also strategies we can employ to improve our chances of success. Those are the sort of skills that poker players particularly this idea of gray scale probabilistic thinking that you really have to hone. I've always wondered whether having a background in science or at least you know studying having ah a scientific degree helped in that regard because of course the scientific method is about understanding variables and minimizing uncertainty as much as possible and understanding what cofounding factors can bias the outcome of your results. Again, that's always going on in a poker player's mind, you'll have concurrent hypotheses. Oh, this guy just made a huge bet into me when that ace came out, is it because he actually has an ace or is it because he's pretending to have an ace and so you've got to weigh all the bits of information up as unbiased as possible in an unbiased way as possible to come to a correct conclusion. Even then you can never be certain, so this idea of understanding biases understanding probabilities I think that's why a lot of top poker players have backgrounds in scientific degrees a very good friend of mine he had a PhD in in physics. Especially over time poker has become a much more sort of scientific pursuit. When I first allowed to play it was very much a game of street smarts and intuition in part because we didn't have the technological tools to understand really the mechanics of the game as well. You couldn't record all your playing data if you were playing just in a casino unless you were writing down your hands. Otherwise, this information wasn't getting stored anywhere, but then online poker came along which meant that you could store all this data on your laptop and then build tools to analyze that data and so the game became a much more technical scientific pursuit.Eric TopolThat actually gets to kind of the human side of poker. Not the online version —especially since we're going to be mainly talking about AI the term “poker face” the ability to bluff is that a big part of this?Liv BoereeOh, absolutely. You can't be a good poker player if you don't ever bluff because your opponents will start to notice that so that means you're only ever putting your money on the line when you have a good hand so why would they ever pay you off. The point of poker is to maximize the deception to your opponents so you have to use strategies where some of the time you might be having a strong hand and some of the time you might be bluffing where you might have a weak hand. The key is this is getting into the technical sort of game theory side of it, but you want to be doing these bluffs versus what we call value bets as in betting with a good hand with the right to sort of frequency. You need these right ratios between them, so bluffing is a very core part of the game and yes having a poker face obviously helps because you want to be as inscrutable to your opponents as possible. At the same time online poker is an enormously popular game where you can't see your opponent's faces.Eric TopolRight, right.Liv BoereeYet you can still bluff which could actually lead us into this topic of AI because now the best players in the world are actually AIs.Eric TopolWell, it's interesting because it takes out that human component of being able to bluff and it may be good for people who don't have a poker face. They can play online poker and be good at it because they don't have that disguise if you will.Liv BoereeRight.Game Theory and Moloch TrapsEric TopolThat gets me to game theory and a big part of the talk you gave at the TED conference about something that I think a lot of the folks listening aren't familiar with— Moloch traps. Could you enlighten us about that because what the talk which of course we'll link to is so illuminating and apropos to the AI landscape that we face today?Liv BoereeYeah, I'll leave it for people to go and watch the TED talk because that's going to be much more succinct than me to explain the backstory of how it came to be called a Moloch trap because Moloch is a sort of biblical figure a demon and it seems strange that you would be applying such a concept to what's basically a collection of game theoretic incentives, but essentially what a Moloch trap is the more formal name for it is a multipolar trap which some of the listeners may be familiar with. Essentially a Moloch trap or a multipolar trap is one of those situations where you have a lot of competing different people all competing for 1 particular thing that say who can collect the most fish out of a lake. The trap occurs when everyone is incentivized to get as much of that thing as possible so to go for a specific objective, but if everyone ends up doing it then the overall environment ends up being worse off than before. What we're seeing with plastic pollution – It's not like packaging companies want to fill the oceans with plastic. They don't want this outcome. It doesn't make them look good. They're all caught on the trap of needing to maximize profits and external and one of the most efficient ways of doing that is to externalize costs outside of their P&L by using cheap packaging that perhaps ends up in the lakes or the oceans and if everyone ends up doing this but well basically you're a CEO in a decision of I could do the more expensive selfless action, but if I don't do that then I know that my competitors are going to do the selfish thing. I might as well do it anyway because the world's going to end up in roughly the same outcome whether I do it or not because everyone ends up adopting this mindset they end up being trapped in this bad situation. Another way of thinking of it is if you're watching a football at a stadium or a concert and before the show starts everyone's sitting down, but then a few people near the front want to get a better view so they stand up. That now forces the people right behind them to make a decision. I don't really want to block the people behind me but I can't see anymore, so now I have to stand up. The whole thing sort of falls down until everyone is now stuck standing for the rest of the show. No one really actually has a comparative advantage anymore. No one's got a particularly better view than before because it's just the same that now everyone's standing, but overall everyone is net worse because now they have to stand for the whole thing and there's no easy way for everyone to coordinate. A Moloch trap is the result of a competitive landscape where the individual short-term incentives push people to take actions that from a God's eye view from the whole from the whole system's perspective makes everyone worse off than before and because there are so many people it's too hard for everyone to coordinate and really go back to the state before so it creates these kind of arms race dynamics these tragedy of the commons. These are all a result of these Moloch traps which is essentially just another name for bad short-term incentives that hurt the whole overall.Eric TopolNo, that's great. You know someday you should write the book on competition because you have a deep understanding of that. You understand the whole range from healthy, sometimes we call managed competition. The kind that brings out the best in people to unhealthy, I might even call reckless competition, as I mentioned when we were together. Now let's go to as you say arms race nuclear, there's so many examples of this but in the AI world you were polite during your talk because you referred to one of the major CEOs without actually mentioning his name about making another one of the large AI companies titans. Make them dance as part of the competition and I think you came on to something very important which is we're interested in the safety of AI. As we move towards what seems to be inevitable artificial general intelligence, we'll talk more about that there's certainly concerns at least by a significant perhaps plurality of people that this is or can be dangerous. The fact that these this arms race if you will of AI is ongoing. What are your thoughts about that? How seriously bad is this competition?“I hope with our [ChatGPT] innovation they will want to come out and show that they can dance. I want people to know we made them dance”—Satya Nadella, Microsoft CEO, on GoogleThe A.I. Arms RaceLiv BoereeIf it were the case that building powerful AI systems that it was trivially easy to align them with the best of humanity and minimize accidents then we would want more competition because more competition would encourage everyone to go faster and faster and we would want to get to that point as fast as possible. However, if we are in the world where it is not trivially easy to align powerful AI systems with what we want and make sure that they could not do reward hacking or create some kind of unintended consequence but fall into the wrong hands easily you know into the hands of people who want to use it for the various purposes then we wouldn't want as much competition as possible because that would make everything go faster. The thing is when your trajectory is pointing in the wrong direction the last thing you want is more speed, right? I have not yet seen a compelling argument that the current trajectory is sufficiently aligned with what is good for humanity and certainly not for the biosphere that we rely upon. This is not just with AI I mean it's the wider sort of techno capital system in many ways. Obviously, capitalism has been wonderful for us. We are living here speaking across the airwaves in a warm, comfortable environments. We have good food and God bless capitalism for providing us all with that. At the same time there are clearly externalities piling up in our biosphere whether it's through climate change whether it's through pollution and so on and so forth. One particular thing about AI is that if we're going to hack the process of intelligence itself it means intelligence by definition ubiquitous. It can be used to increase the process. It can be but can be used to make more of whatever you want to do. You can do it more efficiently faster more effectively. If you think the system is aligned with exactly what we want then that's a good thing, but I see lots of evidence of the ways it is not sufficiently aligned and I'm very concerned that if we're not thinking in more depth about which goals we should be optimizing for in the first place then we're going to just keep blindly going forward as fast as possible and create a bunch of unintended consequences or even in some cases intended ones with as I said it falling into the wrong hands of people.Eric TopolYou're right on it, I think the issue is how to get the right balance of progress versus guardrails.Liv BoereeYou mentioned this particular CEO that I quoted in the TED talk again I won't mention him by name, but anyone can go Google he basically said I want people to know we made our competitor dance and the reason why that resonated with me so much is because it reminded me of my old self in my early 20's when I first learned to play poker and as I said you need this to win at poker which is by definition a 0 sum game you need this cutthroat almost bordering on psychopathic type willingness to like go after your opponents and get them by the throat metaphorically speaking to get their money, right? That mindset can be very useful when you're playing a game where the boundaries are clearly defined. Everyone is opting in and there's minimal externalities and harms to the wider world, but if you're using that same mindset to build something as powerful as artificial general intelligence which we don't know whether that's no one's certain whether it's going to be trivially easy whether it's impossible whether it will be controllable, whether it be completely uncontrollable, whether we're making a new species, whether it's just another tool or technology. No one really knows, but what I do know is that that is not the mindset or the impetus we want of the leaders building such incredibly powerful tools. Tools that couldn't be used to make them more powerful than any human and ever in history, tools that they may even lose control of themselves, we don't know That's really what alarms me the most is that first of all, we might have leaders who have that mindset in the first place but again even if they were all very wise and positive some mindset they weren't out there trying to just compete against each other and it's like pardon my French but like dick swinging contest even if they were perfectly enlightened they're still trapped in this difficult game theoretic dilemma this Moloch trap. I want to let my team build this safely as a priority, but I know that the other guys might not do it as safely, so if I go too slowly, they're going to get there ahead of me and deploy their really powerful systems first, so I have to go faster myself. Again, what suffers if everyone's trying to go as fast as possible the slow boring stuff like safety checks like evaluation testing etc. This is the real the fundamental nature of the problem that we need to be having more honest conversations about it's twofold. It's the mindset of the people building it. Now again some of them I know some of them personally, they're amazing people. Some of these CEOs I deeply respect and I think they understand the nature of the problem and they're really trying to do their best to not fall into this Moloch mindset, but there are others who truly are just wanting to I don't know solve some childhood trauma thing that they have through. I don't want to psychoanalyze them too much but whatever's going on there plus you have the game theoretic dilemma itself and we need to be tackling both of these because we're building something as powerful. Whether again it's AGI or not even narrow AI systems. LLMs are getting increasingly generalizable multimodal, they're starting to encroach into your area of expertise into biology I was reading about which I can't remember which chatbot it was but there's a really cool paper you guys could link to on archive talking about whether LLMs could be used to democratize access to use of technology like DNA synthesis. Is that something we want no safeguards on because that's sort of what we're careening towards and there are people actively pushing to be like no, that you can't deny anyone access to information. Google right now if you search if you Google how do I build a bomb. There's it's something like they just put it on front page. That information they don't give you the step-by-step recipe and yes, okay, you could go and get your chemistry degree and get some books and figure out how to build a bomb, but the point is there's a high barrier to entry and as these LLMs become more generalizable and more and more accessible we have this problem where the barrier to entry for anyone who is really murderous or omnicide or a terrorist mindset these are going to be falling into the hands of more and more of these people it's going to be easier and easier for them to actually get hold of this information and there is no clear answer of what to do with this because how do we strike a balance between allowing free flow of information so that we're not stifling innovation which it also would be very terrible or even worse creating some kind of centrally  controlled top-down tyrannical control of the internet saying who can read what that's an awful outcome, but then in and the other direction we can't have it widely available to but people like ISIS or whoever how to build a pathogen that makes COVID look like the common cold. How do we navigate this terrain where we don't end up in tyranny or self-terminating chaos. I don't know but that those are the problems. That's all we have to figure out.Effective AltruismEric TopolThe idea that you conceptualize what's going on in AI as a Moloch trap I think is exceedingly important but now you also cited that there were a few companies that deserved at least credit with their words such as OpenAI where they're putting 20% of their resources towards alignment and Anthropic as well as DeepMind that's done a lot of great work with AlphaFold2 and life science, but as you said these are just words we haven't seen that actually translated into action. As we go forward one of the terms tossed around a lot that also was surrounding Sam Altman's temporary dismissal and brought back to OpenAI is effective altruism What is EA?Liv BoereeThere's two ways of thinking about EA. There's the body of ideas, the principles which to summarize them as quickly as I can and as best as I understand them would be there are many different problems on earth there are only finite resources in terms of intellectual capital and actual capital in order to be spent on fixing these problems and so because of that we need to triage and figure out where is the most effective place to spend our time and money in order to solve these problems. How do we rank these problems in terms of scale and electiveness and so on and then how do we deploy our resources as efficiently and as effectively as possible in order to achieve these big problems. So those are the sort of principles and then. Out of those principles over time sprang up a community of people who adhere to those principles and in part have been very aligned with that I started a fundraising organization alongside some other poker players back in 2014 following these principles and encouraging poker players basically to donate to a range of different charities. Most of which were to do with because it if you want to save a life on average the most cost-effective way to do that averages out to people in sub-Saharan Africa dying from extreme poverty related illnesses particularly malaria turns out that providing bed nets on average will save a life for about $5000 from malaria there's vitamin A supplementation etc. That was my involvement I'm going off track, but that was my involvement in EA, but basically out of that sprang a movement and as that movement evolved then it became there were sort of different categories because it's very hard to concretely go well that's definitely problem number one because you have some which are well right now we know that there are this many people dying per day needlessly from this particular tropical disease or you could zoom out and go okay but over the next thirty years these are the kind of risks that civilization is facing so actually if we give that a 10 % probability then that could be 10 % of this many people so actually this is the biggest issue or you could go I care more about I don't just care about human lives I care about animal lives in which case then you. Then math would lead you to conclude that factory farming is actually the biggest issue particularly the amount of needless suffering that is going on factory farms like there's small rules changes that could be made in the way that these animals are treated during slaughter or raised pigs in gestation crates. Small changes there could have a huge positive impact on billions upon billions of animals' lives per year so out of these ideas sprung sort of different subcategories of EA of people focusing on different areas depending on what their personal calculations may led them to and of the category of sort of risks to humanity AI if you follow the if appreciate the game theoretic dilemmas that are going on and see just how fast things are going and how much safety is fallen by the wayside there's strong arguments that AI becomes a very important topic. Effective altrurists became from what I can see very concerned about AI long before almost the rest of the world did and so they became I guess kind of synonymous with the idea of AI safety measures and then I don't really understand well I mean there's reasons why I guess that that seems like the way the Sam Altman thing came up was because two members of the board have been associated with AI safety and effective altruism and they were 2 of the 3 that seems like they tried to you know, vote him out. Then this whole hooha drama came up about it and I wish I knew more I would love to know their reasons why they felt like Sam had to go. What it seems like again I'm purely speculating here but what I've heard through the grapevine was that it was more to do with him lying and misrepresenting them as opposed to a safety concern, but I don't know so that's the I guess Sam Altman EA drama.The AGI ThreatEric TopolIn many ways it's emblematic of what we've been talking about because you know there were a couple of board members that were there was a lot of angst regarding pushing hard on AGI. Whether or not there are other things of course that's a different story, but this is the tension we live in now that is we have on one hand some leaders like Yann LeCun, Andrew Ng who are not afraid who say you know still humans are going to be calling the shots as this gets more and more refined to whatever you want to call AGI, but more comprehensive abilities for machines to do things. The other the real concerns like Jeff Hinton and so many others have voiced which is we may not be able to control this, so we'll see how this plays out over time.Liv BoereeLook I hope that Andrew Ng and Yann LeCun turn out to be right. I deeply hope so, but I have yet to see them make compelling arguments because really the precautionary principle should apply here, right? When we're when we're playing such high stakes when we're gambling so high and there's a lot of people who don't have any skin in the game whose lives are on the line even if it's with a very small probability then you need to have real air type proof that your systems will do exactly what you want them to and even with ChatGPT-4 when it came out you know obviously there wasn't a threat to humanity in any explicit way, but that went through six months of testing before they released it. Six months and they got lots of different people. They put a lot of effort into testing it to make sure that it reliably did what they wanted it to when users used it. Within three days of it being available on the internet there were all kinds of unintended consequences coming up. It made the front page of The New York Times. Even with six months of testing I believe you know OpenAI really worked hard to make that be as bounded as possible and they thought they'd I'm sure they were expecting some things to slip through, but it was trivial once you got thousands of users on it figuring out ways to jail break it.Liv BoereeThere hasn't been that's surely a data point to show that you know even with lots of testing this is not a trivially easy problem the people building a machine will always be able to control it and as systems get more and more powerful and more and more emerging properties come out of them as they increase in complexity that's what emergent seems to do. If anything is going to become harder to predict everything that they could do not easier and it's I don't know I as I say I would love for Yann and Andrew to be correct, but even I think even both of them when pushed for example on the topic of what about controlling access to LLMs that could be used for pathogen synthesis in some way or as a sort of put as a tool to help you figure out which DNA synthesis companies have the least stringent checks on their on their products and we'll just send you anything because that some really do have very low stringency there. They didn't have a good answer to that they couldn't answer it and they'll just sort of go back to yes, but you can't constrain information. It's still yeah, you have to give it all for free. It's like you can't be an absolutist here like there are tradeoffs. Yes, and we have to be very careful as a so civilization not to swing too much into censorship or to swing too much into just like letting all guardrails off. We have to navigate this, but it is not comforting to me as a semi layperson to see leaders who are building these technologies dismiss the concerns of alignment and unintended consequences as like trivially easy problems when they clearly aren't that's not filling me with confidence. They're hubris— I don't want a leader who's showing hubris and so that's end of my rant.Eric TopolIt's really healthy to kind of vet the ideas here and that's what's really unique about you Liv is that you have this poker probabilistic thinking you know competition is fierce as it can be and how we are in such exciting times, but also in many ways daunting with respect to you know where we're headed where this could lead to and I think it's great. I also want to make a plug for your Win-Win that's perfect name for a podcast that you do and continue to be very interested in your ideas as we go forward because you have such a unique perspective.Liv BoereeThank you so much, I really appreciate you plugging it. I remain optimistic there's a lot of well-intended people. Incredibly brilliant people working within the AI industry who do appreciate the nature of the problem. The question is I wish it was as simple as oh, just let the market decide just let profit maximization guide everything and that will always result in the best outcome I wish it was that simple that would make life much easier, but that's not the case externalities a real, misalignment of goals is real. We need people to reflect on just be honest, over the fact that move fast, and break things is not the solution to every problem and especially when the possible things you are breaking are the is the very biosphere or playing field that we all rely on and live on. Yeah, it's going to be interesting times.Eric TopolWell, we didn't solve it, but we sure heard a very refreshing insightful perspective. Liv, thanks for what you're doing to get us informed and to learn from other examples outside of the space of AI and your background and look forward to further discussions in the future.Liv BoereeThank you so much. Really appreciate you having me on. Get full access to Ground Truths at erictopol.substack.com/subscribe

Kanazawa University NanoLSI Podcast
Kanazawa University NanoLSI Podcast: Researchers identify the dynamic behavior of a key SARS-CoV-2 accessory protein

Kanazawa University NanoLSI Podcast

Play Episode Listen Later Jan 12, 2024 6:07


Researchers identify the dynamic behavior of a key SARS-CoV-2 accessory protein Transcript of this podcastHello and welcome to the NanoLSI podcast. Thank you for joining us today. In this episode we feature the latest research by Richard Wong at Kanazawa University alongside Noritaka Nishida at Chiba University.The research described in this podcast was published in the Journal of Physical Chemistry Letters in September 2023 Kanazawa University NanoLSI websitehttps://nanolsi.kanazawa-u.ac.jp/en/Researchers identify the dynamic behavior of a key SARS-CoV-2 accessory protein Researchers at Kanazawa University report in the Journal of Physical Chemistry Letters high-speed atomic force microscopy studies that shed light on the possible role of the open reading frame 6 (or, ORF6) protein in COVID19 symptoms. While many countries across the world are experiencing a reprieve from the intense spread of SARS-CoV-2 infections that led to tragic levels of sickness and multiple national lockdowns at the start of the decade, cases of infection persist. A better understanding of the mechanisms that sustain the virus in the body could help find more effective treatments against sickness caused by the disease, as well as arming against future outbreaks of similar infections. With this in mind there has been a lot of interest in the accessory proteins that the virus produces to help it thrive in the body. “Similar to other viruses, SARS-CoV-2 expresses an array of accessory proteins to re-program the host environment to favor its replication and survival,” explain Richard Wong at Kanazawa University and Noritaka Nishida at Chiba University and their colleagues in this latest report. Among those accessory proteins is ORF6. Previous studies have suggested that ORF6 potently interferes with the function of interferon 1 (that is, IFN-I), a particular type of small protein used in the immune system, which may explain the instances of asymptomatic infection with SARS-CoV2. There is also evidence that ORF6 causes the retention of certain proteins in the cytoplasm while disrupting mRNA transport from the cell, which may be means for inhibiting IFN-I signalling. However, the mechanism for this protein retention and transport disruption was not clear.So how did they figure it out? Well, to shed light on these mechanisms the researchers first looked into what clues various software programs might give as to the structure of ORF6. These indicated the likely presence of several intrinsically disordered regions. Nuclear magnetic resonance measurements also confirmed the presence of a very flexible disordered segment. Although the machine learning algorithm AlphaFold2 has proved very useful for determining how proteins fold, the presence of these intrinsically disordered regions limits its use for establishing the structure of ORF6 so the researchers used high-speed atomic force microscopy (or AFM), which is able to identify structures by “feeling” the topography of samples like a record player needle feels the grooves in vinyl. Using high speed AFM the researchers established that ORF 6 is primarily in the form of ellipsoidal filaments of oligomers – strings of repeating molecular units but shorter than polymers. The length and circumference of these filaments was greatest at 37 °C and least at 4 °C, so the presence of fever could be beneficial for producing larger filaments. Substrates made of lipids – fatty compounds – also encouraged the formation of larger oligomers. Because high speed AFM captures images so quickly it was possible to grasp not just the structures but also some of the dynamics of the ORF6 behavior, including circular motion, protein assembly and flipping. INanoLSI Podcast website

The Nonlinear Library
LW - AI's impact on biology research: Part I, today by octopocta

The Nonlinear Library

Play Episode Listen Later Dec 27, 2023 6:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI's impact on biology research: Part I, today, published by octopocta on December 27, 2023 on LessWrong. I'm a biology PhD, and have been working in tech for a number of years. I want to show why I believe that biological research is the most near term, high value application of machine learning. This has profound implications for human health, industrial development, and the fate of the world. In this article I explain the current discoveries that machine learning has enabled in biology. In the next article I will consider what this implies will happen in the near term without major improvements in AI, along with my speculations about how our expectations that underlie our regulatory and business norms will fail. Finally, my last article will examine the longer term possibilities for machine learning and biology, including crazy but plausible sci-fi speculation. TL;DR Biology is complex, and the potential space of biological solutions to chemical, environmental, and other challenges is incredibly large. Biological research generates huge, well labeled datasets at low cost. This is a perfect fit with current machine learning approaches. Humans without computational assistance have very limited ability to understand biological systems enough to simulate, manipulate, and generate them. However, machine learning is giving us tools to do all of the above. This means things that have been constrained by human limits such as drug discovery or protein structure are suddenly unconstrained, turning a paucity of results into a superabundance in one step. Biology and data Biological research has been using technology to collect vast datasets since the bioinformatics revolution of the 1990's. DNA sequencing costs have dropped by 6 orders of magnitude in 20 years ($100,000,000 dollars per human genome to $1000 dollars per genome)[1]. Microarrays allowed researchers to measure changes in mRNA expression in response to different experimental conditions across the entire genome of many species. High throughput cell sorting, robotic multi-well assays, proteomics chips, automated microscopy, and many more technologies generate petabytes of data. As a result, biologists have been using computational tools to analyze and manipulate big datasets for over 30 years. Labs create, use, and share programs. Grad students are quick to adapt open source software, and lead researchers have been investing in powerful computational resources. There is a strong culture of adopting new technology, and this extends to machine learning. Leading Machine Learning experts want to solve biology Computer researchers have long been interested in applying computational resources to solve biological problems. Hedge fund billionaire David E. Shaw intentionally started a hedge fund so that he could fund computational biology research[2]. Demis Hassabis, Deepmind founder, is a PhD neuroscientist. Under his leadership Deepmind has made biological research a major priority, spinning off Isomorphic Labs[3] focused on drug discovery. The Chan Zuckerberg Institute is devoted to enabling computational research in biology and medicine to "cure, prevent, or manage all diseases by the end of this century"[4]. This shows that the highest level of machine learning research is being devoted to biological problems. What have we discovered so far? In 2020, Deepmind showed accuracy equal to the best physical methods of protein structure measurement at the CASP 14 protein folding prediction contest with their AlphaFold2 program.[5] This result "solved the protein folding problem"[6] for the large majority of proteins, showing that they could generate a high quality, biologically accurate 3D protein structure given the DNA sequence that encodes the protein. Deepmind then used AlphaFold2 to generate structures for all proteins kn...

The Nonlinear Library: LessWrong
LW - AI's impact on biology research: Part I, today by octopocta

The Nonlinear Library: LessWrong

Play Episode Listen Later Dec 27, 2023 6:35


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI's impact on biology research: Part I, today, published by octopocta on December 27, 2023 on LessWrong. I'm a biology PhD, and have been working in tech for a number of years. I want to show why I believe that biological research is the most near term, high value application of machine learning. This has profound implications for human health, industrial development, and the fate of the world. In this article I explain the current discoveries that machine learning has enabled in biology. In the next article I will consider what this implies will happen in the near term without major improvements in AI, along with my speculations about how our expectations that underlie our regulatory and business norms will fail. Finally, my last article will examine the longer term possibilities for machine learning and biology, including crazy but plausible sci-fi speculation. TL;DR Biology is complex, and the potential space of biological solutions to chemical, environmental, and other challenges is incredibly large. Biological research generates huge, well labeled datasets at low cost. This is a perfect fit with current machine learning approaches. Humans without computational assistance have very limited ability to understand biological systems enough to simulate, manipulate, and generate them. However, machine learning is giving us tools to do all of the above. This means things that have been constrained by human limits such as drug discovery or protein structure are suddenly unconstrained, turning a paucity of results into a superabundance in one step. Biology and data Biological research has been using technology to collect vast datasets since the bioinformatics revolution of the 1990's. DNA sequencing costs have dropped by 6 orders of magnitude in 20 years ($100,000,000 dollars per human genome to $1000 dollars per genome)[1]. Microarrays allowed researchers to measure changes in mRNA expression in response to different experimental conditions across the entire genome of many species. High throughput cell sorting, robotic multi-well assays, proteomics chips, automated microscopy, and many more technologies generate petabytes of data. As a result, biologists have been using computational tools to analyze and manipulate big datasets for over 30 years. Labs create, use, and share programs. Grad students are quick to adapt open source software, and lead researchers have been investing in powerful computational resources. There is a strong culture of adopting new technology, and this extends to machine learning. Leading Machine Learning experts want to solve biology Computer researchers have long been interested in applying computational resources to solve biological problems. Hedge fund billionaire David E. Shaw intentionally started a hedge fund so that he could fund computational biology research[2]. Demis Hassabis, Deepmind founder, is a PhD neuroscientist. Under his leadership Deepmind has made biological research a major priority, spinning off Isomorphic Labs[3] focused on drug discovery. The Chan Zuckerberg Institute is devoted to enabling computational research in biology and medicine to "cure, prevent, or manage all diseases by the end of this century"[4]. This shows that the highest level of machine learning research is being devoted to biological problems. What have we discovered so far? In 2020, Deepmind showed accuracy equal to the best physical methods of protein structure measurement at the CASP 14 protein folding prediction contest with their AlphaFold2 program.[5] This result "solved the protein folding problem"[6] for the large majority of proteins, showing that they could generate a high quality, biologically accurate 3D protein structure given the DNA sequence that encodes the protein. Deepmind then used AlphaFold2 to generate structures for all proteins kn...

Ground Truths
Geoffrey Hinton: Large Language Models in Medicine. They Understand and Have Empathy

Ground Truths

Play Episode Listen Later Dec 8, 2023 36:33


This is one of the most enthralling and fun interviews I've ever done (in 2 decades of doing them) and I hope that you'll find it stimulating and provocative. If you did, please share with your network.And thanks for listening, reading, and subscribing to Ground Truths.Recorded 4 December 2023Transcript below with external links to relevant material along with links to the audioERIC TOPOL (00:00):This is for me a real delight to have the chance to have a conversation with Geoffrey Hinton. I followed his work for years, but this is the first time we've actually had a chance to meet. And so this is for me, one of the real highlights of our Ground Truths podcast. So welcome Geoff.GEOFFREY HINTON (00:21):Thank you very much. It's a real opportunity for me too. You're an expert in one area. I'm an expert in another and it's great to meet up.ERIC TOPOL (00:29):Well, this is a real point of conversion if there ever was one. And I guess maybe I'd start off with, you've been in the news a lot lately, of course, but what piqued my interest to connect with you was your interview on 60 Minutes with Scott Pelley. You said: “An obvious area where there's huge benefits is healthcare. AI is already comparable with radiologists understanding what's going on in medical images. It's going to be very good at designing drugs. It already is designing drugs. So that's an area where it's almost entirely going to do good. I like that area.”I love that quote Geoff, and I thought maybe we could start with that.GEOFFREY HINTON (01:14):Yeah. Back in 2012, one of my graduate students called George Dahl who did speech recognition in 2009, made a big difference there. Entered a competition by Merck Frost to predict how well particular chemicals would bind to something. He knew nothing about the science of it. All he had was a few thousand descriptors of each of these chemicals and 15 targets that things might bind to. And he used the same network as we used for speech recognition. So he treated the 2000 descriptors of chemicals as if they were things in a spectrogram for speech. And he won the competition. And after he'd won the competition, he wasn't allowed to collect the $20,000 prize until he told Merck how he did it. And one of their questions was, what qsar did you use? So, he said, what's qsar? Now qsar is a field, it has a journal, it's had a conference, it's been going for many years, and it's the field of quantitative structural activity relationships. And that's the field that tries to predict whether some chemical is going to bind to something. And basically he'd wiped out that field without knowing its name.ERIC TOPOL (02:46):Well, it's striking how healthcare, medicine, life science has had somewhat of a separate path in recent AI with transformer models and also going back of course to the phenomenal work you did with the era of bringing in deep learning and deep neural networks. But I guess what I thought I'd start with here with that healthcare may have a special edge versus its use in other areas because, of course, there's concerns which you and others have raised regarding safety, the potential, not just hallucinations and confabulation of course a better term or the negative consequences of where AI is headed. But would you say that the medical life science AlphaFold2 is another example of from your colleagues Demis Hassabis and others at Google DeepMind where this is something that has a much more optimistic look?GEOFFREY HINTON (04:00):Absolutely. I mean, I always pivot to medicine as an example of all the good it can do because almost everything it's going to do there is going to be good. There are some bad uses like trying to figure out who to not insure, but they're relatively limited almost certainly it's going to be extremely helpful. We're going to have a family doctor who's seen a hundred million patients and they're going to be a much better family doctor.ERIC TOPOL (04:27):Well, that's really an important note. And that gets us to a paper preprint that was just published yesterday, on arXiv, which interestingly isn't usually the one that publishes a lot of medical preprints, but it was done by folks at Google who later informed me was a model large language model that hadn't yet been publicized. They wouldn't disclose the name and it wasn't MedPaLM2. But nonetheless, it was a very unique study because it randomized their LLM in 20 internists with about nine years of experience in medical practice for answering over 300 clinical pathologic conferences of the New England Journal. These are the case reports where the master clinician is brought in to try to come up with a differential diagnosis. And the striking thing on that report, which is perhaps the best yet about medical diagnoses, and it gets back Geoff to your hundred million visits, is that the LLM exceeded the clinicians in this randomized study for coming up with a differential diagnosis. I wonder what your thoughts are on this.GEOFFREY HINTON (05:59):So in 2016, I made a daring and incorrect prediction was that within five years, the neural nets were going to be better than radiologists that interpreting medical scans, it was sometimes taken out of context. I meant it for interpreting medical scans, not for doing everything a radiologist does, and I was wrong about that. But at the present time, they're comparable. This is like seven years later. They're comparable with radiologists for many different kinds of medical scans. And I believe that in 10 years they'll be routinely used to give a second opinion and maybe in 15 years they'll be so good at giving second opinions that the doctor's opinion will be the second one. And so I think I was off by about a factor of three, but I'm still convinced I was completely right in the long term.(06:55):So this paper that you're referring to, there are actually two people from the Toronto Google Lab as authors of that paper. And like you say, it was based on the large language PaLM2 model that was then fine-tuned. It was fine-tuned slightly differently from MedPaLM2  I believe, but the LLM [large language model] by themselves seemed to be better than the internists. But what was more interesting was the LLMs when used by the internists made the internists much better. If I remember right, they were like 15% better when they used the LLMs and only 8% better when they used Google search and the medical literature. So certainly the case that as a second opinion, they're really already extremely useful.ERIC TOPOL (07:48):It gets again, to your point about that corpus of knowledge that is incorporated in the LLM is providing a differential diagnosis that might not come to the mind of the physician. And this is of course the edge of having ingested so much and being able to play back those possibilities and the differential diagnosis. If it isn't in your list, it's certainly not going to be your final diagnosis. I do want to get back to the radiologist because we're talking just after the annual massive Chicago Radiologic Society of North America RSNA meeting. And at those meetings, I wasn't there, but talking to my radiology colleagues, they say that your projection is already happening. Now that is the ability to not just read, make the report. I mean the whole works. So it may not have been five years when you said that, which is one of the most frequent quotes in all of AI and medicine of course, as you probably know, but it's approximating your prognosis. Even nowGEOFFREY HINTON (09:02):I've learned one thing about medicine, which is just like other academics, doctors have egos and saying this stuff is going to replace them is not the right move. The right move is to say it's going to be very good at giving second opinions, but the doctor's still going to be in charge. And that's clearly the way to sell things. And that's fine, just I actually believe that after a while of that, you'll be listening to the AI system, not the doctors. And of course there's dangers in that. So we've seen the dangers in face recognition where if you train on a database that contains very few black people, you'll get something that's very good at recognizing faces. And the people who use it, the police will think this is good at recognizing faces. And when it gives you the wrong identity for a person of color, then the policemen are going to believe it. And that's a disaster. And we might get the same with medicine. If there's some small minority group that has some distinctly different probabilities of different diseases, it's quite dangerous for doctors to get to trust these things if they haven't been very carefully controlled for the training data.ERIC TOPOL (10:17):Right. And actually I did want to get back to you. Is it possible for the reason why in this new report that the LLMs did so well is that some of these case studies from New England Journal were part of the pre-training?GEOFFREY HINTON (10:32):That is always a big worry. It's worried me a lot and it's worried other people a lot because these things have pulled in so much data. There is now a way round that at least for showing that the LLMs are genuinely creative. So he's a very good computer science theorist at Princeton called Sanjeev Arora, and I'm going to attribute all this to him, but of course, all the work was done by his students and postdocs and collaborators. And the idea is you can get these language models to generate stuff, but you can then put constraints on what they generate by saying, so I tried an example recently, I took two Toronto newspapers and said, compare these two newspapers using three or four sentences, and in your answer demonstrate sarcasm, a red herring empathy, and there's something else. But I forget what metaphor. Metaphor.ERIC TOPOL (11:29):Oh yeah.GEOFFREY HINTON (11:29):And it gave a brilliant comparison of the two newspapers exhibiting all those things. And the point of Sanjeev Arora's work is that if you have a large number of topics and a large number of different things you might demonstrate in the text, then if I give an topic and I say, demonstrate these five things, it's very, anything in the training data will be on that topic demonstrating those five skills. And so when it does it, you can be pretty confident that it's original. It's not something it saw in the training data. That seems to me a much more rigorous test of whether it generates new stuff. And what's interesting is some of the LLMs, the weaker ones don't really pass the test, but things like GPT-4 that passes the test with flying colors, that definitely generates original stuff that almost certainly was not in the training data.ERIC TOPOL (12:25):Yeah. Well, that's such an important tool to ferret out the influence of pre-training. I'm glad you reviewed that. Now, the other question that most people argue about, particularly in the medical sphere, is does the large language model really understand? What are your thoughts about that? We're talking about what's been framed as the stochastic parrot versus a level of understanding or enhanced intelligence, whatever you want to call it. And this debate goes on, where do you fall on that?GEOFFREY HINTON (13:07):I fall on the sensible side. They really do understand. And if you give them quizzes, which involve a little bit of reasoning, it's much harder to do now because of course now GPT-4 can look at what's on the web. So you are worried if I mention a quiz now, someone else may have given it to GPT-4, but a few months ago when you did this before, you could see the web, you could give it quizzes for things that it had never seen before and it can do reasoning. So let me give you my favorite example, which was given to me by someone who believed in symbolic reasoning, but a very honest guy who believed in symbolic reasoning and was very puzzled about whether GT four could do symbolic reasoning. And so he gave me a problem and I made it a bit more complicated.(14:00):And the problem is this, the rooms in my house are painted white or yellow or blue, yellow paint fade to white within a year. In two years' time, I would like all the rooms to be white. What should I do and why? And it says, you don't need to paint the white rooms. You don't need to paint the yellow rooms because they'll fade to white anyway. You need to paint the blue rooms white. Now, I'm pretty convinced that when I first gave it that problem, it had never seen that problem before. And that problem involves a certain amount of just basic common sense reasoning. Like you have to understand that if it faded to yellow in a year and you're interested in the stage in two years' time, two years is more than one year and so on. When I first gave it the problem and didn't ask you to explain why it actually came up with a solution that involved painting the blue rooms yellow, that's more of a mathematician solution because it reduces it to a solved problem. But that'll work too. So I'm convinced it can do reasoning. There are people, friends of mine like Jan Leike, who is convinced it can't do reasoning. I'm just waiting for him to come to his sense.ERIC TOPOL (15:18):Well, I've noticed the back and forth with you and Yann (LeCun) [see above on X]. I know it's a friendly banter, and you, of course, had a big influence in his career as so many others that are now in the front leadership lines of AI, whether it's Ilya Sutskever at OpenAI, who's certainly been in the news lately with the turmoil there. And I mean actually it seems like all the people that did some training with you are really in the leadership positions at various AI companies and academic groups around the world. And so it says a lot about your influence that's not just as far as deep neural networks. And I guess I wanted to ask you, because you're frequently regarded to as the godfather of AI, and what do you think of that getting called that?GEOFFREY HINTON (16:10):I think originally it wasn't meant entirely beneficially. I remember Andrew Ng actually made up that phrase at a small workshop in the town of Windsor in Britain, and it was after a session where I'd been interrupting everybody. I was the kind of leader of the organization that ran the workshop, and I think it was meant as kind of I would interrupt everybody, and it wasn't meant entirely nicely, I think, but I'm happy with it.ERIC TOPOL (16:45):That's great.GEOFFREY HINTON (16:47):Now that I'm retired and I'm spending some of my time on charity work, I refer to myself as the fairy godfather.ERIC TOPOL (16:57):That's great. Well, I really enjoyed the New Yorker profile by Josh Rothman, who I've worked with in the past where he actually spent time with you up in your place up in Canada. And I mean it got into all sorts of depth about your life that I wasn't aware of, and I had no idea about the suffering that you've had with the cancer of your wives and all sorts of things that were just extraordinary. And I wonder, as you see the path of medicine and AI's influence and you look back about your own medical experiences in your family, do you see where we're just out of time alignment where things could have been different?GEOFFREY HINTON (17:47):Yeah, I see lots of things. So first, Joshua is a very good writer and it was nice of him to do that.(17:59):So one thing that occurs to me is actually going to be a good use of LLMs, maybe fine tune somewhat differently to produce a different kind of language is for helping the relatives of people with cancer. Cancer goes on a long time, unlike, I mean, it's one of the things that goes on for longest and it's complicated and most people can't really get to understand what the true options are and what's going to happen and what their loved one's actually going to die of and stuff like that. I've been extremely fortunate because in that respect, I had a wife who died of ovarian cancer and I had a former graduate student who had been a radiologist and gave me advice on what was happening. And more recently when my wife, a different wife died of pancreatic cancer, David Naylor, who you knowERIC TOPOL (18:54):Oh yes.GEOFFREY HINTON (18:55):Was extremely kind. He gave me lots and lots of time to explain to me what was happening and what the options were and whether some apparently rather flaky kind of treatment was worth doing. What was interesting was he concluded there's not much evidence in favor of it, but if it was him, he'd do it. So we did it. That's where you electrocute the tumor, being careful not to stop the heart. If you electrocute the tumor with two electrodes and it's a compact tumor, all the energy is going into the tumor rather than most of the energy going into the rest of your tissue and then it breaks up the membranes and then the cells die. We don't know whether that helped, but it's extremely useful to have someone very knowledgeable to give advice to the relatives. That's just so helpful. And that's an application in which it's not kind of life or death in the sense that if you happen to explain it to me a bit wrong, it's not determining the treatment, it's not going to kill the patient.(19:57):So you can actually tolerate it, a little bit of error there. And I think relatives would be much better off if they could talk to an LLM and consult with an LLM about what the hell's going on because the doctors never have time to explain it properly. In rare cases where you happen to know a very good doctor like I do, you get it explained properly, but for most people it won't be explained properly and it won't be explained in the right language. But you can imagine an LLM just for helping the relatives, that would be extremely useful. It'd be a fringe use, but I think it'd be a very helpful use.ERIC TOPOL (20:29):No, I think you're bringing up an important point, and I'm glad you mentioned my friend David Naylor, who's such an outstanding physician, and that brings us to that idea of the sense of intuition, human intuition, versus what an LLM can do. Don't you think those would be complimentary features?GEOFFREY HINTON (20:53):Yes and no. That is, I think these chatbots, they have intuition that is what they're doing is they're taking strings of symbols and they're converting each symbol into a big bunch of features that they invent, and then they're learning interactions between the features of different symbols so that they can predict the features of the next symbol. And I think that's what people do too. So I think actually they're working pretty much the same way as us. There's lots of people who say, they're not like us at all. They don't understand, but there's actually not many people who have theories of how the brain works and also theories of how they understand how these things work. Mostly the people who say they don't work like us, don't actually have any model of how we work. And it might interest them to know that these language models were actually introduced as a theory of how our brain works.(21:44):So there was something called what I now call a little language model, which was tiny. I introduced in 1985, and it was what actually got nature to accept our paper on back propagation. And what it was doing was predicting the next word in a three word string, but the whole mechanism of it was broadly the same as these models. Now, the models are more complicated, they use attention, but it was basically you get it to invent features for words and interactions between features so that it can predict the features of the next word. And it was introduced as a way of trying to understand what the brain was doing. And at the point at which it was introduced, the symbolic AI peoples didn't say, oh, this doesn't understand. They were perfectly happy to admit that this did learn the structure in the tiny domain, the tiny toy domain it was working on. They just argued that it would be better to learn that structure by searching through the space of symbolic rules rather than through the space of neural network weights. But they didn't say this is an understanding. It was only when it really worked that people had to say, well, it doesn't count.ERIC TOPOL (22:53):Well, that also something that I was surprised about. I'm interested in your thoughts. I had anticipated that in Deep Medicine book that the gift of time, all these things that we've been talking about, like the front door that could be used by the model coming up with the diagnoses, even the ambient conversations made into synthetic notes. The thing I didn't think was that machines could promote empathy. And what I have been seeing now, not just from the notes that are now digitized, these synthetic notes from the conversation of a clinic visit, but the coaching that's occurring by the LLM to say, well, Dr. Jones, you interrupted the patient so quickly, you didn't listen to their concerns. You didn't show sensitivity or compassion or empathy. That is, it's remarkable. Obviously the machine doesn't necessarily feel or know what empathy is, but it can promote it. What are your thoughts about that?GEOFFREY HINTON (24:05):Okay, my thoughts about that are a bit complicated, that obviously if you train it on text that exhibits empathy, it will produce text that exhibits empathy. But the question is does it really have empathy? And I think that's an open issue. I am inclined to say it does.ERIC TOPOL (24:26):Wow, wow.GEOFFREY HINTON (24:27):So I'm actually inclined to say these big chatbots, particularly the multimodal ones, have subjective experience. And that's something that most people think is entirely crazy. But I'm quite happy being in a position where most people think I'm entirely crazy. So let me give you a reason for thinking they have subjective experience. Suppose I take a chatbot that has a camera and an arm and it's being trained already, and I put an object in front of it and say, point at the object. So it points at the object, and then I put a prism in front of its camera that bends the light race, but it doesn't know that. Now I put an object in front of it, say, point at the object, and it points straight ahead, sorry, it points off to one side, even though the object's straight ahead and I say, no, the object isn't actually there, the object straight ahead. I put a prism in front of your camera and imagine if the chatbot says, oh, I see the object's actually straight ahead, but I had the subjective experience that it was off to one side. Now, if the chatbot said that, I think it would be using the phrase subjective experience in exactly the same way as people do,(25:38):Its perceptual system told it, it was off to one side. So what its perceptual system was telling, it would've been correct if the object had been off to one side. And that's what we mean by subjective experience. When I say I've got the subjective experience of little pink elephants floating in front of me, I don't mean that there's some inner theater with little pink elephants in it. What I really mean is if in the real world there were little pink elephants floating in front of me, then my perceptual system would be telling me the truth. So I think what's funny about subjective experiences, not that it's some weird stuff made of spooky qualia in an inner theater, I think subjective experiences, a hypothetical statement about a possible world. And if the world were like that, then your perceptual system will be working properly. That's how we use subjective experience. And I think chatbots can use it like that too. So I think there's a lot of philosophy that needs to be done here and got straight, and I didn't think we can lead it to the philosophers. It's too urgent now.ERIC TOPOL (26:44):Well, that's actually a fascinating response and added to what your perception of understanding it gets us to perhaps where you were when you left Google in May this year where you had, you saw that this was a new level of whatever you want to call it, not AGI [artificial general intelligence], but something that was enhanced from prior AI. And you basically, in some respects, I wouldn't say sounded any alarms, but you were, you've expressed concern consistently since then that we're kind of in a new phase. We're heading in a new direction with AI. Could you elaborate a bit more about where you were and where your mind was in May and where you think things are headed now?GEOFFREY HINTON (27:36):Okay, let's get the story straight. It's a great story. The news media puts out there, but actually I left Google because I was 75 and I couldn't program any longer because I kept forgetting what the variables stood for. I took the opportunity also, I wanted to watch a lot of Netflix. I took the opportunity that I was leaving Google anyway to start making public statements about AI safety. And I got very concerned about AI safety a couple of months before. What happened was I was working on trying to figure out analog ways to do the computation so you could do these larger language models for much less energy. And I suddenly realized that actually the digital way of doing the computation is probably hugely better. And it's hugely better because you can have thousands of different copies of exactly the same digital model running on different hardware, and each copy can look at a different bit of the internet and learn from it.(28:38):And they can all combine what they learned instantly by sharing weights or by sharing weight gradients. And so you can get 10,000 things to share their experience really efficiently. And you can't do that with people. If 10,000 people go off and learn 10,000 different skills, you can't say, okay, let's all average our weight. So now all of us know all of those skills. It doesn't work like that. You have to go to university and try and understand what on earth the other person's talking about. It's a very slow process where you have to get sentences from the other person and say, how do I change my brain? So I might've produced that sentence, and it's very inefficient compared with what these digital models can do by just sharing weights. So I had this kind of epiphany. The digital models are probably much better. Also, they can use the back propagation algorithm quite easily, and it's very hard to see how the brain can do it efficiently. And nobody's managed to come up with anything that'll work in real neural nets as comparable to back propagation at scale. So I had this sort of epiphany, which made me give up on the analog research that digital computers are actually just better. And since I was retiring anyway, I took the opportunity to say, Hey, they're just better. And so we'd better watch out.ERIC TOPOL (29:56):Well, I mean, I think your call on that and how you back it up is really, of course had a big impact. And of course it's still an ongoing and intense debate, and in some ways it really was about what was the turmoil at OpenAI was rooted with this controversy about where things are, where they're headed. I want to just close up with the point you made about the radiologists, and not to insult them by saying they'll be replaced gets us to where we are, the tension of today, which is our humans as the pinnacle of intelligence going to be not replaced, but superseded by the likes of AI's future, which of course our species can't handle that a machine, it's like the radiologist, our species can't handle that. There could be this machine that could be with far less connections, could do things outperform us, or of course, as we've, I think emphasized in our conversation in concert with humans to even take it to yet another level. But is that tension about that there's this potential for machines outdoing people part of the problem that it's hard for people to accept this notion?GEOFFREY HINTON (31:33):Yes, I think so. So particularly philosophers, they want to say there's something very special about people. That's to do with consciousness and subjective experience and sentience and qualia, and these machines are just machines. Well, if you're a sort of scientific materialist, most of us are brain's just a machine. It's wrong to say it's just a machine because a wonderfully complex machine that does incredible things that are very important to people, but it is a machine and there's no reason in principle why there shouldn't be better machines than better ways of doing computation, as I now believe there are. So I think people have a very long history of thinking. They're special.(32:19):They think God made them in his image and he put them at the center of the universe. And a lot of people have got over that and a lot of people haven't. But for the people who've got over that, I don't think there's any reason in principle to think that we are the pinnacle of intelligence. And I think it may be quite soon these machines are smarter than us. I still hope that we can reach a agreement with the machines where they act like benevolent parents. So they're looking out for us. They have, we've managed to motivate them, so the most important thing for them is our success, like it is with a mother and child, not so much for men. And I would really like that solution. I'm just fearful we won't get it.ERIC TOPOL (33:15):Well, that would be a good way for us to go forward. Of course, the doomsayers and the people that are much worse at their level of alarm tend to think that that's not possible. But we'll see obviously over time. Now, one thing I just wanted to get a quick read from you before we close is as recently, Demis Hassabis and John Jumper got the Lasker Award, like a pre Nobel Award for AlphaFold2. But this transformer model, which of course has helped to understand the structure 3D of 200 million proteins, they don't understand how it works. Like most models, unlike the understanding we were talking about earlier on the LLM side. I wrote that I think that with this award, an asterisk should have been given to the AI model. What are your thoughts about that idea?GEOFFREY HINTON (34:28):It's like this, I want people to take what I say seriously, and there's a whole direction you could go in that I think Larry Page, one of the founders of Google has gone in this direction, which is to say there's these super intelligences and why shouldn't they have rights? If you start going in that direction, you are going to lose people. People are not going to accept that these things should have political rights, for example. And being a co-author is the beginning of political rights. So I avoid talking about that, but I'm sort of quite ambivalent and agnostic about whether they should. But I think it's best to stay clear of that issue just because the great majority of people will stop listening to you if you say machines should have rights.ERIC TOPOL (35:28):Yeah. Well, that gets us course of what we just talked about and how it's hard the struggle between humans and machines rather than the thought of humans plus machines and symbiosis that can be achieved. But Geoff, this has been a great, we've packed a lot in. Of course, we could go on for hours, but I thoroughly enjoyed hearing your perspective firsthand and your wisdom, and just to reinforce the point about how many of the people that are leading the field now derive a lot of their roots from your teaching and prodding and challenging and all that. We're indebted to you. And so thanks so much for all you've done and we'll continue to do to help us, guide us through the very rapid dynamic phase as AI moves ahead.GEOFFREY HINTON (36:19):Thanks, and good luck with getting AI to really make a big difference in medicine.ERIC TOPOL (36:25):Hopefully we will, and I'll be consulting with you from time to time to get some of that wisdom to help usGEOFFREY HINTON (36:32):Anytime. Get full access to Ground Truths at erictopol.substack.com/subscribe

El Explicador Sitio Oficial
Proteínas e Inteligencia Artificial 2023/11/23. El Explicador. Cápsula.

El Explicador Sitio Oficial

Play Episode Listen Later Nov 23, 2023 38:19


El sistema AlphaFold2, desarrollado por una de las empresas más importantes del mundo de la inteligencia artificia, se encuentra cerca de permitirnos entender a fondo el funcionamiento de todas las proteínas del mundo vivo, con enormes consecuencias tecnológicas. Gracias por sus comentarios, interacciones, apoyo económico y suscripción. Escuche y descargue gratuitamente en MP3 2023/11/23 Proteínas e Inteligencia Artificial . Gracias por su apoyo a El Explicador en: Patreon, https://www.patreon.com/elexplicador_enriqueganem PayPal, elexplicadorpatrocinio@gmail.com SoundCloud, https://soundcloud.com/el-explicador Spotify, https://open.spotify.com/show/01PwWfs1wV9JrXWGQ2MrbY iTunes, https://podcasts.apple.com/mx/podcast/el-explicador-sitio-oficial/id1562019070 Amazon Music, https://music.amazon.com/podcasts/f2656899-46c8-4d0b-85ef-390aaf20f366/el-explicador-sitio-oficial YouTube, https://youtube.com/c/ElExplicadorSitioOficial Twitter @enrique_ganem Lo invitamos a suscribirse a estas redes para recibir avisos de nuestras publicaciones y visitar nuestra página http://www.elexplicador.net. En el título de nuestros trabajos aparece la fecha año/mes/día de grabación, lo que facilita su consulta cronológica, ya sabe usted que el conocimiento cambia a lo largo del tiempo. Siempre leemos sus comentarios, no tenemos tiempo para reponder a cada uno personalmente pero todos son leídos y tomados en cuenta. Este es un espacio de divulgación científica en el que nos interesa informar de forma clara y amena, que le invite a Ud. a investigar sobre los temas tratados y a que Ud. forme su propia opinión. Serán borrados todos los comentarios que promuevan la desinformación, charlatanería, odio, bullying, violencia verbal o incluyan enlaces a páginas que no sean de revistas científicas arbitradas, que sean ofensivos hacia cualquier persona o promuevan alguna tendencia política o religiosa ya sea en el comentario o en la fotografía de perfil. Aclaramos que no somos apolíticos, nos reservamos el derecho de no expresar nuestra opinión política, ya que éste es un canal cuya finalidad es la divulgación científica. ¡Gracias por su preferencia!

Digital Health Section Podcast- Royal Society of Medicine
AI, AlphaFold & drug discovery. With Max Jaderberg, Director of Machine Learning at Isomorphic Labs- A Google DeepMind sister company

Digital Health Section Podcast- Royal Society of Medicine

Play Episode Listen Later Nov 6, 2023 30:09


Max Jaderberg, Director of Machine learning at Isomorphic labs, shares the Isomorphic approach to drug discovery in one of their first interviews as they emerge from shadow mode. In 2020 Google DeepMind made history when they took on the grand challenge of biology- the protein folding problem- with their algorithm AlphaFold2.  Their success prompted the creation of a sister company, Isomorphic Labs. Our conversation topics include:   - The concept of “Digital Biology”- How AI can be used model the biological world  - Bottlenecks in the drug discovery process and how AI could circumvent them - How AI may unlock 'undraggable targets' to find treatments for incurable diseases  Links: Isomorphic labs website: https://www.isomorphiclabs.com/ AlphaMissence: https://www.deepmind.com/blog/alphamissense-catalogue-of-genetic-mutations-to-help-pinpoint-the-cause-of-diseases

the bioinformatics chat
#66 AlphaFold and shape-mers with Janani Durairaj

the bioinformatics chat

Play Episode Listen Later Jul 10, 2023 20:51


This is the second episode in the AlphaFold series, originally recorded on February 14, 2022, with Janani Durairaj, a postdoctoral researcher at the University of Basel. Janani talks about how she used shape-mers and topic modelling to discover classes of proteins assembled by AlphaFold 2 that were absent from the Protein Data Bank (PDB). The bioinformatics discussion starts at 03:35. Links: A structural biology community assessment of AlphaFold2 applications (Mehmet Akdel, Douglas E. V. Pires, Eduard Porta Pardo, Jürgen Jänes, Arthur O. Zalevsky, Bálint Mészáros, Patrick Bryant, Lydia L. Good, Roman A. Laskowski, Gabriele Pozzati, Aditi Shenoy, Wensi Zhu, Petras Kundrotas, Victoria Ruiz Serra, Carlos H. M. Rodrigues, Alistair S. Dunham, David Burke, Neera Borkakoti, Sameer Velankar, Adam Frost, Jérôme Basquin, Kresten Lindorff-Larsen, Alex Bateman, Andrey V. Kajava, Alfonso Valencia, Sergey Ovchinnikov, Janani Durairaj, David B. Ascher, Janet M. Thornton, Norman E. Davey, Amelie Stein, Arne Elofsson, Tristan I. Croll & Pedro Beltrao) The Protein Universe Atlas What is hidden in the darkness? Deep-learning assisted large-scale protein family curation uncovers novel protein families and folds (Janani Durairaj, Andrew M. Waterhouse, Toomas Mets, Tetiana Brodiazhenko, Minhal Abdullah, Gabriel Studer, Mehmet Akdel, Antonina Andreeva, Alex Bateman, Tanel Tenson, Vasili Hauryliuk, Torsten Schwede, Joana Pereira) Geometricus: Protein Structures as Shape-mers derived from Moment Invariants on GitHub The group page The Folded Weekly newsletter A New York Times article about the Kramatorsk missile strike. The Instagram video, part of which you can hear at the beginning of the episode, appears to have been deleted.

PaperPlayer biorxiv neuroscience
Calcium and Integrin-binding protein 2 (CIB2) controls force sensitivity of the mechanotransducer channels in cochlear outer hair cells

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Jul 9, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.07.09.545606v1?rss=1 Authors: Aristizabal-Ramirez, I., Dragich, A. K., Giese, A. P. J., Zuluaga-Osorio, K. S., Watkins, J., Davies, G. K., Hadi, S. E., Riazuddin, S., Vander Kooi, C. W., Ahmed, Z. M., Frolenkov, G. I. Abstract: Calcium and Integrin-Binding Protein 2 (CIB2) is an essential subunit of the mechano-electrical transduction (MET) complex in mammalian auditory hair cells. CIB2 binds to MET channel subunits TMC1/2 and is required for their transport and/or retention at the tips of mechanosensory stereocilia. Since genetic ablation of CIB2 results in complete loss of MET currents, the exact role of CIB2 in the MET complex remains elusive. Here, we generated a new mouse strain with deafness-causing p.R186W mutation in Cib2 and recorded small but still measurable MET currents in the cochlear outer hair cells at postnatal days 4-8. We found that p.R186W mutation results in increase of the resting open probability of MET channels, steeper MET current dependence on hair bundle deflection (I-X curve), loss of fast adaptation, and increased leftward shifts of I-X curves upon hair cell depolarization. Combined with AlphaFold2 prediction that p.R186W disrupts one of the multiple interacting sites between CIB2 and TMC1/2, our data suggest that CIB2 mechanically constrains TMC1/2 conformations to ensure proper force sensitivity and dynamic range of the MET channels. Using a custom piezo-driven stiff probe deflecting the hair bundles in less than 10 s, we also found that p.R186W mutation slows down the activation of MET channels. This phenomenon, however, is unlikely to be the direct effect on MET channels, since we also observed p.R186W-evoked disruption of the electron-dense material at the tips of mechanotransducing stereocilia and the loss of membrane-shaping BAIAP2L2 protein from the same location. We concluded that p.R186W mutation in CIB2 disrupts force sensitivity of the MET channels and force transmission to these channels. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC

the bioinformatics chat
#65 AlphaFold and protein interactions with Pedro Beltrao

the bioinformatics chat

Play Episode Listen Later Jun 21, 2023 52:23


In this episode, originally recorded on February 9, 2022, Roman talks to Pedro Beltrao about AlphaFold, the software developed by DeepMind that predicts a protein's 3D structure from its amino acid sequence. Pedro is an associate professor at ETH Zurich and the coordinator of the structural biology community assessment of AlphaFold2 applications project, which involved over 30 scientists from different institutions. Pedro talks about the origins of the project, its main findings, the importance of the confidence metric that AlphaFold assigns to its predictions, and Pedro's own area of interest — predicting pockets in proteins and protein-protein interactions. Links: Pedro's group at ETH Zurich

PaperPlayer biorxiv cell biology
Family-wide analysis of integrin structures predicted by AlphaFold2

PaperPlayer biorxiv cell biology

Play Episode Listen Later May 2, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.05.02.539023v1?rss=1 Authors: Zhang, H., Zhu, D. S., Zhu, J. Abstract: Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC

PaperPlayer biorxiv cell biology
A hierarchical strategy to decipher protein dynamics in vivo with chemical cross-linking mass spectrometry

PaperPlayer biorxiv cell biology

Play Episode Listen Later Mar 21, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.03.21.533582v1?rss=1 Authors: Zhang, B., Gong, Z., Zhao, L., An, Y., Gao, H., Chen, J., Liang, Z., Liu, M., Zhang, Y., Zhao, Q., Zhang, L. Abstract: Protein dynamics are essential for their various functions. Meanwhile, the intracellular environment would affect protein structural dynamics, especially for the intrinsically disordered proteins (IDPs). Chemical cross-linking mass spectrometry (CXMS) can unbiasedly capture the protein conformation information in cells and can also represent the protein dynamics. Here, we proposed a hierarchy deciphering strategy for protein dynamics in vivo. With the prior structure from AlphaFold2, the steady local conformation can be extensively evaluated. On this basis, the full-length structure of multi-domain proteins with various dynamic features can be characterized using CXMS. Furthermore, the complementary strategy with unbiased sampling and distance-constrained sampling enables an objective description of the intrinsic motion of the IDPs. Therefore, the hierarchy strategy we presented herein could help us better understand the molecular mechanisms of protein functions in cells. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC

The Mind Killer
Episode 78 - Who Judges the Judges?

The Mind Killer

Play Episode Listen Later Mar 14, 2023 84:57


Wes, Eneasz, and David discuss the news from the last two weeks from a rationalist perspectiveSupport us on Substack!News discussed:S**t is going down in IsraelBig protests over Netanyahu's proposed judicial reformsNew proposals would give political appointees a majorityElite fighter squadron is boycotting trainingSilicon Valley Bank is insolventJacob Falkovich blames the governmentBank run stopped by new Fed policy the Bank Term Funding ProgramAnthropic Shadow? “this is what it would look like if you wanted to stifle innovative sectors”In Nebraska a mom & daughter will stand trial for abortionThe American Association of Pediatric released new guidelines for childhood obesityEating disorder experts are not happyFTC is going after MuskFTC is unhappy about “twitter files” exposing this, demands list of journalists from twitter, issues finesFTC commissioner resigns, cites FTC Chairman's “disregard for the rule of law and due process,” and “abuses of government power,” Partisan rift forming around FTC? Republicans investigating the investigation!AlphaFold2 continues to do lots of stuffGPT-4 coming in the next week or soYou can't stop AI progressHappy News!Toddler vaccinations are up!The asteroid diversion was a success!Growing Nuclear Acceptance! Start-up Sakuu uses 3D printers to print solid-state batteriesGot something to say? Come chat with us on the Bayesian Conspiracy Discord or email us at themindkillerpodcast@gmail.com. Say something smart and we'll mention you on the next show!Follow us!RSS: http://feeds.feedburner.com/themindkillerGoogle: https://play.google.com/music/listen#/ps/Iqs7r7t6cdxw465zdulvwikhekmPocket Casts: https://pca.st/vvcmifu6 Stitcher: https://www.stitcher.com/podcast/the-mind-killer Apple: Intro/outro music: On Sale by Golden Duck Orchestra This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit mindkiller.substack.com/subscribe

PaperPlayer biorxiv cell biology
Evolution of the ribbon-like organization of the Golgi apparatus in animal cells

PaperPlayer biorxiv cell biology

Play Episode Listen Later Feb 16, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.02.16.528797v1?rss=1 Authors: Benvenuto, G., Leone, S., Astoricchio, E., Bormke, S., Jasek, S., D'Aniello, E., Kittelmann, M., McDonald, K., Hartenstein, V., Baena, V., Escriva, H., Bertrand, S., Schierwater, B., Burkhardt, P., Ruiz-Trillo, I., Jekely, G., Ullrich-Luter, J., Luter, C., D'Aniello, S., Arnone, M. I., Ferraro, F. Abstract: The structural and functional unit of the Golgi apparatus is the stack, formed by piled membranous cisternae. Among eukaryotes the number of stacks ranges from one to several copies per cell. When present in multiple copies, the Golgi is observed in two arrangements: stacks either remain separated or link into a centralized structure referred to as the ribbon, after its description by Camillo Golgi. This Golgi architecture is considered to be restricted to vertebrate cells and its biological functions remain unclear. Here we show that the ribbon-like Golgi organization is instead present in the cells of several animals belonging to the cnidarian and bilaterian clades, implying its appearance in their common ancestor. We hypothesize a possible scenario driving this structural innovation. The Golgi Reassembly and Stacking Proteins, GRASPs, are central to the formation of the mammalian Golgi ribbon by mediating stack tethering. To link the stacks, GRASPs must be correctly oriented on Golgi membranes through dual anchoring including myristoylation and interaction with a protein partner of the Golgin class. We propose that the evolution of binding of Golgin-45 to GRASP led to Golgi stack tethering and the appearance of the ribbon-like organization. This hypothesis is supported by AlphaFold2 modelling of Golgin-45/GRASP complexes of animals and their closest unicellular relatives. Early evolution and broad conservation of the ribbon-like Golgi architecture imply its functional importance in animal cellular physiology. We anticipate that our findings will stimulate a wave of new studies on the so far elusive biological roles of this Golgi arrangement. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC

Aparici en Órbita
Aparici en Órbita s05e10: Inteligencias artificiales para desvelar el secreto de las proteínas, con Gonzalo Jiménez Osés

Aparici en Órbita

Play Episode Listen Later Feb 3, 2023 17:58


El funcionamiento de la materia viva es uno de los grandes temas de la ciencia moderna. Ahora sabemos que nuestra biología es el resultado de millones de procesos químicos y físicos que tienen lugar continuamente dentro de nuestros cuerpos y, a menudo, dentro de nuestras células. Las proteínas juegan un papel muy importante dentro de esos procesos: ellas son los "obreros", la fuerza de trabajo fundamental de los organismos vivos. Las proteínas encuentran patógenos y los neutralizan, realizan labores de limpieza dentro de las células, transportan el oxígeno y el CO2 dentro de nuestros cuerpos... la lista de sus funciones es prácticamente infinita. Sabemos desde hace más de 50 años que las "instrucciones" para fabricar estas proteínas están escritas en el ADN, y que por eso es tan importante este ADN y está presente en casi todas las células de nuestro cuerpo. Pero el proceso de construcción de una proteína es tan complejo que sólo conocer el ADN no basta: después de eso la proteína va a tener que "madurar", como los vinos. La proteína se va a retorcer, a enrollar y a ensamblar hasta convertirse en las máquinas moleculares que vemos en acción en los seres vivos. Y esta segunda parte del proceso es muy complicada y, en muchos aspectos, muy desconocida. Hoy os hablamos en la sección sobre todo este proceso, y cómo dos inteligencias artificiales nos están enseñando cómo se produce este proceso de ensamblado. Han sido dos inteligencias "no humanas" las que primero han logrado desentrañar cómo se construyen las proteínas: se llaman AlphaFold2 y RoseTTAFold, y sus creadores acaban de recibir el Premio Fronteras del Conocimiento en Biomedicina de la Fundación BBVA. Se trata de Demis Hassabis y John Jumper por AlphaFold2 y David Baker por RoseTTAFold. Hoy os contamos cómo estos dos programas han cambiado nuestra forma de abordar la biología molecular con la ayuda de Gonzalo Jiménez Osés, químico computacional y experto en diseño de proteínas en el instituto CIC bioGUNE de Bilbao. Si os interesa este tema podéis repasar otros programas de nuestro pódcast hermano, La Brújula de la Ciencia, en los que hemos hablado de él desde varios puntos de vista. En los episodios s10e19 y s01e17 os hablamos de ese proceso de construcción de una proteína, y en los episodios s08e16 y s10e17 os contamos en directo el terremoto que supuso la irrupción de AlphaFold en el panorama científico. Este programa se emitió originalmente el 2 de febrero de 2023. Podéis escuchar el resto de audios de Más de Uno en la app de Onda Cero y en su web, ondacero.es

PaperPlayer biorxiv neuroscience
Serine-129 phosphorylation of α-synuclein is a trigger for physiologic protein-protein interactions and synaptic function

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Dec 23, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.12.22.521485v1?rss=1 Authors: Parra-Rivas, L. A., Madhivanan, K., Wang, L., Boyer, N. P., Prakashchand, D. D., Aulston, B. D., Pizzo, D. P., Branes-Guerrero, K., Tang, Y., Das, U., Scott, D. A., Rangamani, P., Roy, S. Abstract: Phosphorylation of -synuclein at the Serine-129 site (-syn Ser129P) is an established pathologic hallmark of synucleinopathies, and also a therapeutic target. In physiologic states, only a small fraction of total -syn is phosphorylated at this site, and consequently, almost all studies to date have focused on putative pathologic roles of this post-translational modification. We noticed that unlike native (total) -syn that is widely expressed throughout the brain, the overall pattern of -syn Ser129P is restricted, suggesting intrinsic regulation and putative physiologic roles. Surprisingly, preventing phosphorylation at the Ser-129 site blocked the ability of -syn to attenuate activity-dependent synaptic vesicle (SV) recycling; widely thought to reflect its normal function. Exploring mechanisms, we found that neuronal activity augments -syn Ser-129P, and this phosphorylation is required for -syn binding to VAMP2 and synapsin - two functional binding-partners that are necessary for -syn function. AlphaFold2-driven modeling suggests a scenario where Ser129P induces conformational changes in the C-terminus that stabilizes this region and facilitates protein-protein interactions. Our experiments indicate that the pathology-associated Ser129P is an unexpected physiologic trigger of -syn function, which has broad implications for pathophysiology and drug-development. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC

Brad & Will Made a Tech Pod.
163: JWST (Just Wonderful Science and Technology)

Brad & Will Made a Tech Pod.

Play Episode Listen Later Dec 18, 2022 72:32


Our friend Kishore Hari becomes our first three-time guest by joining us this week to run down some of our favorite science and tech stories of 2022, including the latest developments in nuclear fusion, some ML-driven mid-podcast protein prediction, the latest addition to the dark energy debate, extremely American asteroid deflection strategies, one of the more exciting Antarctic discoveries in recent memory, and more.Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod

PaperPlayer biorxiv cell biology
The Kelch13 compartment is a hub of highly divergent vesicle trafficking proteins in malaria parasites

PaperPlayer biorxiv cell biology

Play Episode Listen Later Dec 15, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.12.15.520209v1?rss=1 Authors: Schmidt, S., Wichers-Misterek, J. S., Behrens, H. M., Birnbaum, J., Henshall, I., Jonscher, E., Flemming, S., Castro-Pena, C., Spielmann, T. Abstract: Single amino acid changes in the parasite protein Kelch13 (K13) result in reduced susceptibility of P. falciparum parasites to Artemisinin and its derivatives (ART). Recent work indicated that K13 and other proteins co-localising with K13 (K13 compartment proteins) are involved in the endocytic uptake of host cell cytosol (HCCU) and that a reduction in HCCU results in ART resistance. HCCU is critical for parasite survival but is poorly understood, with the K13 compartment proteins are among the few proteins so far functionally linked to this process. Here we further defined the composition of the K13 compartment by identifying four novel proteins at this site. Functional analyses, tests for ART susceptibility as well as comparisons of structural similarities using AlphaFold2 predictions of these, and previously identified proteins, showed that canonical vesicle trafficking and endocytosis domains were frequent in proteins involved in resistance and endocytosis, strengthening the link to endocytosis. Despite this, most showed unusual domain combinations and large parasite-specific regions, indicating a high level of taxon-specific adaptation. A second group of proteins did not influence endocytosis or ART resistance and was characterised by a lack of vesicle trafficking domains. We here identified the first essential protein of the second group and showed that it is needed in late-stage parasites. Overall, this work identified novel proteins functioning in endocytosis and at the K13 compartment. Together with comparisons of structural predictions it provides a repertoire of functional domains at the K13 compartment that indicate a high level of adaption of the endocytosis in malaria parasites. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC

PaperPlayer biorxiv cell biology
Tethering by Uso1 is dispensable: The Uso1 monomeric globular head domain interacts with SNAREs to maintain viability.

PaperPlayer biorxiv cell biology

Play Episode Listen Later Dec 5, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.11.30.518472v1?rss=1 Authors: Bravo-Plaza, I., Tagua, V. G., Arst, H. N., Alonso, A., Pinar, M., Monterroso, B., Galindo, A., Penalva, M. A. Abstract: Uso1/p115 and RAB1 tether ER-derived vesicles to the Golgi. Uso1/p115 contains a globular-head-domain (GHD), a coiled-coil (CC) mediating dimerization/tethering and a C-terminal region (CTR) interacting with golgins. Uso1/p115 is recruited to vesicles by RAB1. Paradoxically, genetic studies placed Uso1 acting upstream of, or in conjunction with RAB1 (Sapperstein et al., 1996). We selected two missense mutations in uso1 resulting in E6K and G540S substitutions in the GHD permitting growth of otherwise inviable rab1-deficient Aspergillus nidulans. Remarkably, the double mutant suppresses the complete absence of RAB1. Full-length Uso1 and CTR{Delta} proteins are dimeric and the GHD lacking the CC/CTR is monomeric irrespective of whether they carry or not E6K/G540S. Microscopy showed recurrence of Uso1 on puncta (60 sec half-life) colocalizing with RAB1 and less so with early Golgi markers Sed5 and GeaA/Gea1/Gea2. Localization of Uso1 but not of Uso1E6K/G540S to puncta is abolished by compromising RAB1 function, indicating that E6K/G540S creates interactions bypassing RAB1. By S-tag-coprecipitation we demonstrate that Uso1 is an associate of the Sed5/Bos1/Bet1/Sec22 SNARE complex zippering vesicles with the Golgi, with Uso1E6K/G540S showing stronger association. Bos1 and Bet1 bind the Uso1 GHD directly, but Bet1 is a strong E6K/G540S-independent binder, whereas Bos1 is weaker but becomes as strong as Bet1 when the GHD carries E6K/G540S. AlphaFold2 predicts that G540S actually increases binding of GHD to the Bos1 Habc domain. In contrast, E6K seemingly increases membrane targeting of an N-terminal amphipathic a-helix, explaining phenotypic additivity. Overexpression of E6K/G540S and wild-type GHD complemented uso1{Delta}. Thus, a GHD monomer provides the essential Uso1 functions, demonstrating that long-range tethering activity is dispensable. Therefore, when enhanced by E6K/G540S, Uso1 binding to Bos1/Bet1 required to regulate SNAREs bypasses both the contribution of RAB1 to Uso1 recruitment and the reported role of RAB1 in SNARE complex formation (Lupashin and Waters, 1997), suggesting that the latter is consequence of the former. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC

The Stephen Wolfram Podcast
Business, Innovation and Managing Life (December 22, 2021)

The Stephen Wolfram Podcast

Play Episode Listen Later Dec 2, 2022 80:35


Stephen Wolfram answers questions from his viewers about business, innovation, and managing life as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-business-qa Questions include: Can you use Wolfram Language to log onto a website with username and password and read data from a website? - Will computational chem/Biochem programs (Alphafold2?) be accurate enough in its predictions to completely dominate private R&D to reduce the costs and duration of expensive wet lab experimentation? - When did you first decide to hire people at Wolfram Research? How did you recruit & evaluate them? What have you learned about hiring since then? - ​have you ever authentically read and replied to an unsolicited email if someone has an important idea for Mathematica and/or the Wolfram Language? - Do you have developers that work in a large variety of topics (changing monthly perhaps), or are most in a 'fixed' position/topic? - When you reach the level you do with Wolfram Research, what steps do you undertake to ensure that you continue to innovate and don't lose ground to your competitors, and that you don't take the wrong business decisions? - As someone with a technical background, how do you maintain a holistic overview of your company? For instance, do you better attempt to understand the company's financial books? - what is your work out routine? - how do you balance time being creative (for projects) vs the everyday necessary work? - At the start of crypto projects there is always this battle between centralization and decentralization. You need an amount of centralization in the beginning to get things going. How long should a project be given before you let in the masses? - Do you have an opinion concerning "Poor Charlie's almanacs: The Wit and Wisdom of Charles T. Munger"? - Do you use rules similar to those in Cellular Automata when you manage your company or your company's projects?

Frekvenca X
Proteini, gradniki življenja - napovednik serije

Frekvenca X

Play Episode Listen Later Nov 5, 2022 2:53


Stroju je uspelo tisto, česar človek ni zmogel. S pomočjo umetne inteligence AlphaFold2 so pred dvema letoma napovedali tridimenzionalno obliko 200 milijonov proteinov. Prej smo jih poznali približno 170 tisoč. V novi seriji Frekvence X se bomo spraševali, zakaj sploh je pomembno poznati oblike proteinov, kaj znanstvenikom ena oblika proteina pove o njegovih lastnostih, kaj sploh so proteini? Zanimali nas bodo tisti molekularni stroji, ki nam omogočajo, da živimo. Proteini v našem telesu. Pridružite se nam naslednje tri četrtke, naročite se na podkast, da česa ne zamudite.

prej pridru alphafold2 napovednik
ITmedia NEWS
“ほぼ全て”のタンパク質構造予測データ、英DeepMindが公開 解析AI「AlphaFold2」が成果

ITmedia NEWS

Play Episode Listen Later Jul 29, 2022 0:35


“ほぼ全て”のタンパク質構造予測データ、英DeepMindが公開 解析AI「AlphaFold2」が成果。 米Alphabet傘下の英DeepMindは7月28日(現地時間)、2億を超える種類のタンパク質構造の予測情報を専用データベース「AlphaFold Protein Structure Database」で公開した。この数は科学的に知られているほぼ全てのタンパク質の数に及ぶとしており、2021年同時期に公開していた約100万種類から200倍以上増加したことになる。

deepmind ai alphafold2
AI News
#2230 NVIDIA & NASA / Cerner / AlphaFold2 / Sonantic / Peltarion

AI News

Play Episode Listen Later Jul 25, 2022 5:13


NVIDIA GPUs will play a key role in interpreting data coming in from the James Webb Space Telescope, with NASA preparing to release the $10 billion scientific instrument's first full-color images next month. https://blogs.nvidia.com/blog/2022/06/08/deep-learning-james-webb-space-telescope/ Days after Oracle completed its $28 billion purchase of Cerner, Larry Ellison laid out a comprehensive vision for the cloud giant's healthcare business. https://www.statnews.com/2022/06/09/oracle-cerner-health-records-cloud-2/ Of the three rotavirus groups that cause gastroenteritis in humans, known as groups A, B, and C, groups A and C primarily affect children and are the best characterized. https://www.sciencedaily.com/releases/2022/06/220609131855.htm Spotify has constantly brought new features to its platform to differentiate itself from competitors like Apple Music. https://9to5mac.com/2022/06/13/spotify-invests-in-ai/ King, a leading interactive entertainment company serving the mobile gaming world, announced the acquisition of artificial intelligence software company Peltarion. https://aithority.com/machine-learning/mobile-game-developer-king-acquires-artificial-intelligence-company-peltarion/ Visit www.integratedaisolutions.com

AI News po polsku
#2230 NVIDIA & NASA / Cerner / AlphaFold2 / Sonantic / Peltarion

AI News po polsku

Play Episode Listen Later Jul 25, 2022 5:28


Podcast jest dostępny także w formie newslettera: https://ainewsletter.integratedaisolutions.com/ Procesory graficzne NVIDIA odegrają kluczową rolę w interpretacji danych przesyłanych strumieniowo z Kosmicznego Teleskopu Jamesa Webba, a NASA przygotowuje się do opublikowania w przyszłym miesiącu pierwszych pełnokolorowych obrazów z instrumentu naukowego wartego 10 miliardów dolarów. https://blogs.nvidia.com/blog/2022/06/08/deep-learning-james-webb-space-telescope/ Kilka dni po tym https://www.statnews.com/2022/06/09/oracle-cerner-health-records-cloud-2/ Spośród trzech grup rotawirusów, które powodują zapalenie żołądka i jelit u ludzi, zwanych grupami A, B i C, grupy A i C dotykają głównie dzieci i są najlepiej scharakteryzowane https://www.sciencedaily.com/releases/2022/06/220609131855.htm Spotify stale wprowadza nowe funkcje na swoją platformę, aby wyróżnić się na tle konkurencji, takiej jak Apple Music. https://9to5mac.com/2022/06/13/spotify-invests-in-ai/ King, wiodąca firma zajmująca się interaktywną rozrywką dla świata gier mobilnych, ogłosiła przejęcie firmy Peltarion zajmującej się oprogramowaniem do sztucznej inteligencji. https://aithority.com/machine-learning/mobile-game-developer-king-acquires-artificial-intelligence-company-peltarion/ Odwiedź www.integratedaisolutions.com

AI News auf Deutsch
#2230 NVIDIA & NASA / Cerner / AlphaFold2 / Sonantic / Peltarion

AI News auf Deutsch

Play Episode Listen Later Jul 25, 2022 5:44


NVIDIA-GPUs werden eine Schlüsselrolle bei der Interpretation der vom James-Webb-Weltraumteleskop eingehenden Daten spielen, wobei die NASA die Veröffentlichung der ersten Vollfarbbilder des wissenschaftlichen Instruments im Wert von 10 Milliarden US-Dollar im nächsten Monat vorbereitet. https://blogs.nvidia.com/blog/2022/06/08/deep-learning-james-webb-space-telescope/ Tage nachdem Oracle den Kauf von Cerner im Wert von 28 Milliarden US-Dollar abgeschlossen hatte, entwarf Larry Ellison eine umfassende Vision für das Gesundheitsgeschäft des Cloud-Riesen. https://www.statnews.com/2022/06/09/oracle-cerner-health-records-cloud-2/ Von den drei Rotavirus-Gruppen, die beim Menschen Gastroenteritis verursachen, die so genannten Gruppen A, B und C, betreffen die Gruppen A und C hauptsächlich Kinder und sind am besten charakterisiert. https://www.sciencedaily.com/releases/2022/06/220609131855.htm Spotify hat ständig neue Funktionen auf seine Plattform gebracht, um sich von Konkurrenten wie Apple Music abzuheben. https://9to5mac.com/2022/06/13/spotify-invests-in-ai/ King, ein führendes interaktives Unterhaltungsunternehmen für die Welt der mobilen Spiele, gab die Übernahme des Softwareunternehmens für künstliche Intelligenz Peltarion bekannt. https://aithority.com/machine-learning/mobile-game-developer-king-acquires-artificial-intelligence-company-peltarion/ Visit www.integratedaisolutions.com

NeuroRadio
#35 Introduction to structural biology for neuroscientists

NeuroRadio

Play Episode Listen Later May 10, 2022 113:15


東大先進科学機構の加藤英明さん(@emeKato)をゲストに、今回のChRmine構造論文やこれまでの仕事の背景、神経科学者が構造生物学について知っておくと良いこと、今後の構造生物学の展開等を伺いました。(2/20収録) Show Notes(完全版は番組HP): 加藤研 加藤さん過去インタビュー 1 2 ChRmineの論文 濡木研 Brian Kobilkaラボ Brianノーベル賞のページ 2012年のNature論文(ChR1とChR2のキメラ、C1C2の構造解析) の新着論文レビュー 服部素之さん タンパク質結晶化の基礎と蒸気拡散法の解説 (pdf注意) 芳賀先生のNature 2012 (註:だいぶ年が間違ってましたね・・・ by 加藤さん) 脂質キュービックフェーズ(LCP)法 (pdf注意) Nanobodyを使ったBrianたちの論文 iC++の構造論文(Nature筆頭論文その3) GtACR1の構造論文(Nature筆頭論文その4) 上記論文の新着論文レビュー 微生物型ロドプシンの作動メカニズムについての神取先生の解説記事 加藤先生のアドバンスト理科 Kalium rhodopsins クライオ電顕の原理の解説(吉川研) David Juliusが取ってた NTSR1-Gi1のCanonical と Non Canonicalクライオ構造論文(N筆頭その5) AlphaFold2論文 死の脳内表象について(pdf注意) Yusteがやってたoptogeneticsのonline meeting質問のシーン KR2の構造解析(N筆頭その2) の新着論文レビュー 神取研 木暮先生 2013年のNature Communicationsの論文 吉澤研 Editorial Notes: なんか一人でベラベラ話していて、聞き直してみると非常に恥ずかしいですね。。穴があったら入りたい気分です。あと、ビタミンD不足で病院のお世話になった時は我ながらビックリしました。構造解析の比重は減らすと言いましたが、普通の構造解析の比重は減らす一方、dynamicsやin situでの構造解析は今後も続ける予定です(加藤) フェイク修論発表→本丸ねーちゃー、太陽の降り注ぐかりふぉーにゃでVitD不足、などなど、ついったー経由で仕入れていた武勇伝に関して突っ込むのを失念しておりやらかしたー (萩原) 思えば分子生物学にハマったのはK+チャネルのイオン選択性フィルターがK+より小さいNa+イオンを透過させない仕組みに感動したのがきっかけでした(宮脇)

Singularity Hub Daily
Alphabet Chases Wonder Drugs With DeepMind AI Spinoff Isomorphic Labs

Singularity Hub Daily

Play Episode Listen Later Nov 7, 2021 5:51


AI research wunderkind, DeepMind, has long been all fun and games. The London-based organization, owned by Google parent company Alphabet, has used deep learning to train algorithms that can take down world champions at the ancient game of Go and top players of the popular strategy video game Starcraft. Then last year, things got serious when DeepMind trounced the competition at a protein folding contest. Predicting the structure of proteins, the complex molecules underpinning all biology, is notoriously difficult. But DeepMind's AlphaFold2 made a quantum leap in capability, producing results that matched experimental data down to a resolution of a few atoms. In July, the company published a paper describing AlphaFold2, open-sourced the code, and dropped a library of 350,000 protein structures with a promise to add 100 million more. This week, Alphabet announced it will build on DeepMind's AlphaFold2 breakthrough by creating a new company, Isomorphic Labs, in an effort to apply AI to drug discovery. “We are at an exciting moment in history now where these techniques and methods are becoming powerful and sophisticated enough to be applied to real-world problems including scientific discovery itself,” wrote Demis Hassabis, DeepMind founder and CEO, in a post announcing the company. “Now the time is right to push this forward at pace, and with the dedicated focus and resources that Isomorphic Labs will bring.” Hassabis is Isomorphic's founder and will serve as its CEO while the fledgling company gets its feet, setting the agenda and culture, building a team, and connecting the effort to DeepMind. The two companies will collaborate, but be largely independent. “You can think of [Isomorphic] as a sort of sister company to DeepMind,” Hassabis told Stat. “The idea is to really forge ahead with the potential for computational AI methods to reimagine the whole drug discovery process.” While AlphaFold2's success sparked the effort, protein folding is only one step—arguably simpler than others—in the arduous drug discovery process. Hassabis is thinking bigger. Though details are scarce, it appears the new company will build a line of AI models to ease key choke points in the process. Instead of identifying and developing drugs themselves, they'll sell a platform of models as a service to pharmaceutical companies. Hassabis told Stat these might tackle how proteins interact, the design of small molecules, how well molecules bind, and the prediction of toxicity. That the work will be separated from DeepMind itself is interesting. The company's not insignificant costs have largely been dedicated to pure research. DeepMind turned its first profit in 2020, but its customers are mostly Alphabet companies. Some have wondered if it'd face more pressure to focus on commercial products. The decision to create a separate enterprise based on DeepMind research seems to indicate that's not yet the case. If it can keep pushing the field ahead as a whole, perhaps it makes sense to fund a new organization—or organizations, seeded by future breakthroughs—as opposed to diverting resources from DeepMind's more foundational research. Isomorphic Labs has plenty of company in its drug discovery efforts. In 2020, AI in cancer, molecular, and drug discovery received the most private investment in the field, attracting over $13.8 billion, more than quadruple 2019's total. There have been three AI drug discovery IPOs in the last year, and mature startups—including Exscientia, Insilico Medicine, Insitro, Atomwise, and Valo Health—have earned hundreds of millions in funding. Companies like Genentech, Pfizer, and Merck are likewise working to embed AI in their processes. To a degree, Isomorphic will be building its business from the ground up. AlphaFold2 is without a doubt a big deal, but protein modeling is the tip of the drug discovery iceberg. Also, while AlphaFold2 had the benefit of access to hundreds of thousands of freely available, already modeled protein s...

Towards Data Science
101. Ayanna Howard - AI and the trust problem

Towards Data Science

Play Episode Listen Later Nov 3, 2021 53:15


Over the last two years, the capabilities of AI systems have exploded. AlphaFold2, MuZero, CLIP, DALLE, GPT-3 and many other models have extended the reach of AI to new problem classes. There's a lot to be excited about. But as we've seen in other episodes of the podcast, there's a lot more to getting value from an AI system than jacking up its capabilities. And increasingly, one of these additional missing factors is becoming trust. You can make all the powerful AIs you want, but if no one trusts their output — or if people trust it when they shouldn't — you can end up doing more harm than good. That's why we invited Ayanna Howard on the podcast. Ayanna is a roboticist, entrepreneur and Dean of the College of Engineering at Ohio State University, where she focuses her research on human-machine interactions and the factors that go into building human trust in AI systems. She joined me to talk about her research, its applications in medicine and education, and the future of human-machine trust. --- Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: - 0:00 Intro - 1:30 Ayanna's background - 6:10 The interpretability of neural networks - 12:40 Domain of machine-human interaction - 17:00 The issue of preference - 20:50 Gelman/newspaper amnesia - 26:35 Assessing a person's persuadability - 31:40 Doctors and new technology - 36:00 Responsibility and accountability - 43:15 The social pressure aspect - 47:15 Is Ayanna optimistic? - 53:00 Wrap-up

Innovation Celebration
Modeling Proteins with AI and the Moon's Effect on Oxygen

Innovation Celebration

Play Episode Listen Later Aug 17, 2021 39:25


Angelica and Thomas discuss the AlphaFold2 program that predicts the structure of proteins, the relationship between the Moon's orbit and Earth's oxygen levels, and more.   Subscribe in Apple Podcasts, Spotify, or wherever you're listening right now.   Facebook: https://www.facebook.com/objectivestandard Twitter: https://twitter.com/ObjStdInstitute LinkedIn: https://www.linkedin.com/company/objectivestandardinstitute/   Here are some links related to the information discussed on the show:   AlphaFold2: https://www.sciencemag.org/news/2021/07/new-public-database-ai-predicted-protein-structures-could-transform-biology?utm_campaign=news_daily_2021-07-27&et_rid=763412993&et_cid=3862395   Cyanobacteria and the moon: https://www.sciencemag.org/news/2021/08/totally-new-idea-suggests-longer-days-early-earth-set-stage-complex-life Heart attack treatment: https://www.goodnewsnetwork.org/spider-venom-blocks-damage-after-heart-attacks Supernova image:  https://www.theguardian.com/science/2021/aug/06/champagne-moment-as-supernova-captured-in-detail-for-the-first-time   Cheap thermogenerator: https://www.sciencemag.org/news/2021/08/cheap-material-converts-heat-electricity?utm_campaign=news_daily_2021-08-04&et_rid=763412993&et_cid=3871706   Zero-emission jet fuel: https://www.theguardian.com/environment/2021/aug/14/they-said-we-were-eccentrics-the-uk-team-developing-clean-aviation-fuel Aviation and environmentalism: https://theobjectivestandard.com/2021/04/the-anti-progress-crusade-against-flight/

Ciencia del Fin del Mundo
Proteínas: están develados todos los códigos de la biología? ¡AlphaFold2 dice que sí! ¿Le creemos?

Ciencia del Fin del Mundo

Play Episode Listen Later Aug 4, 2021 15:22


El programa de Inteligencia Artificial que promete "dame tu secuencia de aminoácidos y te diré como es tu proteína en 3D" fue hecho público finalmente en un artículo de "Nature", discutimos su aplicación y si es tan "desinteresado" como parece!

Coffee Break: Señal y Ruido
Ep326: Weinberg y el Modelo Estándar; Alphafold 2; Oído de Dinosaurios

Coffee Break: Señal y Ruido

Play Episode Listen Later Jul 29, 2021 171:53


La tertulia semanal en la que repasamos las últimas noticias de la actualidad científica. En el episodio de hoy: Adiós a Steven Weinberg. Repasamos sus contribuciones a la física de partículas (min 13); Se publica Alphafold2, la revolucionaria IA de Deepmind y su competidora RosettaFold (52:00); Entrevista: Carlos Outeiral (55:00); Lo que podemos aprender del oído interno de los dinosaurios (2:08:00); Proyecto GALILEO: Loeb investiga OVNIs (2:29:00); Señales de los oyentes (2:27:00). Contertulios: Alberto Aparici, Francis Villatoro, Héctor Socas. Todos los comentarios vertidos durante la tertulia representan únicamente la opinión de quien los hace... y a veces ni eso. CB:SyR es una actividad del Museo de la Ciencia y el Cosmos de Tenerife.

Science in Action
Your molecular machinery, now in 3D

Science in Action

Play Episode Listen Later Jul 22, 2021 34:23


Back in November it was announced that an AI company called DeepMind had essentially cracked the problem of protein folding – that is they had managed to successfully predict the 3D structures of complex biochemical molecules by only knowing the 2D sequence of amino acids from which they are made. They are not the only team to use machine learning to approach the vast amounts of data involved. But last week, they released the source code and methodology behind their so called AlphaFold2 tool. Today, they are publishing via a paper in the journal Nature, a simply huge database of predicted structures including most of the human proteome and 20 other model species such as yes, mice. The possibilities for any biochemists are very exciting. As DeepMind CEO Demis Hassabis tells Roland Pease, they partnered with the European Molecular Biology Laboratory to make over 350,000 protein predictions available to researchers around the world free of charge and open sourced. Dr Benjamin Perry of the Drugs for Neglected Diseases Initiative told us how it may help in the search for urgently needed drugs for difficult diseases such as Chagas disease. Prof John McGeehan of the Centre for Enzyme Innovation at Portsmouth University in the UK is on the search for enzymes that might be used to digest otherwise pollutant plastics. He received results (that would have taken years using more traditional methods) back from the AlphaFold team in just a couple of days. Prof Julia Gog of Cambridge University is a biomathematician who has been modelling Covid epidemiology and behaviour. In a recent paper in Royal Society Open Science, she and colleagues wonder whether the vaccination strategy of jabbing the most vulnerable in a population first, rather than the most gregarious or mobile, is necessarily the optimal way to protect them. Should nations still at an early stage in vaccine rollout consider her model? And did you know that elephants can hear things up to a kilometre away through their feet? And that sometimes they communicate by bellowing and rumbling such the ground shakes? Dr Beth Mortimer of Oxford University has been planting seismic detectors in savannah in Kenya to see if they can tap into the elephant messaging network, to possibly help conservationists track their movements. Image: Protein folding Credit: Nicolas_/iStock/Getty Images Presenter: Roland Pease Producers: Alex Mansfield and Samara Linton

Shadow Warrior by Rajeev Srinivasan
Episode 23: AI 2.0 and the coming language wars

Shadow Warrior by Rajeev Srinivasan

Play Episode Listen Later Apr 6, 2021 13:21


podcast abovea version is published at https://chintan.indiafoundation.in/articles/artificial-intelligence-2-0-the-language-wars-are-coming/There have been major technological breakthroughs recently, such as CRISPR-cas9 in gene-editing, AlphaFold2 in protein-folding, and the spread of blockchains and cryptocurrency. But it’s a fair bet that natural language processing via AI and Deep Learning will have the most impact on the most people. For, advances like GPT neural networks mean that machines can process language in ways practically indistinguishable from the way humans do. These language systems give us the first glimpse of Artificial General Intelligence, something that broadly comprehends the real world, as opposed to domain-specific systems (e.g. AlphaGo) that are competent only in narrow domains. This is, literally, AI 2.0.Language can be dangerous. We have heard of Deep Fakes, fake videos that look like the real thing, but fake text is equally, if not even more, frightening. Some of the scenarios Rajiv Malhotra outlines in Artificial Intelligence and the Future of Power show how the destructive capacity of AI is being harnessed by certain nations and companies, and their impact is already overwhelming. India’s very sovereignty may be in jeopardy.The rapid evolution of GPTsThe non-profit research entity OpenAI of San Francisco started its development of a series of GPTs (Generative Pre-trained Transformers) a few years ago. These are Neural Networks that don’t need human training but use Deep Learning techniques to crunch gigantic amounts of content to discover their own rules of, in this case, language.  … deleted, please listen to the podcast aboveFive Problems with GPT-3There are, however, at least five huge problems with something like GPT-3. … deleted, please listen to the podcast aboveCan these problems be fixed?Is there a way of alleviating concerns about the five above-mentioned problems? … deleted, please listen to the podcast aboveThe fourth problem is related, and it is sociological. We tend to trust anything the computers spew out, and more worryingly, anything that comes from the West. India is a country that has been gaslighted so much already that we willingly put our faith in charlatans of all sorts: politicians, epidemic experts, economists, et al. The Cambridge Analytica scandal showed how it is possible to sway sentiments; with Deep Fake text it is possible to misdirect people to astonishing levels. Is any kind of oversight or regulation even possible?The fifth problem is perhaps a little theoretical. It goes to the question of whether computers have consciousness, and whether they can ever become conscious. Subhash Kak has suggested they cannot, in Why a computer will never be truly conscious; but can they appear to be conscious to the extent that they can be malicious? The Skynet of the Terminator series comes to mind; so do the machines of the Matrix series. But if computers can never become conscious, can they ever be truly creative? We don’t know the answer.The arrival of natural-language processing models appears to be a significant breakthrough in Artificial Intelligence. Whether it will fulfill its promise, or fall by the wayside as earlier revolutions did, is not known at the moment.What is known is that once again India finds itself on the back foot when a new technology appears. We have neither the trained people nor the hardware infrastructure to play more than a bit part here. And that means almost certainly that the technology will be weaponized against us, as previous technologies were, by previous invaders. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit rajeevsrinivasan.substack.com

Let's Know Things
Protein Folding Problem

Let's Know Things

Play Episode Listen Later Dec 15, 2020 26:21


This week we talk about SETI@home, macromolecules, and AlphaFold2.We also discuss proteins, Foldit, and CASP. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe

seti casp protein folding alphafold2 foldit
Gesundheit 4.0
AlphaFold2 und das Geheimnis der Proteinfaltung

Gesundheit 4.0

Play Episode Listen Later Dec 12, 2020 51:06


Die DeepMind AI “AlphaFold2” wurde in letzter Zeit viel diskutiert - was das genau ist, was sie kann und warum wir davon sprechen, ein 50 Jahre altes Problem der Biologie gelöst zu haben erklären euch Sina und Leon in der fünften Folge von Gesundheit 4.0. Nebenbei lernt Ihr auch noch etwas über Proteine und welchem Obst wir ähnlicher sind als man vielleicht denkt … Wie immer: Wir hoffen es gefällt Euch. Rein klicken lohnt sich.