“Eventually, my dream would be to simulate a virtual cell.”—Demis Hassabis

The aspiration to build the virtual cell is considered a moonshot for digital biology. Recently, 42 leading life scientists published a paper in Cell on why this is so vital and how it may ultimately be accomplished. This conversation is with two of the authors: Charlotte Bunne, now at EPFL, and Steve Quake, a professor at Stanford University who heads up science at the Chan Zuckerberg Initiative.

The audio (above) is available on iTunes and Spotify. The full video is linked here, at the top, and can also be found on YouTube.

TRANSCRIPT WITH LINKS TO AUDIO

Eric Topol (00:06):
Hello, it's Eric Topol with Ground Truths, and we've got a really hot topic today, the virtual cell, and what I think is an extraordinarily important, futuristic paper that recently appeared in the journal Cell. The first author, Charlotte Bunne, is from EPFL, previously at Stanford's Computer Science, and Steve Quake, a friend of mine for many years, heads up science at the Chan Zuckerberg Initiative (CZI) as well as being a professor at Stanford. So welcome, Charlotte and Steve.

Steve Quake (00:42):
Thanks, Eric. It's great to be here.

Charlotte Bunne:
Thanks for having me.

Eric Topol (00:45):
Yeah. So this article, with Charlotte as the first author and Steve as one of the senior authors, appeared in Cell in December, and it just grabbed me: “How to build the virtual cell with artificial intelligence: Priorities and opportunities.” It's the holy grail of biology. We're in this era of digital biology, and as you point out in the paper, it's a convergence of what's happening in AI, which is moving at an extraordinary velocity, and what's happening in biology. So maybe we can start off with this: you had some 42 authors, who I assume congregated for a conference or something, or how did you get 42 people to agree to the words in this paper?

Steve Quake (01:33):
We did.
We had a meeting at CZI to bring together members from many different parts of the community, from computer science to bioinformatics, AI experts, biologists who don't trust any of this. We wanted to have some real contrarians in the mix as well and have them have a conversation together about whether there is an opportunity here. What's the shape of it? What's realistic to expect? And that was sort of the genesis of the article.

Eric Topol (02:02):
And Charlotte, how did you get to be drafting the paper?

Charlotte Bunne (02:09):
So I did my postdoc with Aviv Regev at Genentech and Jure Leskovec at CZI, and Jure was part of the residency program of CZI. This is how we got involved, and Jure also had prior work with Steve on the universal cell embedding. So this is how everything got started.

Eric Topol (02:29):
And it's actually amazing because it's a who's who of people who work in life science, AI and digital biology and omics. I mean, it's pretty darn impressive. So I thought I'd start off with a quote in the article, because it kind of tells a story of where this could go. The quote was: “AIVC (artificial intelligence virtual cell) has the potential to revolutionize the scientific process, leading to future breakthroughs in biomedical research, personalized medicine, drug discovery, cell engineering, and programmable biology.” That's a pretty big statement. So maybe we can just kind of toss that around a bit and give it a little more thought and color as to what you were positing there.

Steve Quake (03:19):
Yeah, Charlotte, you want me to take the first shot at that? Okay. So Eric, it is a bold claim and we have a really bold ambition here. We view that over the course of a decade, AI is going to provide the ability to make a transformative computational tool for biology. Right now, cell biology is 90% experimental and 10% computational, roughly speaking.
And you've got to do all kinds of tedious, expensive, challenging lab work to get to the answer. I don't think AI is going to replace that, but it can invert the ratio. So within 10 years, I think we can get to biology being 90% computational and 10% experimental. And the goal of the virtual cell is to build a tool that'll do that.

Eric Topol (04:09):
And I think a lot of people may not understand why it is considered the holy grail, because the cell is the fundamental unit of life and it's incredibly complex. It's not just all the things happening in the cell with atoms and molecules and organelles and everything inside, but there are also the interactions of the cell with other cells, in the surrounding tissue and the outside world. So it's really quite an extraordinary challenge that you've taken on here. And I guess there's some debate, do we have the right foundation? We're going to get into foundation models in a second. A good friend of mine and part of this whole process that you got together, Eran Segal from Israel, said, “We're at this tipping point…All the stars are aligned, and we have all the different components: the data, the compute, the modeling.” And in the paper you describe how, over the last couple of decades, we have accumulated so many rich datasets from global initiatives. But then there are also questions. Do we really have the data? I think Bo Wang especially asked about that. Maybe Charlotte, what are your thoughts about data deficiency? There's a lot of data, but do you really have what we need before we bring it all together for this kind of single model that will get us to the virtual cell?

Charlotte Bunne (05:41):
So I think one core idea of building this AIVC is that we can basically leverage all the experimental data that is collected overall. This also goes back to the point Steve just made.
Meaning we can integrate data across many different studies, because the AI algorithms, or the architectures that power such an AIVC, are able to integrate datasets on many different scales. So we are moving away from the dogma of designing one algorithm for one dataset to the idea of an architecture that can take in multiple datasets on multiple scales. This will help us be somewhat efficient with the type of experiments that we need to conduct. And again, as Steve just said, ultimately we can very much steer which datasets we need to collect.

Charlotte Bunne (06:34):
Currently, of course, we don't have all the data that is sufficient. In particular, most of the tissues we have are healthy tissues. We don't have all the disease phenotypes that we would like to measure; having patient data is always a very tricky case. We have mostly non-interventional data, meaning we have very limited understanding of the effects of different perturbations — perturbations that happen on many different scales in many different environments. So we need to collect a lot here. I think the overall journey is that we take the data that we have, we make clever decisions on the data that we will collect in the future, and we have this self-improving entity that is aware of what it doesn't know. We need to be able to understand how well we can predict something in a given regime; if we cannot, then we should focus our data collection efforts there. So that's not the present state, but this will guide future collection.

Eric Topol (07:41):
Speaking of data, one of the things I think is fascinating is we saw how AlphaFold2 really revolutionized predicting proteins. But remember, that was based on an extraordinary resource that had been built, the Protein Data Bank, that enabled it.
And for the virtual cell there's no such thing as a Protein Data Bank. It's so much more dynamic, as you emphasize, Charlotte, and these perturbations are just all across the board. Now, the Human Cell Atlas, which currently has some tens of millions of cells but is heading toward a billion — we learned that it used to be 200 cell types, and now I guess it's well over 5,000 — and the approximately 37 trillion cells in the average adult's body make for a formidable map that's being made now. And I guess the idea that you're advancing is that, and this goes back to a statement you made earlier, Steve, everything we did in science used to be hypothesis driven. But if we could get a computational model of the virtual cell, then we could have AI exploration of the whole field. Is that really the crux of this?

Steve Quake (09:06):
Yes. A couple of thoughts on that. Theo Karaletsos, our lead AI person at CZI, says machine learning is the formalism through which we understand high-dimensional data, and I think that's a very deep statement. And biological systems are intrinsically very high dimensional. You've got 20,000 genes in the human genome, and in these cell atlases you're measuring all of them at the same time in each single cell. There's a lot of structure in the relationships of their gene expression that is just not evident to the human eye. For example, CELLxGENE, our database that aggregates all of the single-cell transcriptomic data, is now over a hundred million cells. And as you mentioned, we're seeing ways to increase that by an order of magnitude in the near future.
The project that Jure Leskovec and I worked on together, which Charlotte referenced earlier, was a first attempt to build a foundation model on that data to discover some of the correlations and structure that were there.

Steve Quake (10:14):
And so, with a subset, I think it was 20 or 30 million cells, we built a large language model and began asking it, what do you understand about the structure of this data? And it kind of discovered lineage relationships without us teaching it. We trained on a matrix of numbers, no biological information there, and it learned a lot about the relationships between cell type and lineage. That emerged from the high-dimensional structure, which was super pleasing to us and, for me personally, gave me the confidence to say this stuff is going to work out. There is a future for the virtual cell. It's not some made-up thing. There is real substance there, and this is worth investing an enormous amount of CZI's resources in going forward and trying to rally the community around as a project.

Eric Topol (11:04):
Well yeah, the premise here is that there is a language of life, and you just made a good case that there is, if you can predict, if you can query, if you can generate like that. It is reminiscent of the famous Go match against Lee Sedol, the world champion, and how the machine came up with a move (Move 37) years ago that no human would've anticipated, and I think that's what you're getting at — and the ability for inference and reasoning now to add to this. So Charlotte, there are two terms in the paper that are unfamiliar to many of the listeners or viewers of this podcast, universal representations (URs) and virtual instruments (VIs), that you make a pretty significant part of how you are going about this virtual cell model.
So could you describe those, and also the embeddings as part of the universal representation? Because I think embeddings, these meaningful relationships, are key to what Steve was just talking about.

Charlotte Bunne (12:25):
Yes. So in order to leverage very different modalities — modalities that take measurements across different scales — the idea is that we have large models, maybe transformer models, that might be very different from each other. If I have imaging data, I have a vision transformer; if I have text data, I have large language models; those designed for DNA have a very wide context; and so on and so forth. But the idea is that we have models that are connected through the scales of biology, because we know those scales — we know which components are involved in measurements that happen upstream. So we have this interconnection of very large models that are trained on many different data, and they have internal representations that capture everything they've seen. This is what we call the universal representations (URs) that exist across the scales of biology.

Charlotte Bunne (13:22):
And what is great about AI — and this is a bit like a short history of AI — is first the ability to predict, then in recent years the ability to generate: we can generate new hypotheses, we can generate modalities that we are missing, we can potentially generate a cellular or molecular state that has a certain property. But I think what's really coming is the ability to reason. We see this in the very large language models, the ability to reason about a hypothesis and how we can test it. This is what those instruments ultimately need to do. We need to be able to simulate the effect of a perturbation on a cellular phenotype.
On the internal representation — the universal representation of a cell state — we need to simulate the downstream effect a mutation has and how it would propagate through our representations. And we need to build many different types of virtual instruments that allow us to design and build all the capabilities the AI virtual cell ultimately needs to possess: to reason, to generate hypotheses, to predict the next experiment to conduct, to predict the outcome of a perturbation experiment, to design cellular and molecular states in silico, things like that. This is why we make the separation between the internal representations and the instruments that operate on those representations.

Eric Topol (14:47):
Yeah, that's what I really liked: you basically described the architecture, how you're going to do this, by putting these URs into the VIs, having a decoder and a manipulator. You've basically got the idea, if you can bring all these different integrations about, which of course is pending. Now, there are obviously many naysayers who say this is impossible. One of them is Philip Ball. I don't know if you read his book, How Life Works. He's a science journalist and a prolific writer. He says, “Comparing life to a machine, a robot, a computer, sells it short. Life is a cascade of processes, each with a distinct integrity and autonomy, the logic of which has no parallel outside the living world.” Is he right? Is there no way to model this — is it silly, too complex?

Steve Quake (15:50):
We don't know, alright. And it's great that there are naysayers. If everyone agreed this was doable, would it be worth doing? The whole point is to take risks and do something really challenging on the frontier where you don't know the answer. If we knew it was doable, I wouldn't be interested in doing it.
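The UR/VI separation Charlotte describes above (12:25–14:47) can be made concrete with a purely illustrative toy sketch. Every name, dimension, and weight below is invented; the real models are large trained neural networks, not random linear maps. The point is only the division of labor: an encoder produces a universal representation, and virtual instruments — here a perturbation "manipulator" and a decoder — operate on that representation rather than on the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Universal representation (UR): an encoder maps a raw measurement
# (here, a toy 100-gene expression vector) into a shared embedding space.
D_GENES, D_EMBED = 100, 16
W_enc = rng.normal(size=(D_GENES, D_EMBED)) / np.sqrt(D_GENES)

def encode(expression):
    """Map a raw expression profile to its universal representation."""
    return np.tanh(expression @ W_enc)

# --- Virtual instrument (VI) #1, a manipulator: operates *on the UR*,
# e.g. predicting how a perturbation shifts the cell-state embedding.
W_pert = rng.normal(size=(D_EMBED, D_EMBED)) / np.sqrt(D_EMBED)

def perturb(ur, dose=1.0):
    """Simulate a perturbation as a learned shift in embedding space."""
    return ur + dose * np.tanh(ur @ W_pert)

# --- VI #2, a decoder: read the embedding back into an observable modality.
W_dec = rng.normal(size=(D_EMBED, D_GENES)) / np.sqrt(D_EMBED)

def decode(ur):
    return ur @ W_dec

cell = rng.normal(size=D_GENES)     # one toy cell profile
ur = encode(cell)                   # universal representation
ur_perturbed = perturb(ur)          # manipulator VI acts on the UR
readout = decode(ur_perturbed)      # decoder VI returns to gene space

print(ur.shape, readout.shape)
```

Because every instrument reads and writes the same embedding space, new instruments can be composed without retraining the encoder — which is the design rationale for the separation.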
So I personally am happy that there's not a consensus.

Eric Topol (16:16):
Well, to capture people's imagination here: if you're successful and you marshal a global effort — I don't know who's going to pay for it, because there's a lot of work coming here going forward — but if you can do it… Today we talk about, oh, let's make an organoid so we can figure out how to treat this person's cancer or understand this person's rare disease or whatever, and we have to wait weeks for the culture, with all the expense and whatnot. Instead, you could just do it in a computer, in silico, with a virtual twin of a person's cells and tissue. So the opportunity here — I don't know if people appreciate this — is just extraordinary, quick and cheap, if you can get there. And it's such a bold initiative. Who will pay for this, do you think?

Steve Quake (17:08):
Well, CZI is putting an enormous amount of resources into it, and it's a major project for us. We have been laying the groundwork for it. We recently put together what I think is, if not the largest, one of the largest GPU supercomputer clusters for nonprofit basic science research, which came online at the end of last year. In fact, in December we put out an RFA for the scientific community to propose using it to build models. So we're sharing that resource with the scientific community. As I think you appreciate, one of the real challenges in the field has been access to compute resources: industry has it; academia is at a much lower level. We are able to be somewhere in between — not quite at the level of a private tech company, but beyond what most universities are able to do — and we're trying to use that to drive the field forward. We're also planning on launching RFAs this year to help drive this project forward, funding people globally on that.
And we are building a substantial internal effort within CZI to help drive this project forward.

Eric Topol (18:17):
I think it has the look of the Human Genome Project, which, at the time it was originally launched, people thought was impossible. And then look what happened. It got done. And now sequencing a genome is just a commodity, relatively inexpensive compared to what it used to be.

Steve Quake (18:36):
I think a lot about those parallels. And I will say one thing: Philip Ball, I will concede him the point, cells are very complicated. With the genome project, the sort of genius there was to turn it from a biology problem into a chemistry problem: there is a test tube with a chemical in it, work out the structure of that chemical, and if you can do that, the problem is solved. I think what it means to have the virtual cell is much more complex and ambiguous, in terms of defining what it's going to do and when you're done. So we have our work cut out for us there. And that's a little bit why I established our North Star at CZI for the next decade as understanding the mysteries of the cell — that word mystery is very important to me. The molecules, as you pointed out earlier, are understood: genomes sequenced, protein structures solved or predicted. Those are, if not solved problems, pretty close to being solved. The real mystery is how they work together to create life in the cell, and that's what we're trying to answer with this virtual cell project.

Eric Topol (19:43):
Yeah, I think another thing that's happening concurrently, which adds to the likelihood that you'll be successful, is that we've never seen foundation models coming out in life science as they have in recent weeks and months. Never. I have a paper in Science coming out tomorrow summarizing the progress — not just RNA, DNA, ligands.
I mean the whole idea — AlphaFold3, but now Boltz and so many others. It's just amazing how fast this torrent of new foundation models is arriving. So Charlotte, what do you think accounts for this? This is unprecedented in life science, to see foundation models coming out at this clip — on evolution, on, you name it, the design of every different molecule of life, and of course cells included in that. What do you think is going on here?

Charlotte Bunne (20:47):
On the one hand, of course, we benefit from and inherit all the tremendous efforts that have been made in the last decades on assembling datasets that are very standardized. CELLxGENE is very AI friendly, you could say — it is a platform that is easy to feed into algorithms. But at the same time, we actually also see really new building mechanisms, design principles of AI algorithms themselves. I think we have understood that in order to really make progress and build systems that work well, we need to build AI tools that are designed for biological data. To give you an easy example, if I use a large language model built for text, it's not going to work out of the box for DNA, because we have different reading directions, different context lengths, and many more differences.

Charlotte Bunne (21:40):
And if I look at standard computer vision, where we can say AI really excels, and I apply standard vision transformers to multiplexed images, they're not going to work, because normal computer vision architectures always expect the same three inputs, RGB, right? In multiplexed images, I'm measuring up to 150 proteins, potentially, in a single experiment, but every study will measure different proteins. So I deal with many different, larger scales, and the attention mechanisms we have in usual computer vision transformers are not going to work anymore; they're not going to scale.
And at the same time, I need to be completely flexible with whatever combination of input channels I face in a given experiment. This is what we did, for example, in our very first work: inheriting the design principles that we laid out in the AI virtual cell paper and coming up with new AI architectures that deal with the very special requirements biological data have.

Charlotte Bunne (22:46):
So we now have a lot of computer scientists who work very closely with, and have a very good understanding of, biologists, and biologists who are getting much more into computer science — people who are fluent in both languages, who are able to build models that are adapted to and designed for biological data. We don't just take computer vision architectures that work well on street scenes and try to apply them to biological data. It's a very different way of thinking: constructing specialized architectures, besides, of course, the tremendous data efforts that have happened in the past.

Eric Topol (23:24):
Yeah, and we're not even talking about just sequence, because we've also got imaging, which has gone through a revolution — being able to image subcellular structures without having to use stains that would disrupt cells. That's another part of the deep learning era that came along. One thing I thought was fascinating in the paper in Cell: you wrote, “For instance, the Short Read Archive of biological sequence data holds over 14 petabytes of information, which is 1,000 times larger than the dataset used to train ChatGPT.” That's a lot of tokens, a lot of stuff, compute resources. It's almost like you're going to need a DeepSeek type of approach — not that DeepSeek is necessarily as economical as it claims, but there's a data challenge here in working with that massive amount, which is different from human language.
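Charlotte's channel-flexibility point (21:40–22:46 above) can be sketched in a few lines — a toy, not the authors' actual architecture; the marker names, dimensions, and random weights are all invented for illustration. The idea is to treat each measured protein channel as a token carrying a learned marker-identity embedding, and let attention pool over however many channels a given study happens to measure, so the output size never depends on the panel:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 32  # shared embedding width

# Hypothetical marker vocabulary; real panels differ from study to study.
MARKERS = {"CD3": 0, "CD8": 1, "CD20": 2, "PanCK": 3, "DAPI": 4}
marker_embed = rng.normal(size=(len(MARKERS), D)) / np.sqrt(D)
query = rng.normal(size=D)  # learned pooling query (here: random)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def encode_cell(channels):
    """Channel-agnostic encoding of one cell's multiplexed measurement.

    `channels` maps marker name -> measured intensity; any subset of
    MARKERS may be present, so the token count varies per experiment.
    """
    # one token per measured channel: intensity scales its marker embedding
    tokens = np.stack(
        [v * marker_embed[MARKERS[name]] for name, v in channels.items()]
    )
    attn = softmax(tokens @ query)  # attention over the measured channels
    return attn @ tokens            # fixed-size output for any panel size

a = encode_cell({"CD3": 0.9, "DAPI": 0.5})              # 2-channel panel
b = encode_cell({"CD3": 0.9, "CD8": 0.2, "DAPI": 0.5})  # 3-channel panel
print(a.shape, b.shape)  # same representation size either way
```

This is the same trick that lets one model ingest studies with different protein panels: RGB-style fixed-channel convolutions are replaced by a set of tokens of variable length.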
That is our language, wouldn't you say?

Steve Quake (24:35):
So Eric, that brings to mind one of my favorite quotes from Sydney Brenner, who was such a wit. In 2000, at the sort of early first flush of success in genomics, he said biology is drowning in a sea of data and starving for knowledge. A very deep statement, right? And that's a little bit what the motivation was for putting the Short Read Archive statistic into the paper. Again, for me, part of the value of this endeavor of creating a virtual cell is that it's a tool to help us translate data into knowledge.

Eric Topol (25:14):
Yeah, well there are two, I think, phenomenal figures in your Cell paper: the first gets across the capabilities of the virtual cell, and the second compares the virtual cell to the real, or physical, cell. We'll link those with this in the transcript. The other thing we'll link is a nice Atlantic article, “A Virtual Cell Is a ‘Holy Grail' of Science. It's Getting Closer.” It might not be quite as close as next week or next year, but it's getting close, and that's good for people who are not well grounded in this, because it's much more out of the technical realm. This is really exciting, what you're onto here. And what's interesting, Steve, since I've known you for so many years: earlier in your career you really worked on omics, that is, DNA and RNA, and in recent times you've made this switch to cells. Is that just because you're trying to anticipate the field, or tell us a little bit about your migration.

Steve Quake (26:23):
Yeah, so a big part of my career has been trying to develop new measurement technologies that provide insight into biology. Decades ago that was understanding molecules. Now it's understanding more complex biological things, like cells, and it was a natural progression. I mean, we built the sequencers, sequenced the genomes, done.
And it was clear that people were just going to do that at scale then and create lots of data, and hopefully knowledge would come out of that. But for me as an academic — I never thought I'd be in the position I'm in now, put it that way; I just wanted to keep running a small research group — I realized I would have to get out of the genome thing and find the next frontier. It became this intersection of microfluidics and genomics: as you know, I spent a lot of time developing microfluidic tools to analyze cells and do single-cell biology to understand their heterogeneity. And that, through a winding path, led me to all these cell atlases and to where we are now.

Eric Topol (27:26):
Well, we're fortunate for that, and also for your work with CZI to help propel this forward — and it sounds like we're going to need a lot of help to get this thing done. Now Charlotte, as a computer scientist now at EPFL, what are you going to do to keep working on this, and what's your career advice for people in computer science who have an interest in digital biology?

Charlotte Bunne (27:51):
So I work in particular on the prospect of using this to build diagnostic tools and to make diagnostics in the clinic easier, because ultimately we have somewhat limited capabilities in the hospital to run deep omics. But the idea of being able to map a cheaper and lighter modality, or a simpler diagnostic test, into something much richer — because a model has seen all those different data and can contextualize it — is very interesting. We've seen all those pathology foundation models. If I can always run an H&E but then decide when to run deeper diagnostics to get a better or more accurate prediction, that is very powerful, and it ultimately reduces costs while preserving the precision that we have in hospitals. So my faculty position right now is co-located between the School of Life Sciences and the School of Computer Science.
So I have a dual affiliation, and I'm affiliated with the hospitals to actually make this possible. And as career advice: don't be shy, and don't just stick to your discipline.

Charlotte Bunne (28:56):
I have a bachelor's in biology, but I never only did biology. I have a PhD in computer science, which you would think a bachelor's in biology doesn't necessarily qualify you for. This interdisciplinarity also requires you to be very fluent, very comfortable in reading many different styles of papers and publications, because a publication in a computer science venue will be very different from the way we write in biology. So don't stick to your study program; be free in selecting whatever course gets you closer to the knowledge you need for the research or whatever task you are working on.

Eric Topol (29:39):
Well, Charlotte, the way you're set up there, with this coalescence of life science and computer science, is so ideal and so unusual here in the US, so that's fantastic. That's what we need, and that's really the underpinning of how you're going to get to the virtual cell: getting these two communities together. And Steve, likewise, you were an engineer and somehow became one of the pioneers of digital biology, way back before it had that term. This interdisciplinary, transdisciplinary work — we need so much of that in order for you all to be successful, right?

Steve Quake (30:20):
Absolutely. There's so much great discovery to be done on the boundary between fields. I trained as a physicist and made my career on this boundary between physics, biology, and technology development, and it's just been a gift that keeps on giving. You've got a new way to measure something, you discover something new scientifically, and it all suggests new things to measure.
It's very self-reinforcing.

Eric Topol (30:50):
Now, a couple of people whom you know well have made some pretty big statements about this whole era of digital biology, and I think the virtual cell is perhaps the biggest initiative of all the ongoing digital biology efforts. Jensen Huang wrote, “for the first time in human history, biology has the opportunity to be engineering, not science.” And Demis Hassabis said, ‘we're seeing engineering science, you have to build the artifact of interest first, and then once you have it, you can use the scientific method to reduce it down and understand its components.' Well, here there's a lot to do to understand its components. Right now, AI drug discovery is in high gear and there are umpteen companies working on it, but it doesn't account for the cell — it's basically protein–protein and protein–ligand interactions. What if we had drug discovery that was cell based? Could you comment on that? Because that doesn't even exist right now.

Steve Quake (32:02):
Yeah, I can say something first; Charlotte, if you've got thoughts, I'm curious to hear them. So I do think AI approaches are going to be very useful for designing molecules. From the perspective of designing new therapeutics, whether they're small molecules or antibodies, there's a ton of investment in that area — it's near-term fruit, a perfect thing for venture people to invest in, and there's opportunity there. There's been enough proof of principle.
However, I do agree with you that if you want to really understand what happens when you drug a target, you're going to want some model of the cell — and maybe not just the cell, but all the different cell types of the body — to understand where toxicity will come from, whether you have on-target toxicity, and whether you get efficacy on the thing you're trying to do.

Steve Quake (32:55):
And so, we really hope that people will use the virtual cell models we're going to build as part of the drug discovery and development process. I agree with you it's a bit of a blind spot, and we think if we make something useful, people will use it. The other thing I'll say on that point is that I'm very enthusiastic about the future of cellular therapies, and one of our big bets at CZI has been starting the New York Biohub, which is aimed at being very ambitious about establishing the engineering and scientific foundations of how to engineer radically more powerful cellular therapies. The virtual cell is going to help them do that, right? It's going to be essential for them to achieve that mission.

Eric Topol (33:39):
I think you're pointing out one of the most important things going on in medicine today: we didn't anticipate that live cell therapy — engineered cells, ideally off the shelf or in vivo, not just having to take them out and work on them outside the body — would be an ongoing revolution, and it's not just in cancer, it's in autoimmune diseases and many others. So it's part of the virtual cell need; we need this. One of the things that's a misnomer, and I want you both to comment on it: we keep talking about single cell, single cell. There's a spatial multi-omics paper this week, five different single-cell scales all integrated. It's great, but we don't get to single cells. We're basically looking at 50 cells, 100 cells. We're not doing single cell because we're not going deep enough.
Is that just a matter of time before we actually are? And of course, the closer we get down to single or a few cells, the more insights we're going to get. Would you comment about that? Because we have all this literature on single cell coming out every day, but we're not really there yet.Steve Quake (34:53):Charlotte, do you want to take a first pass at that and then I can say something?Charlotte Bunne (34:56):Yes. So it depends. So I think if we look at certain spatial proteomics, we still have subcellular resolution. So of course, we always measure many different cells, but we are able to somewhat get down to a resolution where we can look at certain colocalization of proteins. This also goes back to the point just made before about having this very good environment to study drugs. If I want to build a new drug, if I want to build a new protein, the idea of building this multiscale model allows us to actually simulate different binding changes, because we simulate the effect of a drug. Ultimately, the readouts we have are subcellular. So of course, in spatial biology we often have methods that are rather coarse: they have a spot that averages over some cells, like hundreds of cells or a few cells.Charlotte Bunne (35:50):But I think we also have more and more technologies that are zooming in, that are subcellular, where we can actually tag or have those probe-based methods that allow us to zoom in. There's microscopy of individual cells to really capture them in 3D. They are of course not very high throughput yet, but it gives us also an idea of the morphology and how morphology ultimately determines certain cellular properties or cellular phenotypes. So I think there's lots of progress also on the experimental side, and that ultimately will feed back into the AI virtual cell, those models that will be fed by those data. 
Similarly, looking at dynamics, right, looking at live imaging of individual cells and their morphological changes. Also, this ultimately is data that we'll need to get a better understanding of disease mechanisms, cellular phenotypes and functions, and perturbation responses.Eric Topol (36:47):Right. Yes, Steve, you can comment on that and the amazing progress that we have made with space and time, spatiotemporal resolution, spatial omics over these years, but that we still could go deeper in terms of getting to individual cells, right?Steve Quake (37:06):So, what can we do with a single cell? I'd say we are very mature in our ability to amplify and sequence the genome of a single cell, amplify and sequence the transcriptome of a single cell. You can ask, is one cell enough to make a biological conclusion? And maybe I think what you're referring to is people want to see replicates, and so you can ask how many cells you need to see to have confidence in any given biological conclusion, which is a reasonable thing. It's a statistical question in good science. I think I've been very impressed with what the mass spec people have been doing recently. I think they've finally cracked the ability to look at proteins from single cells, and they can look at a couple thousand proteins. That was, I think, one of these Nature method of the year things at the end of last year, and deep visual proteomics.Eric Topol (37:59):Deep visual proteomics, yes.Steve Quake (38:00):Yeah, they are over the hump. Yeah, they are over the hump with single cell measurements. Part of what's missing right now, I think, is the ability to reliably do all of that on the same cell. So this is what Charlotte was referring to: being able to do multi-modal measurements on single cells. That's kind of in its infancy, and there's a few examples, but there's a lot more work to be done on that. 
And I think also the fact that these measurements are all destructive right now means you're losing the ability to look at how the cells evolve over time. You've got to say: at this time point, I'm going to dissect this thing and look at a state, and I don't get to see what happens further down the road. So that's another future measurement challenge to be addressed.Eric Topol (38:42):And I think I'm just trying to identify some of the multitude of challenges in this extraordinarily bold initiative, because there's no shortage, and that's good about it. It has given people lots of work to do to overcome some of these challenges. Now before we wrap up, besides the fact that you point out that all the work has to be done and be validated in real experiments, not just live in a virtual AI world, you also comment about the safety and ethics of this work, assuming you're going to gradually get there and be successful. So could either or both of you comment about that? Because it's very thoughtful that you're thinking already about that.Steve Quake (41:10):As scientists and members of the larger community, we want to be careful and ensure that we're interacting with people who set policy in a way that ensures that these tools are being used to advance the cause of science and not do things that are detrimental to human health, and are used in a way that respects patient privacy. And so, the ethics around how you use all this with respect to individuals is going to be important to be thoughtful about from the beginning. And I also think there's an ethical question around what it means to be publishing papers: you don't want people to be forging papers using data from the virtual cell without being clear about where that came from, pretending that it was a real experiment. 
So there's issues around those sorts of ethics as well that need to be considered.Eric Topol (42:07):And of those 40-some authors around the world, do you have the sense that you'll all work together to achieve this goal? Is there kind of a global bonding here that's going to collaborate?Steve Quake (42:23):I think this effort is going to go way beyond those 40 authors. It's going to include a much larger set of people and I'm really excited to see that evolve with time.Eric Topol (42:31):Yeah, no, it's really quite extraordinary how you kicked this thing off, and the paper is the blueprint for something that we all anticipate could change a lot of science and medicine. I mean we saw, as you mentioned, Steve, how that deep visual proteomics (DVP) saved lives. It's what I wrote about as spatial medicine, no longer spatial biology. And so, the way that this can change the future of medicine, I think a lot of people just have to have a little bit of imagination: once we get there with this AIVC, there's a lot in store that's really quite exciting. Well, I think this has been an invigorating review of that paper and some of the issues surrounding it. I couldn't be more enthusiastic for your success and ultimately where this could take us. Did I miss anything during the discussion that we should touch on before we wrap up?Steve Quake (43:31):Not from my perspective. It was a pleasure as always, Eric, and a fun discussion.Charlotte Bunne (43:38):Thanks so much.Eric Topol (43:39):Well thank you both and all the co-authors of this paper. We're going to be following this with great interest, and I think for most people listening, they may not know that this is in store for the future. Someday we will get there. I think one of the things to point out right now is that the models we have today, the large language models based on transformer architectures, are going to continue to evolve. 
We're already seeing so much in inference and the ability for reasoning to be exploited, not asking for prompts with immediate answers, but waiting for days to get back a lot more work from a lot more computing resources. But we're going to get models in the future that fold this together. I think that's one of the things that you've touched on in the paper, so that whatever we have today, in concert with what you've laid out, AI is just going to keep getting better.Eric Topol (44:39):The biology foundation models are going to get broader and more compelling as to their use cases. So that's why I believe in this. I don't see this as a static situation right now. I just think that you're anticipating the future, and we will have better models to be able to integrate this massive amount of what some people would consider disparate data sources. So thank you both and all your colleagues for writing this paper. I don't know how you got the 42 authors to agree to it all, which is great, and it's just the beginning of something that's a new frontier. So thanks very much.Steve Quake (45:19):Thank you, Eric.**********************************************Thanks for listening, watching or reading Ground Truths. Your subscription is greatly appreciated.If you found this podcast interesting please share it!That makes the work involved in putting these together especially worthwhile.All content on Ground Truths—newsletters, analyses, and podcasts—is free, open-access, with no ads.Paid subscriptions are voluntary and all proceeds from them go to support Scripps Research. They do allow for posting comments and questions, which I do my best to respond to. Many thanks to those who have contributed—they have greatly helped fund our summer internship programs for the past two years. 
And such support is becoming more vital in light of current changes in funding of US biomedical research at NIH and other governmental agencies.Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research. Get full access to Ground Truths at erictopol.substack.com/subscribe
Roman Rivera talks about the effects of electronic monitoring for US pretrial defendants. “Release, Detain, or Surveil? The Effect of Electronic Monitoring on Defendant Outcomes” by Roman Rivera. OTHER RESEARCH WE DISCUSS IN THIS EPISODE: "The Effects of Pretrial Detention on Conviction, Future Crime, and Employment: Evidence from Randomly Assigned Judges" by Will Dobbie, Jacob Goldin, and Crystal S. Yang. “Criminal Recidivism after Prison and Electronic Monitoring” by Rafael Di Tella and Ernesto Schargrodsky. “Better at Home Than in Prison? The Effects of Electronic Monitoring on Recidivism in France” by Anaïs Henneguelle, Benjamin Monnery, and Annie Kensey. “Can Electronic Monitoring Reduce Reoffending?” by Jenny Williams and Don Weatherburn. Probable Causation Episode 79: Jenny Williams. “The Effects of Electronic Monitoring on Offenders and Their Families" by Julien Grenet, Hans Grönqvist, and Susan Niknami. "Human Decisions and Machine Predictions" by Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, Sendhil Mullainathan. "Algorithmic Risk Assessments and the Double-Edged Sword of Youth" by Megan T. Stevenson and Christopher Slobogin. "The Effects of Parental and Sibling Incarceration: Evidence from Ohio" by Samuel Norris, Matthew Pecenco, and Jeffrey Weaver.
In this episode, Jure Leskovec, Co-founder of Kumo AI and Professor of Computer Science at Stanford University, discusses Relational Deep Learning (RDL) and its role in automating feature engineering. Highlights include:
- How RDL enhances predictive modeling.
- Applications in fraud detection and recommendation systems.
- The use of graph neural networks to simplify complex data structures.
What's up everyone,If you haven't checked out our previous three episodes in our AI series, you might want to before this episode; we give you a lot of context around some of the events that have happened and that will shape the conversation today. So basically: how fast could AI change or replace marketing jobs; how marketers can stay informed and become AI fluent; and exploring new paths to future-proof your marketing career in the age of AI. Today we're diving into specific tools… there's a lot of noise out there right now.What tools you should play around withIn TMW #107 | ChatGPT and the artificial marketer, Juan Mendoza explains that “...generative AI tools are already everywhere. From text generation to video and audio production, to image creation, there's a thriving industry of technologies taking small slices out of our creative talents, packaging them up, and selling them as a SaaS product on a recurring revenue model. If you're wanting to stay relevant five years from now in the marketing technology industry, you're probably going to have to learn some of these platforms. In 2010 we used to say: “there's an app for that”. 
In 2023, we will be saying: “there's an AI for that.””OutlineHere are some of the topics for this third AI episode: key AI technology definitions and how to differentiate real AI tools from all the noise out there; a deep dive into tools: content marketing tools; email and marketing automation tools; predictive analytics tools; text-to-presentation and pitch deck tools; 3D animation tools for product marketers; sales and outreach tools; text-to-website creator tools; ad and social creative tools; AutoGPT and AI agents; and a bunch of other tools like conversational search engines, 1-1 convos with celebrities, and an even longer list of honorable mentions. Here's today's main takeaway:The key to future-proofing your marketing career in the ever-changing AI landscape is to stay curious, get your hands dirty and experiment fearlessly: fill out some forms, spin up free trials, get on waitlists, and give new AI tools a chance. It's only by actually getting your hands dirty that you'll discover which tools truly work for you and which are just part of the ever-growing sea of gimmicky AI tools.Definition of tech termsI'll be using some of these terms throughout my analysis of some of these tools, so here's a primer explaining the three most common AI technologies used for marketing applications: ML (Machine Learning): ML is a way to teach computers to learn by themselves, without having to be programmed for every task. They learn from examples and data patterns to make predictions or decisions. Applications include segmentation, predictive analytics and propensity models. NLP (Natural Language Processing): NLP is a subset of ML and focuses on enabling computers to understand, interpret, and generate human language. It includes sentiment analysis, machine translation, named entity recognition, text summarization, and more. NLP techniques usually help computers understand and communicate with humans using everyday language. 
GNN (Graph Neural Network): GNN, also a subset of ML, is a type of neural network designed to handle graph-structured data, data organized like a network or web of connected points. Applications include analyzing relationships between different things, like users in a social network or users in your database, or recommending additional products based on past purchase history. Real AI vs noisePart of the reason AI gets a really bad rep, especially in martech, is that anything that's built on if statements or simple JavaScript logic gets called AI. There are still plenty of AI startups that shout about their proprietary AI when it's probably just a few decision trees and a few interns running spreadsheets.Now though, you have an even bigger bucket of noise that's essentially “slight tweak on ChatGPT”. Developing AI that was comparable to human performance was a challenging feat prior to GPT's arrival. To achieve this level of sophistication, a company would have had to make a substantial investment amounting to millions of dollars, develop its own algorithms, and perform extensive data cleanup. But it's so easy now because GPT is so good out of the box. Allen Cheng puts it simply: starting a new AI venture can be achieved by simply assembling a few elements: a product developed on GPT-4's user-friendly API, a website, and a marketing campaign. This is why we're seeing hundreds of AI tools pop up every week.A lot of these GPT-based products are pretty much indistinguishable from one another. Maybe a handful have a significant advantage over others, but most are gimmicky. 
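To make the GNN definition from the primer above less abstract, here's a toy message-passing step in plain NumPy. Every name and number is made up for illustration, and a real GNN would learn the weight matrix from data rather than hard-code it:

```python
import numpy as np

# Toy graph of 4 users; A[i][j] = 1 means user i and user j are connected.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

# Each user starts with a 2-dim feature vector (say, engagement and spend).
X = np.array([
    [1.0, 0.2],
    [0.4, 0.9],
    [0.8, 0.5],
    [0.1, 0.7],
])

# One round of message passing: average each node's neighbor features,
# then apply a weight matrix and a ReLU. In a trained GNN, W is learned.
deg = A.sum(axis=1, keepdims=True)       # neighbor count per node
W = np.array([[0.5, -0.3],
              [0.2,  0.8]])              # fixed here; learned in practice
H = np.maximum((A @ X / deg) @ W, 0)     # aggregate -> transform -> ReLU

print(H.shape)  # (4, 2): one updated embedding per user
```

Real GNN frameworks stack many of these aggregate-and-transform layers; this single round is just the core idea of learning from a node's connections, not only its own features.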
And over the next few months, every tool is going to be integrating ChatGPT features inside their products in the hopes of making them stickier.The threat of GPT-nThe part that I find trickiest and most discouraging about building anything on top of GPT is that any progress you make on fine-tuning GPT-4 will totally be wiped out by GPT-5 or GPT-n… kind of like we talked about in a previous episode with all the tools GPT's plugins killed. So let's cut through the noise and dive into legit AI tools, the ones you should be playing with and experimenting with. Content marketing toolsCopy.ai and Jasperhttps://copy.ai/ https://jasper.ai/ AI text generators are very common these days; the two most popular tools, especially for marketers, are Copy.ai and Jasper. Both allow you to bypass the initial stage of writing where you face a blank page. The promise of these tools is that they help you generate ideas, save time on brainstorming and drafting, and ensure a consistent production flow, freeing you to focus on higher-level strategic tasks, original research, and connecting with your audience.I've played around with both Jasper and Copy.ai before ChatGPT came out… and they were super unique. But both Copy.ai and Jasper are built on top of GPT; they essentially rent usage of the platform. So they built a pretty nice UI on top of GPT… but now that ChatGPT is out, I'm sure they've seen a drop in usage. Plus GPT-4 is 3 times more expensive.They still offer marketing-specific value though and can get you up to speed faster than using ChatGPT, in the form of templates, prompts and workflows. Both are super powerful; you could make a case that Jasper outshines its counterpart a bit, especially on longer content formats, but it's also way more expensive. 
Miguel Rebelo from Zapier has a solid breakdown comparison here https://zapier.com/blog/jasper-vs-copy-ai/ GrammarlyGOhttps://www.grammarly.com/grammarlygo Grammarly, the popular spelling and grammar checker which has been using AI for over a decade already, also entered the generative AI niche last month, unveiling GrammarlyGO. You guessed it: built on GPT. It has a particular advantage because Grammarly is already widely used and this is just an extra feature, so to speak. Instead of just checking your grammar, it can now also help you with drafting documents, composing and responding to emails, editing writing for tone, clarity, and length, and brainstorming ideas or outlines for projects within the apps you're already working in. Lots of tools are going the Grammarly route in adding GPT features to their product, like Notion and more recently Superhuman.Other AI writing toolsSome of these specialize in SEO, some in long form content, some in short form… they all do similar things:Copysmith https://copysmith.ai/ Anyword https://anyword.com/ Writesonic https://writesonic.com/Copymatic https://copymatic.ai/ Yaara https://www.yaara.ai/ Rytr https://rytr.me/ Frase https://frase.io/ EmailEmail is just one channel for potential generative content tools, so it's not totally distinguishable from the tools we chatted about in the content category. Chances are that the marketing automation platform or customer engagement platform you're already using has a suite of features they are throwing AI next to. Most of these though are just ML. Some tools like Iterable and Braze have propensity models to find users that are likely to perform a purchase event; that's ML, and it's only based on your data set. Some tools like Seventh Sense throw AI in front of Send Time Optimization features; these have been around for a long time and are only based on your recipients. This is also ML. 
Some tools throw AI in front of Sentiment Analysis features, allowing you to analyze and classify the emotional tone of text into useful data. This is a subset of NLP that uses ML. Some tools like 6sense throw AI in front of conversational email assistants that are supposed to be a step up from static drip email sequences. This is also a form of NLP and generative AI. You're likely to start seeing copy assistants and additional AI features powered by GPT inside of these tools. I wanted to chat about one product that I feel stands out from others in terms of being built around AI rather than AI simply being a feature on the side.Persadohttps://www.persado.com/ One example I've seen (but haven't used myself) of a CEP using more advanced AI is a tool called Persado. The two co-founders are former founders of Upstream in the mobile marketing space. Similar to 6sense's conversational email platform, they've been around for a decade, and they claim to leverage NLP + ML to create, optimize, and personalize engagement messaging. So they essentially analyze a bunch of data and identify the most effective emotions, CTAs, language, and phrases to drive engagement. It's worth calling out because it's more than just predicting user behavior and optimizing the best time to send a message; it takes it a step further by also incorporating NLP techniques, understanding the nuances of human language, and generating custom marketing messages that resonate with a unique user. One thing that makes Persado unique is that it's not built on GPT; it has its own model that's trained on more than 100 billion digital business language impressions across industries. Potentially fewer data points than GPT but arguably better and more relevant impressions. 
As Lisa Spira, VP of Content Intelligence at Persado, explains in an interview with Martechseries, “models like OpenAI's ChatGPT are trained on the entire “Wild West” of the Internet, so their results, while delivered confidently by the model, can be riddled with inaccuracies, or even offensive content”. She adds that “Generative AI tools might be capable of writing anything, but we're able to cut through that noise, and train our generative AI to write in a way that motivates recipients to take actions: to open an email, convert on a shopping cart webpage, or stop scrolling and click on a social media ad.”And it's not just generative AI. Persado is pushing a term called Motivation AI, and they have a really cool example of it. Let's say you're in D2C and you're selling sweaters. Generative AI gives you level 1 and 2 content, which is Intelligibility and Relevance, so it might spit out content like: Sweaters are a knitted garment that usually have long sleeves and are designed to be worn over other clothing. In the fall, sweaters provide an extra layer of warmth in the cooler temperatures. They are a stylish choice for fall fashion because they come in a variety of colors and styles. But the next stage of this is including Emotion, to get an output like: You're going to love these cozy sweaters for fall. And the following stage, where Persado claims to play, is Motivation, where you would get an output like: A cozy invitation: indulge in our luxuriously soft sweaters this fall. Now this might all be semantics. I'd argue that with great prompting you can get to generative content that includes motivation and emotion. Optimovehttps://www.optimove.com/ This tool could actually go in the CRM, the CDP or even the email category with the acquisitions they've made in the last few years, but another name that I've heard a few times is Optimove. 
They play in the enterprise arena and primarily serve retailers and gaming operators, but they have an interesting suite of AI tools for marketers. I've personally not used the tool, but they claim to provide personalized recommendation campaign orchestration with their AI-based marketing bot (Optibot). It provides what they call self-optimizing campaigns, which are essentially advanced A/B/n campaign tests that automatically serve the best treatment to individual customers through the use of ML. Source: https://www.optimove.com/pdf/Optimove-Product-Features.pdf Predictive AnalyticsPredictive analytics is a super cool niche of AI application. It essentially enables us to anticipate trends and patterns of consumer behavior based on a bunch of user data. Done right, you can do cool things like uncover users ready to buy amongst a sea of tire kickers, find free users primed for upsells, and anticipate customers about to churn.Vidora Cortex (mParticle Intelligent Attributes)https://www.vidora.com/platform/ I wanted to start by calling out a few no-code / low-code predictive analytics / ML pipeline platforms. There's not a ton in this space, but it has been growing over the past few years. Many of the big analytics players, like Qlik's AutoML, IBM's Watson Studio, Amazon SageMaker, and Google's AutoML, have an ML-specific tool that does this, but it's built for data scientists. Vidora is worth calling out here because it was built more for knowledge workers.Vidora was founded by three academics while pursuing their Ph.D.s at Caltech and Berkeley; they built systems that utilized machine learning to convert raw data into informed decisions. Their initial vision was to democratize machine learning capabilities. They are a small 20-person startup in SF, but their product offers huge potential, so much so that they were acquired by mParticle, a popular CDP. 
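The "uncover users ready to buy" idea above boils down to a propensity model. Here's a deliberately tiny sketch on fabricated data — not Cortex's or any vendor's actual pipeline, just the core mechanic these no-code platforms wrap:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated training data: one row per user, three behavioral features
# (e.g. sessions, email opens, days since signup). Label = bought in 30 days.
X = rng.normal(size=(200, 3))
true_w = np.array([0.5, 2.0, 1.0])                      # hidden "true" signal
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Logistic regression by plain gradient descent -- a stand-in for the
# AutoML pipelines these platforms automate, not any vendor's algorithm.
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))       # predicted purchase probability
    w -= 0.1 * X.T @ (p - y) / len(y)    # gradient step on the log-loss

scores = 1 / (1 + np.exp(-(X @ w)))
hot_leads = np.argsort(scores)[::-1][:10]  # ten users most likely to buy

print(len(hot_leads))  # 10
```

The output is a ranked segment of likely buyers, which is exactly what you'd feed into a retargeting campaign; the platforms differ mainly in how the features get engineered and how the model is retrained.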
Vidora's platform, Cortex, provides you with the ability to build distinct pipelines for your customer data, so you can then produce various predictive attributes tailored for particular applications. For example, if your goal is to identify the customers with the highest probability of purchasing your product in the next 30 days, the pipeline allows you to enhance your retargeting ROI by focusing on this user segment. Similar: https://www.pecan.ai/ https://www.akkio.com/ 6sensehttps://6sense.com/It's worth highlighting 6sense here as well. It's not specifically an ML tool, but it has a wider set of use cases that are a mix of intent data and predictive analytics and a slice of lead scoring… and they've been around for a decade. They use real-time data about your buyers to predict their buying intent and what their current purchasing stage might be. They uncover some of the magic in a blog post about their predictive engine here. They claim to be using: intent data, web visits, content engagement (1st party) and 3rd party user behavior data from all over the Internet; historical data, events that led up to sales in the past; and ICP, firmographic and technographic data. So they layer intent data on top of historical data and filter it through your ICP, and the output is a target list of accounts that show intent to buy and are a good fit for your solution.The secret sauce here is really the 3rd party intent data. This is hidden data that they are able to assign to leads and accounts. Part of the source here is 6sense's proprietary intent network and other B2B review sites like G2, TrustRadius, and PeerSpot.Kumohttps://kumo.ai/ Founded by Jure Leskovec, a Stanford University computer science professor who's known for his bold assertion that AI would eventually be capable of predicting the future. 
He partnered with the former CTO of Pinterest and Airbnb as well as the former Head of Growth AI at LinkedIn to build Kumo.Using Kumo, companies can not only analyze past events but also predict future opportunities. Kumo uses GNNs (Graph Neural Networks) to identify patterns and relationships in complex data sets that cannot be easily analyzed using traditional statistical or machine learning techniques. This essentially allows marketers to anticipate customer behavior (how much they will spend, which new products they will be more interested in, things that would make them leave for a competitor) and offer personalized product recommendations, promotions, and communication.Predictive analytics isn't a new concept though. We talked a bit about this in our first episode when we mentioned propensity models, which tons of larger companies employ today. But this is operationalizing it a step further, and not just on your company's datasets. And imagine unlocking this capability for startups. So the idea is that marketing operations teams would shift their focus to future customer behaviors. PresentationTomehttps://beta.tome.app/ There are countless AI productivity tools that are all super similar. But one that's been buzzing lately is Tome. It was founded by product leads from Facebook and Instagram, and recently got $43 million in its Series B funding round. They launched an innovative document-to-presentation AI tool, which utilizes GPT-4 to generate text and images and transform a document into compelling presentations, narratives, and stories. Tome's tile system sets it apart from other generative PowerPoint tools on the market and gives Microsoft a run for its money. 
Not only does it offer robust functionality, but it also boasts a sleek and impressive design.https://gamma.app/ 3D AnimationSpline AIhttps://spline.design/ai Spline AI was created by Alejandro León. It's a YC21 startup that's building a 3D design platform comparable to Figma; it basically allows you to convert text to 3D. For decades, creating in 3D has been super hard. Spline is changing this.You can easily create objects and scenes, edit objects, colors, and properties, add physics and randomness, create animations and events, generate style alternatives, collaborate with others in real-time, and much more. The applications for videos and product marketing are endless here, see a demo here. SalesAI-powered sales tools are popping up every week, especially in the niche of email outreach. Personalization, subject line optimization, send-time optimization, sales rep coaching, auto-suggested follow-up cadences… just a few of the areas where AI can enhance your email campaigns. NLP can analyze email responses and suggest appropriate next steps, helping your sales team respond efficiently. There are tons of players in this space like Cresta, Lyne, and Regie. Cresta was funded out of the AI Grant program organized by some of the brightest tech minds in AI. Their founder, S. Zayd Enam, chose to leave his PhD program at Stanford to establish the startup. They specifically provide AI guidance software that supports and elevates each representative: behavioral coaching of agents to enhance performance, and finding solutions and areas to streamline monotonous tasks.Lavender is an AI sales email coach that assists you in real time: get more positive replies and write better emails faster.https://www.lavender.ai/ https://cresta.com/ https://lyne.ai/cold-email-ai/ https://www.regie.ai/ Web creatorsButternut AIhttps://butternut.ai/ Tagline says it all: Create a website for your business in 20 seconds with AI. 
Butternut.ai uses generative AI technology that allows users to create websites by simply answering text prompts. The AI technology is designed to function like a personal developer, providing users with an efficient and cost-effective solution for creating and editing websites.Butternut.ai is continuously improving its website builder and is currently working on a v2 version that will offer more design and editing functionality. Users can even command the AI to perform specific tasks, like creating a pricing page with a CTA to sign up.http://Stunning.so https://10web.io Ad and social creativeAlbert.aihttps://albert.ai/ Albert is an autonomous, self-learning digital marketing tool that uses ML + NLP as well as analytics to automate and optimize your paid ad campaigns, specifically Google's search and programmatic channels, as well as Facebook, Instagram, YouTube and Bing. It can automate bidding and budget allocation, keyword and audience targeting, as well as creative testing and optimization. So obviously the part that stands out here is that unlike many other campaign AI tools that just spit out recommendations and then a marketer takes the action, Albert claims to be one of the first tools that's an autonomous AI: it takes the action too. Not only that, it's also making adjustments and improvements constantly. You seem to be able to set guardrails, of course. They also claim the ability to identify trends, uncover new audience segments, and optimize ad placements. In their docs they say they are most useful with large data sets and in B2C environments. AdCreativehttps://adcreative.ai/ AutoGPT and AI agentsI don't think we can do a full episode about AI tools without talking about AutoGPT and AI agents.Essentially, you can assign an objective to an AI agent and it works on tasks that lead to accomplishing this objective. 
This makes prompting a bit easier: instead of giving full instructions, the AI identifies the necessary steps to achieve your goal, and some of the more advanced ones spawn additional agents to assist. You may have seen this on Twitter; I think the first true demo was Yohei Nakajima's impressive demonstration of babyAGI. Things started blowing up with AutoGPT, released by SigGravitas just last week. It's described as a self-running AI agent that can write its own code, heal itself from errors, and access the internet via Google search. It sounds really cool for several reasons: internet access, long-term and short-term memory management, text generation, and integration with ElevenLabs for AI text-to-speech. It's arguable that all of this will eventually be possible with GPT plugins, but this is doable today and has different applications. Let's unpack a few examples.

Social media is primed for automation; lots of the work here can already be automated. But imagine setting an AI agent loose with the objective of creating content, scheduling posts (not just posting them), and replying to comments and questions.

Replying to questions is a huge part of customer service, and you could argue this is another area that's already being automated with chatbots, especially the better ones connected to help docs. But imagine setting an AI agent loose with the objective of really understanding customer questions (not just picking up a keyword and recommending a help article); it could speak any language, and you could train it on your brand tone.

We're still early, but there's already been a ton of innovation here. Here are some of the tools you can play around with today without reading a GitHub repo:

Cognosys
https://www.cognosys.ai/
Cognosys lets you run your personal AutoGPT agent in the browser; it's currently available and free to use.
I've personally played around with it quite a bit and it's pretty wild. It's not perfect yet, but it's really cool seeing how it adds new tasks to get to the objective. The beta is still on GPT-3.5. Here's my favorite example.

Main objective: Build a list of AI tools that have a marketing application, that were released in the last 3 months.

Here were the follow-up tasks it added to get to the objective:
1. Retrieve a list of AI tools that were released in the last 3 months.
2. Filter the list to only include tools that have a marketing application.
3. Compile the filtered list into a final list of AI tools with marketing applications released in the last 3 months.

The final list of AI tools with marketing applications released in the last 3 months was:
1. Receptiviti: A tool that uses AI to analyze consumer emotions and sentiment in order to optimize marketing strategies.
2. Cortex: A platform that uses AI to help marketers personalize content and improve customer engagement.
3. Adext AI: A tool that uses AI to optimize ad campaigns and improve ad performance.
4. Acoustic: An AI-powered marketing platform that helps businesses automate and optimize their marketing strategies.
5. Pathmatics: A tool that uses AI to provide insights into ad spending and campaign performance for competitive analysis.

It then added a few extra tasks because it didn't seem satisfied with the list:

Research and compile a list of AI tools that have a marketing application and were released in the last 6 months. The result was 6 tools, but not all 5 from the first task were there.

Identify the unique features and functionalities of each AI tool on the list and create a comparison chart to evaluate which tools are most suitable for specific marketing applications.
Conduct a survey of marketing professionals to gather feedback and opinions on the identified AI tools and their effectiveness in improving marketing strategies. But then it apologized and said it cannot perform that task...

Research and compile a list of case studies showcasing the successful implementation of AI tools in marketing campaigns. It spun up a case study about Coke using "Albert" to optimize their social media advertising campaigns and Amazon using "Rekognition" to analyze customer behavior and personalize the shopping experience.

Create a step-by-step guide on how to integrate an AI tool into a marketing strategy, including best practices and potential challenges. The step-by-step wasn't too bad:
1. Define the problem or opportunity.
2. Identify the right AI tool by researching and comparing different tools.
3. Understand the data requirements, like type of data, format, and update frequency.
4. Prepare the data for integration, like cleaning and formatting.
5. Integrate the AI tool.

Research and compile a list of AI tools that have a marketing application and were released in the last 2 months. Not sure why it did this for 2 months, but this time it gave me tools with release dates, which was pretty cool. It was also obvious that I wasn't getting tools released in the last 2 months; I was only getting tools released in the 2 months before the cutoff of GPT's training data, which was mid-2021.

Create a comparison chart to evaluate the unique features and functionalities of each AI tool on the list and determine which tools are most suitable for specific marketing applications.
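Under the hood, agents like this are basically a loop over a task queue: the model proposes tasks toward the objective, each task gets executed, and the results are fed back so the model can add or reprioritize tasks. Here's a minimal babyAGI-style sketch in Python. The `plan` and `execute` functions are stand-ins for real LLM calls, and the function names and stub logic are my own illustration, not the API of any particular agent library:

```python
from collections import deque

def plan(objective, completed):
    """Stand-in for an LLM call that proposes next tasks.
    A real agent would prompt the model with the objective plus
    results so far and parse its numbered list of new tasks."""
    if not completed:
        return ["Retrieve a list of candidate AI tools",
                "Filter to tools with a marketing application",
                "Compile the filtered list into a final answer"]
    return []  # this stub stops planning after the first batch

def execute(task):
    """Stand-in for an LLM or tool call that performs one task."""
    return f"result of: {task}"

def run_agent(objective, max_steps=10):
    queue = deque(plan(objective, []))   # seed the queue from the objective
    completed = []
    while queue and len(completed) < max_steps:
        task = queue.popleft()
        completed.append((task, execute(task)))
        # feed results back so the planner can append follow-up tasks,
        # which is exactly the "it added a few extra tasks" behavior above
        queue.extend(plan(objective, completed))
    return completed

for task, result in run_agent("List AI tools with a marketing application"):
    print(task, "->", result)
```

The `max_steps` cap matters in practice: without it, a planner that keeps inventing follow-up tasks (as in the 6-month and 2-month detours above) would loop and burn tokens indefinitely.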
Also try:
https://agentgpt.reworkd.ai/
https://app.hyperwriteai.com/earlyAccess
https://godmode.space/

Other

Rewind.ai
https://twitter.com/dsiroker/status/1646895452317700097?s=20
https://www.rewind.ai/
You may have seen their public investor pitch on Twitter. The founder co-founded Optimizely, and his team built a way for you to record, store, and search everything you've said, heard, or seen. Obviously there are huge privacy considerations with something like this, but people don't seem to care: they went from $0 to $700k in ARR in 3 months.

Perplexity AI
https://www.perplexity.ai/
Perplexity is a startup that's changing the way we search the web. With their conversational search engine, you can ask questions in plain English and get accurate answers from various sources. And with $26 million in Series A funding (including an investment from Elad Gil), they're looking to revolutionize the search engine game. Their AI technology sets them apart from traditional search engines like Google, and their chatbot-like interface, built on top of GPT, is user-friendly and intuitive. Perplexity's focus on accuracy is a breath of fresh air in an era where search engines can be manipulated by advertisers and search engine optimization. The Series A funds will be used to optimize their knowledge database and expand their reach. All in all, Perplexity AI is definitely one to watch in the coming years!

Character AI
https://beta.character.ai/
Imagine being able to have a one-on-one conversation with your favorite celebrity or fictional character. That's now a possibility with Character.ai, an innovative AI website created by two former Google engineers. The platform has been growing in popularity since its launch last September, offering users the opportunity to chat with a wide range of characters for free.
What's more, the interactions are so seamless that some users have reported spending hours chatting with their chosen personality, almost forgetting that they're talking to a machine. However, there's a catch: the interactions are not real, and the AI's responses may not always be accurate or reliable. Despite this, Character.ai has been embraced by fans looking for new ways to engage with their favorite personalities, particularly online. This is especially true for fan fiction lovers, who are excited about the tool's potential for creating new experiences and lowering the barrier to entry for writing fan fiction.

But as with any new technology, there are concerns about its impact on real-world relationships, especially if users spend more time on it than with the people they care about. The technology could also be used by stans to go after a perceived enemy of their favorite star, which could be a problem if it leads to harmful interactions or behavior toward other users. Despite these concerns, Character.ai represents a new frontier for fan culture, offering fans a new way to engage with their favorite personalities and characters. The platform's growth in popularity is a testament to the human need for connection, and while it may not substitute for real interactions, it provides a unique and exciting way for fans to engage with their favorite personalities.

Magic
https://magic.dev/
Magic, a startup developing an AI-driven tool to help software engineers write, review, debug, and plan code changes, just raised $23 million in a funding round led by Alphabet's CapitalG.
The company's CEO, Eric Steinberger, was inspired by AI at a young age and is drawing on his experience as an AI researcher to create a tool that communicates with users in natural language, improving the speed and reducing the cost of developing software. Steinberger claims that Magic can understand legacy code and collaborate with users on code changes, operating like a pair programmer that learns the context of both coding projects and developers. The AI colleague Magic offers will be able to understand code and help developers navigate it, allowing companies to scale the impact of their current employees and train new ones with less personal coaching. What sets it apart is that developers can describe what they want in English, and the AI will understand it and collaborate with them on the changes. This means developers can work with Magic like a colleague, sending it messages telling it what changes they want made to the code. This one isn't built on GPT: the company has developed its own neural network architecture that can rapidly read code and is capable of detecting, warning about, and overcoming potential bugs in the code.
Honorable mentions

Galileo AI: create editable UI designs from text descriptions
https://usegalileo.ai/
Notocat: write your newsletters in Notion and send them to your subscribers
https://twitter.com/razvanilin/status/1643565644061999104
Brain.fm: music made by AI that's scientifically proven to increase focus
https://www.brain.fm/
Meeting notes and transcription apps
https://tldv.io/
https://vowel.com/
Image Vectorizer: turn small images into vectors
https://vectorizer.io/
Speech-to-text generators for podcasts that create notes, timestamps, and summary content
https://castmagic.io/
https://podsqueeze.com/
Text-to-speech AI voice generators
https://murf.ai/
https://beta.elevenlabs.io/
Text-to-music AI music generator
https://mubert.com/
Text-to-SQL: connect your database, ask a question, get an answer
https://usechannel.com/
Teachable's AI course curriculum generator
https://teachable.com/ai-curriculum-generator
https://twitter.com/teachable/status/1641154548419969048?s=20

The opportunities are endless with AI, and the applications go far beyond marketing. I see too many people on Twitter dunking on using it to generate text, or as a search engine or fact checker, and they're missing the big picture. People are using AI to:
- Build custom meal plans and exercise plans
- Build sleeping plans
- Build routines and schedules with their newborns
- Plan road trips, date ideas, parties, and events
- Act as a therapist
- Get movie and book recommendations
- Design personal budgets
- Assist with resume and cover letter writing
- Summarize long articles and YouTube videos
- Write SQL queries
- Explain Python and CSS code in plain English

Conclusion

It's not like our marketing jobs are gonna vanish overnight, but the shift is happening faster than many of us realize. AI is no longer just a loosely backed buzzword; it's doing things today that we used to think were impossible.
So, as marketers, we've gotta take this tech seriously. There are four main things marketers should be doing to future-proof their careers:
1. Instead of asking if AI's gonna replace our roles in marketing, we should be talking about how quickly it could happen and what it'll look like if it does.
2. Stay informed and learn from some of the best podcasts and newsletters about AI. Ben's Bites, No Priors Podcast, A Guide for Thinking Humans, and the AI Exchange are great resources.
3. Now is the time to figure out if you need to make changes to your current area of speciality. Ask yourself if you should double down on additional areas like data and API services, getting closer to product and customers, or starting to learn about ethics and data privacy.
4. Stay curious, get your hands dirty, and experiment fearlessly: fill out some forms, spin up free trials, get on waitlists, and give new AI tools a chance. It's only by actually getting your hands dirty that you'll discover which tools truly work for you and which are just part of the ever-growing sea of gimmicky AI tools. ✌️

--
Intro music by Wowa via Unminus
Cover art created with Midjourney
In this episode we talk with Professor Dr. Jure Leskovec of Stanford University in the US, who, among other things, presents Saturday's event of the American Slovenian Education Foundation (ASEF) in more detail. He also touches on a survey about Slovenian experts returning to their homeland or collaborating with Slovenia. We also cover the events around the World Days of Slovenian Literature, taking place these days at 40 universities worldwide where the Slovenian language can be studied; Petra Seitel, a lecturer in Slovenian in Paris, highlights the events in the French capital. You can also hear the story of Nina Milevoj, a doctor of veterinary sciences doing her specialization in Scotland, and we talk about how Sweden is preparing for the arrival of Miklavž (St. Nicholas).
Matthias Fey is the creator of the PyTorch Geometric library and a postdoctoral researcher in deep learning at TU Dortmund, Germany. He is a core contributor to the Open Graph Benchmark dataset initiative in collaboration with Stanford University Professor Jure Leskovec. 00:00 Intro 00:50 PyTorch Geometric Inception 02:57 Graph NNs vs CNNs, Transformers, RNNs 05:00 Implementation of GNNs as an Extension of Other ANNs 08:15 Image Synthesis from Textual Inputs as GNNs 10:48 Image Classification Implementations on Augmented Data in GNNs 13:40 Multimodal Data Implementation in GNNs 16:25 Computational Complexity of GNN Models 18:55 GNNAutoScale Paper, Big Data Scalability 24:39 Open Graph Benchmark Dataset Initiative with Stanford, Jure Leskovec and Large Networks 30:14 PyG in Production, Biology, Chemistry and Fraud Detection 33:10 Solving the Cold Start Problem in Recommender Systems Using GNNs 38:21 German Football League, Bundesliga & Playing in the Best Team of the Worst League 41:54 PyTorch Geometric in ICLR and NeurIPS and the Rise in GNN-based Papers 43:27 Intrusion Detection, Anomaly Detection, and Social Network Monitoring as GNN Implementations 46:10 Raw Data Conversion to Graph Format as Input in PyG 50:00 Boilerplate Templates for PyG for Citizen Data Scientists 53:37 GUI for Beginners and Get Started Wizards 56:43 AutoML for PyG and Timeline for a TensorFlow Version 01:02:40 Explainability Concerns in PyG and GNNs in General 01:04:40 CSV Files in PyG and Structured Data Explainability 01:06:32 Playing Bass, Oktoberfest & 99 Red Balloons 01:09:50 Collaboration with Stanford, OGB & Core Team 01:15:25 Leaderboards on Benchmark Datasets at the OGB Website, arXiv Dataset 01:17:11 Datasets from Outside Stanford, Harvard, Facebook etc. 01:19:00 Kaggle vs Self-owned Competition Platform 01:20:00 Deploying the arXiv Model for Recommendation of Papers 01:22:40 Future Directions of Research 01:26:00 Collaborations, Jürgen Schmidhuber & Combined Research 01:27:30 Sharing an Office with a Dog, 2 Rabbits and How to Train Cats
Ankit is an experienced AI researcher and machine learning engineer who is passionate about using AI to build scalable machine learning products. In his 10-year AI career, he has researched and deployed several state-of-the-art machine learning models which have impacted hundreds of millions of users. Currently, he works as a senior research scientist at Facebook, where he works on a variety of machine learning problems across different verticals. Previously, he was a researcher at Uber AI, where he worked on applying deep learning methods to problems ranging from food delivery and fraud detection to self-driving cars. He has been a featured speaker at many top AI conferences and universities, including UC Berkeley and IIT Bombay, and has published papers at several top conferences such as NeurIPS and ICLR. Additionally, he has co-authored a book on machine learning titled TensorFlow Machine Learning Projects. He holds undergraduate and graduate degrees from IIT Bombay (India) and UC Berkeley, respectively. Outside of work, he enjoys running and has run several marathons. 00:00 Intro 00:17 IIT vs FAANG Companies, Competition Anxiety 05:40 Workload between India and the US, Educational Culture 07:50 Uber Eats, Food Recommendation Systems and Graph Networks 11:00 Accuracy Metrics for Recommendation Systems 12:42 Weather as a Predictor of Food Orders and the Pizza Fad 15:48 Raquel Urtasun and Zoubin Ghahramani, Autonomous Driving and Google Brain 17:30 Graph Learning in Computer Vision & Beating the Benchmarks 19:15 Latent Space Representations and Fraud Detection 21:30 Multimodal Data & Prediction Accuracy 23:20 Multimodal Graph Recommendation at Uber Eats 23:50 Post-Order Data Analysis for Uber Eats 27:30 Plugging out of the Matrix and Marathon Running 31:44 Finding Collusion between Riders and Drivers with Graph Learning 35:40 Reward Sensitivity Analysis for Drivers in Uber through LSTM Networks 42:00 PyG 2.0, Jure Leskovec, and DeepGraph, TensorFlow Support 46:46 PyTorch vs TensorFlow, Scalability and Ease of Use 52:10 Work at Facebook, End-to-End Experiments 55:19 Optimisation of Cross-functional Solutions for Multiple Teams 57:30 Content Understanding Teams and Behaviour Prediction 59:50 Cold Start Problem and Representation Mapping 01:03:30 NeurIPS Paper on Meta-Learning and a Global Few-Shot Model 01:07:00 Experimentation Ambience at Facebook, Privacy and Data Mining 01:09:03 Cons of Working at FAANG 01:10:20 High School Math Teacher as Inspiration and Mentoring Others 01:18:25 TensorFlow Book and Upcoming Blog 01:16:40 Working at an Oil Rig in the Ocean Straight Out of College 01:20:08 Promises of AI and Benefits to Society at Large 01:25:50 Facebook Accused of Polarisation, Manipulation and Racism 01:28:10 Revenue Models - Product vs Advertising 01:31:15 Metaverse and Long-term Goals 01:33:10 Facebook Ray-Ban Stories and the Market for Smart Glasses 01:36:40 Possibility of a Facebook OS for Facebook Hardware 01:38:00 Libra Coin & Moving Fast - Breaking Things at Facebook 01:39:09 Orkut vs Facebook - A Case Study on a Superior Tech Stack 01:42:00 Careers in Data Science & How to Get into It 01:45:00 Irrelevance of College Degrees and Prestigious Universities as Pre-requisites
01:49:50 Decreasing Attention Span & Lack of Curiosity 01:54:40 Arranged Marriages & Shifting Relationship Trends
It has long been known that health risks are unequally distributed across society, yet pandemic policy does not take this into account. The coronavirus hits poor and less-educated people especially hard, not only in terms of infection risk but also in the course of the disease. In a special episode, science editor Korinna Hennig talks with medical sociologist Nico Dragano about the socioeconomic side of the corona crisis. He explains what could have been learned from earlier pandemics, what role public transit and workplaces play, and what should be done differently.

Background information:
• Socioeconomic inequality and COVID-19 | Robert Koch Institute: an overview of the international state of research
https://edoc.rki.de/handle/176904/6965
• Risk factors and COVID-19 tests in the UK | Marc Chadeau-Hyam, Barbara Bodinier, Joshua Elliott, Matthew D Whitaker, Ioanna Tzoulaki, Roel Vermeulen, Michelle Kelly-Irving, Cyrille Delpierre, and Paul Elliott: "Risk factors for positive and negative COVID-19 tests: a cautious and in-depth analysis of UK biobank data", published August 2020 by Oxford Academic
https://academic.oup.com/ije/article/49/5/1454/5894660?login=true
• COVID-19 incidence and mortality in relation to poor housing conditions in the US | Khansa Ahmad, Sebhat Erqou, Nishant Shah, Umair Nazir, Alan R. Morrison, Gaurav Choudhary, and Wen-Chih Wu: "Association of poor housing conditions with COVID-19 incidence and mortality across US counties", published November 2020 in PLOS ONE
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0241327
• Mobility and public transit in the corona pandemic in New York | Karla Therese L. Sy, Micaela E. Martinez, Benjamin Rader, and Laura F. White: "Socioeconomic disparities in subway use and COVID-19 outcomes in New York City", published May 2020 on medRxiv
https://www.medrxiv.org/content/10.1101/2020.05.28.20115949v1.full.pdf
• Mobility and inequality | Serina Chang, Emma Pierson, Pang Wei Koh, Jaline Gerardin, Beth Redbird, David Grusky, and Jure Leskovec: "Mobility network models of COVID-19 explain inequities and inform reopening", published November 2020 in Nature
https://www.nature.com/articles/s41586-020-2923-3.pdf
• Israeli study on access to testing and infection risks in the corona pandemic | Khitam Muhsen, Wasef Na'aminh, Yelena Lapidot, Sophy Goren, Yonatan Amir, Saritte Perlman, Manfred S. Green, Gabriel Chodick, and Dani Cohen: "A nationwide analysis of population group differences in the COVID-19 epidemic in Israel, February 2020–February 2021", published August 2021 in The Lancet Regional Health - Europe on ScienceDirect
https://www.sciencedirect.com/science/article/pii/S2666776221001071
• Risk factors and disease courses in Sweden | Jonathan Bergman, Marcel Ballin, Anna Nordström, and Peter Nordström: "Risk factors for COVID-19 diagnosis, hospitalization, and subsequent all-cause mortality in Sweden: a nationwide study", published March 2021 in the European Journal of Epidemiology by Springer Nature
https://link.springer.com/article/10.1007/s10654-021-00732-w
• Risk factors and disease courses in the UK | Joshua Elliott, Barbara Bodinier, Matthew Whitaker, Cyrille Delpierre, Roel Vermeulen, Ioanna Tzoulaki, Paul Elliott, and Marc Chadeau-Hyam: "COVID-19 mortality in the UK Biobank cohort: revisiting and evaluating risk factors", published February 2021 in the European Journal of Epidemiology by Springer Nature
https://link.springer.com/article/10.1007/s10654-021-00722-y
• Increased risk of COVID-19-related hospitalization among the unemployed | Morten Wahrendorf, Christoph J. Rupprecht, Olga Dortmann, Maria Scheider, and Nico Dragano: an analysis of health insurance data on 1.28 million insured people in Germany, published January 2021 in Bundesgesundheitsblatt - Gesundheitsforschung - Gesundheitsschutz on Springer Nature
https://link.springer.com/article/10.1007%2Fs00103-021-03280-6
• Incidence and mortality by social status in Chile | Gonzalo E. Mena, Pamela P. Martinez, Ayesha S. Mahmud, Pablo A. Marquet, Caroline O. Buckee, and Mauricio Santillana: "Socioeconomic status determines COVID-19 incidence and related mortality in Santiago, Chile", published May 2021 in Science
https://science.sciencemag.org/content/372/6545/eabg5298.abstract
• Vaccination and population groups, a study from Italy | Alexander Domnich, Maura Cambiaggi, Alessandro Vasco, Luca Maraniello, Filippo Ansaldi, Vincenzo Baldo, Paolo Bonanni, Giovanna Elisa Calabrò, Claudio Costantino, Chiara de Waure, Giovanni Gabutti, Vincenzo Restivo, Caterina Rizzo, Francesco Vitale, and Riccardo Grassi: "Attitudes and Beliefs on Influenza Vaccination during the COVID-19 Pandemic: Results from a Representative Italian Survey", published November 2020 by MDPI
https://www.mdpi.com/2076-393X/8/4/711
• Flu and vaccination at 65+ | Jason M Nagata, Isabel Hernández-Ramos, Anand Sivasankara Kurup, Daniel Albrecht, Claudia Vivas-Torrealba, and Carlos Franco-Paredes: "Social determinants of health and seasonal influenza vaccination in adults ≥65 years: a systematic review of qualitative and quantitative data", published April 2013 by BMC (Springer Nature)
https://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-13-388
• Data from earlier pandemics, high death rates among those of low social status | "Social inequality - a forgotten factor in pandemic influenza preparedness", published June 2017 in Tidsskrift for Den norske legeforening, Oslo
https://tidsskriftet.no/2017/05/global-helse/social-inequality-forgotten-factor-pandemic-influenza-preparedness
• Earlier pandemics, SARS | Grace Wong Bucchianeri: "Is SARS a Poor Man's Disease? Socioeconomic Status and Risk Factors for SARS Transmission", published 2010 in Forum for Health Economics & Policy
https://www.degruyter.com/document/doi/10.2202/1558-9544.1209/html
• Earlier pandemics, a look at Chicago 1918 | Kyra H. Grantz, Madhura S. Rane, Henrik Salje, Gregory E. Glass, Stephen E. Schachterle, and Derek A. T. Cummings: "Disparities in influenza mortality and transmission related to sociodemographic factors within Chicago in the pandemic of 1918", published November 2016 in PNAS
https://www.pnas.org/content/113/48/13839.short
• Inequality and epidemics as a challenge for global security | Sandra Crouse Quinn and Supriya Kumar: "Health Inequalities and Infectious Disease Epidemics: A Challenge for Global Health Security", published September 2014 in Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science by Mary Ann Liebert, Inc.
https://www.liebertpub.com/doi/abs/10.1089/bsp.2014.0032