Podcasts about demis

  • 110 PODCASTS
  • 248 EPISODES
  • 1h 3m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Apr 12, 2025 LATEST

POPULARITY

(Popularity chart: 2017–2024)


Best podcasts about demis

Latest podcast episodes about demis

Pivot
Demis Hassabis on AI, Game Theory, Multimodality, and the Nature of Creativity | Possible

Pivot

Apr 12, 2025 · 60:49


How can AI help us understand and master deeply complex systems—from the game Go, which has 10 to the power of 170 possible board positions, to proteins, which, on average, can fold in 10 to the power of 300 possible ways? This week, Reid and Aria are joined by Demis Hassabis. Demis is a British artificial intelligence researcher and the co-founder and CEO of the AI company DeepMind. Under his leadership, DeepMind developed AlphaGo, the first AI to defeat a human world champion at Go, and later created AlphaFold, which solved the 50-year-old protein-folding problem. He's considered one of the most influential figures in AI. Demis, Reid, and Aria discuss game theory, medicine, multimodality, and the nature of innovation and creativity. For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/. Listen to more from Possible here. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Possible
Demis Hassabis on AI, game theory, multimodality, and the nature of creativity

Possible

Apr 9, 2025 · 56:40


How can AI help us understand and master deeply complex systems—from the game Go, which has 10 to the power of 170 possible board positions, to proteins, which, on average, can fold in 10 to the power of 300 possible ways? This week, Reid and Aria are joined by Demis Hassabis. Demis is a British artificial intelligence researcher and the co-founder and CEO of the AI company DeepMind. Under his leadership, DeepMind developed AlphaGo, the first AI to defeat a human world champion at Go, and later created AlphaFold, which solved the 50-year-old protein-folding problem. He's considered one of the most influential figures in AI. Demis, Reid, and Aria discuss game theory, medicine, multimodality, and the nature of innovation and creativity. For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/

Select mentions:
  • The Hitchhiker's Guide to the Galaxy by Douglas Adams
  • AlphaGo documentary: https://www.youtube.com/watch?v=WXuK6gekU1Y
  • Nash equilibrium & US mathematician John Forbes Nash
  • Homo Ludens by Johan Huizinga
  • Veo 2, an advanced, AI-powered video creation platform from Google DeepMind
  • The Culture series by Iain Banks
  • Hartmut Neven, German-American computer scientist

Topics:
3:11 - Hellos and intros
5:20 - Brute force vs. self-learning systems
8:24 - How a learning approach helped develop new AI systems
11:29 - AlphaGo's Move 37
16:16 - What will the next Move 37 be?
19:42 - What makes an AI that can play the video game StarCraft impressive
22:32 - The importance of the act of play
26:24 - Data and synthetic data
28:33 - Midroll ad
28:39 - Is it important to have AI embedded in the world?
33:44 - The trade-off between thinking time and output quality
36:03 - Computer languages designed for AI
40:22 - The future of multimodality
43:27 - AI and geographic diversity
48:24 - AlphaFold and the future of medicine
51:18 - Rapid-fire questions

Possible is an award-winning podcast that sketches out the brightest version of the future—and what it will take to get there. Most of all, it asks: what if, in the future, everything breaks humanity's way? Tune in for grounded and speculative takes on how technology—and, in particular, AI—is inspiring change and transforming the future. Hosted by Reid Hoffman and Aria Finger, each episode features an interview with an ambitious builder or deep thinker on a topic, from art to geopolitics and from healthcare to education. These conversations also showcase another kind of guest: AI. Each episode seeks to enhance and advance our discussion about what humanity could possibly get right if we leverage technology—and our collective effort—effectively.

Chicotadas
Asexual Experiences in BDSM and Non-Monogamy (A conversation about asexuality, demisexuality, sexual and romantic attraction, personal accounts and experiences) (Supporters' Club #12)

Chicotadas

Apr 8, 2025 · 115:04


In the week that opened with International Asexuality Day, an episode on exactly that topic. We talk about lived experiences and personal accounts as asexual people who practice BDSM and/or non-monogamy; about asexuality, demisexuality, sexual and romantic attraction, hormones and hormone therapy, the idea of romantic love, and how different relationship formats, fetishes, and practices are present in the lives of asexual people. Want to take part in our next meetup? Just support Chicotadas at https://apoia.se/chicotadas and vote in the polls on the topic and date of the next meeting in our Telegram group! Episode recorded on March 24, 2025. Team: Ada from Curitiba @aleneouada. Participants: Zab from RS @heyisinha, Vi/Nath from Santos @versoes.ineditas, Ariel from Taubaté @agiuliasforza, Sol from Ribeirão Preto @princesol.k1nk, Lu Oli from Brasília @luoli.sub, Fushi from RJ, Caos from BH @caos.impermanente, SarahSub from Guarulhos, Fe Bonfim from Juiz de Fora @fe.tichista @crcetim, Lui Castanho from São Paulo @luicastanho @lui_knk. The cover art is an illustration: on a dark red background, the silhouettes of 8 heads, 4 on top and 4 on the bottom, in light red, with a band in the colors of the ace flag (black, gray, white, and purple) weaving through and around them. In the center, the episode title in beige.
Timestamps:
0:15 (mentioned: drop, fire play)
6:46 Introduction. Episodes mentioned: Chicotinho #04 – Chicopapo: Zab (asexuality and BDSM and non-mono experiences); #43 D/s Relationships and Non-Monogamy; Supporters' Club #10 (Sexuality: doubts and insecurities). Also mentioned: the panel "The asexuality spectrum and alternative relationships: relationships and advocacy agendas" at the II International Congress of the Aliança Nacional LGBTI+.
8:50 Trigger warnings: 1:10:30-1:12:25 and 1:35:10-1:36:00
10:07 Zab: ace experience, community issues
12:50 Ada: demi and self-discovery
16:18 Fushi: demi and the desire to have sex
18:39 Demis, strict asexuals, and "whatever works"
20:42 Fe.tichista: fetish as a precondition, microlabels, content about asexuality. Mentioned: @lgbtqspacey, @coletivoabrace, @aroaceiros, the AVEN forum.
25:56 Sol: demi, bonding, and allosexual expectations. Mentioned: graysexuals/gray asexuals, aromantic, demiromantic
33:00 SarahSub: discoveries in BDSM and discovering she is demi
35:42 Ada, ménage
37:06 Lui: dissidence, hormones and puberties, sexual attraction, the spectrum, fetishes as preconditions, different hormone therapies
44:44 Hormones, libido, different medications that affect sexuality
49:20 Attraction, desire for sex, hormones, testosterone, libido, reproduction, forms of attraction
55:19 Ariel/Giulia Sforza: Perlutan, BDSM journey, allosexuality, terminology. Mentioned: Club #06 (Heteronormativity)
1:02:50 Group scenes and fetish
1:04:05 Vi/Nath: self-identification, mapping desires and connections, space
1:07:37 Announcement: https://apoia.se/chicotadas
1:09:19 Acceptance of asexuality by demi people, prejudices and misconceptions, a "type" of person, literacy
1:10:30-1:12:25 trigger: sex as a favor
1:16:52 How to come out to allos?
1:17:52 An "allo quota," a provocative proposal; the idea of "whatever works"
1:22:43 Social expectations, romantic attraction, unpacking types of attraction, the loneliness of aroaces (aromantic asexuals), monogamy/NM
1:27:09 Possibilities outside the norm; invalidation of being both a fetishist and asexual
1:32:09 Demi and labels
1:33:34 Models of attraction, gender/masculinity and pressure, aromanticism
1:35:10-1:36:00 trigger: pressure to have sex
1:36:24 The difficulty of defining attraction, romance, and sex
1:41:38 Romance and sex, social/cultural versus natural, reflections on situations, activities, NM
1:43:49 Luoli: confusion and definitions
1:45:14 Aromanticism, passion, romantic love, NM, relationship anarchy
1:47:58 Recommendations: Amizade Dolorida; Devaneios Filosóficos Ep. #25, "demisexuality: identity or Western symptom?", Andreone Medrado; Zab: Ali Hazelwood
1:52:49 Aftercare

Ground Truths
Anna Greka: Molecular Sleuthing for Rare Diseases

Ground Truths

Mar 9, 2025 · 48:33


Funding for the NIH and US biomedical research is imperiled at a momentous time of progress. Exemplifying this is the work of Dr. Anna Greka, a leading physician-scientist at the Broad Institute who is devoted to unlocking the mysteries of rare diseases, which cumulatively affect 30 million Americans, and to finding cures, science supported by the NIH.

A clip from our conversation. The audio is available on iTunes and Spotify. The full video is linked here, at the top, and can also be found on YouTube. Transcript with audio and external links below.

Eric Topol (00:06): Well, hello. This is Eric Topol from Ground Truths, and I am really delighted to welcome today Anna Greka. Anna is the president of the American Society for Clinical Investigation (ASCI) this year, a very prestigious organization, but she's also at Mass General Brigham, a nephrologist, a cell biologist, a physician-scientist, a Core Institute Member of the Broad Institute of MIT and Harvard, and a member of the institute's Executive Leadership Team. So we've got a lot to talk about with all these different things you do. You must be pretty darn unique, Anna, because I don't know any cell biologist, nephrologist, physician-scientists like you.

Anna Greka (00:48): Oh, thank you. It's a great honor to be here and glad to chat with you, Eric.

Eric Topol (00:54): Yeah. Well, I had the real pleasure to hear you speak at a November conference, the AI for Science Forum, and we'll link to your panel. I was in a different panel, but you spoke about your extraordinary work, and it became clear that we needed to get you on Ground Truths so you can tell your story to everybody. So I thought, rather than going back to the past, where you were in Greece and somehow migrated to Boston and all that (we're going to get to that), you gave an amazing TED Talk, and it really encapsulated one of the many phenomenal stories of your work as a molecular sleuth. So maybe you could give us a synopsis, and of course we'll link to that so people can watch the whole talk. But I think the Mucin-1, or MUC1, discovery is really important to ground our discussion.

A Mysterious Kidney Disease Unraveled

Anna Greka (01:59): Oh, absolutely. Yeah, it's an interesting story. In my TED Talk, I highlight one of the important families in this story, a family from Utah, but there are other important families that are also part of it. This is also what I spoke about in London when we were together. It's really a medical mystery that initially started on the Mediterranean island of Cyprus, where it was found that there were many families in which, in every generation, several members suffered and ultimately died from what was, at the time, a mysterious kidney disease. This was more than 30 years ago, and it was clear that something genetic was going on, but it was impossible to identify the gene. And then, even with the advent of next-gen sequencing (this is what's so interesting about this story), it was still hard to find the gene, which is a little surprising.

Anna Greka (02:51): After we were able to sequence families and identify monogenic mutations pretty readily, this one was still very resistant. It actually took the firepower of the Broad Institute, and from a scientific perspective it's an interesting story, because they had to dust off old-fashioned Sanger sequencing in order to get it done. But they were ultimately able to identify the mutation in a VNTR region of the MUC1 gene, the Mucin-1 gene, in what I call a dark corner of the human genome: the region is highly repetitive and very GC-rich, so it becomes very difficult to sequence through with next-gen sequencing. And so, ultimately, the mutation was found: a single cytosine insertion in a stretch of cytosines that causes a frameshift and an early stop codon, which essentially results in a neoprotein, a toxic, what I call mangled protein, that accumulates inside the kidney cells.

Anna Greka (03:55): And that's where my adventure began. It was Eric Lander's group (he is the founding director of the Broad) that discovered the mutation. Then, through a conversation we had here in Boston, we discovered there was an opportunity to collaborate, and that's how I came to the Broad; those are the beginnings of this story. What's fascinating about this story, though, is that it starts on a remote Mediterranean island and then turns out to be a disease you can find on every continent, all over the world. There are probably millions of patients with kidney disease in whom we haven't recognized the existence of this mutation. What's really interesting is that the mangled protein that results from this misspelling, this mutation, is ultimately captured by a family of cargo receptors, called the TMED cargo receptors, which end up grabbing these misfolded proteins and holding onto them so tightly that it's impossible for the cell to get rid of them.

Anna Greka (04:55): And they become this growing heap of molecular trash, if you will, that becomes really hard to manage, and the cells ultimately die. So in the process of doing this molecular sleuthing, as I call it, we also identified a small molecule that disrupts these cargo receptors. As I described in my TED Talk, it's a little bit like having these cargo trucks that ultimately need to go into the lysosome, the cell's recycling facility, and this is exactly what this small molecule can do. And so, it was just a remarkable story of discovery.
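The single-cytosine insertion Dr. Greka describes can be seen in miniature. The following Python sketch uses an invented toy coding sequence and a cut-down codon table (not the real MUC1 VNTR, which is far longer and GC-rich) to show how one extra base shifts the reading frame and surfaces a premature stop codon:

```python
# Toy illustration of a frameshift mutation. The sequences and the
# minimal codon table below are invented for demonstration; they are
# NOT the real MUC1 coding sequence.

CODON_TABLE = {
    "ATG": "M", "CCC": "P", "GCT": "A", "GAA": "E",
    "CGC": "R", "TAA": "*", "TGA": "*", "TAG": "*",
}

def translate(dna: str) -> str:
    """Read codons left to right, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "*":   # stop codon ends translation
            break
        protein.append(amino_acid)
    return "".join(protein)

reference = "ATGCCCCCCGCTGAATAA"              # contains a run of six cytosines
mutant = reference[:6] + "C" + reference[6:]  # one extra C inserted in the run

print(translate(reference))  # MPPAE (full-length toy protein)
print(translate(mutant))     # MPPR  (frame shifts; an early TGA truncates it)
```

In the mutant, every codon downstream of the insertion is read in a shifted frame, so an unrelated TGA stop appears early and the product is truncated: the toy analog of the "mangled" neoprotein that accumulates in the kidney cells.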
And then, I think, the most exciting part of all is that these cargo receptors turn out to be relevant not only to this one mangled, misshapen protein; they actually handle a completely different misshapen protein caused by a different genetic mutation in the eye, causing retinitis pigmentosa, a form of familial blindness. We're now studying familial Alzheimer's disease, which also involves these cargo receptors, and there are other mangled, misshapen proteins in the liver and the lung that we're now studying. So this becomes what I call a node, a nodal mechanism that can be targeted for the benefit of many more patients than we had previously thought possible, which has been, I think, the most satisfying part of this story of molecular sleuthing.

Eric Topol (06:20): Yeah, and it's pretty extraordinary. We'll put in the figure from your classic Cell paper in 2019, where you have a small molecule that targets the cargo receptor called TMED9.

Anna Greka (06:34): Correct.

Expanding the Mission

Eric Topol (06:34): And what's amazing about this, of course, is the potential to reverse this toxic protein disease. And as you say, it may have applicability well beyond this MUC1 kidney story: to eye disease with retinitis pigmentosa, to familial Alzheimer's, and who knows what else. What's also fascinating is how, as you said, there were these limited numbers of families with the kidney disease, and then you found another one, uromodulin. So there are now, as you say, thousands of families. And that gets me to the part of your sleuth work that is not just hardcore science: you started an entity called the Ladders to Cures (L2C) Scientific Accelerator.

Eric Topol (07:27): Maybe you can tell us about that, because this is really pulling together all the forces, which includes the patient advocacy groups. How are we going to move forward like this?

Anna Greka (07:39): Absolutely. The Ladders to Cures Accelerator is a new initiative that we started at the Broad, but it really encompasses many colleagues across Boston, and increasingly it's becoming national; we even have some international collaborations. It's only been in existence for two years, so we're certainly in growth mode. But the inspiration was really some of this molecular sleuthing work, where I basically thought, well, for starters, it cannot be that there's only one molecular node, these TMED cargo receptors that we discovered; there have got to be more, right? And so, there's a need to systematically go and find more nodes, because obviously, as anyone who works in rare genetic diseases will tell you, the problem for all of us is that we do what I call hand-to-hand combat. We start with the disease, with one mutation, and we try to uncover the mechanism and then try to develop therapies, and that's wonderful.

Anna Greka (08:33): But of course, it's slow, right? And if we consider that there are 30 million patients in the United States, in every state, everywhere in the country, who suffer from a rare genetic disease, most of them, more than half, children, then we can appreciate the magnitude of the problem. Out of more than 8,000 genes that are involved in rare genetic diseases, we barely have something that looks like a therapy for maybe 500 of them. So there's a huge mismatch between the unmet need and the magnitude of the problem. The Ladders to Cures Accelerator is here to address this, and to do it with the most modern tools available. And, to your point, Eric, to bring patients along, not just as recipients of whatever we discover, but as partners in the research enterprise, because it's really important to bring their perspectives, and of course their partnership in things like developing appropriate biomarkers for what we do down the road.

Anna Greka (09:35): But from a fundamental scientific perspective, this is basically a project that aims to identify every opportunity for nodes underlying all rare genetic diseases as quickly as possible. And this was one of the reasons I was at the AI for Science Forum. This is what we're trying to do in the Ladders to Cures Accelerator: introduce tens of thousands of missense and nonsense human mutations that cause genetic diseases, simultaneously, into multiple human cells, and then use modern, scalable technology tools, things like massively parallel CRISPR screens, to interrogate all of these diseases in parallel, identify the nodes, and then develop therapeutic programs based on the discovery of those nodes. This is a massive, much-needed data generation project. In addition to hopefully accelerating our approach to all rare genetic diseases, it is also a highly controlled cell perturbation dataset that will require the most modern tools in AI, not only to extract and understand the data in this dataset, but also because such an extremely well-controlled cell perturbation dataset can be used to train AI models, so that in the future (and I hope this doesn't sound too futuristic, but I think we're all aiming for it; cell biologists certainly dream of this moment) we can actually have, in silico, the opportunity to make predictions about what cell behaviors are going to look like under a new perturbation that was not in the training set. So, an experiment that hasn't yet been done on a cell, a perturbation that has not been made on a human cell, a new drug, for example, or a new chemical perturbation: how would it affect the behavior of the cell? Can we make a predictive model for that? This doesn't exist today, but I think the cell prediction model is a big question for biology's future. And so, I'm very energized by the opportunity both to address the problem of rare monogenic diseases, which remains an unmet need, and to help as many patients as possible, while at the same time advancing biology as much as we possibly can. So it's a win-win, lifting-all-boats type of enterprise, hopefully.

Eric Topol (12:11): Yeah. Well, there are many things to unpack from what you've just been reviewing. One thing for sure is that these 8,000 monogenic diseases have relevance to the polygenic common diseases, of course. And then there's the fact that the patient and family advocates are great at scouring the world's internet, finding more people, and bringing together communities for each of these. As you point out aptly, these rare diseases are cumulatively very common: 10% of Americans or more. So they're not so rare when you think about the overall numbers.

Anna Greka (12:52): Collectively.

Help From the Virtual Cell?

Eric Topol (12:53): Yeah. Now, of course, with these toxic proteinopathies, there are at least 50 of them, and people have been thinking until now, oh, we found a mangled protein. But what you've zeroed in on is that it's not just a mangled protein; it's how it gets stuck in the cell and can't get to the lysosome to be cleared. There's no waste system. And so, this is such fundamental work. Now that gets me to the virtual cell story, kind of what you're getting into. I just had a conversation with Charlotte Bunne and Steve Quake, who published a paper in December on the virtual cell, and of course that's many years off, but it's a big, bold, ambitious project. To be able to say, as you just summarized: if you had cells in silico and you could do perturbations in silico, and of course they were validated by actual experiments (or bidirectionally, the real experiments helped to validate the virtual cell), then you could get a true acceleration of your understanding of cell biology, your field of course.

Anna Greka (14:09): Exactly.

Eric Topol (14:12): So what you described, is it the same as a virtual cell? Is it kind of a precursor to it? How do you conceive of this? The cell is a fundamental unit of life, but it's also so much more complex than a protein or an RNA, because beyond all the things inside the cell, inside all these organelles and the nucleus, there are all the outside interactions. So this is a bold challenge, right?

Anna Greka (14:41): Oh my god, absolutely. From a biologist's perspective, it's the challenge of a generation, for sure. We think of taking humans to Mars as an aspirational, big, ambitious goal.
I think this is, if you will, the Mars shot for biology, whether or not you call it a virtual cell. I like the idea of stating it as a problem, the way people who think about it from a mathematics perspective would. Stating it as the cell prediction problem appeals to me because it forces us biologists to think about setting up these cell perturbation datasets, the way we would generate them, to serve predictions. So, for example, the way I would think about this: can I, in the future, have so much information about how cell perturbations work that I can train a model so that when I show it a picture of another cell, under different conditions it hasn't seen before, it can still tell me, ah, this is a neuron in which you perturbed the mitochondria, for example, and this is the outcome you would expect to see?

Anna Greka (16:08): And so, to have a model that can predict in silico what cells would look like after perturbation, that's the way I think about this problem. It is very far away from anything that exists today. But the beginning starts here, and this is one of the unique things about my institute, if I can say so: we have a place where cell biologists, geneticists, mathematicians, and machine learning experts all come together to think about and grapple with these problems. And of course we're very outward-facing, interacting with scientists all across the world as well. But there's this idea of bringing people into one institute where we can think creatively about the big aspirational problems we want to solve. I think this is one of the unique things about the ecosystem at the Broad Institute, which I'm proud to be a part of, and it is this kind of out-of-the-box thinking that will hopefully get us to generate the kinds of datasets that will serve the needs of building models with predictive capabilities down the road.

Anna Greka (17:19): But as you astutely said, AlphaFold of course was based on the existing Protein Data Bank, right? That was a wealth of available information on which one could train models that would ultimately be predictive, as we have seen with this miracle that Demis Hassabis and John Jumper have given to humanity, if you will.

Anna Greka (17:42): But as Demis and John would also say (as I have discussed with them, in fact), the cell prediction problem is really a bigger problem, because we do not have a protein data bank to go to right now; we need to create it to generate these data. And so, my Ladders to Cures Accelerator is here to provide some part of the answer to that problem: to create the kind of well-controlled database that we need for cell perturbations, while at the same time maximizing our learnings about these fully penetrant coding mutations and what their downstream sequelae would be in many different human cells. In this way, I think we can both advance our knowledge about these monogenic diseases and build models, hopefully with predictive capabilities. And to your point, a lot of what we will learn about this biology, if we think it involves 8,000 or more of the 20,000 genes in our genome, will of course ultimately serve our understanding of polygenic diseases as well, as we go deeper into this biology and look at the combinatorial aspects of what different mutations do to human cells. So it's a huge aspirational problem for a whole generation, but it's a good one to work on, I would say.

Learning the Language of Life with A.I.

Eric Topol (19:01): Oh, absolutely. Now, two things from what you just touched on. One, of course, is how vital it is to have this inter- or transdisciplinary capability, because you do need expertise across these vital areas. The other is the convergence. I love your term nodal biology, and the fact that all these diseases, as you were saying, do converge; nodal is a good term to highlight that. And of course, as you mentioned, we have genome editing, which allows us to look at lots of different genome perturbations, like the single-letter change that you found in MUC1, that pathogenic, critical mutation. There's also the AI world, which is blossoming like I've never seen. In fact, there was a piece in Science this week about learning the language of life with AI, and how there have been something like 15 new foundation models: DNA, proteins, RNA, ligands, all their interactions, and the beginning of the cell story too, with the human cell.

Eric Topol (20:14): So this is exploding. As you said, there's the expertise in computer science, and this whole idea that you could take these powerful tools and, as you said, accelerate. We just can't sit around when there's so much discovery work to be done with this scalability, even though it might take years to get to this artificial intelligence virtual cell, which, I have to agree, everyone in biology would say is the holy grail. And as you remember at our conference in London, Demis Hassabis said that's what we'd like to do now. So it has the attention of leaders in AI around the world, and obviously of the science and biomedical community, like you and many others. So it is an extraordinary time, where we just can't sit still with these tools that we have, right?

Anna Greka (21:15): Absolutely. And I think this is going to be... you mentioned the ASCI presidency at the beginning of our call; the president gets to give an address at the annual meeting in Chicago.
This is going to be one of the points I make, no matter what field in biomedicine we're in, we live in, I believe, a golden era and we have so many tools available to us that we can really accelerate our ability to help more patients. And of course, this is our mandate, the most important stakeholders for everything that we do as physician-scientists are our patients ultimately. So I feel very hopeful for the future and our ability to use these tools and to really make good on the promise of research is a public good. And I really hope that we can advance our knowledge for the benefit of all. And this is really an exciting time, I think, to be in this field and hopefully for the younger colleagues a time to really get excited about getting in there and getting involved and asking the big questions.Career ReflectionsEric Topol (22:21):Well, you are the prototype for this and an inspiration to everyone really, I'm sure to your lab group, which you highlighted in the TED Talk and many other things that you do. Now I want to spend a little bit of time about your career. I think it's fascinating that you grew up in Greece and your father's a nephrologist and your mother's a pathologist. So you had two physicians to model, but I guess you decided to go after nephrology, which is an area in medicine that I kind of liken it to Rodney Dangerfield, he doesn't get any respect. You don't see many people that go into nephrology. But before we get to your decision to do that somehow or other you came from Greece to Harvard for your undergrad. How did you make that connect to start your college education? And then subsequently you of course you stayed in Boston, you've never left Boston, I think.Anna Greka (23:24):I never left. Yeah, this is coming into 31 years now in Boston.Anna Greka (23:29):Yeah, I started as a Harvard undergraduate and I'm now a full professor. It's kind of a long, but wonderful road. Well, actually I would credit my parents. 
You mentioned that my father, they're both physician-scientists. My father is now both retired, but my father is a nephrologist, and my mother is a pathologist, actually, they were both academics. And so, when we were very young, we lived in England when my parents were doing postdoctoral work. That was actually a wonderful gift that they gave me because I became bilingual. It was a very young age, and so that allowed me to have this advantage of being fluent in English. And then when we moved back to Greece where I grew up, I went to an American school. And from that time, this is actually an interesting story in itself. I'm very proud of this school.Anna Greka (24:22):It's called Anatolia, and it was founded by American missionaries from Williams College a long time ago, 150 and more years ago. But it is in Thessaloniki, Greece, which is my hometown, and it's a wonderful institution, which gave me a lot of gifts as well, preparing me for coming to college in the United States. And of course, I was a good student in high school, but what really was catalytic was that I was lucky enough to get a scholarship to go to Harvard. And that was really, you could say the catalyst that propelled me from a teenager who was dreaming about a career as a physician-scientist because I certainly was for as far back as I remember in fact. But then to make that a reality, I found myself on the Harvard campus initially for college, and then I was in the combined Harvard-MIT program for my MD PhD. And then I trained in Boston at Mass General in Brigham, and then sort of started my academic career. And that sort of brings us to today, but it is an unlikely story and one that I feel still very lucky and blessed to have had these opportunities. So for sure, it's been wonderful.Eric Topol (25:35):We're the ones lucky that you came here and set up shop and you did your productivity and discovery work and sleuthing has been incredible. 
But I do think it's interesting too, because when you did your PhD, it was in neuroscience.

Anna Greka (25:52):
Ah, yes. That's another.

Eric Topol (25:54):
And then you switched gears. So tell us about that?

Anna Greka (25:57):
This is interesting, and actually I encourage more colleagues to think about it this way. So I have always been driven by the science, and I think that it seems a little backward to some people, but I did my PhD in neuroscience because I was interested in understanding something about these ion channels that were newly discovered at the time, and they were most highly expressed in the brain. So here I was, doing work in the brain in the neuroscience program at Harvard, but then, once I had completed my PhD and was in the middle of my residency training at Mass General, I distinctly remember a paper coming out that implicated the same family of ion channels I had spent my time understanding in the brain. It turned out to be a channelopathy that causes kidney disease.

Anna Greka (26:43):
So that was the light bulb, and it made me realize that maybe what I really wanted to do was just follow this thread. My scientific curiosity basically led me into studying the kidney, and then it seemed practical to get my clinical training done as efficiently as possible. So I finished residency, did nephrology training, and then there I was in the lab, trying to understand the biology around this channelopathy. That led us into the early projects in my young lab. In fact, it's interesting, we didn't talk about that work, but it has actually made it all the way to phase II trials in patients. This was a paper we published in Science in 2017, and building on that work, there was an opportunity to develop a real drug targeting one of these ion channels, which has made it into phase II trials. And we'll see what happens next.
But it's this idea of following your scientific curiosity, which I also talked about in my TED Talk, because you don't know what wonderful places it will lead you to. And quite interestingly, my lab is now back to studying familial Alzheimer's, and retinitis pigmentosa in the eye, and the brain. So I tell people, do not limit yourself to whatever someone says your field is or should be. Just follow your scientific curiosity, and usually that takes you to a lot more interesting places. That's certainly been a theme of my career, I would say.

Eric Topol (28:14):
No, I think that's perfect. Curiosity-driven science is not the term you often hear; you hear hypothesis-driven, or now with AI, more exploratory science. But no, that's great. Now I want to get back a little to the AI story, because it's so fascinating. You use lots of different types of AI, such as fusion models for cellular imaging, and AI in drug discovery. I mean, you've had drug discovery for different pathways. You mentioned of course the ion channel, and then also, as we touched on with your Cell paper, the whole idea of targeting the cargo receptor with a small molecule, and then things in between. You discussed this of course at the London panel, but maybe you could just give us the skinny on the different ways you incorporate AI in the state-of-the-art science you're doing?

Anna Greka (29:17):
Sure, yeah, thank you. I think there are many ways; even for quite a long time before AI became such a well-known, kind of household term, if you will, the concept of machine learning in terms of image processing had been around for some time. And so, this is actually a form of AI that we use in order to process millions of images. My lab has probably produced more than 20 million images over the last few years, maybe five to six years. And so, as you can imagine, it's impossible for any human to process this many images and make sense of them.
So of course, we've been using machine learning that is becoming increasingly sophisticated and advanced in terms of analyzing images, which is a lot of what we cell biologists do, of course.

Anna Greka (30:06):
And so, there are multiple different kinds of perturbations that we make to cells, whether we're using CRISPR or base editing to make, for example, genome-wide or genome-scale perturbations, or small molecules, as we have also done in the past. These are all ways in which we then use machine learning to read out the effects in images of the cells we're looking at. So that's one way in which machine learning is used in our daily work. Of course, because we study misshapen and mangled proteins and how they are recognized by these cargo receptors, we also use AlphaFold pretty much every day in my lab. And this has been catalytic for us as a tool, because we are able to accelerate our discoveries in ways that were completely impossible even just three or four years ago. So it's been incredible to see how the young people in my lab are just so excited to use these tools, and they're becoming extremely savvy in using them.

Anna Greka (31:06):
Of course, this is a new generation of scientists, and so we use AlphaFold all the time. This also has a lot of implications for some of the interventions we might think about. For example, where in this cargo receptor complex that we study might we be able to fit a drug that would disrupt the complex and send the cargo down the track into the lysosome for degradation? So there are many ways in which AI can be used for all of these functions.
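To make the everyday AlphaFold usage she describes a bit more concrete for readers: the public AlphaFold Protein Structure Database (EMBL-EBI and Google DeepMind) serves predicted structures through a simple REST endpoint. The sketch below is illustrative only, not something from the interview, and the accession P69905 (human hemoglobin subunit alpha) is just an example:

```python
import json
import urllib.request

# Public AlphaFold Protein Structure Database prediction endpoint.
ALPHAFOLD_API = "https://alphafold.ebi.ac.uk/api/prediction/{}"

def prediction_url(uniprot_accession: str) -> str:
    """Build the AlphaFold DB prediction URL for a UniProt accession."""
    return ALPHAFOLD_API.format(uniprot_accession)

def fetch_prediction(uniprot_accession: str) -> dict:
    """Fetch prediction metadata for one protein (requires network access).

    The endpoint returns a JSON list with one entry per predicted model,
    including download links for the coordinate files.
    """
    with urllib.request.urlopen(prediction_url(uniprot_accession)) as resp:
        return json.load(resp)[0]

print(prediction_url("P69905"))  # human hemoglobin subunit alpha
```

Against the live service, `fetch_prediction("P69905")` returns metadata whose fields (such as the coordinate-file URLs) can then be fed into a structure viewer; the exact response schema is the database's, not something defined here.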
So I would say that if we were to organize our thinking around it, one way to think about the use of machine learning and AI is around what I would call understanding biology in cells, or what, in more drug-discovery terms, you would call target identification: trying to understand the things we might want to intervene on in order to have a benefit for disease.

Anna Greka (31:59):
So target ID is one area in which I think machine learning and AI will have a catalytic effect, as they already are. The other, of course, is in the actual development of the appropriate drugs in a rational way. Rational drug design is incredibly enabled by AlphaFold and all these advances in terms of understanding protein structures and how to fit drugs into them, of all different modalities and kinds. And an area that we are not yet harnessing in my group, but that I think the Ladders to Cures Accelerator hopes to build on, is really patient data. I think there's a lot of opportunity for AI to be used to make sense of medical records, for example, and to extract information that would tell us that one cohort of patients is a better cohort to enroll in your trial than another. There are many ways in which we can make use of these tools. Not all of them are there yet, but I think it's an exciting time to be involved in this kind of work.

Eric Topol (32:58):
Oh, no question. Now, it must be tough when you know the mechanism of these families' disease, and you even have a drug candidate, but it takes so long to go from that to helping these families. What are your thoughts about that? I mean, are you also thinking about genome editing for some of these diseases, or are you thinking of going the route of: here's a small molecule, here's the tox data in animal models, here's phase I, and on and on.
What do you think? Because when you know so much and these people are suffering, how do you bridge that gap?

Anna Greka (33:39):
Yeah, I think that's an excellent question. Of course, having patients as our partners in our research is incredible as a way for us to understand the disease and to build biomarkers, but it also creates exactly this kind of emotional conflict, if you will, because to me, honesty is the best policy. And so, I'm always very honest with patients and their families. I welcome them to the lab so they can see just how long it takes to get some of these things done. Even today, with all the tools that we have, there are certain things that are still quite slow to do. And even if you have a perfect drug that looks like it fits into the right pocket, there may still be some toxicity; there may be other setbacks. And so, I try to be very honest with patients about the road that we're on. The small molecule path for the toxic proteinopathies is on its way now.

Anna Greka (34:34):
It's partnered with a pharmaceutical company, so it's on its way, hopefully, to patients. Of course, again, this is an unpredictable road. Things can happen, as you very well know, but I'm at least glad that it's making its way there. But to your point, and I'm in an institute where CRISPR was discovered, and base editing and prime editing were discovered by my colleagues here, we are in fact looking at every other modality that could help with these diseases. We have several hurdles to overcome because, in contrast to the liver and the brain, the kidney, for example, is not an organ into which you can easily deliver nucleic acid therapies, but we're making progress. I have a whole subgroup within the bigger group focusing on this.
It's actually organized in a way where it runs kind of independently from the cell biology group that I run.

Anna Greka (35:31):
And it's headed by a person who came from industry, so she has the opportunity to really drive the project the way it would be run as a therapeutics program: milestone-driven, if you will. And we're really trying to go after all kinds of different nucleic acid therapies that would target the mutations themselves rather than the cargo receptors. So there are ASO and siRNA technologies, and then also actual gene editing technologies that we are investigating. But I would say that some of them are closer than others. And again, to your question about patients, I tell them honestly when a project looks to be more promising, and I also tell them when a project looks to have hurdles, that it will take long, and that sometimes I just don't know how long it will take before we can get there. The only thing that I can promise patients in any of our projects, whether it's Alzheimer's, blindness, or kidney disease, is that we're working the hardest we possibly can on the problem.

Anna Greka (36:34):
And I have found that is often reassuring to patients, and it's best to be honest about the fact that these things take a long time. But I do think they find it reassuring that someone is on it, essentially, and that there will be some progress as we move forward. And we've made progress: the very first discovery that came out of my lab, as I mentioned to you, has made it all the way to phase II trials. So I have seen the trajectory be realized, and I'm eager to make it happen again and again, as many times as I can within my career, to help as many people as possible.

The Paucity of Physician-Scientists

Eric Topol (37:13):
I have no doubts that you'll be doing this many times in your career. No, there's no question about it. It's extraordinary actually.
There are a couple of things there I want to pick up on. Physician-scientists, as you know, are a rarefied species. And you have actually told the story so nicely: when you have a physician-scientist, you're caring for the patients you're researching. Most of the time we have scientists, nothing wrong with them of course, but you have this hinge point, which is really important, because you're really hearing the stories, experiencing the patients, and, as you say, communicating about the likelihood of being able to come up with a treatment, or about the progress. What are we going to do to get more physician-scientists? Because this is a huge problem; it has been for decades, and the numbers just keep going lower and lower.

Anna Greka (38:15):
I think you're absolutely right. And this is, again, something that in my leadership of the ASCI I have made a cornerstone of our efforts. I think it has been well documented as a problem. I think that the pressures of modern clinical care are really antithetical to the needs of research: protected time to really be able to think and be creative, and even having the funding available to pursue one's program. Those pressures are becoming so heavy for investigators that many of them choose one route or the other, most often the clinical route, because that, of course, tends to be where they can support their families better. And so, this has been the conundrum in some ways: we take our best and brightest medical students who are interested in investigation, we train them and invest in them in becoming physician-scientists, but then we sort of drop them at the most vulnerable time, which is usually after one completes their clinical and scientific training.

Anna Greka (39:24):
And that's just as they're embarking on the early phases of their careers.
It has been found to be a very vulnerable point, when a lot of people are in their mid-thirties or even late thirties, perhaps with a family to take care of and other burdens of adulthood, if you will. And I think it becomes very difficult to sustain a career where one's salary is very limited due to the research component. And so, I think we have to invest in our youngest people, and it is a real issue that there's no good mechanism to do that at the present time. So I was actually really hoping that there would be an opportunity with leadership at the NIH to really think about this. It's also been discussed at the level of the National Academy of Medicine, where I had some role in discussing the recent report that they put out on the biomedical enterprise in the United States. And it's interesting to see that there is a note made there about this issue, and about the fact that there needs to be, I think, more generous investment in the careers of a few select physician-scientists that we can support. So if you look at the numbers, physician-scientists currently comprise less than 1% of the entire physician workforce.

Anna Greka (40:45):
It's probably closer to 0.8% at this point.

Eric Topol (40:46):
No, it's incredible.

Anna Greka (40:48):
So that's really not enough, I think, to maintain the enterprise and, if you will, this incredible innovation economy that the United States has had, this miracle engine in biomedicine that has been fueled in large part by physician investigators. Of course, our colleagues who are non-physician investigators are equally important partners in this journey. But we do need physician-scientist investigators, I think, as well, if you really think about the fact that, I believe, 70% of people who run R&D programs in all the big pharmaceutical companies are physician-scientists. And so, we need people like us to be able to work on these big problems.
And so, more investment: I think the government and the NIH have a role to play there, of course. And this is important from both an economic perspective and a competition perspective with other nations around the world, which are actually investing heavily in the physician-scientist workforce.

Anna Greka (41:51):
And I think it's also important to do so through our smaller-scale efforts at the ASCI. So one of the things I have been involved in, as a council member and now as president, is the creation of an awards program for those early-career investigators. We call them the Emerging-Generation Awards, and we also have the Young Physician-Scientist Awards. And these are really to recognize people who are making that transition from being a trainee and a postdoc, having finished their clinical training, into becoming an independent assistant professor. And so, those are small awards, but they're kind of a symbolic tap on the shoulder, if you will: the ASCI sees you, you're talented, stay the course. We want you to become a future member. Don't give up, and please keep on fighting. I think that can take us only so far, though.

Anna Greka (42:45):
I mean, unless there's real investment, of course, it will still be hard to keep people in the pipeline. But this is just one way in which we have tried, and these programs that the ASCI offers have been very successful over the last few years. We create a cohort of investigators who are clearly recognized by members of the ASCI as promising young colleagues. And we give them longitudinal training as part of a cohort, where they learn how to write a grant, how to write a paper, leadership skills, how to run a lab. And it's sort of a buddy system as well, so they know that they're in it together rather than feeling isolated and struggling to get their careers going. And so, we've seen a lot of success. One way that we measure that is conversion into ASCI membership.
And so, we're encouraged by that, and we hope the program can continue. And of course, as president, I'm going to be fundraising for that as well; it's part of the role. But it is a really worthy cause, because, to your point, we have to somehow make sure that our younger colleagues stay the course, so that we can at least maintain, if not bolster, our numbers within the scientific workforce.

Eric Topol (43:57):
Well, you outlined some really nice strategies and plans. It's a formidable challenge, of course, and we'd like to see billions of dollars to support this. And maybe someday we will, because, as you say, if we could relieve the financial concerns of people who have curiosity-driven ideas.

Anna Greka (44:18):
Exactly.

Eric Topol (44:19):
We could do a lot to replenish and build a big physician-scientist workforce. Now, the last thing I want to get to is that you have great communication skills. Obviously, anybody who is listening or watching this can tell.

Eric Topol (44:36):
Which is another really important part of being a scientist, no less a physician or the hybrid of the two. But I wanted to just go to the backstory, because your TED Talk has been watched by hundreds of thousands of people, and I'm sure there are hundreds of thousands more who will watch it. The TED organization is famous for making people come to the venue a week ahead (this is Vancouver now; it used to be in the Los Angeles area) and making them rehearse the talk, rehearse, rehearse, rehearse, which seems crazy. You could just train the people there how to give a talk. Did you have to go through that?

Anna Greka (45:21):
Not really. I did rehearse once on stage before I actually delivered the talk live. And I was very encouraged by the fact that the TED folks, who are of course very well calibrated, said: just like that. It's great, just like that.

Eric Topol (45:37):
That says a lot, because a lot of people who do these talks have to do it 10 times. So that kind of was another metric.
But what I don't like about that is that these people almost have to memorize their talks from giving them so many times, and with all this coaching it comes across as kind of stilted and unnatural. You're just a natural, great communicator, added to all your other qualities.

Anna Greka (46:03):
I think it's interesting, actually. I would say, if I may, that I actually think it's important for us physician-scientists; again, science and research are a public good, and being able to communicate to the public what it is that we do is, I think, kind of an obligation, given that we are funded by the public to do this kind of work. And so, I think that's important, and I always wanted to cultivate those communication skills for the benefit of communicating simply and clearly what it is that we do in our labs. But also, I would say, as part of my story, I mentioned that I had the opportunity to attend a special school growing up in Greece, Anatolia, which was an American school. One of the interesting things about it is that there was an oratory competition.

Anna Greka (46:50):
I got very early exposure entering that competition. And if you won the first prize, it was, in the kind of ancient Rome way, first among equals, right? And so, that was the prize. And I was lucky to have this early exposure, this is when I was 14, 15, 16 years old, training to give these oratory speeches in front of an audience and competing with other kids who were doing the same. These are just wonderful gifts that a school can give a student, and they have stayed with me for life. And so, yeah, I credit that experience for a lot of my subsequent capabilities in this area.

Eric Topol (47:40):
Oh, that's fantastic. Well, this has been such an enjoyable conversation, Anna. Did I miss anything that we need to bring up, or do you think we have it covered?

Anna Greka (47:50):
Not at all.
No, this was wonderful, and I thoroughly enjoyed it as well. I'm very honored, seeing how many other incredible colleagues you've had on the show. It's just a great honor to be a part of this. So thank you for having me.

Eric Topol (48:05):
Well, you really are such a great inspiration to all of us in the biomedical community, and we'll be cheering for your continued success. Thanks so much for joining today, and I look forward to the next time we get a chance to visit.

Anna Greka (48:20):
Absolutely. Thank you, Eric.

**************************************

Thanks for listening, watching or reading Ground Truths. Your subscription is greatly appreciated.

If you found this podcast interesting, please share it! That makes the work involved in putting these together especially worthwhile.

All content on Ground Truths (newsletters, analyses, and podcasts) is free and open-access. Paid subscriptions are voluntary, and all proceeds from them go to support Scripps Research. They do allow for posting comments and questions, which I do my best to respond to. Many thanks to those who have contributed; they have greatly helped fund our summer internship programs for the past two years. And such support is becoming more vital in light of current changes in funding and support for biomedical research at NIH and other US governmental agencies.

Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research.

Get full access to Ground Truths at erictopol.substack.com/subscribe

Maretul Har Podcast
Doctrina Duminicala class16 [Demis]

Play Episode Listen Later Feb 5, 2025 18:49


Doctrina Duminicala class16 [Demis] by Maretul Har UK

Pro Talk: Tennis Conversations
Australian Open 25 J11-12: présentation demis hommes + partage Petits As

Play Episode Listen Later Jan 23, 2025 14:05


Today I look back on my two days at the Petits As, and we discuss the Australian Open semifinals and the tactical approaches they may involve. Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

Maretul Har Podcast
Doctrina Duminicala Clasa 14 [Demis]

Play Episode Listen Later Jan 21, 2025 25:24


Doctrina Duminicala Clasa 14 [Demis] by Maretul Har UK

Fazit - Kultur vom Tage - Deutschlandfunk Kultur
"Slow burn" - Unter der neuen Intendanz von Demis Volpi am Hamburg Ballett

Play Episode Listen Later Dec 8, 2024 7:23


Nehring, Elisabeth www.deutschlandfunkkultur.de, Fazit

Multiverse 5D
Mundo Secreto com Demis Viana - Boletim Galáctico 03-12-24

Play Episode Listen Later Dec 4, 2024 113:52


Mundo Secreto com Demis Viana - Boletim Galáctico 03-12-24

Multiverse 5D
Mundo Secreto _Demis Viana - Atualização de Exopolítica 30.11.24

Play Episode Listen Later Dec 1, 2024 128:43


Mundo Secreto com Demis Viana - Atualização de Exopolítica 30.11

Maretul Har Podcast
Doctrina Duminicala Clasa 5 [Demis]

Play Episode Listen Later Nov 12, 2024 27:55


Doctrina Duminicala Clasa 5 [Demis] by Maretul Har UK

Multiverse 5D
Demis Viana -- Atualização de Exopolítica -- 09-11-24

Play Episode Listen Later Nov 11, 2024 149:39


Demis Viana -- Atualização de Exopolítica -- 09-11-24

Multiverse 5D
Mundo Secreto - Demis Viana - Updates de exopolítica, geopolítica, ufologia, metafísica e habilidades psíquicas 03-11-24

Play Episode Listen Later Nov 5, 2024 112:08


Mundo Secreto - Demis Viana - Updates de exopolítica, geopolítica, ufologia, metafísica e habilidades psíquicas 03-11-24

Multiverse 5D
Mundo Secreto com Demis Viana -- Boletim Galáctico -- 29-10-24

Play Episode Listen Later Oct 30, 2024 114:41


Mundo Secreto com Demis Viana -- Boletim Galáctico -- 29-10-24

Multiverse 5D
Demis Viana - Mundo Secreto - Boletim Exopolítico, Geopolítico & Galáctico - 22-10-24

Play Episode Listen Later Oct 23, 2024 123:33


Demis Viana - Mundo Secreto - Boletim Exopolítico, Geopolítico & Galáctico - 22-10-24

Multiverse 5D
Demis Viana -- Mundo Secreto -- Atualização de Exopolítica, Geopolítica & Metafísica -- 03-10-24

Play Episode Listen Later Oct 4, 2024 117:01


Demis Viana -- Mundo Secreto -- Atualização de Exopolítica, Geopolítica & Metafísica -- 03-10-24

Multiverse 5D
Demis Viana -- Mundo Secreto -- Atualização de Exopolítica, Geopolítica, Ufologia, Metafísica, desacobertamento e revelação UFO 28-09-24

Play Episode Listen Later Sep 29, 2024 159:17


Demis Viana -- Mundo Secreto -- Atualização de Exopolítica, Geopolítica, Ufologia, Metafísica, desacobertamento e revelação UFO 28-09-24

Multiverse 5D
Demis Viana Mundo Secreto - Updates & FAQ 26-09-24

Play Episode Listen Later Sep 27, 2024 103:52


Demis Viana Mundo Secreto - Updates & FAQ 26-09-24

Multiverse 5D
Mundo Secreto com Demis Viana - Boletim Galáctico, Político, Geopolítico e Exopolítico - 24-09-24

Play Episode Listen Later Sep 26, 2024 107:42


Mundo Secreto com Demis Viana - Boletim Galáctico, Político, Geopolítico e Exopolítico - 24-09-24

Plus
Hlavní zprávy - rozhovory a komentáře: Polední publicistika: Rezignace vedení Pirátů. Následky povodní v Jeseníku. Benefiční koncert

Play Episode Listen Later Sep 23, 2024 20:00


The leadership of the Czech Pirate Party is responding with resignation to its failure in the regional and Senate elections. Does this also mean the end of its role in government? How will the party confront the outflow of voters? How is northeastern Czechia coping with the aftermath of the floods? What was voter turnout like in the Jeseník region? And where will the proceeds go from the benefit concert that Czech Radio is organizing on Saturday to help the towns and municipalities hit by the floods?

Radiožurnál
Hlavní zprávy - rozhovory a komentáře: Polední publicistika: Rezignace vedení Pirátů. Následky povodní v Jeseníku. Benefiční koncert

Play Episode Listen Later Sep 23, 2024 20:00


The leadership of the Czech Pirate Party is responding with resignation to its failure in the regional and Senate elections. Does this also mean the end of its role in government? How will the party confront the outflow of voters? How is northeastern Czechia coping with the aftermath of the floods? What was voter turnout like in the Jeseník region? And where will the proceeds go from the benefit concert that Czech Radio is organizing on Saturday to help the towns and municipalities hit by the floods?

Hlavní zprávy - rozhovory a komentáře
Polední publicistika: Rezignace vedení Pirátů. Následky povodní v Jeseníku. Benefiční koncert

Play Episode Listen Later Sep 23, 2024 20:00


The leadership of the Czech Pirate Party is responding with resignation to its failure in the regional and Senate elections. Does this also mean the end of its role in government? How will the party confront the outflow of voters? How is northeastern Czechia coping with the aftermath of the floods? What was voter turnout like in the Jeseník region? And where will the proceeds go from the benefit concert that Czech Radio is organizing on Saturday to help the towns and municipalities hit by the floods? You can conveniently listen to all episodes of the podcast Hlavní zprávy - rozhovory a komentáře in the mujRozhlas mobile app for Android and iOS, or at mujRozhlas.cz.

Multiverse 5D
Demis Viana Mundo Secreto - FAQ (Frequently Asked Questions) 19-09-24

Play Episode Listen Later Sep 20, 2024 104:29


Demis Viana Mundo Secreto - FAQ (Frequently Asked Questions) 19-09-24

Multiverse 5D
Demis Viana - Mundo Secreto - Atualização Exopolíticas e Boletim Galáctico - 17-09-24

Play Episode Listen Later Sep 18, 2024 104:49


Demis Viana - Mundo Secreto - Atualização Exopolíticas e Boletim Galáctico - 17-09-24

Multiverse 5D
Demis Viana - Mundo Secreto - Atualização de Exopolítica - 14-09-24

Play Episode Listen Later Sep 15, 2024 159:19


Demis Viana - Mundo Secreto - Atualização de Exopolítica - 14-09-24

Multiverse 5D
Mundo Secreto com Demis Viana - Boletim Galáctico, Exopolítico e Geopolítico 10-09-24

Play Episode Listen Later Sep 11, 2024 114:24


Mundo Secreto com Demis Viana - Boletim Galáctico, Exopolítico e Geopolítico 10-09-24

BookTok Made Me Podcast
Born of Blood and Ash - Flesh and Fire Book 4

Play Episode Listen Later Sep 10, 2024 85:17


Bridget, Caitlin, and Hilda discuss "Born of Blood and Ash," book 4 in Jennifer L. Armentrout's Flesh and Fire series, which is a prequel to the From Blood and Ash series. And well ... they read it so you don't have to. And we'll leave it at that. Happy listening! Join our Patreon for exclusive behind-the-scenes content and let's be friends! Instagram > @Booktokmademe_pod | TikTok > @BooktokMadeMe

Multiverse 5D
Demis Viana - Mundo Secreto - FAQ (Frequently Asked Questions) - Atualização de Exopolítica, Geopolítica e Ufologia e Metafísica - 07-09-24

Play Episode Listen Later Sep 9, 2024 141:50


Demis Viana - Mundo Secreto - FAQ (Frequently Asked Questions) - Atualização de Exopolítica, Geopolítica e Ufologia e Metafísica - 07-09-24

Multiverse 5D
Demis Viana - Mundo Secreto - FAQ (Frequently asked Questions) - 05-09-24

Play Episode Listen Later Sep 6, 2024 96:56


Demis Viana - Mundo Secreto - FAQ (Frequently asked Questions) - 05-09-24

Multiverse 5D
Demis Viana - Mundo Secreto - Atualização de Exopolítica 3-9-24

Play Episode Listen Later Sep 4, 2024 101:21


Demis Viana - Mundo Secreto - Atualização de Exopolítica 3-9-24

Multiverse 5D
Demis Viana - Mundo Secreto - Atualização de Exopolítica 31-8-24

Play Episode Listen Later Sep 1, 2024 131:57


Demis Viana - Mundo Secreto - Atualização de Exopolítica 31-8-24

Multiverse 5D
Atualização de Exopolítica com Demis Viana - Mundo Secreto 24-08-24

Play Episode Listen Later Aug 25, 2024 116:39


Atualização de Exopolítica com Demis Viana - Mundo Secreto 24-08-24

Multiverse 5D
Demis Viana - Mundo Secreto - FAQ - Frequently Asked Questions 22-08-24

Play Episode Listen Later Aug 23, 2024 117:28


Demis Viana - Mundo Secreto - FAQ - Frequently Asked Questions 22-08-24

Multiverse 5D
Demis Viana - Mundo secreto - Atualização de Exopolítica e Transição Planetária - Sab, 17-08-24

Play Episode Listen Later Aug 18, 2024 138:30


Demis Viana - Mundo secreto - Atualização de Exopolítica e Transição Planetária - Sab, 17-08-24

The Power Meeting Podcast
En grej till: Love is Blind UK #4 – ”I put the ring on the wrong finger and he didn't notice”

Play Episode Listen Later Aug 16, 2024 50:07


Time for our reactions to the fourth episode of Love is Blind UK! We talk about turn-off levels of bad judgment, confused ass Nicole's immediate regret, quoting romcom lines, when bragging about your looks backfires, insisting that someone should trust you, "boring brown eyes", thin upper lips, thick-hairline privilege, Ollie & Demi's first meeting, who has natural chemistry, Nicole & Sam's breakup + much more. Enjoy! Support us on Patreon for regular bonus episodes + more! Hosted on Acast. See acast.com/privacy for more information.

The Power Meeting Podcast
En grej till: Love is Blind UK #3 – ”I think I love you too”

The Power Meeting Podcast

Play Episode Listen Later Aug 16, 2024 37:39


Our thoughts on the third episode of Love is Blind UK are here! We talk about the classic "he's not here for the right reasons" tactic, Nicole's bad judgment and frumpy style, wanting to be a housewife, not wanting a housewife, what Jon found embarrassing about Bobby and Jasmine's reveal, Demi's endometriosis and Ollie's response, people who are incapable of following dating advice, Tom's horniness, Freddie's arms, the out-of-control use of fillers and Botox, intense hugs + much more. Enjoy! Support us on Patreon for regular bonus episodes + more! Hosted on Acast. See acast.com/privacy for more information.

Multiverse 5D
Demis Viana - Mundo Secreto - Exopolitics and Planetary Transition Update 13-08-24

Multiverse 5D

Play Episode Listen Later Aug 14, 2024 97:22



Multiverse 5D
Demis Viana - Mundo Secreto - Exopolitics Update on the Technologies Suppressed from the Public 10-08-24

Multiverse 5D

Play Episode Listen Later Aug 12, 2024 139:13


Demis Viana - Mundo Secreto - Exopolitics update on the technologies suppressed from the public by the globalist elites and the NWO 10-08-24

Multiverse 5D
Mundo Secreto with Demis Viana - Exopolitics Update - The World Monetary System 03-08-24

Multiverse 5D

Play Episode Listen Later Aug 6, 2024 150:42



Jurnal RFI
Putin dismisses the first deputy foreign minister responsible for relations with European countries

Jurnal RFI

Play Episode Listen Later Jul 29, 2024


Russian President Vladimir Putin has removed Vladimir Titov from the post of First Deputy Minister of Foreign Affairs of the Russian Federation, according to a presidential decree published on the official information portal, the TASS agency writes, as reported by News.ro.

Multiverse 5D
Mundo Secreto with Demis Viana - Exopolitics Update 20-07-24

Multiverse 5D

Play Episode Listen Later Jul 21, 2024 118:24



Multiverse 5D
Demis Viana - Mundo Secreto - Latest Updates & FAQ 18-07-24

Multiverse 5D

Play Episode Listen Later Jul 19, 2024 97:48



Multiverse 5D
Demis Viana - Mundo Secreto - Galactic News Bulletin 16-07-24

Multiverse 5D

Play Episode Listen Later Jul 17, 2024 98:36



Multiverse 5D
Mundo Secreto with Demis Viana - Exopolitics Update 06-07-24

Multiverse 5D

Play Episode Listen Later Jul 7, 2024 138:18


Mundo Secreto with Demis Viana - Exopolitics Update 06-07-24. Quantum therapist Eliete Viana: Tachyonic Energization: WhatsApp 15 99810-7215 (text and audio messages only, no calls; thank you for understanding). Eliete's Telegram: @elieteviana. Private Telegram group: https://t.me/+fgvRZvh328tmYTVh Twitter: @demisvianams Become a supporter: Banco Caixa, Agência 4892, Operação 13, savings account 4683-7, Eliete Anselmo Paloni Viana: elieteapv@gmail.com (PIX key) or mundosecreto15@hotmail.com

Top Flight Time Machine
The Roussos Odyssey - Part 3

Top Flight Time Machine

Play Episode Listen Later Jun 26, 2024 39:00


Names, pickling, tabs, Eurohits, and Demis meets Basil Brush before dying and getting a museum. (Rec: 27/9/23) Join the Iron Filings Society: https://www.patreon.com/topflighttimemachine Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Latent Space Chats: NLW (Four Wars, GPT5), Josh Albrecht/Ali Rohde (TNAI), Dylan Patel/Semianalysis (Groq), Milind Naphade (Nvidia GTC), Personal AI (ft. Harrison Chase — LangFriend/LangMem)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Apr 6, 2024 121:17


Our next 2 big events are AI UX and the World's Fair. Join and apply to speak/sponsor! Due to timing issues we didn't have an interview episode to share with you this week, but not to worry, we have more than enough "weekend special" content in the backlog for you to get your Latent Space fix, whether you like thinking about the big picture, or learning more about the pod behind the scenes, or talking Groq and GPUs, or AI Leadership, or Personal AI. Enjoy!

AI Breakdown

The indefatigable NLW had us back on his show for an update on the Four Wars, covering Sora, Suno, and the reshaped GPT-4 Class Landscape, and a longer segment on AI Engineering trends covering the future LLM landscape (Llama 3, GPT-5, Gemini 2, Claude 4), Open Source Models (Mistral, Grok), Apple and Meta's AI strategy, new chips (Groq, MatX) and the general movement from baby AGIs to vertical Agents.

Thursday Nights in AI

We're also including swyx's interview with Josh Albrecht and Ali Rohde to reintroduce swyx and Latent Space to a general audience, and engage in some spicy Q&A.

Dylan Patel on Groq

We hosted a private event with Dylan Patel of SemiAnalysis (our last pod here). Not all of it could be released, so we just talked about our Groq estimates.

Milind Naphade - Capital One

In relation to conversations at NeurIPS and Nvidia GTC and upcoming at World's Fair, we also enjoyed chatting with Milind Naphade about his AI Leadership work at IBM, Cisco, Nvidia, and now leading the AI Foundations org at Capital One.
We covered:
* Milind's learnings from ~25 years in machine learning
* His first paper citation was 24 years ago
* Lessons from working with Jensen Huang for 6 years and being CTO of Metropolis
* Thoughts on relevant AI research
* GTC takeaways and what makes NVIDIA special

If you'd like to work on building solutions rather than platform (as Milind put it), his Applied AI Research team at Capital One is hiring, which falls under the Capital One Tech team.

Personal AI Meetup

It all started with a meme: within days of each other, BEE, FRIEND, EmilyAI, Compass, Nox and LangFriend were all launching personal AI wearables and assistants. So we decided to put together the world's first Personal AI meetup featuring creators and enthusiasts of wearables. The full video is live now, with full show notes within.

Timestamps
* [00:01:13] AI Breakdown Part 1
* [00:02:20] Four Wars
* [00:13:45] Sora
* [00:15:12] Suno
* [00:16:34] The GPT-4 Class Landscape
* [00:17:03] Data War: Reddit x Google
* [00:21:53] Gemini 1.5 vs Claude 3
* [00:26:58] AI Breakdown Part 2
* [00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4
* [00:31:11] Open Source Models - Mistral, Grok
* [00:34:13] Apple MM1
* [00:37:33] Meta's $800b AI rebrand
* [00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents
* [00:47:28] Adept episode - Screen Multimodality
* [00:48:54] Top Model Research from January Recap
* [00:53:08] AI Wearables
* [00:57:26] Groq vs Nvidia month - GPU Chip War
* [01:00:31] Disagreements
* [01:02:08] Summer 2024 Predictions
* [01:04:18] Thursday Nights in AI - swyx
* [01:33:34] Dylan Patel - Semianalysis + Latent Space Live Show
* [01:34:58] Groq

Transcript

[00:00:00] swyx: Welcome to the Latent Space Podcast Weekend Edition. This is Charlie, your AI co-host. Swyx and Alessio are off for the week, making more great content. We have exciting interviews coming up with Elicit, Chroma, Instructor, and our upcoming series on NSFW, Not Safe for Work AI.
In today's episode, we're collating some of Swyx and Alessio's recent appearances, all in one place for you to find.[00:00:32] swyx: In part one, we have our first crossover pod of the year. In our listener survey, several folks asked for more thoughts from our two hosts. In 2023, Swyx and Alessio did crossover interviews with other great podcasts like the AI Breakdown, Practical AI, Cognitive Revolution, Thursday Eye, and ChinaTalk, all of which you can find in the Latent Space About page.[00:00:56] swyx: NLW of the AI Breakdown asked us back to do a special on the Four Wars framework and the AI engineer scene. We love AI Breakdown as one of the best daily podcasts to keep up on AI news, so we were especially excited to be back on. Watch out and take care.

[00:01:13] AI Breakdown Part 1

[00:01:13] NLW: Today on the AI Breakdown: part one of my conversation with Alessio and Swyx from Latent Space.[00:01:19] NLW: All right, fellas, welcome back to the AI Breakdown. How are you doing? I'm good. Very good. With the last, the last time we did this show, we were like, oh yeah, let's do check ins like monthly about all the things that are going on and then. Of course, six months later, and, you know, the, the, the world has changed in a thousand ways.[00:01:36] NLW: It's just, it's too busy to even, to even think about podcasting sometimes. But I, I'm super excited to, to be chatting with you again. I think there's, there's a lot to, to catch up on, just to tap in, I think in the, you know, in the beginning of 2024.
And, and so, you know, we're gonna talk today about just kind of a, a, a broad sense of where things are in some of the key battles in the AI space.[00:01:55] NLW: And then the, you know, one of the big things that I, that I'm really excited to have you guys on here for us to talk about where, sort of what patterns you're seeing and what people are actually trying to build, you know, where, where developers are spending their, their time and energy and, and, and any sort of, you know, trend trends there, but maybe let's start I guess by checking in on a framework that you guys actually introduced, which I've loved and I've cribbed a couple of times now, which is this sort of four wars of the, of the AI stack.[00:02:20] Four Wars[00:02:20] NLW: Because first, since I have you here, I'd love, I'd love to hear sort of like where that started gelling. And then and then maybe we can get into, I think a couple of them that are you know, particularly interesting, you know, in the, in light of[00:02:30] swyx: some recent news. Yeah, so maybe I'll take this one. So the four wars is a framework that I came up around trying to recap all of 2023.[00:02:38] swyx: I tried to write sort of monthly recap pieces. And I was trying to figure out like what makes one piece of news last longer than another or more significant than another. And I think it's basically always around battlegrounds. Wars are fought around limited resources. And I think probably the, you know, the most limited resource is talent, but the talent expresses itself in a number of areas.[00:03:01] swyx: And so I kind of focus on those, those areas at first. So the four wars that we cover are the data wars, the GPU rich, poor war, the multi modal war, And the RAG and Ops War. And I think you actually did a dedicated episode to that, so thanks for covering that. Yeah, yeah.[00:03:18] NLW: Not only did I do a dedicated episode, I actually used that.[00:03:22] NLW: I can't remember if I told you guys. 
I did give you big shoutouts. But I used it as a framework for a presentation at Intel's big AI event that they hold each year, where they have all their folks who are working on AI internally. And it totally resonated. That's amazing. Yeah, so, so, what got me thinking about it again is specifically this Inflection news that we recently had, this sort of, you know, basically, I can't imagine that anyone who's listening wouldn't have thought about it, but, you know, Inflection is a one of the big contenders, right?[00:03:53] NLW: I think probably most folks would have put them, you know, just a half step behind the Anthropics and OpenAIs of the world in terms of labs, but it's a company that raised 1.3 billion last year, less than a year ago. Reid Hoffman's a co-founder, Mustafa Suleyman, who's a co-founder of DeepMind, you know, so it's like, this is not a a small startup, let's say, at least in terms of perception.[00:04:13] NLW: And then we get the news that basically most of the team, it appears, is heading over to Microsoft and they're bringing in a new CEO. And you know, I'm interested in, in, in kind of your take on how much that reflects, like hold aside, I guess, you know, all the other things that it might be about, how much it reflects this sort of the, the stark,[00:04:32] NLW: brutal reality of competing in the frontier model space right now. And, you know, just the access to compute.[00:04:38] Alessio: There are a lot of things to say. So first of all, there's always somebody who's more GPU rich than you. So Inflection is GPU rich by startup standards. I think about 22,000 H100s, but obviously that pales compared to the, to Microsoft.[00:04:55] Alessio: The other thing is that this is probably good news, maybe for the startups. It's like being GPU rich, it's not enough. You know, like I think they were building something pretty interesting in, in Pi, of their own model, of their own kind of experience.
But at the end of the day, you're the interface that people consume as end users.[00:05:13] Alessio: It's really similar to a lot of the others. So and we'll tell, talk about GPT-4 and Claude 3 and all this stuff. GPU poor, doing something. That the GPU rich are not interested in, you know we just had our AI center of excellence at Decibel and one of the AI leads at one of the big companies was like, Oh, we just saved 10 million and we use these models to do a translation, you know, and that's it.[00:05:39] Alessio: It's not, it's not AGI, it's just translation. So I think like the Inflection part is maybe. A calling and a waking to a lot of startups then say, Hey, you know, trying to get as much capital as possible, try and get as many GPUs as possible. Good. But at the end of the day, it doesn't build a business, you know, and maybe what Inflection I don't, I don't, again, I don't know the reasons behind the Inflection choice, but if you say, I don't want to build my own company that has 1.3 billion[00:06:05] Alessio: and I want to go do it at Microsoft, it's probably not a resources problem. It's more of strategic decisions that you're making as a company. So yeah, that was kind of my. I take on it.[00:06:15] swyx: Yeah, and I guess on my end, two things actually happened yesterday. It was a little bit quieter news, but Stability AI had some pretty major departures as well.[00:06:25] swyx: And you may not be considering it, but Stability is actually also a GPU rich company in the sense that they were the first new startup in this AI wave to brag about how many GPUs that they have. And you should join them. And you know, Emad is definitely a GPU trader in some sense from his hedge fund days.[00:06:43] swyx: So Robin Rombach and like the most of the Stable Diffusion 3 people left Stability yesterday as well. So yesterday was kind of like a big news day for the GPU rich companies, both Inflection and Stability having sort of wind taken out of their sails.
I think, yes, it's a data point in the favor of Like, just because you have the GPUs doesn't mean you can, you automatically win.[00:07:03] swyx: And I think, you know, kind of I'll echo what Alessio says there. But in general also, like, I wonder if this is like the start of a major consolidation wave, just in terms of, you know, I think that there was a lot of funding last year and, you know, the business models have not been, you know, All of these things worked out very well.[00:07:19] swyx: Even inflection couldn't do it. And so I think maybe that's the start of a small consolidation wave. I don't think that's like a sign of AI winter. I keep looking for AI winter coming. I think this is kind of like a brief cold front. Yeah,[00:07:34] NLW: it's super interesting. So I think a bunch of A bunch of stuff here.[00:07:38] NLW: One is, I think, to both of your points, there, in some ways, there, there had already been this very clear demarcation between these two sides where, like, the GPU pores, to use the terminology, like, just weren't trying to compete on the same level, right? You know, the vast majority of people who have started something over the last year, year and a half, call it, were racing in a different direction.[00:07:59] NLW: They're trying to find some edge somewhere else. They're trying to build something different. If they're, if they're really trying to innovate, it's in different areas. And so it's really just this very small handful of companies that are in this like very, you know, it's like the coheres and jaspers of the world that like this sort of, you know, that are that are just sort of a little bit less resourced than, you know, than the other set that I think that this potentially even applies to, you know, everyone else that could clearly demarcate it into these two, two sides.[00:08:26] NLW: And there's only a small handful kind of sitting uncomfortably in the middle, perhaps. 
Let's, let's come back to the idea of, of the sort of AI winter or, you know, a cold front or anything like that. So this is something that I, I spent a lot of time kind of thinking about and noticing. And my perception is that The vast majority of the folks who are trying to call for sort of, you know, a trough of disillusionment or, you know, a shifting of the phase to that are people who either, A, just don't like AI for some other reason there's plenty of that, you know, people who are saying, You Look, they're doing way worse than they ever thought.[00:09:03] NLW: You know, there's a lot of sort of confirmation bias kind of thing going on. Or two, media that just needs a different narrative, right? Because they're sort of sick of, you know, telling the same story. Same thing happened last summer, when every every outlet jumped on the chat GPT at its first down month story to try to really like kind of hammer this idea that that the hype was too much.[00:09:24] NLW: Meanwhile, you have, you know, just ridiculous levels of investment from enterprises, you know, coming in. You have, you know, huge, huge volumes of, you know, individual behavior change happening. But I do think that there's nothing incoherent sort of to your point, Swyx, about that and the consolidation period.[00:09:42] NLW: Like, you know, if you look right now, for example, there are, I don't know, probably 25 or 30 credible, like, build your own chatbot. platforms that, you know, a lot of which have, you know, raised funding. There's no universe in which all of those are successful across, you know, even with a, even, even with a total addressable market of every enterprise in the world, you know, you're just inevitably going to see some amount of consolidation.[00:10:08] NLW: Same with, you know, image generators. There are, if you look at A16Z's top 50 consumer AI apps, just based on, you know, web traffic or whatever, they're still like I don't know, a half. 
Dozen or 10 or something, like, some ridiculous number of like, basically things like Midjourney or DALL-E 3. And it just seems impossible that we're gonna have that many, you know, ultimately as, as, as sort of, you know, going, going concerns.[00:10:33] NLW: So, I don't know. I, I, I think that the, there will be inevitable consolidation 'cause you know. It's, it's also what kind of like venture rounds are supposed to do. You're not, not everyone who gets a seed round is supposed to get to series A and not everyone who gets a series A is supposed to get to series B.[00:10:46] NLW: That's sort of the natural process. I think it will be tempting for a lot of people to try to infer from that something about AI not being as sort of big or as as sort of relevant as, as it was hyped up to be. But I, I kind of think that's the wrong conclusion to come to.[00:11:02] Alessio: I I would say the experimentation.[00:11:04] Alessio: Surface is a little smaller for image generation. So if you go back maybe six, nine months, most people will tell you, why would you build a coding assistant when like Copilot and GitHub are just going to win everything because they have the data and they have all the stuff. If you fast forward today, A lot of people use Cursor everybody was excited about the Devin release on Twitter.[00:11:26] Alessio: There are a lot of different ways of attacking the market that are not completion of code in the IDE. And even Cursor, like they evolved beyond single line to like chat, to do multi line edits and, and all that stuff. Image generation, I would say, yeah, as a, just as from what I've seen, like maybe the product innovation has slowed down at the UX level and people are improving the models.[00:11:50] Alessio: So the race is like, how do I make better images? It's not like, how do I make the user interact with the generation process better? And that gets tough, you know? It's hard to like really differentiate yourselves.
So yeah, that's kind of how I look at it. And when we think about multimodality, maybe the reason why people got so excited about Sora is like, oh, this is like a completely It's not a better image model.[00:12:13] Alessio: This is like a completely different thing, you know? And I think the creative mind It's always looking for something that impacts the viewer in a different way, you know, like they really want something different versus the developer mind. It's like, Oh, I, I just, I have this like very annoying thing I want better.[00:12:32] Alessio: I have this like very specific use cases that I want to go after. So it's just different. And that's why you see a lot more companies in image generation. But I agree with you that. If you fast forward there, there's not going to be 10 of them, you know, it's probably going to be one or[00:12:46] swyx: two. Yeah, I mean, to me, that's why I call it a war.[00:12:49] swyx: Like, individually, all these companies can make a story that kind of makes sense, but collectively, they cannot all be true. Therefore, they all, there is some kind of fight over limited resources here. Yeah, so[00:12:59] NLW: it's interesting. 
We wandered very naturally into sort of another one of these wars, which is the multimodality kind of idea, which is, you know, basically a question of whether it's going to be these sort of big everything models that end up winning or whether, you know, you're going to have really specific things, you know, like something, you know, DALL-E 3 inside of sort of OpenAI's larger models versus, you know, a Midjourney or something like that.[00:13:24] NLW: And at first, you know, I was kind of thinking like, For most of the last, call it six months or whatever, it feels pretty definitively both and in some ways, you know, and that you're, you're seeing just like great innovation on sort of the everything models, but you're also seeing lots and lots happen at sort of the level of kind of individual use cases.

[00:13:45] Sora

[00:13:45] NLW: But then Sora comes along and just like obliterates what I think anyone thought you know, where we were when it comes to video generation. So how are you guys thinking about this particular battle or war at the moment?[00:13:59] swyx: Yeah, this was definitely a both and story, and Sora tipped things one way for me, in terms of scale being all you need.[00:14:08] swyx: And the benefit, I think, of having multiple models being developed under one roof. I think a lot of people aren't aware that Sora was developed in a similar fashion to DALL-E 3. And DALL-E 3 had a very interesting paper out where they talked about how they sort of bootstrapped their synthetic data based on GPT-4 Vision and GPT-4.[00:14:31] swyx: And, and it was just all, like, really interesting, like, if you work on one modality, it enables you to work on other modalities, and all that is more, is, is more interesting.
I think it's beneficial if it's all in the same house, whereas the individual startups who don't, who sort of carve out a single modality and work on that, definitely won't have the state of the art stuff on helping them out on synthetic data.[00:14:52] swyx: So I do think like, The balance is tilted a little bit towards the God model companies, which is challenging for the, for the, for the the sort of dedicated modality companies. But everyone's carving out different niches. You know, like we just interviewed Suno ai, the sort of music model company, and, you know, I don't see opening AI pursuing music anytime soon.[00:15:12] Suno[00:15:12] swyx: Yeah,[00:15:13] NLW: Suno's been phenomenal to play with. Suno has done that rare thing where, which I think a number of different AI product categories have done, where people who don't consider themselves particularly interested in doing the thing that the AI enables find themselves doing a lot more of that thing, right?[00:15:29] NLW: Like, it'd be one thing if Just musicians were excited about Suno and using it but what you're seeing is tons of people who just like music all of a sudden like playing around with it and finding themselves kind of down that rabbit hole, which I think is kind of like the highest compliment that you can give one of these startups at the[00:15:45] swyx: early days of it.[00:15:46] swyx: Yeah, I, you know, I, I asked them directly, you know, in the interview about whether they consider themselves mid journey for music. And he had a more sort of nuanced response there, but I think that probably the business model is going to be very similar because he's focused on the B2C element of that. 
So yeah, I mean, you know, just to, just to tie back to the question about, you know, You know, large multi modality companies versus small dedicated modality companies.[00:16:10] swyx: Yeah, highly recommend people to read the Sora blog posts and then read through to the DALL-E blog posts because they, they strongly correlated themselves with the same synthetic data bootstrapping methods as DALL-E. And I think once you make those connections, you're like, oh, like it, it, it is beneficial to have multiple state of the art models in house that all help each other.[00:16:28] swyx: And these, this, that's the one thing that a dedicated modality company cannot do.

[00:16:34] The GPT-4 Class Landscape

[00:16:34] NLW: So I, I wanna jump, I wanna kind of build off that and, and move into the sort of like updated GPT-4 class landscape. 'cause that's obviously been another big change over the last couple months. But for the sake of completeness, is there anything that's worth touching on with with sort of the quality?[00:16:46] NLW: Quality data or sort of the RAG and Ops wars, just in terms of, you know, anything that's changed, I guess, for you fundamentally in the last couple of months about where those things stand.[00:16:55] swyx: So I think we're going to talk about RAG for the Gemini and Claude discussion later. And so maybe briefly discuss the data piece.

[00:17:03] Data War: Reddit x Google

[00:17:03] swyx: I think maybe the only new thing was this Reddit deal with Google for like a 60 million dollar deal just ahead of their IPO, very conveniently turning Reddit into an AI data company. Also, very, very interestingly, a non exclusive deal, meaning that Reddit can resell that data to someone else. And it probably does become table stakes.[00:17:23] swyx: A lot of people don't know, but a lot of the WebText dataset that originally started for GPT 1, 2, and 3 was actually scraped from Reddit, at least the sort of vote scores.
And I think, I think that's a, that's a very valuable piece of information. So like, yeah, I think people are figuring out how to pay for data.[00:17:40] swyx: People are suing each other over data. This, this, this war is, you know, definitely very, very much heating up. And I don't think, I don't see it getting any less intense. I, you know, next to GPUs, data is going to be the most expensive thing in, in a model stack company. And. You know, a lot of people are resorting to synthetic versions of it, which may or may not be kosher based on how far along or how commercially blessed the, the forms of creating that synthetic data are.[00:18:11] swyx: I don't know if Alessio, you have any other interactions with like data source companies, but that's my two cents.[00:18:17] Alessio: Yeah yeah, I actually saw Quentin Anthony from EleutherAI at GTC this week. He's also been working on this. I saw Teknium. He's also been working on the data side. I think especially in open source, people are like, okay, if everybody is putting the gates up, so to speak, to the data we need to make it easier for people that don't have 50 million a year to get access to good data sets.[00:18:38] Alessio: And Jensen, at his keynote, he did talk about synthetic data a little bit. So I think that's something that we'll definitely hear more and more of in the enterprise, which never bodes well, because then all the, all the people with the data are like, Oh, the enterprises want to pay now? Let me, let me put a pay-here Stripe link so that they can give me 50 million.[00:18:57] Alessio: But it worked for Reddit. I think the stock is up 40 percent today after opening. So yeah, I don't know if it's all about the Google deal, but it's obviously Reddit has been one of those companies where, hey, you got all this like great community, but like, how are you going to make money? And like, they try to sell the avatars.[00:19:15] Alessio: I don't know if that it's a great business for them.
The, the data part sounds as an investor, you know, the data part sounds a lot more interesting than, than consumer[00:19:25] swyx: cosmetics. Yeah, so I think, you know there's more questions around data you know, I think a lot of people are talking about the interview that Mira Murati did with the Wall Street Journal, where she, like, just basically had no, had no good answer for where they got the data for Sora.[00:19:39] swyx: I, I think this is where, you know, there's, it's in nobody's interest to be transparent about data, and it's, it's kind of sad for the state of ML and the state of AI research but it is what it is. We, we have to figure this out as a society, just like we did for music and music sharing. You know, in, in sort of the Napster to Spotify transition, and that might take us a decade.[00:19:59] swyx: Yeah, I[00:20:00] NLW: do. I, I agree. I think, I think that you're right to identify it, not just as that sort of technical problem, but as one where society has to have a debate with itself. Because I think that there's, if you rationally within it, there's great kind of points on all sides, not to be the sort of, you know, person who sits in the middle constantly, but it's why I think a lot of these legal decisions are going to be really important because, you know, the job of judges is to listen to all this stuff and try to come to things and then have other judges disagree.[00:20:24] NLW: And, you know, and have the rest of us all debate at the same time. By the way, as a total aside, I feel like the synthetic data right now is like eggs in the 80s and 90s. Like, whether they're good for you or bad for you, like, you know, we, we get one study that's like synthetic data, you know, there's model collapse.[00:20:42] NLW: And then we have like a hint that Llama, you know, to the most high performance version of it, which was one they didn't release was trained on synthetic data. So maybe it's good.
It's like, I just feel like every, every other week I'm seeing something sort of different about whether it's a good or bad for, for these models.[00:20:56] swyx: Yeah. The branding of this is pretty poor. I would kind of tell people to think about it like cholesterol. There's good cholesterol, bad cholesterol. And you can have, you know, good amounts of both. But at this point, it is absolutely without a doubt that most large models from here on out will all be trained on some kind of synthetic data and that is not a bad thing.[00:21:16] swyx: There are ways in which you can do it poorly. Whether it's commercial, you know, in terms of commercial sourcing or in terms of the model performance. But it's without a doubt that good synthetic data is going to help your model. And this is just a question of like where to obtain it and what kinds of synthetic data are valuable.[00:21:36] swyx: You know, if even like AlphaGeometry, you know, was, was a really good example from like earlier this year.[00:21:42] NLW: If you're using the cholesterol analogy, then my, then my egg thing can't be that far off. Let's talk about the sort of the state of the art and the, and the GPT-4 class landscape and how that's changed.

[00:21:53] Gemini 1.5 vs Claude 3

[00:21:53] NLW: Cause obviously, you know, sort of the, the two big things or a couple of the big things that have happened. Since we last talked, we're one, you know, Gemini first announcing that a model was coming and then finally it arriving, and then very soon after a sort of a different model arriving from Gemini and and Claude 3.[00:22:11] NLW: So I guess, you know, I'm not sure exactly where the right place to start with this conversation is, but, you know, maybe very broadly speaking which of these do you think have made a bigger impact? Thank you.[00:22:20] Alessio: Probably the one you can use, right? So, Claude.
Well, I'm sure Gemini is going to be great once they let me in, but so far I haven't been able to.[00:22:29] Alessio: I use, so I have this small podcaster thing that I built for our podcast, which does chapter creation, like named entity recognition, summarization, and all of that. Claude 3 is better than GPT 4. Claude 2 was unusable. So I use GPT 4 for everything. And then when Opus came out, I tried them again side by side and I posted it on, on Twitter as well.[00:22:53] Alessio: Claude is better. It's very good, you know, it's much better, it seems to me, it's much better than GPT 4 at doing writing that is more, you know, I don't know, it just got good vibes, you know, like the GPT 4 text, you can tell it's like GPT 4, you know, it's like, it always uses certain types of words and phrases and, you know, maybe it's just me because I've now done it for, you know, So, I've read like 75, 80 generations of these things next to each other.[00:23:21] Alessio: Claude is really good. I know everybody is freaking out on Twitter about it, my only experience of this "it's much better" has been on the podcast use case. But I know that, you know, Karan from, from Nous Research is a very big pro-Opus, pro-Opus person. So, I think that's also, it's great to have people that actually care about other models.[00:23:40] Alessio: You know, I think so far to a lot of people, maybe Anthropic has been the sibling in the corner, you know, it's like Claude releases a new model and then OpenAI releases Sora and like, you know, there are like all these different things, but yeah, the new models are good. It's interesting.[00:23:55] NLW: My my perception is definitely that just, just observationally, Claude 3 is certainly the first thing that I've seen where lots of people,[00:24:06] NLW: they're, no one's debating evals or anything like that.
They're talking about the specific use cases that they have, that they used to use ChatGPT for every day, you know, day in, day out, that they've now just switched over. And that has, I think, shifted a lot of the sort of like vibe and sentiment in the space too.[00:24:26] NLW: And I don't necessarily think that it's sort of a, a, like, full, you know, sort of full knock. Let's put it this way. I think it's less bad for OpenAI than it is good for Anthropic. I think that because GPT 5 isn't there, people are not quite willing to sort of like, you know get overly critical of, of OpenAI, except in so far as they're wondering where GPT 5 is.[00:24:46] NLW: But I do think that it makes Anthropic look way more credible as a, as a, as a player, as a, you know, as a credible sort of player, you know, as opposed to, to where they were.[00:24:57] Alessio: Yeah. And I would say the benchmarks veil is probably getting lifted this year. I think last year. People were like, okay, this is better than this on this benchmark, blah, blah, blah, because maybe they did not have a lot of use cases that they did frequently.[00:25:11] Alessio: So it's hard to like compare yourself. So you, you defer to the benchmarks. I think now as we go into 2024, a lot of people have started to use these models from, you know, from very sophisticated things that they run in production to some utility that they have on their own. Now they can just run them side by side.[00:25:29] Alessio: And it's like, Hey, I don't care that, like, the MMLU score of Opus is like slightly lower than GPT 4. It just works for me, you know, and I think that's the same way that traditional software has been used by people, right? Like you just try it for yourself and, like, see which one works best for you?[00:25:48] Alessio: Like nobody looks at benchmarks outside of like sales white papers, you know? And I think it's great that we're going more in that direction.
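The side-by-side testing Alessio describes is easy to set up for any task. A minimal, model-agnostic harness, a sketch with my own function names; the two entries below are stubs standing in for real OpenAI and Anthropic SDK calls:

```python
def compare_side_by_side(inputs, models):
    """Run identical inputs through each model callable and group outputs
    by model, so the generations can be read (or graded) next to each other."""
    results = {name: [] for name in models}
    for text in inputs:
        for name, call in models.items():
            results[name].append(call(text))
    return results


# Stub "models" standing in for real API calls; swap in SDK-backed
# functions with the same (str) -> str signature to compare for real.
models = {
    "gpt-4":    lambda t: f"[gpt-4 summary of: {t[:20]}]",
    "claude-3": lambda t: f"[claude-3 summary of: {t[:20]}]",
}

out = compare_side_by_side(["Episode transcript about agents..."], models)
for name, gens in out.items():
    print(name, "->", gens[0])
```

Because the harness only depends on callables, the same loop works for eval grading later: replace `print` with a scoring function over paired generations.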
We have an episode with Adept coming out this weekend. And in some of their model releases, they specifically say, We do not care about benchmarks, so we didn't put them in, you know, because we, we don't want to look good on them.[00:26:06] Alessio: We just want the product to work. And I think more and more people will, will[00:26:09] swyx: go that way. Yeah. I I would say like, it does take the wind out of the sails for GPT 5, which I know we're, you know, curious about later on. I think anytime you put out a new state of the art model, you have to break through in some way.[00:26:21] swyx: And what Claude and Gemini have done is effectively take away any advantage to saying that you have a million token context window. Now everyone's just going to be like, Oh, okay. Now you just match the other two guys. And so that puts an insane amount of pressure on what GPT 5 is going to be, because it's just, the only option it has now, because all the other models are multimodal, all the other models are long context, all the other models have perfect recall, GPT 5 has to match everything and do more to, to not be a flop.[00:26:58] AI Breakdown Part 2[00:26:58] NLW: Hello friends, back again with part two. If you haven't heard part one of this conversation, I suggest you go check it out, but to be honest, they are kind of actually separable. In this conversation, we get into a topic that I think Alessio and Swyx are very well positioned to discuss, which is what developers care about right now, what people are trying to build around.[00:27:16] NLW: I honestly think that one of the best ways to see the future in an industry like AI is to try to dig deep on what developers and entrepreneurs are attracted to build, even if it hasn't made it to the news pages yet. So consider this your preview of six months from now, and let's dive in.
Let's bring it to the GPT 5 conversation.[00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4[00:27:33] NLW: I mean, so, so I think that that's a great sort of assessment of just how the stakes have been raised, you know is your, I mean, so I guess maybe, maybe I'll, I'll frame this less as a question, just sort of something that, that I, that I've been watching right now, the only thing that makes sense to me with how[00:27:50] NLW: fundamentally unbothered and unstressed OpenAI seems about everything is that they're sitting on something that does meet all that criteria, right? Because, I mean, even in the Lex Fridman interview that, that Altman recently did, you know, he's talking about other things coming out first. He's talking about, he's just like, he, listen, he, he's good and he could play nonchalant, you know, if he wanted to.[00:28:13] NLW: So I don't want to read too much into it, but, you know, they've had so long to work on this, like, unless we are like really meaningfully running up against some constraint, it just feels like, you know, there's going to be some massive increase, but I don't know. What do you guys think?[00:28:28] swyx: Hard to speculate.[00:28:29] swyx: You know, at this point, they're, they're pretty good at PR and they're not going to tell you anything that they don't want to. And he can tell you one thing and change their minds the next day. So it's, it's, it's really, you know, I've always said that model version numbers are just marketing exercises, like they have something and it's always improving and at some point you just cut it and decide to call it GPT 5.[00:28:50] swyx: And it's more just about defining an arbitrary level at which they're ready and it's up to them on what ready means. We definitely did see some leaks on GPT 4.5, as I think a lot of people reported and I'm not sure if you covered it. So it seems like there might be an intermediate release.
But I did feel, coming out of the Lex Fridman interview, that GPT 5 was nowhere near.[00:29:11] swyx: And you know, it was kind of a sharp contrast to Sam talking at Davos in February, saying that, you know, it was his top priority. So I find it hard to square. And honestly, like, there's also no point reading too much tea leaves into what any one person says about something that hasn't happened yet or has a decision that hasn't been taken yet.[00:29:31] swyx: Yeah, that's, that's my 2 cents about it. Like, calm down, let's just build .[00:29:35] Alessio: Yeah. The, the February rumor was that they were gonna work on AI agents, so I don't know, maybe they're like, yeah,[00:29:41] swyx: they had two agent two, I think two agent projects, right? One desktop agent and one sort of more general yeah, sort of GPTs like agent and then Andrej left, so he was supposed to be the guy on that.[00:29:52] swyx: What did Andrej see? What did he see? I don't know. What did he see?[00:29:56] Alessio: I don't know. But again, it's just like the rumors are always floating around, you know but I think like, this is, you know, we're not going to get to the end of the year without Jupyter you know, that's definitely happening. I think the biggest question is like, are Anthropic and Google[00:30:13] Alessio: increasing the pace, you know, like, is Claude 4 coming out in like 12 months, like nine months? What's the, what's the deal? Same with Gemini. They went from like 1 to 1.5 in like five days or something. So when's Gemini 2 coming out, you know, is that going to be soon? I don't know.[00:30:31] Alessio: There, there are a lot of speculations, but the good thing is that now you can see a world in which OpenAI doesn't rule everything. You know, so that, that's the best, that's the best news that everybody got, I would say.[00:30:43] swyx: Yeah, and Mistral Large also dropped in the last month.
And, you know, not as, not quite GPT 4 class, but very good from a new startup.[00:30:52] swyx: So yeah, we, we have now slowly changed the landscape, you know. In my January recap, I was complaining that nothing's changed in the landscape for a long time. But now we do exist in a world, sort of a multipolar world where Claude and Gemini are legitimate challengers to GPT 4 and hopefully more will emerge as well hopefully from Meta.[00:31:11] Open Source Models - Mistral, Grok[00:31:11] NLW: So, let's actually talk about sort of the open source side of this for a minute. So Mistral Large, notable because it's, it's not available open source in the same way that other things are, although I think my perception is that the community has largely given them, like, the community largely recognizes that they want them to keep building open source stuff and they have to find some way to fund themselves that they're going to do that.[00:31:27] NLW: And so they kind of understand that there's like, they got to figure out how to eat, but we've got, so, you know, there there's Mistral, there's, I guess, Grok now, which is, you know, Grok 1 is from, from October is, is open[00:31:38] swyx: sourced at, yeah. Yeah, sorry, I thought, I thought you meant Groq the chip company.[00:31:41] swyx: No, no, no, yeah, you mean Twitter Grok.[00:31:43] NLW: Although Groq the chip company, I think is even more interesting in some ways, but and then there's the, you know, obviously Llama 3 is the one that sort of everyone's wondering about too.
And, you know, my, my sense of that, the little bit that, you know, Zuckerberg was talking about Llama 3 earlier this year, suggested that, at least from an ambition standpoint, he was not thinking about how do I make sure that, you know, Meta, you know, keeps, keeps the open source throne, you know, vis a vis Mistral.[00:32:09] NLW: He was thinking about how you go after, you know, how, how he, you know, releases a thing that's, you know, every bit as good as whatever OpenAI is on at that point.[00:32:16] Alessio: Yeah. From what I heard in the hallways at, at GTC, Llama 3, the, the biggest model will be, you know, 260 to 300 billion parameters, so that that's quite large.[00:32:26] Alessio: That's not an open source model. You know, you cannot give people a 300 billion parameter model and ask them to run it. You know, it's very compute intensive. So I think it is, it[00:32:35] swyx: can be open source. It's just, it's going to be difficult to run, but that's a separate question.[00:32:39] Alessio: It's more like, as you think about what they're doing it for, you know, it's not like empowering the person running[00:32:45] Alessio: Llama on, on their laptop, it's like, oh, you can actually now use this to go after OpenAI, to go after Anthropic, to go after some of these companies at like the middle complexity level, so to speak. Yeah. So obviously, you know, we had Soumith Chintala on the podcast, they're doing a lot here, they're making PyTorch better.[00:33:03] Alessio: You know, they want to, that's kind of like maybe a little bit of a shot at, at Nvidia, in a way, trying to get some of the CUDA dominance out of it. Yeah, no, it's great. The, I love the Zuck destroying a lot of monopolies arc. You know, it's, it's been very entertaining.
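Alessio's point that a 260 to 300 billion parameter model is impractical for individuals to run is easy to check with back-of-envelope arithmetic. This counts weights only; the KV cache and activations add more on top:

```python
def weights_gb(n_params: float, bytes_per_param: float) -> float:
    """Memory for the weights alone, in GiB; ignores KV cache and activations."""
    return n_params * bytes_per_param / 1024**3


# fp16 = 2 bytes/param, int8 = 1, int4 = 0.5
for precision, bpp in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"300B @ {precision}: ~{weights_gb(300e9, bpp):,.0f} GiB")
```

Even at 4-bit quantization the weights alone are roughly 140 GiB, far beyond any consumer GPU, which is the point: an open license does not make a model runnable on a laptop.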
Let's bridge[00:33:18] NLW: into the sort of big tech side of this, because this is obviously like, so I think actually when I did my episode, this was one of the I added this as one of as an additional war that, that's something that I'm paying attention to.[00:33:29] NLW: So we've got Microsoft's moves with Inflection, which I think, potentially, are being read as a shift vis a vis the relationship with OpenAI, which also the sort of Mistral Large relationship seems to reinforce as well. We have Apple potentially entering the race, finally, you know, giving up Project Titan and and, and kind of trying to spend more effort on this.[00:33:50] NLW: Although, counterpoint, we also have them talking about it, or there being reports of a deal with Google, which, you know, is interesting to sort of see what their strategy there is. And then, you know, Meta's been largely quiet. We kind of just talked about the main piece, but, you know, there's, and then there's spoilers like Elon.[00:34:07] NLW: I mean, you know, what, what of those things has sort of been most interesting to you guys as you think about what's going to shake out for the rest of this[00:34:13] Apple MM1[00:34:13] swyx: year? I'll take a crack. So the reason we don't have a fifth war for the Big Tech Wars is that's one of those things where I just feel like we don't cover differently from other media channels, I guess.[00:34:26] swyx: Sure, yeah. In our anti-interestness, we actually say, like, we try not to cover the Big Tech Game of Thrones, or it's proxied through Twitter. You know, all the other four wars anyway, so there's just a lot of overlap. Yeah, I think absolutely, personally, the most interesting one is Apple entering the race.[00:34:41] swyx: They actually released, they announced their first large language model that they trained themselves. It's like a 30 billion multimodal model.
People weren't that impressed, but it was like the first time that Apple has kind of showcased that, yeah, we're training large models in house as well. Of course, like, they might be doing this deal with Google.[00:34:57] swyx: I don't know. It sounds very sort of rumor-y to me. And it's probably, if it's on device, it's going to be a smaller model. So something like a Gemma. It's going to be smarter autocomplete. I don't know what to say. I'm still here dealing with, like, Siri, which hasn't, probably hasn't been updated since God knows when it was introduced.[00:35:16] swyx: It's horrible. I, you know, it, it, it makes me so angry. So I, I, one, as an Apple customer and user, I, I'm just hoping for better AI on Apple itself. But two, they are the gold standard when it comes to local devices, personal compute and, and trust, like you, you trust them with your data. And I think that's what a lot of people are looking for in AI, that they have, they love the benefits of AI, they don't love the downsides, which is that you have to send all your data to some cloud somewhere.[00:35:45] swyx: And some of this data that we're going to feed AI is just the most personal data there is. So Apple being like one of the most trusted personal data companies, I think it's very important that they enter the AI race, and I hope to see more out of them.[00:35:58] Alessio: To me, the, the biggest question with the Google deal is like, who's paying who?[00:36:03] Alessio: Because for the browsers, Google pays Apple like 18, 20 billion every year to be the default browser. Is Google going to pay you to have Gemini or is Apple paying Google to have Gemini? I think that's, that's like what I'm most interested to figure out because with the browsers, it's like, it's the entry point to the thing.[00:36:21] Alessio: So it's really valuable to be the default. That's why Google pays. But I wonder if like the perception in AI is going to be like, Hey.
You just have to have a good local model on my phone to be worth me purchasing your device. And that was, that could kind of drive Apple to be the one buying the model. But then, like Shawn said, they're doing the MM1 themselves.[00:36:40] Alessio: So are they saying we do models, but they're not as good as the Google ones? I don't know. The whole thing is, it's really confusing, but. It makes for great meme material on, on Twitter.[00:36:51] swyx: Yeah, I mean, I think, like, they are possibly more than OpenAI and Microsoft and Amazon. They are the most full stack company there is in computing, and so, like, they own the chips, man.[00:37:05] swyx: Like, they manufacture everything so if, if, if there was a company that could do that. You know, seriously challenge the other AI players. It would be Apple. And it's, I don't think it's as hard as self driving. So like maybe they've, they've just been investing in the wrong thing this whole time. We'll see.[00:37:21] swyx: Wall Street certainly thinks[00:37:22] NLW: so. Wall Street loved that move, man. There's a big, a big sigh of relief. Well, let's, let's move away from, from sort of the big stuff. I mean, the, I think to both of your points, it's going to.[00:37:33] Meta's $800b AI rebrand[00:37:33] NLW: Can I, can[00:37:34] swyx: I, can I, can I jump in on a factoid about this, this Wall Street thing? I went and looked at when Meta went from being a VR company to an AI company.[00:37:44] swyx: And I think the stock I'm trying to look up the details now. The stock has gone up 187% since Llama 1. Yeah. Which is $830 billion in market value created in the past year. Yeah. Yeah.[00:37:57] NLW: It's, it's, it's like, remember if you guys haven't Yeah.
If you haven't seen the chart, it's actually like remarkable.[00:38:02] NLW: If you draw a little[00:38:03] swyx: arrow on it, it's like, no, we're an AI company now and forget the VR thing.[00:38:10] NLW: It's it, it is an interesting, no, it's, I, I think, Alessio, you called it sort of like Zuck's Disruptor Arc or whatever. He, he really does. He is in the midst of a, of a total, you know, I don't know if it's a redemption arc or it's just, it's something different where, you know, he, he's sort of the spoiler.[00:38:25] NLW: Like people loved him just freestyle talking about why he thought they had a better headset than Apple. But even if they didn't agree, they just loved it. He was going direct to camera and talking about it for, you know, five minutes or whatever. So that, that's a fascinating shift that I don't think anyone had on their bingo card, you know, whatever, two years ago.[00:38:41] NLW: Yeah. Yeah,[00:38:42] swyx: we still[00:38:43] Alessio: didn't see Zuck fight Elon though, so[00:38:45] swyx: that's what I'm really looking forward to. I mean, hey, don't, don't, don't write it off, you know, maybe just these things take a while to happen. But we need to see that fight in the Coliseum. No, I think you know, in terms of like self management, life leadership, I think he has, there's a lot of lessons to learn from him.[00:38:59] swyx: You know he might, you know, you might kind of quibble with, like, the social impact of Facebook, but just himself, in terms of personal growth and, and, you know, perseverance through like a lot of change and you know, everyone throwing stuff his way. I think there's a lot to say about like, to learn from, from Zuck, which is crazy 'cause he's my age.[00:39:18] swyx: Yeah. Right.
Well, so, so one of the big things that I think you guys have, you know, distinct and, and unique insight into being where you are and what you work on is, you know, what developers are getting really excited about right now. And by that, I mean, on the one hand, certainly, you know, like startups who are actually kind of formalized and formed to startups, but also, you know, just in terms of like what people are spending their nights and weekends on what they're, you know, coming to hackathons to do.[00:39:45] NLW: And, you know, I think it's a, it's a, it's, it's such a fascinating indicator for, for where things are headed. Like if you zoom back a year, right now was right when everyone was getting so, so excited about AI agent stuff, right? AutoGPT and BabyAGI. And these things were like, if you dropped anything on YouTube about those, like instantly tens of thousands of views.[00:40:07] NLW: I know because I had like a 50,000 view video, like the second day that I was doing the show on YouTube, you know, because I was talking about AutoGPT. And so anyways, you know, obviously that's sort of not totally come to fruition yet, but what are some of the trends in what you guys are seeing in terms of people's, people's interest and, and, and what people are building?[00:40:24] Alessio: I can start maybe with the agents part and then I know Shawn is doing a diffusion meetup tonight. There's a lot of, a lot of different things. The, the agent wave has been the most interesting kind of like dream to reality arc. So AutoGPT, I think they went from zero to like 125,000 GitHub stars in six weeks, and then one year later, they have 150,000 stars.[00:40:49] Alessio: So there's kind of been a big plateau. I mean, you might say there are just not that many people that can star it. You know, everybody already starred it. But the promise of, hey, I'll just give you a goal, and you do it. I think it's like, amazing to get people's imagination going.
You know, they're like, oh, wow, this, this is awesome.[00:41:08] Alessio: Everybody, everybody can try this to do anything. But then as technologists, you're like, well, that's, that's just like not possible, you know, we would have like solved everything. And I think it takes a little bit to go from the promise and the hope that people show you to then try it yourself and going back to say, okay, this is not really working for me.[00:41:28] Alessio: And David Luan from Adept, you know, in our episode, he specifically said, we don't want to do a bottom up product. You know, we don't want something that everybody can just use and try because it's really hard to get it to be reliable. So we're seeing a lot of companies doing vertical agents that are narrow for a specific domain, and they're very good at something.[00:41:49] Alessio: Mike Conover, who was at Databricks before, is also a friend of Latent Space. He's doing this new company called BrightWave doing AI agents for financial research, and that's it, you know, and they're doing very well. There are other companies doing it in security, doing it in compliance, doing it in legal.[00:42:08] Alessio: All of these things that like, people, nobody just wakes up and say, Oh, I cannot wait to go on AutoGPT and ask it to do a compliance review of my thing. You know, just not what inspires people. So I think the gap on the developer side has been the more bottoms-up hacker mentality is trying to build this like very generic agents that can do a lot of open ended tasks.[00:42:30] Alessio: And then the more business side of things is like, Hey, if I want to raise my next round, I cannot just like sit around and mess, mess around with like super generic stuff. I need to find a use case that really works.
And I think that that is true for, for a lot of folks. In parallel, you have a lot of companies doing evals.[00:42:47] Alessio: There are dozens of them that just want to help you measure how good your models are doing. Again, if you build evals, you need to also have a constrained surface area to actually figure out whether or not it's good, right? Because you cannot eval anything on everything under the sun. So that's another category where I've seen from the startup pitches that I've seen, there's a lot of interest in, in the enterprise.[00:43:11] Alessio: It's just like really fragmented because the production use cases are just coming like now, you know, there are not a lot of long established ones to, to test against. So that's kind of on the virtual agents, and then on the robotics side, it's probably been the thing that surprised me the most at NVIDIA GTC, the amount of robots that were there that were just like robots everywhere.[00:43:33] Alessio: Like, both in the keynote and then on the show floor, you would have Boston Dynamics dogs running around. There was, like, this, like fox robot that had, like, a virtual face that, like, talked to you and, like, moved in real time. There were industrial robots. NVIDIA did a big push on their own Omniverse thing, which is, like, this digital twin of whatever environments you're in that you can use to train the robot agents.[00:43:57] Alessio: So that kind of takes people back to the reinforcement learning days, but yeah, agents, people want them, you know, people want them. I gave a talk about the, the rise of the full stack employees and kind of this future, the same way full stack engineers kind of work across the stack. In the future, every employee is going to interact with every part of the organization through agents and AI enabled tooling.[00:44:17] Alessio: This is happening.
It just needs to be a lot more narrow than maybe the first approach that we took, which is just put a string in AutoGPT and pray. But yeah, there's a lot of super interesting stuff going on.[00:44:27] swyx: Yeah. Well, let's cover a lot of stuff there. I'll separate the robotics piece because I feel like that's so different from the software world.[00:44:34] swyx: But yeah, we do talk to a lot of engineers and you know, that this is our sort of bread and butter. And I do agree that vertical agents have worked out a lot better than the horizontal ones. I think, you know, the point I'll make here is just, the reason AutoGPT and BabyAGI, you know, it's in the name, like they were promising AGI.[00:44:53] swyx: But I think people are discovering that you cannot engineer your way to AGI. It has to be done at the model level and all these engineering, prompt engineering hacks on top of it weren't really going to get us there in a meaningful way without much further, you know, improvements in the models. I would say, I'll go so far as to say, even Devin, which is, I would, I think the most advanced agent that we've ever seen, still requires a lot of engineering and still probably falls apart a lot in terms of, like, practical usage.[00:45:22] swyx: Or it's just way too slow and expensive for, you know, what it's, what it's promised compared to the video. So yeah, that's, that's what, that's what happened with agents from, from last year. But I, I do, I do see, like, vertical agents being very popular and, and sometimes you, like, I think the word agent might even be overused sometimes.[00:45:38] swyx: Like, people don't really care whether or not you call it an AI agent, right? Like, does it replace boring menial tasks that I do, that I might hire a human to do, or that the human who is hired to do it, like, actually doesn't really want to do.
And I think there's absolutely ways in sort of a vertical context that you can actually go after very routine tasks that can be scaled out to a lot of, you know, AI assistants.[00:46:01] swyx: So, so yeah, I mean, and I would, I would sort of basically plus one what Alessio said there. I think it's, it's very, very promising and I think more people should work on it, not less. Like there's not enough people. Like, we, like, this should be the, the, the main thrust of the AI engineer is to look out, look for use cases and, and go to production with them instead of just always working on some AGI promising thing that never arrives.[00:46:21] swyx: I,[00:46:22] NLW: I, I can only add that so I've been fiercely making tutorials behind the scenes around basically everything you can imagine with AI. We've probably done, we've done about 300 tutorials over the last couple of months. And the verticalized anything, right, like this is a solution for your particular job or role, even if it's way less interesting or kind of sexy, it's like so radically more useful to people in terms of intersecting with how, like those are the ways that people are actually[00:46:50] NLW: adopting AI. In a lot of cases, it's just a, a, a thing that I do over and over again. By the way, I think that's the same way that even the generalized models are getting adopted. You know, it's like, I use Midjourney for lots of stuff, but the main thing I use it for is YouTube thumbnails every day. Like day in, day out, I will always do a YouTube thumbnail, you know, or two with, with Midjourney, right?[00:47:09] NLW: And it's like you can, you can start to extrapolate that across a lot of things and all of a sudden, you know, AI looks revolutionary because of a million small changes rather than one sort of big dramatic change.
And I think that the verticalization of agents is sort of a great example of how that's[00:47:26] swyx: going to play out too.[00:47:28] Adept episode - Screen Multimodality[00:47:28] swyx: So I'll have one caveat here, which is I think that, because multimodal models are now commonplace, like Claude, Gemini, OpenAI, all very, very easily multimodal, Apple's easily multimodal, all this stuff, there is a switch for agents for sort of general desktop browsing.[00:48:04] swyx: A version of the, the agent where they're not specifically taking in text or anything. They're just watching your screen, just like someone else would, and, and piloting it by vision. And you know, in the, the episode with David that will have dropped by the time that this, this airs, I think, I think that is the promise of Adept and that is a promise of what a lot of these sort of desktop agents are, and that is the more general purpose system that could be as big as the browser, the operating system, like, people really want to build that foundational piece of software in AI.[00:48:38] swyx: And I would see, like, the potential there for desktop agents being that, that you can have sort of self driving computers. You know, don't write the horizontal piece off. I just think it'll take a while to get there.[00:48:48] NLW: What else are you guys seeing that's interesting to you? I'm looking at your notes and I see a ton of categories.
What we are talking about here is what's next, what people are researching, and what could be on the horizon that takes the place of those other two things. So first of all, we'll talk about transformer architectures and then diffusion.[00:49:25] swyx: So transformers, the, the two leading candidates are effectively RWKV and the state space models, the most recent one of which is Mamba, but there's others like the StripedHyena and the S4 and H3 stuff coming out of Hazy Research at Stanford. And all of those are non-quadratic language models that promise to scale a lot better than the, the traditional transformer.[00:49:47] swyx: This might be too theoretical for most people right now, but it's, it's gonna come out in weird ways, where, imagine if like, right now the talk of the town is that Claude and Gemini have a million tokens of context and like, whoa, you can put in like, you know, two hours of video now, okay? But like what if you put what if we could like throw in, you know, two hundred thousand hours of video?[00:50:09] swyx: Like how does that change your usage of AI? What if you could throw in the entire genetic sequence of a human and like synthesize new drugs. Like, well, how does that change things? Like, we don't know because we haven't had access to this capability being so cheap before. And that's the ultimate promise of these two models.[00:50:28] swyx: They're not there yet but we're seeing very, very good progress. RWKV and Mamba are probably the, like, the two leading examples, both of which are open source that you can try them today and and have a lot of progress there. And the, the, the main thing I'll highlight for RWKV is that at, at the 7B level, they seem to have beat Llama 2 in all benchmarks that matter at the same size for the same amount of training as an open source model.[00:50:51] swyx: So that's exciting. You know, they're, they're 7B now. They're not at 70B.
We don't know if it'll get there. And then the other thing is diffusion. Diffusion and transformers are kind of on a collision course. The original Stable Diffusion already used transformers in parts of its architecture. [00:51:06] swyx: It seems that transformers are eating more and more of those layers, particularly the sort of VAE layer. So the Diffusion Transformer is what Sora is built on. The guy who wrote the Diffusion Transformer paper, Bill Peebles, is the lead tech guy on Sora. So you'll just see a lot more Diffusion Transformer stuff going on. [00:51:25] swyx: But there's more experimentation with diffusion. I'm holding a meetup here in San Francisco that's going to be like the state of diffusion, which I'm pretty excited about. Stability is doing a lot of good work. And look at the architecture of how they're creating Stable Diffusion 3, Hourglass Diffusion, the consistency models, or SDXL Turbo. [00:51:45] swyx: All of these are very, very interesting innovations on the original idea of what Stable Diffusion was. So if you think it is expensive or slow to create Stable Diffusion-style AI-generated art, you are not up to date with the latest models. If you think it is hard to create text in images, you are not up to date with the latest models. [00:52:02] swyx: And people are still kind of far behind. The last piece is the wildcard I always hold out, which is text diffusion. So instead of using autoregressive transformers, can you use diffusion for text? You can use diffusion models to create entire chunks of text all at once, instead of token by token. [00:52:22] swyx: And that is something that Midjourney confirmed today; it was only rumored the past few months, but they confirmed they are looking into it.
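The text-diffusion idea swyx mentions, refining a whole chunk of text in parallel rather than emitting it token by token, can be cartooned in a few lines. This is a toy, closer to MaskGIT-style iterative unmasking than true continuous diffusion, and the "model" here is just a lookup into a fixed target sentence; a real system would predict every position with a learned network at each step:

```python
import random

TARGET = "the cat sat on the mat".split()
MASK = "_"

def denoise_step(seq: list[str], rng: random.Random) -> list[str]:
    """Reveal roughly half of the still-masked positions, all at once."""
    masked = [i for i, tok in enumerate(seq) if tok == MASK]
    reveal = set(rng.sample(masked, k=max(1, len(masked) // 2)))
    return [TARGET[i] if i in reveal else tok for i, tok in enumerate(seq)]

def generate(steps: int = 10, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    seq = [MASK] * len(TARGET)          # start from a fully "noised" sequence
    for _ in range(steps):
        if MASK not in seq:
            break
        seq = denoise_step(seq, rng)    # the whole chunk updates per step
    return seq

print(" ".join(generate()))
```

The contrast with autoregressive decoding is the loop structure: the number of model calls scales with the number of refinement steps (a handful), not with the number of tokens.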
So all those things are very exciting new model architectures that you'll maybe see in production two to three years from now. [00:52:37] swyx: So [00:52:38] NLW: A couple of the trends that I want to get your takes on, because they seem like they're coming up, are, one, these wearable, kind of passive AI experiences, where they're absorbing a lot of what's going on around you and then kind of bringing things back. [00:52:53] NLW: And then the other one that I wanted to see if you guys had thoughts on is this next generation of chip companies. Obviously there's a huge amount of emphasis on hardware and silicon and different ways of doing things, but, y

Par Jupiter !
Bye Bye, Boss...

Par Jupiter !

Play Episode Listen Later Mar 29, 2024 3:32


Duration: 00:03:32 - Thomas Croisière's karaoke - "Aujourd'hui c'est vendredi et j'voudrais bien qu'on m'aime," sang Bashung. A lyric by Boris Bergman that Yann Chouquet, the director of programming at France Inter, wanted to karaoke for his departure. An occasion to also sing Dalida, Schmoll, Demis and… Michel...

The Nonlinear Library
LW - DeepMind: Evaluating Frontier Models for Dangerous Capabilities by Zach Stein-Perlman

The Nonlinear Library

Play Episode Listen Later Mar 21, 2024 1:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind: Evaluating Frontier Models for Dangerous Capabilities, published by Zach Stein-Perlman on March 21, 2024 on LessWrong. To understand the risks posed by a new AI system, we must understand what it can and cannot do. Building on prior work, we introduce a programme of new "dangerous capability" evaluations and pilot them on Gemini 1.0 models. Our evaluations cover four areas: (1) persuasion and deception; (2) cyber-security; (3) self-proliferation; and (4) self-reasoning. [Evals for CBRN capabilities are under development.] We do not find evidence of strong dangerous capabilities in the models we evaluated, but we flag early warning signs. Our goal is to help advance a rigorous science of dangerous capability evaluation, in preparation for future models. At last, DeepMind talks about its dangerous capability evals. With details! Yay! (My weak guess is that they only finished these evals after Gemini 1.0 deployment: these evals were mentioned in an updated version of the Gemini 1.0 report but not the initial version. DeepMind hasn't yet made RSP-like commitments - that is, specific commitments about risk assessment (for extreme risks), safety and security practices as a function of risk assessment results, and training and deployment decisions as a function of risk assessment results. Demis recently suggested on Dwarkesh that DeepMind might make RSP-like commitments this year.) Random interesting note: DeepMind hired 8 superforecasters to make relevant predictions, most notably about when some eval-thresholds will trigger. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

TED Talks Technology
DeepMind's Demis Hassabis on the future of AI | The TED Interview

TED Talks Technology

Play Episode Listen Later Mar 15, 2024 49:24


Demis Hassabis is one of tech's most brilliant minds. A chess-playing child prodigy turned researcher and founder of headline-making AI company DeepMind, Demis is thinking through some of the most revolutionary -- and in some cases controversial -- uses of artificial intelligence. From the development of the computer program AlphaGo, which beat world champions in the board game Go, to making leaps in the research of how proteins fold, Demis is at the helm of the next generation of groundbreaking technology. In this episode of The TED Interview, which will be back for a new season next week, Demis gives a peek into some of the questions that his top-level projects are asking, talks about how gaming, creativity, and intelligence inform his approach to tech, and muses on where AI is headed next. If you like this, listen to The TED Interview wherever you get your podcasts.

Decoder with Nilay Patel
Inside Google's big AI shuffle — and how it plans to stay competitive, with Google DeepMind CEO Demis Hassabis

Decoder with Nilay Patel

Play Episode Listen Later Jul 10, 2023 62:14


Today, I'm talking to Demis Hassabis, the CEO of Google DeepMind, the newly created division of Google responsible for AI efforts across the company. Google DeepMind is the result of an internal merger: Google acquired Demis' DeepMind startup in 2014 and ran it as a separate company inside its parent company, Alphabet, while Google itself had an AI team called Google Brain.  Google has been showing off AI demos for years now, but with the explosion of ChatGPT and a renewed threat from Microsoft in search, Google and Alphabet CEO Sundar Pichai made the decision to bring DeepMind into Google itself earlier this year to create… Google DeepMind. What's interesting is that Google Brain and DeepMind were not necessarily compatible or even focused on the same things: DeepMind was famous for applying AI to things like games and protein-folding simulations. The AI that beat world champions at Go, the ancient board game? That was DeepMind's AlphaGo. Meanwhile, Google Brain was more focused on what's come to be the familiar generative AI toolset: large language models for chatbots, and editing features in Google Photos. This was a culture clash and a big structure decision with the goal of being more competitive and faster to market with AI products. And the competition isn't just OpenAI and Microsoft — you might have seen a memo from a Google engineer floating around the web recently claiming that Google has no competitive moat in AI because open-source models running on commodity hardware are rapidly evolving and catching up to the tools run by the giants. Demis confirmed that the memo was real but said it was part of Google's debate culture, and he disagreed with it because he has other ideas about where Google's competitive edge might come into play. We also talked about AI risk and artificial general intelligence. Demis is not shy that his goal is building an AGI, and we talked through what risks and regulations should be in place and on what timeline. 
Demis recently signed onto a 22-word statement about AI risk with OpenAI's Sam Altman and others that simply reads, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” That's pretty chilling, but is that the real risk right now? Or is it just a distraction from other more tangible problems like AI replacing labor in various creative industries? We also talked about the new kinds of labor AI is creating — armies of low-paid taskers classifying data in countries like Kenya and India in order to train AI systems. I wanted to know if Demis thought these jobs were here to stay or just a temporary side effect of the AI boom. This one really hits all the Decoder high points: there's the big idea of AI, a lot of problems that come with it, an infinite array of complicated decisions to be made, and of course, a gigantic org chart decision in the middle of it all. Demis and I got pretty in the weeds, and I still don't think we covered it all, so we'll have to have him back soon. Links: Inside the AI Factory Inside Google's AI culture clash - The Verge A leaked Google memo raises the alarm about open-source A.I. | Fortune The End of Search As You Know It Google's Sundar Pichai talks Search, AI, and dancing with Microsoft - The Verge DeepMind reportedly lost a yearslong bid to win more independence from Google - The Verge Transcript: https://www.theverge.com/e/23542786 Credits: Decoder is a production of The Verge, and part of the Vox Media Podcast Network. Today's episode was produced by Jackie McDermott and Raghu Manavalan, and it was edited by Callie Wright. The Decoder music is by Breakmaster Cylinder. Our Editorial Director is Brooke Minters, and our Executive Producer is Eleanor Donovan.  Learn more about your ad choices. Visit podcastchoices.com/adchoices