Podcasts about epfl

  • 327 PODCASTS
  • 1,604 EPISODES
  • 53m AVG DURATION
  • 1 WEEKLY EPISODE
  • LATEST: May 6, 2025

Best podcasts about epfl

Show all podcasts related to epfl

Latest podcast episodes about epfl

The Next Byte
219. Edible Robots Bring Tech to the Dessert Table

The Next Byte

Play Episode Listen Later May 6, 2025 14:41


(3:00) - Robotics meets the culinary arts. This episode was brought to you by Mouser, our favorite place to get electronics parts for any project, whether it be a hobby at home or a prototype for work. Click HERE to learn more about the rise of soft robotics in applications like 3D printing, rescue missions, and more! Become a founding reader of our newsletter: http://read.thenextbyte.com/ As always, you can find these and other interesting & impactful engineering articles on Wevolver.com.

Swiss Impact with Banerjis
Season 10, Episode 07 - Turning Sunlight into Fuel: The Future of Hydrogen Energy Is Here

Swiss Impact with Banerjis

Play Episode Listen Later May 6, 2025 46:52


Turning Sunlight into Fuel: The Future of Hydrogen Energy Is Here | Season 10, Episode 07 #SolarEnergy #Hydrogen #CleanEnergy #Innovation #SoHHytec Join us as we dive into the future of energy with Dr. Saurabh Tembhurne, the visionary behind SoHHytec's revolutionary solar hydrogen technology! Discover how his "Artificial Tree" mimics nature to produce clean fuels and tackle climate change. From academic roots at IIT Bombay and EPFL to pioneering solutions for Earth and space, learn how SoHHytec aims to end the fossil fuel era. Don't miss this insightful conversation on green hydrogen, sustainability, and the future of energy. Learn More: https://www.sohhytec.com/
0:00 – Introduction
3:10 – Meet Dr. Saurabh Tembhurne – Founder of SoHHytec
5:25 – From Academia to Innovation: The Journey
8:01 – What is a Hydrogen Tree?
10:04 – How the Technology Works – Solar to Hydrogen
13:39 – Energy Storage, Recycling & Sustainability
17:05 – Efficiency, Market Use & Environmental Impact
21:05 – Real-World Applications: Homes, Cars, Industry
25:26 – Installation, Maintenance & Cost Factors
33:05 – Economics: Hydrogen vs Fossil Fuels
39:01 – Scaling, Funding & Global Deployment
44:25 – Final Thoughts & Call to Action

Subject to
Subject to: Dominique de Werra

Subject to

Play Episode Listen Later Apr 3, 2025 85:07


Dominique de Werra is an emeritus professor of Operations Research at EPFL (École Polytechnique Fédérale de Lausanne) in Switzerland. His research fields include Combinatorial Optimization, Graph Theory, Scheduling and Timetabling. After spending a few years as an assistant professor in Management Sciences at the University of Waterloo (Canada), he joined the Math Department of EPFL. He conducted a collection of Operational Research projects (applied as well as theoretical) with a number of industrial partners. He is an associate editor of Discrete Applied Mathematics, Discrete Mathematics, and Annals of Operations Research, and a member of a dozen editorial boards of international journals. From 1990 to 2000 Dominique de Werra was Vice-President of EPFL; he was in charge of international relations and represented his institution in many academic networks in Europe (like the CLUSTER network of excellence, which he chaired). He was also in charge of all education programs of EPFL. He was President of IFORS (the International Federation of Operational Research Societies) from 2010 to 2012. In 1987-1988 he was President of EURO, the European Association of Operational Research Societies. In 1985-1986 he was President of ASRO, the Swiss Operations Research Society. In 1995 he was the laureate of the EURO Gold Medal. He has obtained Honorary Degrees from the University of Paris, the Technical University of Poznan (Poland) and the University of Fribourg (Switzerland). In 2012 he was awarded the EURO Distinguished Service Medal. He has published over 200 papers in international scientific journals. He also wrote and edited several books. He was a member of many committees in various countries of Europe and America (evaluation of institutions, accreditation, strategic orientation, etc.).

CQFD - La 1ere
Animals, the end of sleeping sickness, and the Ig Nobel Award Tour Show

CQFD - La 1ere

Play Episode Listen Later Mar 31, 2025 55:30


The Histoire et Cité Festival on the theme of animals. The news briefs of the day. How did Guinea rid itself of sleeping sickness? The 2025 Ig Nobel Award Tour Show at EPFL.

Swisspreneur Show
EP #486 - Sami Arpa & Yunus Erduran: How AI Will Change the Movie Industry

Swisspreneur Show

Play Episode Listen Later Mar 30, 2025 68:31


Timestamps:
2:29 - How Sami became interested in cinema
11:15 - How to define visions and impact
22:55 - What can Largo.AI actually do?
27:55 - Does AI kill creativity?
53:23 - Largo.AI's plans for the near future
This episode was co-produced by SICTIC, the leading angel investor network in Switzerland. Click here to check out our free Founders Agreement masterclass, with Melanie Gabriel from Yokoy, Christof Roduner from Scandit, and Viviana Gropengiesser from Talent Kick.
About Sami Arpa & Yunus Erduran:
Sami Arpa is the CEO and founder of Largo.AI, a startup providing next-generation storytelling tools for the audiovisual industry using artificial intelligence. He holds a PhD in Computational Aesthetics from EPFL and was also the co-founder of Sofy.tv, a short film streaming service later acquired by Cosmoblue.
Yunus Erduran is a founder and documentary filmmaker, and one of the first investors in Largo.AI. He holds a BA in Economics from Boğaziçi University (Turkey) and is currently active as Executive Partner at Eğlenceli Bilim, a company offering curriculum-compliant science programs to all educational institutions in Turkey. He is also the producer and host of the podcast arastirmaca, together with his co-host Müge Bakioğlu.
Largo has developed data-assisted intelligence which can be introduced early in the life of a film project, from script choice and development through filming, post-production and theatrical release. Largo empowers production companies with data-enhanced ROI uplift and reduced risk. Their AI models were trained on 400K+ movies/TV shows, 250K+ commercials, and 950K+ talents. Largo's clients are usually producers and studios (on the film side) and agencies and brands (on the advertising side).

Six heures - Neuf heures, le samedi - La 1ere
What are we doing this weekend? – BiblioWeekend 2025: Indie Video Games Day at EPFL

Six heures - Neuf heures, le samedi - La 1ere

Play Episode Listen Later Mar 29, 2025 11:05


Indie Video Games Day returns to EPFL as part of BiblioWeekend! For this second edition, the EPFL Library is teaming up with the GameLab UNIL-EPFL to offer a full day dedicated to independent video games, a free event aimed at video game enthusiasts and families alike. On site, Laurent Dormond is joined by Marion Favre, who is responsible for the video game collection at the EPFL Library.

Robot Talk
Episode 115: Robot dogs working in industry - Benjamin Mottis

Robot Talk

Play Episode Listen Later Mar 28, 2025 23:41


Claire chatted to Benjamin Mottis from ANYbotics about deploying their four-legged ANYmal robot in a variety of industries. Benjamin Mottis is a Robotics Engineer in charge of ANYmal Research at ANYbotics. After graduating in robotics from EPFL, he joined ANYbotics as a Field Engineer in 2023. He specializes in deploying ANYmal and training customers across all ANYbotics verticals (Oil & Gas, Nuclear, Metals, Chemicals, etc.). Since 2024, as the Global Research Community Manager, he has been working on expanding the ANYmal Research Community and helping world-leading researchers push the boundaries of robotics with ANYmal. Join the Robot Talk community on Patreon: https://www.patreon.com/ClaireAsher  

Ground Truths
The Holy Grail of Biology

Ground Truths

Play Episode Listen Later Mar 18, 2025 43:43


“Eventually, my dream would be to simulate a virtual cell.” —Demis Hassabis

The aspiration to build the virtual cell is considered the equivalent of a moonshot for digital biology. Recently, 42 leading life scientists published a paper in Cell on why this is so vital, and how it may ultimately be accomplished. This conversation is with two of the authors: Charlotte Bunne, now at EPFL, and Steve Quake, a Professor at Stanford University who heads up science at the Chan Zuckerberg Initiative. The audio (above) is available on iTunes and Spotify. The full video is linked here, at the top, and also can be found on YouTube.

TRANSCRIPT WITH LINKS TO AUDIO

Eric Topol (00:06):Hello, it's Eric Topol with Ground Truths and we've got a really hot topic today, the virtual cell. And what I think is an extraordinarily important futuristic paper that recently appeared in the journal Cell and the first author, Charlotte Bunne from EPFL, previously at Stanford's Computer Science. And Steve Quake, a young friend of mine for many years who heads up the Chan Zuckerberg Initiative (CZI) as well as a professor at Stanford. So welcome, Charlotte and Steve.Steve Quake (00:42):Thanks, Eric. It's great to be here.Charlotte Bunne:Thanks for having me.Eric Topol (00:45):Yeah. So you wrote this article that Charlotte, the first author, and Steve, one of the senior authors, appeared in Cell in December and it just grabbed me, “How to build the virtual cell with artificial intelligence: Priorities and opportunities.” It's the holy grail of biology. We're in this era of digital biology and as you point out in the paper, it's a convergence of what's happening in AI, which is just moving at a velocity that's just so extraordinary, and what's happening in biology. So maybe we can start off by, you had some 42 authors that I assume they congregated for a conference or something or how did you get 42 people to agree to the words in this paper?Steve Quake (01:33):We did. We had a meeting at CZI to bring community members together from many different parts of the community, from computer science to bioinformatics, AI experts, biologists who don't trust any of this. We wanted to have some real contrarians in the mix as well and have them have a conversation together about is there an opportunity here? What's the shape of it? What's realistic to expect? And that was sort of the genesis of the article.Eric Topol (02:02):And Charlotte, how did you get to be drafting the paper?Charlotte Bunne (02:09):So I did my postdoc with Aviv Regev at Genentech and Jure Leskovec at CZI and Jure was part of the residency program of CZI. And so, this is how we got involved and you had also prior work with Steve on the universal cell embedding. So this is how everything got started.Eric Topol (02:29):And it's actually amazing because it's a who's who of people who work in life science, AI and digital biology and omics. I mean it's pretty darn impressive. So I thought I'd start off with a quote in the article because it kind of tells a story of where this could go. So the quote was in the paper, “AIVC (artificial intelligence virtual cell) has the potential to revolutionize the scientific process, leading to future breakthroughs in biomedical research, personalized medicine, drug discovery, cell engineering, and programmable biology.” That's a pretty big statement.
So maybe we can just kind of toss that around a bit and maybe give it a little more thoughts and color as to what you were positing there.Steve Quake (03:19):Yeah, Charlotte, you want me to take the first shot at that? Okay. So Eric, it is a bold claim and we have a really bold ambition here. We view that over the course of a decade, AI is going to provide the ability to make a transformative computational tool for biology. Right now, cell biology is 90% experimental and 10% computational, roughly speaking. And you've got to do just all kinds of tedious, expensive, challenging lab work to get to the answer. And I don't think AI is going to replace that, but it can invert the ratio. So within 10 years I think we can get to biology being 90% computational and 10% experimental. And the goal of the virtual cell is to build a tool that'll do that.Eric Topol (04:09):And I think a lot of people may not understand why it is considered the holy grail because it is the fundamental unit of life and it's incredibly complex. It's not just all the things happening in the cell with atoms and molecules and organelles and everything inside, but then there's also the interactions the cell to other cells in the outside tissue and world. So I mean it's really quite extraordinary challenge that you've taken on here. And I guess there's some debate, do we have the right foundation? We're going to get into foundation models in a second. A good friend of mine and part of this whole I think process that you got together, Eran Segal from Israel, he said, “We're at this tipping point…All the stars are aligned, and we have all the different components: the data, the compute, the modeling.” And in the paper you describe how we have over the last couple of decades have so many different data sets that are rich that are global initiatives. But then there's also questions. Do we really have the data? I think Bo Wang especially asked about that. Maybe Charlotte, what are your thoughts about data deficiency? There's a lot of data, but do you really have what we need before we bring them all together for this kind of single model that will get us some to the virtual cell?Charlotte Bunne (05:41):So I think, I mean one core idea of building this AIVC is that we basically can leverage all experimental data that is overall collected. So this also goes back to the point Steve just made. So meaning that we basically can integrate across many different studies data because we have AI algorithms or the architectures that power such an AIVC are able to integrate basically data sets on many different scales. So we are going a bit away from this dogma. I'm designing one algorithm from one dataset to this idea of I have an architecture that can take in multiple dataset on multiple scales. So this will help us a bit in being somewhat efficient with the type of experiments that we need to make and the type of experiments we need to conduct. And again, what Steve just said, ultimately, we can very much steer which data sets we need to collect.Charlotte Bunne (06:34):Currently, of course we don't have all the data that is sufficient. I mean in particular, I think most of the tissues we have, they are healthy tissues. We don't have all the disease phenotypes that we would like to measure, having patient data is always a very tricky case. We have mostly non-interventional data, meaning we have very limited understanding of somehow the effect of different perturbations. 
Perturbations that happen on many different scales in many different environments. So we need to collect a lot here. I think the overall journey that we are going with is that we take the data that we have, we make clever decisions on the data that we will collect in the future, and we have this also self-improving entity that is aware of what it doesn't know. So we need to be able to understand how well can I predict something on this somewhat regime. If I cannot, then we should focus our data collection effort into this. So I think that's not a present state, but this will basically also guide the future collection.Eric Topol (07:41):Speaking of data, one of the things I think that's fascinating is we saw how AlphaFold2 really revolutionized predicting proteins. But remember that was based on this extraordinary resource that had been built, the Protein Data Bank that enabled that. And for the virtual cell there's no such thing as a protein data bank. It's so much more as you emphasize Charlotte, it's so much dynamic and these perturbations that are just all across the board as you emphasize. Now the human cell atlas, which currently some tens of millions, but going into a billion cells, we learned that it used to be 200 cell types. Now I guess it's well over 5,000 and that we have 37 trillion cells approximately in the average person adult's body is a formidable map that's being made now. And I guess the idea that you're advancing is that we used to, and this goes back to a statement you made earlier, Steve, everything we did in science was hypothesis driven. But if we could get computational model of the virtual cell, then we can have AI exploration of the whole field. Is that really the nuts of this?Steve Quake (09:06):Yes. A couple thoughts on that, maybe Theo Karaletsos, our lead AI person at CZI says machine learning is the formalism through which we understand high dimensional data and I think that's a very deep statement. And biological systems are intrinsically very high dimensional. You've got 20,000 genes in the human genome in these cell atlases. You're measuring all of them at the same time in each single cell. And there's a lot of structure in the relationships of their gene expression there that is just not evident to the human eye. And for example, CELL by GENE, our database that collects all the aggregates, all of the single cell transcriptomic data is now over a hundred million cells. And as you mentioned, we're seeing ways to increase that by an order of magnitude in the near future. The project that Jure Leskovec and I worked on together that Charlotte referenced earlier was like a first attempt to build a foundational model on that data to discover some of the correlations and structure that was there.Steve Quake (10:14):And so, with a subset, I think it was the 20 or 30 million cells, we built a large language model and began asking it, what do you understand about the structure of this data? And it kind of discovered lineage relationships without us teaching it. We trained on a matrix of numbers, no biological information there, and it learned a lot about the relationships between cell type and lineage. And that emerged from that high dimensional structure, which was super pleasing to us and really, I mean for me personally gave me the confidence to say this stuff is going to work out. There is a future for the virtual cell. It's not some made up thing. 
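A brief aside on Steve's point above, that lineage-like structure simply falls out of a plain matrix of expression numbers: the idea is easy to demonstrate in miniature. The sketch below is not the CZI model (that was a large transformer trained on tens of millions of real cells); it swaps in the simplest possible stand-ins, PCA plus k-means over a synthetic cell-by-gene matrix, to show unsupervised discovery of hidden cell types with no biological labels given to the model. All data, sizes, and names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "cell x gene" matrix: 300 cells, 50 genes, three hidden cell types.
# Each type over-expresses its own block of genes; no labels are given to the model.
n_per_type, n_genes = 100, 50
blocks = [slice(0, 15), slice(15, 30), slice(30, 50)]
X = rng.poisson(1.0, size=(3 * n_per_type, n_genes)).astype(float)
for t, blk in enumerate(blocks):
    X[t * n_per_type:(t + 1) * n_per_type, blk] += rng.poisson(
        5.0, size=(n_per_type, blk.stop - blk.start))

# Embed with PCA (SVD of the centered matrix), a stand-in for the learned
# embedding a real foundation model would produce.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
emb = Xc @ Vt[:2].T  # 2-D embedding per cell

def kmeans(points, k, iters=50):
    """Minimal k-means; keeps a center in place if its cluster empties out."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([points[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(emb, k=3)
true_type = np.repeat([0, 1, 2], n_per_type)
for j in range(3):
    print(f"cluster {j}: true-type counts {np.bincount(true_type[labels == j], minlength=3)}")
```

Each printed cluster concentrates almost entirely on one hidden type: group structure recovered from the numbers alone, a small-scale version of the lineage relationships Steve describes emerging from the 20 or 30 million cell training run.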
There is real substance there and this is worth investing an enormous amount of CZIs resources in going forward and trying to rally the community around as a project.Eric Topol (11:04):Well yeah, the premise here is that there is a language of life, and you just made a good case that there is if you can predict, if you can query, if you can generate like that. It is reminiscent of the famous Go game of Lee Sedol, that world champion and how the machine came up with a move (Move 37) many, many years ago that no human would've anticipated and I think that's what you're getting at. And the ability for inference and reason now to add to this. So Charlotte, one of the things of course is about, well there's two terms in here that are unfamiliar to many of the listeners or viewers of this podcast, universal representations (UR) and virtual instrument (VIs) that you make a pretty significant part of how you are going about this virtual cell model. So could you describe that and also the embeddings as part of the universal representation (UR) because I think embeddings, or these meaningful relationships are key to what Steve was just talking about.Charlotte Bunne (12:25):Yes. So in order to somewhat leverage very different modalities in order to leverage basically modalities that will take measurements across different scales, like the idea is that we have large, may it be transformer models that might be very different. If I have imaging data, I have a vision transformer, if I have a text data, I have large language models that are designed of course for DNA then they have a very wide context and so on and so forth. But the idea is somewhat that we have models that are connected through the scales of biology because those scales we know. We know which components are somewhat involved or in measurements that are happening upstream. So we have the somewhat interconnection or very large model that will be trained on many different data and we have this internal model representation that somewhat capture everything they've seen. And so, this is what we call those universal representation (UR) that will exist across the scales of biology.Charlotte Bunne (13:22):And what is great about AI, and so I think this is a bit like a history of AI in short is the ability to predict the last years, the ability to generate, we can generate new hypothesis, we can generate modalities that we are missing. We can potentially generate certain cellular state, molecular state have a certain property, but I think what's really coming is this ability to reason. So we see this in those very large language models, the ability to reason about a hypothesis, how we can test it. So this is what those instruments ultimately need to do. So we need to be able to simulate the change of a perturbation on a cellular phenotype. So on the internal representation, the universal representation of a cell state, we need to simulate the fact the mutation has downstream and how this would propagate in our representations upstream. And we need to build many different type of virtual instruments that allow us to basically design and build all those capabilities that ultimately the AI virtual cell needs to possess that will then allow us to reason, to generate hypothesis, to basically predict the next experiment to conduct to predict the outcome of a perturbation experiment to in silico design, cellular states, molecular states, things like that. 
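To make the UR/VI vocabulary concrete: per-modality encoders map raw measurements into one shared universal representation, and a virtual instrument is any function that operates on that representation, for example simulating a perturbation and decoding the result back into a measurable readout. Below is a shape-level sketch only, with untrained random linear maps standing in for the large networks the paper envisions; every dimension, modality name, and operator here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64  # dimensionality of the shared universal representation (UR)

# One encoder per modality, each mapping its own input size into the UR space.
encoders = {
    "transcriptome": rng.normal(size=(20_000, D)) / 20_000 ** 0.5,
    "protein_image": rng.normal(size=(4_096, D)) / 4_096 ** 0.5,
}

def encode(modality: str, x: np.ndarray) -> np.ndarray:
    """Raw measurement -> universal representation."""
    return x @ encoders[modality]

# A virtual instrument (VI) acts on URs, not raw data: here a toy perturbation
# operator (a shift in UR space) plus a decoder back to expression.
perturb_shift = rng.normal(size=D) * 0.1
decoder = rng.normal(size=(D, 20_000)) / D ** 0.5

def vi_perturb(ur: np.ndarray) -> np.ndarray:
    return ur + perturb_shift

def vi_decode_expression(ur: np.ndarray) -> np.ndarray:
    return ur @ decoder

# Usage: embed a (fake) cell, simulate the perturbation, read out the
# predicted downstream expression profile.
cell = rng.poisson(1.0, size=20_000).astype(float)
ur = encode("transcriptome", cell)
predicted = vi_decode_expression(vi_perturb(ur))
print(ur.shape, predicted.shape)  # (64,) (20000,)
```

The design choice this illustrates is exactly the separation Charlotte describes next: representations are shared and stable, while instruments are swappable tools built on top of them.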
And this is why we make the separation between internal representation as well as those instruments that operate on those representations.Eric Topol (14:47):Yeah, that's what I really liked is that you basically described the architecture, how you're going to do this. By putting these URs into the VIs, having a decoder and a manipulator and you basically got the idea if you can bring all these different integrations about which of course is pending. Now there are obviously many naysayers here that this is impossible. One of them is this guy, Philip Ball. I don't know if you read the language, How Life Works. Now he's a science journalist and he's a prolific writer. He says, “Comparing life to a machine, a robot, a computer, sells it short. Life is a cascade of processes, each with a distinct integrity and autonomy, the logic of which has no parallel outside the living world.” Is he right? There's no way to model this. It's silly, it's too complex.Steve Quake (15:50):We don't know, alright. And it's great that there's naysayers. If everyone agreed this was doable, would it be worth doing? I mean the whole point is to take risks and get out and do something really challenging in the frontier where you don't know the answer. If we knew that it was doable, I wouldn't be interested in doing it. So I personally am happy that there's not a consensus.Eric Topol (16:16):Well, I mean to capture people's imagination here, if you're successful and you marshal a global effort, I don't know who's going to pay for it because it's a lot of work coming here going forward. But if you can do it, the question here is right today we talk about, oh let's make an organoid so we can figure out how to treat this person's cancer or understand this person's rare disease or whatever. And instead of having to wait weeks for this culture and all the expense and whatnot, you could just do it in a computer and in silico and you have this virtual twin of a person's cells and their tissue and whatnot. So the opportunity here is, I don't know if people get, this is just extraordinary and quick and cheap if you can get there. And it's such a bold initiative idea, who will pay for this do you think?Steve Quake (17:08):Well, CZI is putting an enormous amount of resources into it and it's a major project for us. We have been laying the groundwork for it. We recently put together what I think is if not the largest, one of the largest GPU supercomputer clusters for nonprofit basic science research that came online at the end of last year. And in fact in December we put out an RFA for the scientific community to propose using it to build models. And so we're sharing that resource within the scientific community as I think you appreciate, one of the real challenges in the field has been access to compute resources and industry has it academia at a much lower level. We are able to be somewhere in between, not quite at the level of a private company but the tech company but at a level beyond what most universities are being able to do and we're trying to use that to drive the field forward. We're also planning on launching RFAs we this year to help drive this project forward and funding people globally on that. And we are building a substantial internal effort within CZI to help drive this project forward.Eric Topol (18:17):I think it has the looks of the human genome project, which at time as you know when it was originally launched that people thought, oh, this is impossible. And then look what happened. It got done. 
And now the sequence of genome is just a commodity, very relatively, very inexpensive compared to what it used to be.Steve Quake (18:36):I think a lot about those parallels. And I will say one thing, Philip Ball, I will concede him the point, the cells are very complicated. The genome project, I mean the sort of genius there was to turn it from a biology problem to a chemistry problem, there is a test tube with a chemical and it work out the structure of that chemical. And if you can do that, the problem is solved. I think what it means to have the virtual cell is much more complex and ambiguous in terms of defining what it's going to do and when you're done. And so, we have our work cut out for us there to try to do that. And that's why a little bit, I established our North Star and CZI for the next decade as understanding the mysteries of the cell and that word mystery is very important to me. I think the molecules, as you pointed out earlier are understood, genome sequenced, protein structure solved or predicted, we know a lot about the molecules. Those are if not solved problems, pretty close to being solved. And the real mystery is how do they work together to create life in the cell? And that's what we're trying to answer with this virtual cell project.Eric Topol (19:43):Yeah, I think another thing that of course is happening concurrently to add the likelihood that you'll be successful is we've never seen the foundation models coming out in life science as they have in recent weeks and months. Never. I mean, I have a paper in Science tomorrow coming out summarizing the progress about not just RNA, DNA, ligands. I mean the whole idea, AlphaFold3, but now Boltz and so many others. It's just amazing how fast the torrent of new foundation models. So Charlotte, what do you think accounts for this? This is unprecedented in life science to see foundation models coming out at this clip on evolution on, I mean you name it, design of every different molecule of life or of course in cells included in that. What do you think is going on here?Charlotte Bunne (20:47):So on the one hand, of course we benefit profits and inherit from all the tremendous efforts that have been made in the last decades on assembling those data sets that are very, very standardized. CELLxGENE is very somehow AI friendly, as you can say, it is somewhat a platform that is easy to feed into algorithms, but at the same time we actually also see really new building mechanisms, design principles of AI algorithms in itself. So I think we have understood that in order to really make progress, build those systems that work well, we need to build AI tools that are designed for biological data. So to give you an easy example, if I use a large language model on text, it's not going to work out of the box for DNA because we have different reading directions, different context lens and many, many, many, many more.Charlotte Bunne (21:40):And if I look at standard computer vision where we can say AI really excels and I'm applying standard computer vision, vision transformers on multiplex images, they're not going to work because normal computer vision architectures, they always expect the same three inputs, RGB, right? In multiplex images, I'm measuring up to 150 proteins potentially in a single experiment, but every study will measure different proteins. So I deal with many different scales like larger scales and I used to attention mechanisms that we have in usual computer vision. 
Transformers are not going to work anymore, they're not going to scale. And at the same time, I need to be completely flexible in whatever input combination of channel I'm just going to face in this experiment. So this is what we right now did for example, in our very first work, inheriting the design principle that we laid out in the paper AI virtual cell and then come up with new AI architectures that are dealing with these very special requirements that biological data have.Charlotte Bunne (22:46):So we have now a lot of computer scientists that work very, very closely have a very good understanding of biologists. Biologists that are getting much and much more into the computer science. So people who are fluent in both languages somewhat, that are able to now build models that are adopted and designed for biological data. And we don't just take basically computer vision architectures that work well on street scenes and try to apply them on biological data. So it's just a very different way of thinking about it, starting constructing basically specialized architectures, besides of course the tremendous data efforts that have happened in the past.Eric Topol (23:24):Yeah, and we're not even talking about just sequence because we've also got imaging which has gone through a revolution, be able to image subcellular without having to use any types of stains that would disrupt cells. That's another part of the deep learning era that came along. One thing I thought was fascinating in the paper in Cell you wrote, “For instance, the Short Read Archive of biological sequence data holds over 14 petabytes of information, which is 1,000 times larger than the dataset used to train ChatGPT.” I mean that's a lot of tokens, that's a lot of stuff, compute resources. It's almost like you're going to need a DeepSeek type of way to get this. I mean not that DeepSeek as its claim to be so much more economical, but there's a data challenge here in terms of working with that massive amount that is different than the human language. That is our language, wouldn't you say?Steve Quake (24:35):So Eric, that brings to mind one of my favorite quotes from Sydney Brenner who is such a wit. And in 2000 at the sort of early first flush of success in genomics, he said, biology is drowning in a sea of data and starving for knowledge. A very deep statement, right? And that's a little bit what the motivation was for putting the Short Read Archive statistic into the paper there. And again, for me, part of the value of this endeavor of creating a virtual cell is it's a tool to help us translate data into knowledge.Eric Topol (25:14):Yeah, well there's two, I think phenomenal figures in your Cell paper. The first one that kicks across the capabilities of the virtual cell and the second that compares the virtual cell to the real or the physical cell. And we'll link that with this in the transcript. And the other thing we'll link is there's a nice Atlantic article, “A Virtual Cell Is a ‘Holy Grail' of Science. It's Getting Closer.” That might not be quite close as next week or year, but it's getting close and that's good for people who are not well grounded in this because it's much more taken out of the technical realm. This is really exciting. I mean what you're onto here and what's interesting, Steve, since I've known you for so many years earlier in your career you really worked on omics that is being DNA and RNA and in recent times you've made this switch to cells. 
Is that just because you're trying to anticipate the field or tell us a little bit about your migration.Steve Quake (26:23):Yeah, so a big part of my career has been trying to develop new measurement technologies that'll provide insight into biology. And decades ago that was understanding molecules. Now it's understanding more complex biological things like cells and it was like a natural progression. I mean we built the sequencers, sequenced the genomes, done. And it was clear that people were just going to do that at scale then and create lots of data. Hopefully knowledge would get out of that. But for me as an academic, I never thought I'd be in the position I'm in now was put it that way. I just wanted to keep running a small research group. So I realized I would have to get out of the genome thing and find the next frontier and it became this intersection of microfluidics and genomics, which as you know, I spent a lot of time developing microfluidic tools to analyze cells and try to do single cell biology to understand their heterogeneity. And that through a winding path led me to all these cell atlases and to where we are now.Eric Topol (27:26):Well, we're fortunate for that and also with your work with CZI to help propel that forward and I think it sounds like we're going to need a lot of help to get this thing done. Now Charlotte, as a computer scientist now at EPFL, what are you going to do to keep working on this and what's your career advice for people in computer science who have an interest in digital biology?Charlotte Bunne (27:51):So I work in particular on the prospect of using this to build diagnostic tools and to make diagnostics in the clinic easier because ultimately we have somewhat limited capabilities in the hospital to run deep omics, but the idea of being able to somewhat map with a cheaper and lighter modality or somewhat diagnostic test into something much richer because a model has been seeing all those different data and can basically contextualize it. It's very interesting. We've seen all those pathology foundation models. If I can always run an H&E, but then decide when to run deeper diagnostics to have a better or more accurate prediction, that is very powerful and it's ultimately reducing the costs, but the precision that we have in hospitals. So my faculty position right now is co-located between the School of Life Sciences, School of Computer Science. So I have a dual affiliation and I'm affiliated to the hospitals to actually make this possible and as a career advice, I think don't be shy and stick to your discipline.Charlotte Bunne (28:56):I have a bachelor's in biology, but I never only did biology. I have a PhD in computer science, which you would think a bachelor in biology not necessarily qualifies you through. So I think this interdisciplinarity also requires you to be very fluent, very comfortable in reading many different styles of papers and publications because a publication in a computer science venue will be very, very different from the way we write in biology. So don't stick to your study program, but just be free in selecting whatever course gets you closer to the knowledge you need in order to do the research or whatever task you are building and working on.Eric Topol (29:39):Well, Charlotte, the way you're set up there with this coalescence of life science and computer science is so ideal and so unusual here in the US, so that's fantastic. 
That's what we need and that's really the underpinning of how you're going to get to the virtual cells, getting these two communities together. And Steve, likewise, you were an engineer and somehow you became one of the pioneers of digital biology way back before it had that term, this interdisciplinary, transdisciplinary. We need so much of that in order for you all to be successful, right?Steve Quake (30:20):Absolutely. I mean there's so much great discovery to be done on the boundary between fields. I trained as a physicist and kind of made my career this boundary between physics and biology and technology development and it's just sort of been a gift that keeps on giving. You've got a new way to measure something, you discover something new scientifically and it just all suggests new things to measure. It's very self-reinforcing.Eric Topol (30:50):Now, a couple of people who you know well have made some pretty big statements about this whole era of digital biology and I think the virtual cell is perhaps the biggest initiative of all the digital biology ongoing efforts, but Jensen Huang wrote, “for the first time in human history, biology has the opportunity to be engineering, not science.” And Demis Hassabis wrote or said, ‘we're seeing engineering science, you have to build the artifact of interest first, and then once you have it, you can use the scientific method to reduce it down and understand its components.' Well here there's a lot to do to understand its components and if we can do that, for example, right now as both of AI drug discoveries and high gear and there's umpteen numbers of companies working on it, but it doesn't account for the cell. I mean it basically is protein, protein ligand interactions. What if we had drug discovery that was cell based? Could you comment about that? Because that doesn't even exist right now.Steve Quake (32:02):Yeah, I mean I can say something first, Charlotte, if you've got thoughts, I'm curious to hear them. So I do think AI approaches are going to be very useful designing molecules. And so, from the perspective of designing new therapeutics, whether they're small molecules or antibodies, yeah, I mean there's a ton of investment in that area that is a near term fruit, perfect thing for venture people to invest in and there's opportunity there. There's been enough proof of principle. However, I do agree with you that if you want to really understand what happens when you drug a target, you're going to want to have some model of the cell and maybe not just the cell, but all the different cell types of the body to understand where toxicity will come from if you have on-target toxicity and whether you get efficacy on the thing you're trying to do.Steve Quake (32:55):And so, we really hope that people will use the virtual cell models we're going to build as part of the drug discovery development process, I agree with you in a little of a blind spot and we think if we make something useful, people will be using it. The other thing I'll say on that point is I'm very enthusiastic about the future of cellular therapies and one of our big bets at CZI has been starting the New York Biohub, which is aimed at really being very ambitious about establishing the engineering and scientific foundations of how to engineer completely, radically more powerful cellular therapies. And the virtual cell is going to help them do that, right? 
It's going to be essential for them to achieve that mission.Eric Topol (33:39):I think you're pointing out one of the most important things going on in medicine today is how we didn't anticipate that live cell therapy, engineered cells and ideally off the shelf or in vivo, not just having to take them out and work on them outside the body, is a revolution ongoing, and it's not just in cancer, it's in autoimmune diseases and many others. So it's part of the virtual cell need. We need this. One of the things that's a misnomer, I want you both to comment on, we keep talking about single cell, single cell. And there's a paper spatial multi-omics this week, five different single cell scales all integrated. It's great, but we don't get to single cell. We're basically looking at 50 cells, 100 cells. We're not doing single cell because we're not going deep enough. Is that just a matter of time when we actually are doing, and of course the more we do get down to the single or a few cells, the more insights we're going to get. Would you comment about that? Because we have all this literature on single cell comes out every day, but we're not really there yet.Steve Quake (34:53):Charlotte, do you want to take a first pass at that and then I can say something?Charlotte Bunne (34:56):Yes. So it depends. So I think if we look at certain spatial proteomics, we still have subcellular resolutions. So of course, we always measure many different cells, but we are able to somewhat get down to resolution where we can look at certain colocalization of proteins. This also goes back to the point just made before having this very good environment to study drugs. If I want to build a new drug, if I want to build a new protein, the idea of building this multiscale model allows us to actually simulate different, somehow binding changes and binding because we simulate the effect of a drug. Ultimately, the redouts we have they are subcellular. So of course, we often in the spatial biology, we often have a bit like methods that are rather coarse they have a spot that averages over certain some cells like hundreds of cells or few cells.Charlotte Bunne (35:50):But I think we also have more and more technologies that are zooming in that are subcellular where we can actually tag or have those probe-based methods that allow us to zoom in. There's microscopy of individual cells to really capture them in 3D. They are of course not very high throughput yet, but it gives us also an idea of the morphology and how ultimately morphology determine certain somehow cellular properties or cellular phenotype. So I think there's lots of progress also on the experimental and that ultimately will back feed into the AI virtual cell, those models that will be fed by those data. Similarly, looking at dynamics, right, looking at live imaging of individual cells of their morphological changes. Also, this ultimately is data that we'll need to get a better understanding of disease mechanisms, cellular phenotypes functions, perturbation responses.Eric Topol (36:47):Right. Yes, Steve, you can comment on that and the amazing progress that we have made with space and time, spatial temporal resolution, spatial omics over these years, but that we still could go deeper in terms of getting to individual cells, right?Steve Quake (37:06):So, what can we do with a single cell? I'd say we are very mature in our ability to amplify and sequence the genome of a single cell, amplify and sequence the transcriptome of a single cell. 
You can ask is one cell enough to make a biological conclusion? And maybe I think what you're referring to is people want to see replicates and so you can ask how many cells do you need to see to have confidence in any given biological conclusion, which is a reasonable thing. It's a statistical question in good science. I think I've been very impressed with how the mass spec people have been doing recently. I think they've finally cracked the ability to look at proteins from single cells and they can look at a couple thousand proteins. That was I think one of these Nature method of the year things at the end of last year and deep visual proteomics.Eric Topol (37:59):Deep visual proteomics, yes.Steve Quake (38:00):Yeah, they are over the hump. Yeah, they are over the hump with single cell measurements. Part of what's missing right now I think is the ability to reliably do all of that on the same cell. So this is what Charlotte was referring to be able to do sort of multi-modal measurements on single cells. That's kind of in its infancy and there's a few examples, but there's a lot more work to be done on that. And I think also the fact that these measurements are all destructive right now, and so you're losing the ability to look how the cells evolve over time. You've got to say this time point, I'm going to dissect this thing and look at a state and I don't get to see what happens further down the road. So that's another future I think measurement challenge to be addressed.Eric Topol (38:42):And I think I'm just trying to identify some of the multitude of challenges in this extraordinarily bold initiative because there are no shortage and that's good about it. It is given people lots of work to do to overcome, override some of these challenges. Now before we wrap up, besides the fact that you point out that all the work has to be done and be validated in real experiments, not just live in a virtual AI world, but you also comment about the safety and ethics of this work and assuming you're going to gradually get there and be successful. So could either or both of you comment about that because it's very thoughtful that you're thinking already about that.Steve Quake (41:10):As scientists and members of the larger community, we want to be careful and ensure that we're interacting with people who said policy in a way that ensures that these tools are being used to advance the cause of science and not do things that are detrimental to human health and are used in a way that respects patient privacy. And so, the ethics around how you use all this with respect to individuals is going to be important to be thoughtful about from the beginning. And I also think there's an ethical question around what it means to be publishing papers and you don't want people to be forging papers using data from the virtual cell without being clear about where that came from and pretending that it was a real experiment. So there's issues around those sorts of ethics as well that need to be considered.Eric Topol (42:07):And of those 40 some authors, do you around the world, do you have the sense that you all work together to achieve this goal? Is there kind of a global bonding here that's going to collaborate?Steve Quake (42:23):I think this effort is going to go way beyond those 40 authors. 
It's going to include a much larger set of people and I'm really excited to see that evolve with time.Eric Topol (42:31):Yeah, no, it's really quite extraordinary how you kick this thing off and the paper is the blueprint for something that we are all going to anticipate that could change a lot of science and medicine. I mean we saw, as you mentioned, Steve, how that deep visual proteomics (DVP) saved lives. It was what I wrote a spatial medicine, no longer spatial biology. And so, the way that this can change the future of medicine, I think a lot of people just have to have a little bit of imagination that once we get there with this AIVC, that there's a lot in store that's really quite exciting. Well, I think this has been an invigorating review of that paper and some of the issues surrounding it. I couldn't be more enthusiastic for your success and ultimately where this could take us. Did I miss anything during the discussion that we should touch on before we wrap up?Steve Quake (43:31):Not from my perspective. It was a pleasure as always Eric, and a fun discussion.Charlotte Bunne (43:38):Thanks so much.Eric Topol (43:39):Well thank you both and all the co-authors of this paper. We're going to be following this with the great interest, and I think for most people listening, they may not know that this is in store for the future. Someday we will get there. I think one of the things to point out right now is the models we have today that large language models based on transformer architecture, they're going to continue to evolve. We're already seeing so much in inference and ability for reasoning to be exploited and not asking for prompts with immediate answers, but waiting for days to get back. A lot more work from a lot more computing resources. But we're going to get models in the future to fold this together. I think that's one of the things that you've touched on the paper so that whatever we have today in concert with what you've laid out, AI is just going to keep getting better.Eric Topol (44:39):The biology that these foundation models are going to get broader and more compelling as to their use cases. So that's why I believe in this. I don't see this as a static situation right now. I just think that you're anticipating the future, and we will have better models to be able to integrate this massive amount of what some people would consider disparate data sources. So thank you both and all your colleagues for writing this paper. I don't know how you got the 42 authors to agree to it all, which is great, and it's just a beginning of something that's a new frontier. So thanks very much.Steve Quake (45:19):Thank you, Eric.**********************************************Thanks for listening, watching or reading Ground Truths. Your subscription is greatly appreciated.If you found this podcast interesting please share it!That makes the work involved in putting these together especially worthwhile.All content on Ground Truths—newsletters, analyses, and podcasts—is free, open-access, with no ads..Paid subscriptions are voluntary and all proceeds from them go to support Scripps Research. They do allow for posting comments and questions, which I do my best to respond to. Many thanks to those who have contributed—they have greatly helped fund our summer internship programs for the past two years. 
And such support is becoming more vital in light of current changes in funding for US biomedical research at the NIH and other governmental agencies. Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research. Get full access to Ground Truths at erictopol.substack.com/subscribe
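One architectural point from the conversation above is worth a sketch: standard vision transformers assume a fixed three-channel RGB input, while multiplex imaging yields a study-dependent panel of up to ~150 protein channels in any order. A common way to get channel flexibility (a sketch of the general idea, not the actual architecture Bunne's group built) is to tag each channel's features with a marker-identity embedding and pool over however many channels a study provides; all marker names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 32  # feature dimension

# One identity embedding per protein marker, so the model knows *which*
# channels it received. (Hypothetical markers; real panels vary per study.)
marker_emb = {m: rng.normal(size=D) for m in ["CD3", "CD8", "CD20", "PanCK", "DAPI"]}
channel_proj = rng.normal(size=(1, D))  # projects a per-channel summary to D dims

def encode_multiplex(image: np.ndarray, markers: list[str]) -> np.ndarray:
    """Encode an image with an arbitrary set and order of channels.

    image: (C, H, W) array; markers: length-C list naming each channel.
    Each channel is summarized (here just its mean intensity), projected,
    tagged with its marker embedding, then mean-pooled, so C may differ
    between studies while the output size stays fixed.
    """
    feats = []
    for c, marker in enumerate(markers):
        summary = np.array([image[c].mean()])       # toy per-channel summary
        feats.append(summary @ channel_proj + marker_emb[marker])
    return np.mean(feats, axis=0)                   # permutation-invariant pool

# Two studies with different panels map into the same representation space.
a = encode_multiplex(rng.random((3, 64, 64)), ["CD3", "CD8", "DAPI"])
b = encode_multiplex(rng.random((5, 64, 64)), ["CD3", "CD8", "CD20", "PanCK", "DAPI"])
print(a.shape, b.shape)  # (32,) (32,)
```

In a full model the per-channel summary would be a patch encoder and the pooling would be attention across channels, but the invariance trick, identity embeddings plus a set-style pool, is the same.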

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are working with Amplify on the 2025 State of AI Engineering Survey to be presented at the AIE World's Fair in SF! Join the survey to shape the future of AI Eng!

We first met Snipd over a year ago, and were immediately impressed by the design, but were doubtful about the behavior of snipping as the title behavior: Podcast apps are enormously sticky - Spotify spent almost $1b in podcast acquisitions and exclusive content just to get an 8% bump in market share among normies. However, after a disappointing Overcast 2.0 rewrite with no AI features in the last 3 years, I finally bit the bullet and switched to Snipd. It's 2025, your podcast app should be able to let you search transcripts of your podcasts. Snipd is the best implementation of this so far. And yet they keep shipping: What impressed us wasn't just how this tiny team of 4 was able to bootstrap a consumer AI app against massive titans and do so well; but also how seriously they think about learning through podcasts and improving retention of knowledge over time, aka “Duolingo for podcasts”. As an educational AI podcast, that's a mission we can get behind.

Full Video Pod
Find us on YouTube! This was the first pod we've ever shot outdoors!

Show Notes
* How does Shazam work?
* Flutter/FlutterFlow
* wav2vec paper
* Perplexity Online LLM
* Google Search Grounding
* Comparing Snipd transcription with our Bee episode
* NIPS 2017 Flo Rida
* Gustav Söderström - Background Audio

Timestamps
* [00:00:03] Takeaways from AI Engineer NYC
* [00:00:17] Weather in New York.
* [00:00:26] Swyx and Snipd.
* [00:01:01] Kevin's AI summit experience.
* [00:01:31] Zurich and AI.
* [00:03:25] SigLIP authors join OpenAI.
* [00:03:39] Zurich is very costly.
* [00:04:06] The Snipd origin story.
* [00:05:24] Introduction to machine learning.
* [00:09:28] Snipd and user knowledge extraction.
* [00:13:48] App's tech stack, Flutter, Python.
* [00:15:11] How speakers are identified.
* [00:18:29] The concept of "backgroundable" video.
* [00:29:05] Voice cloning technology.
* [00:31:03] Using AI agents.
* [00:34:32] Snipd's future is multi-modal AI.
* [00:36:37] Snipd and existing user behaviour.
* [00:42:10] The app, summary, and timestamps.
* [00:55:25] The future of AI and podcasting.
* [1:14:55] Voice AI

Transcript

swyx [00:00:03]: Hey, I'm here in New York with Kevin Ben-Smith of Snipd. Welcome.Kevin [00:00:07]: Hi. Hi. Amazing to be here.swyx [00:00:09]: Yeah. This is our first ever, I think, outdoors podcast recording.Kevin [00:00:14]: It's quite a location for the first time, I have to say.swyx [00:00:18]: I was actually unsure because, you know, it's cold. It's like, I checked the temperature. It's like kind of one degree Celsius, but it's not that bad with the sun. No, it's quite nice. Yeah. Especially with our beautiful tea. With the tea. Yeah. Perfect. We're going to talk about Snips. I'm a Snips user. I'm a Snips user. I had to basically, you know, apart from Twitter, it's like the number one use app on my phone. Nice. When I wake up in the morning, I open Snips and I, you know, see what's new. And I think in terms of time spent or usage on my phone, I think it's number one or number two. Nice. Nice. So I really had to talk about it also because I think people interested in AI want to think about like, how can we, we're an AI podcast, we have to talk about the AI podcast app. But before we get there, we just finished. We just finished the AI Engineer Summit and you came for the two days. How was it?Kevin [00:01:07]: It was quite incredible.
I mean, for me, the most valuable was just being in the same room with like-minded people who are building the future and who are seeing the future. You know, especially when it comes to AI agents, it's so often I have conversations with friends who are not in the AI world. And it's like so quickly it happens that you, it sounds like you're talking in science fiction. And it's just crazy talk. It was, you know, it's so refreshing to talk with so many other people who already see these things and yeah, be inspired then by them and not always feel like, like, okay, I think I'm just crazy. And like, this will never happen. It really is happening. And for me, it was very valuable. So day two, more relevant, more relevant for you than day one. Yeah. Day two. So day two was the engineering track. Yeah. That was definitely the most valuable for me. Like also as a producer. Practitioner myself, especially there were one or two talks that had to do with voice AI and AI agents with voice. Okay. So that was quite fascinating. Also spoke with the speakers afterwards. Yeah. And yeah, they were also very open and, and, you know, this, this sharing attitudes that's, I think in general, quite prevalent in the AI community. I also learned a lot, like really practical things that I can now take away with me. Yeah.swyx [00:02:25]: I mean, on my side, I, I think I watched only like half of the talks. Cause I was running around and I think people saw me like towards the end, I was kind of collapsing. I was on the floor, like, uh, towards the end because I, I needed to get, to get a rest, but yeah, I'm excited to watch the voice AI talks myself.Kevin [00:02:43]: Yeah. Yeah. Do that. And I mean, from my side, thanks a lot for organizing this conference for bringing everyone together. Do you have anything like this in Switzerland? The short answer is no. Um, I mean, I have to say the AI community in, especially Zurich, where. Yeah. Where we're, where we're based. Yeah. It is quite good. And it's growing, uh, especially driven by ETH, the, the technical university there and all of the big companies, they have AI teams there. Google, like Google has the biggest tech hub outside of the U S in Zurich. Yeah. Facebook is doing a lot in reality labs. Uh, Apple has a secret AI team, open AI and then SwapBit just announced that they're coming to Zurich. Yeah. Um, so there's a lot happening. Yeah.swyx [00:03:23]: So, yeah, uh, I think the most recent notable move, I think the entire vision team from Google. Uh, Lucas buyer, um, and, and all the other authors of Siglip left Google to join open AI, which I thought was like, it's like a big move for a whole team to move all at once at the same time. So I've been to Zurich and it just feels expensive. Like it's a great city. Yeah. It's great university, but I don't see it as like a business hub. Is it a business hub? I guess it is. Right.Kevin [00:03:51]: Like it's kind of, well, historically it's, uh, it's a finance hub, finance hub. Yeah. I mean, there are some, some large banks there, right? Especially UBS, uh, the, the largest wealth manager in the world, but it's really becoming more of a tech hub now with all of the big, uh, tech companies there.swyx [00:04:08]: I guess. Yeah. Yeah. And, but we, and research wise, it's all ETH. Yeah. There's some other things. Yeah. Yeah. Yeah.Kevin [00:04:13]: It's all driven by ETH. And then, uh, it's sister university EPFL, which is in Lausanne. Okay. Um, which they're also doing a lot, but, uh, it's, it's, it's really ETH. 
Uh, and otherwise, no, I mean, it's a beautiful, really beautiful city. I can recommend. To anyone. To come, uh, visit Zurich, uh, uh, let me know, happy to show you around and of course, you know, you, you have the nature so close, you have the mountains so close, you have so, so beautiful lakes. Yeah. Um, I think that's what makes it such a livable city. Yeah.

swyx [00:04:42]: Um, and the cost is not, it's not cheap, but I mean, we're in New York City right now and, uh, I don't know, I paid $8 for a coffee this morning, so, uh, the coffee is cheaper in Zurich than in New York City. Okay. Okay. Let's talk about Snipd. What is Snipd and, you know, then we'll talk about your origin story, but I just, let's, let's get a crisp, what is Snipd? Yeah.

Kevin [00:05:03]: I always see two definitions of Snipd, so I'll give you one really simple, straightforward one, and then a second more nuanced, um, which I think will be valuable for the rest of our conversation. So the most simple one is just to say, look, we're an AI-powered podcast app. So if you listen to podcasts, we're now providing this AI-enhanced experience. But if you look at the more nuanced, uh, podcast. Uh, perspective, it's actually, we, we have a very big focus on people who like your audience who listened to podcasts to learn something new. Like your audience, you want, they want to learn about AI, what's happening, what's, what's, what's the latest research, what's going on. And we want to provide a, a spoken audio platform where you can do that most effectively. And AI is basically the way that we can achieve that. Yeah.

swyx [00:05:53]: Means to an end. Yeah, exactly. When you started. Was it always meant to be AI or is it, was it more about the social sharing?

Kevin [00:05:59]: So the first version that we ever released was like three and a half years ago. Okay. Yeah. So this was before ChatGPT. Before Whisper. Yeah. Before Whisper. Yeah. So I think a lot of the features that we now have in the app, they weren't really possible yet back then. But we already from the beginning, we always had the focus on knowledge. That's the reason why, you know, we in our team, why we listen to podcasts, but we did have a bit of a different approach. Like the idea in the very beginning was, so the name is Snipd and you can create these, what we call snips, which is basically a small snippet, like a clip from a, from a podcast. And we did envision sort of like a, like a social TikTok platform where some people would listen to full episodes and they would snip certain, like the best parts of it. And they would post that in a feed and other users would consume this feed of snips. And use that as a discovery tool or just as a means to an end. And yeah, so you would have both people who create snips and people who listen to snips. So our big hypothesis in the beginning was, you know, it will be easy to get people to listen to these snips, but super difficult to actually get them to create them. So we focused a lot of, a lot of our effort on making it as seamless and easy as possible to create a snip. Yeah.

swyx [00:07:17]: It's similar to TikTok. You need CapCut for there to be videos on TikTok. Exactly.

Kevin [00:07:23]: And so for, for snips, basically whenever you hear an amazing insight, a great moment, you can just triple tap your headphones. And our AI actually then saves the moment that you just listened to and summarizes it to create a note. And this is then basically a snip. So yeah, we built, we built all of this, launched it.
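For the engineers reading along: the snip flow Kevin describes (triple tap, capture the moment you just heard, summarize it into a note) is easy to picture as code. Here is a minimal sketch, assuming a word-level transcript, a 60-second capture window, and an OpenAI chat model; all three are illustrative choices, not Snipd's actual implementation.

```python
# A minimal sketch of a snip-creation step, assuming a word-level transcript
# of the form [(start_sec, end_sec, word), ...]. Not Snipd's actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def create_snip(transcript_words, playback_pos_sec, window_sec=60):
    """Capture the window_sec seconds heard before the triple tap and summarize."""
    start = max(0.0, playback_pos_sec - window_sec)
    excerpt = " ".join(
        word for (w_start, _w_end, word) in transcript_words
        if start <= w_start <= playback_pos_sec
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any capable chat model works here
        messages=[
            {"role": "system",
             "content": "Summarize this podcast excerpt into a short title and "
                        "a 2-3 sentence note capturing the key insight."},
            {"role": "user", "content": excerpt},
        ],
    )
    # Keep the original audio span and transcript alongside the AI note.
    return {"start_sec": start, "end_sec": playback_pos_sec,
            "note": resp.choices[0].message.content, "transcript": excerpt}
```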
And what we found out was basically the exact opposite. So we saw that people use the snips to discover podcasts, but they really, you know, they don't. You know, really love listening to long form podcasts, but they were creating snips like crazy. And this was, this was definitely one of these aha moments when we realized like, hey, we should be really doubling down on the knowledge of learning of, yeah, helping you learn most effectively and helping you capture the knowledge that you listen to and actually do something with it. Because this is in general, you know, we, we live in this world where there's so much content and we consume and consume and consume. And it's so easy to just at the end of the podcast. You just start listening to the next podcast. And five minutes later, you've forgotten everything. 90%, 99% of what you've actually just learned. Yeah.

swyx [00:08:31]: You don't know this, but, and most people don't know this, but this is my fourth podcast. My third podcast was a personal mixtape podcast where I snipped manually sections of podcasts that I liked and added my own commentary on top of them and published them as small episodes. Nice. So those would be maybe five to 10 minute snips. Yeah. And then I added something that I thought was a good story or like a good insight. And then I added my own commentary and published it as a separate podcast. It's cool. Is that still live? It's still live, but it's not active, but you can go back and find it. If you're, if, if you're curious enough, you'll see it. Nice. Yeah. You have to show me later. It was so manual because basically what my process would be, I hear something interesting. I note down the timestamp and I note down the URL of the podcast. I used to use Overcast. So it would just link to the Overcast page. And then. Put in my note taking app, go home. Whenever I feel like publishing, I will take one of those things and then download the MP3, clip out the MP3 and record my intro, outro and then publish it as a, as a podcast. But now with Snipd, I mean, I can just kind of double click or triple tap.

Kevin [00:09:39]: I mean, those are very similar stories to what we hear from our users. You know, it's, it's normal that you're doing, you're doing something else while you're listening to a podcast. Yeah. A lot of our users, they're driving, they're working out, walking their dog. So in those moments when you hear something amazing, it's difficult to just write them down or, you know, you have to take out your phone. Some people take a screenshot, write down a timestamp, and then later on you have to go back and try to find it again. Of course you can't find it anymore because there's no search. There's no command F. And, um, these, these were all of the issues that, that, that we encountered also ourselves as users. And given that our background was in AI, we realized like, wait, hey, this is. This should not be the case. Like podcast apps today, they're still, they're basically repurposed music players, but we actually look at podcasts as one of the largest sources of knowledge in the world. And once you have that different angle of looking at it together with everything that AI is now enabling, you realize like, hey, this is not the way that we, that podcast apps should be. Yeah.

swyx [00:10:41]: Yeah. I agree. You mentioned something that you said your background is in AI. Well, first of all, who's the team and what do you mean your background is in AI?

Kevin [00:10:48]: Those are two very different things.
I'm going to ask some questions. Yeah. Um, maybe starting with, with my backstory. Yeah. My backstory actually goes back, like, let's say 12 years ago or something like that. I moved to Zurich to study at ETH and actually I studied something completely different. I studied mathematics and economics basically with this specialization for quant finance. Same. Okay. Wow. All right. So yeah. And then as you know, all of these mathematical models for, um, asset pricing, derivative pricing, quantitative trading. And for me, the thing that, that fascinates me the most was the mathematical modeling behind it. Uh, mathematics, uh, statistics, but I was never really that passionate about the finance side of things.

swyx [00:11:32]: Oh really? Oh, okay. Yeah. I mean, we're different there.

Kevin [00:11:36]: I mean, one just, let's say symptom that I noticed now, like, like looking back during that time. Yeah. I think I never read an academic paper about the subject in my free time. And then it was towards the end of my studies. I was already working for a big bank. One of my best friends, he comes to me and says, Hey, I just took this course. You have to, you have to do this. You have to take this lecture. Okay. And I'm like, what, what, what is it about? It's called machine learning and I'm like, what, what, what kind of stupid name is that? Uh, so he sent me the slides and like over a weekend I went through all of the slides and I just, I just knew like freaking hell. Like this is it. I'm, I'm in love. Wow. Yeah. Okay. And that was then over the course of the next, I think like 12 months, I just really got into it. Started reading all about it, like reading blog posts, started building my own models.

swyx [00:12:26]: Was this course by a famous person, famous university? Was it like the Andrew Ng Coursera thing? No.

Kevin [00:12:31]: So this was an ETH course. So a professor at ETH. Did he teach in English by the way? Yeah. Okay.

swyx [00:12:37]: So these slides are somewhere available. Yeah. Definitely. I mean, now they're quite outdated. Yeah. Sure. Well, I think, you know, reflecting on the finance thing for a bit. So I, I was, used to be a trader, uh, sell side and buy side. I was options trader first and then I was more like a quantitative hedge fund analyst. We never really use machine learning. It was more like a little bit of statistical modeling, but really like you, you fit, you know, your regression.

Kevin [00:13:03]: No, I mean, that's, that's what it is. And, uh, or you, you solve partial differential equations and have then numerical methods to, to, to solve these. That's, that's for you. That's your degree. And that's, that's not really what you do at work. Right. Unless, well, I don't know what you do at work. In my job. No, no, we weren't solving the partial differential. Yeah.

swyx [00:13:18]: You learn all this in school and then you don't use it.

Kevin [00:13:20]: I mean, we, we, well, let's put it like that. Um, in some things, yeah, I mean, I did code algorithms that would do it, but it was basically like, it was the most basic algorithms and then you just like slightly improve them a little bit. Like you just tweak them here and there. Yeah. It wasn't like starting from scratch, like, Oh, here's this new partial differential equation. How do we know?

swyx [00:13:43]: Yeah. Yeah. I mean, that's, that's real life, right? Most, most of it's kind of boring or you're, you're using established things because they're established because, uh, they tackle the most important topics.
Um, yeah. Portfolio management was more interesting for me. Um, and, uh, we, we were sort of the first to combine like social data with, with quantitative trading. And I think, uh, I think now it's very common, but, um, yeah. Anyway, then you, you went, you went deep on machine learning and then what? You quit your job? Yeah. Yeah. Wow.

Kevin [00:14:12]: I quit my job because, uh, um, I mean, I started using it at the bank as well. Like try, like, you know, I like desperately tried to find any kind of excuse to like use it here or there, but it just was clear to me, like, no, if I want to do this, um, like I just have to like make a real cut. So I quit my job and joined an early stage, uh, tech startup in Zurich where I then built up the AI team over five years. Wow. Yeah. So yeah, we built various machine learning, uh, things for, for banks from like models for, for sales teams to identify which clients like which product to sell to them and with what reasons all the way to, we did a lot, a lot with bank transactions. One of the actually most fun projects for me was we had an, an NLP model that would take the booking text of a transaction, like a credit card transaction, and prettify it. Yeah. Because it had all of these, you know, like numbers in there and abbreviations and whatnot. And sometimes you look at it like, what, what is this? And it was just, you know, it would just change it to, I don't know, CVS. Yeah.

swyx [00:15:15]: Yeah. But I mean, would you have hallucinations?

Kevin [00:15:17]: No, no, no. The way that everything was set up, it wasn't like, it wasn't yet fully end to end generative, uh, neural network as what you would use today. Okay.

swyx [00:15:30]: Awesome. And then when did you go like full time on Snipd? Yeah.

Kevin [00:15:33]: So basically that was, that was afterwards. I mean, how that started was the friend of mine who got me into machine learning, uh, him and I, uh, like he also got me interested into startups. He's had a big impact on my life. And the two of us would just jam on, on like ideas for startups every now and then. And his background was also in AI data science. And we had a couple of ideas, but given that we were working full time, we were thinking about, uh, so we participated in Hack Zurich. That's, uh, Europe's biggest hackathon, um, or at least was at the time. And we said, Hey, this is just a weekend. Let's just try out an idea, like hack something together and see how it works. And the idea was that we'd be able to search through podcast episodes, like within a podcast. Yeah. So we did that. Long story short, uh, we managed to do it like to build something that we realized, Hey, this actually works. You can, you can find things again in podcasts. We had like a natural language search and we pitched it on stage. And we actually won the hackathon, which was cool. I mean, we, we also, I think we had a good, um, like a good, good pitch or a good example. So we, we used the famous Joe Rogan episode with Elon Musk where Elon Musk smokes a joint. Okay. Um, it's like a two and a half hour episode. So we were on stage and then we just searched for like smoking weed and it would find that exact moment. It will play it. And it just like, come on with Elon Musk, just like smoking. Oh, so it was video as well? No, it was actually completely based on audio. But we did have the video for the presentation. Yeah. Which had a, had of course an amazing effect. Yeah. Like this gave us a lot of activation energy, but it wasn't actually about winning the hackathon.
Yeah. But the interesting thing that happened was after we pitched on stage, several of the other participants, like a lot of them came up to us and started saying like, Hey, can I use this? Like I have this issue. And like some also came up and told us about other problems that they have, like very adjacent to this with podcasts. It was like, could, could I use this for that as well? And that was basically the, the moment where I realized, Hey, it's actually not just us who are having these issues with, with podcasts and getting to the, making the most out of this knowledge. Yeah. The other people. Yeah. That was now, I guess like four years ago or something like that. And then, yeah, we decided to quit our jobs and start, start this whole Snipd thing. Yeah. How big is the team now? We're just four people. Yeah. Just four people. Yeah. Like four. We're all technical. Yeah. Basically two on the, the backend side. So one of my co-founders is this person who got me into machine learning and startups. And we won the hackathon together. So we have two people for the backend side with the AI and all of the other backend things. And two for the front end side, building the app.

swyx [00:18:18]: Which is mostly Android and iOS. Yeah.

Kevin [00:18:21]: It's iOS and Android. We also have a watch app for, for Apple, but yeah, it's mostly iOS. Yeah.

swyx [00:18:27]: The watch thing, it was very funny because in the, in the Latent Space discord, you know, most of us have been slowly adopting Snipd. You came to me like a year ago and you introduced Snipd to me. I was like, I don't know. I'm, you know, I'm very sticky to Overcast and then slowly we switch. Why watch?

Kevin [00:18:43]: So it goes back to a lot of our users, they do something else while, while listening to a podcast, right? Yeah. And one of the, us giving them the ability to then capture this knowledge, even though they're doing something else at the same time is one of the killer features. Yeah. Maybe I can actually, maybe at some point I should maybe give a bit more of an overview of what the, all of the features that we have. Sure. So this is one of the killer features and for one big use case that people use this for is for running. Yeah. So if you're a big runner, a big jogger or cycling, like really, really cycling competitively and a lot of the people, they don't want to take their phone with them when they go running. So you load everything onto the watch. So you can download episodes. I mean, if you, if you have an Apple watch that has internet access, like with a SIM card, you can also directly stream. That's also possible. Yeah. So of course it's a, it's basically very limited to just listening and snipping. And then you can see all of your snips later on your phone. Let me tell you this error I just got.

swyx [00:19:47]: Error playing episode. Substack, the host of this podcast, does not allow this podcast to be played on an Apple watch. Yeah.

Kevin [00:19:52]: That's a very beautiful thing. So we found out that all of the podcasts hosted on Substack, you cannot play them on an Apple watch. Why is this restriction? What? Like, don't ask me. We try to reach out to Substack. We try to reach out to some of the bigger podcasters who are hosting the podcast on Substack to also let them know. Substack doesn't seem to care. This is not specific to our app. You can also check out the Apple podcast app. Yeah. It's the same problem. It's just that we actually have identified it.
And we tell the user what's going on.

swyx [00:20:25]: I would say we host our podcast on Substack, but they're not very serious about their podcasting tools. I've told them before, I've been very upfront with them. So I don't feel like I'm shitting on them in any way. And it's kind of sad because otherwise it's a perfect creative platform. But the way that they treat podcasting as an afterthought, I think it's really disappointing.

Kevin [00:20:45]: Maybe given that you mentioned all these features, maybe I can give a bit of a better overview of the features that we have. Let's do that. Let's do that. So I think we're mostly in our minds. Maybe for some of the listeners.

swyx [00:20:55]: I mean, I'll tell you my version. Yeah. They can correct me, right? So first of all, I think the main job is for it to be a podcast listening app. It should be basically a complete superset of what you normally get on Overcast or Apple Podcasts or anything like that. You pull your show list from ListenNotes. How do you find shows? You've got to type in anything and you find them, right?

Kevin [00:21:18]: Yeah. We have a search engine that is powered by ListenNotes. Yeah. But I mean, in the meantime, we have a huge database of like 99% of all podcasts out there ourselves. Yeah.

swyx [00:21:27]: What I noticed, the default experience is you do not auto-download shows. And that's one very big difference for you guys versus other apps, where like, you know, if I'm subscribed to a thing, it auto-downloads and I already have the MP3 downloaded overnight. For me, I have to actively put it onto my queue, then it auto-downloads. And actually, I initially didn't like that. I think I maybe told you that I was like, oh, it's like a feature that I don't like. Like, because it means that I have to choose to listen to it in order to download and not to... It's like opt-in. There's a difference between opt-in and opt-out. So I opt-in to every episode that I listen to. And then, like, you know, you open it and depends on whether or not you have the AI stuff enabled. But the default experience is no AI stuff enabled. You can listen to it. You can see the snips, the number of snips and where people snip during the episode, which roughly correlates to interest level. And obviously, you can snip there. I think that's the default experience. I think snipping is really cool. Like, I use it to share a lot on Discord. I think we have tons and tons of just people sharing snips and stuff. Tweeting stuff is also like a nice, pleasant experience. But like the real features come when you actually turn on the AI stuff. And so the reason I got Snipd, because I got fed up with Overcast not implementing any AI features at all. Instead, they spent two years rewriting their app to be a little bit faster. And I'm like, like, it's 2025. I should have a podcast that has transcripts that I can search. Very, very basic thing. Overcast will basically never have it.

Kevin [00:22:49]: Yeah, I think that was a good, like, basic overview. Maybe I can add a bit to it with the AI features that we have. So one thing that we do every time a new podcast comes out, we transcribe the episode. We do speaker diarization. We identify the speaker names. Each guest, we extract a mini bio of the guest, try to find a picture of the guest online, add it. We break the podcast down into chapters, as in AI generated chapters. That one. That one's very handy. With a quick description per title and quick description per each chapter.
We identify all books that get mentioned on a podcast. You can tell I don't use that one. It depends on the podcast. There are some podcasts where the guests often recommend like an amazing book. So later on, you can you can find that again.

swyx [00:23:42]: So you literally search for the word book or I just read blah, blah, blah.

Kevin [00:23:46]: No, I mean, it's all LLM based. Yeah. So basically, we have we have an LLM that goes through the entire transcript and identifies if a user mentions a book, then we use the Perplexity API together with various other LLM orchestration to go out there on the Internet, find everything that there is to know about the book, find the cover, find who or what the author is, get a quick description of it for the author. We then check on which other episodes the author appeared on.

swyx [00:24:15]: Yeah, that is killer.

Kevin [00:24:17]: Because that for me, if. If there's an interesting book, the first thing I do is I actually listen to a podcast episode with a with a writer because he usually gives a really great overview already on a podcast.

swyx [00:24:28]: Sometimes the podcast is with the person as a guest. Sometimes his podcast is about the person without him there. Do you pick up both?

Kevin [00:24:37]: So, yes, we pick up both in like our latest models. But actually what we show you in the app, the goal is to currently only show you the guest to separate that. In the future, we want to show the other things more.

swyx [00:24:47]: For what it's worth, I don't mind. Yeah, I don't think like if I like if I like somebody, I'll just learn about them regardless of whether they're there or not.

Kevin [00:24:55]: Yeah, I mean, yes and no. We we we have seen there are some personalities where this can break down. So, for example, the first version that we released with this feature, it picked up much more often a person, even if it was not a guest. Yeah. For example, the best examples for me is Sam Altman and Elon Musk. Like they're just mentioned on every second podcast and it has like they're not on there. And if you're interested in it, you can go to Elon Musk. And actually like learning from them. Yeah, I see. And yeah, we updated our our algorithms, improved that a lot. And now it's gotten much better to only pick it up if they're a guest. And yeah, so this this is maybe to come back to the features, two more important features like we have the ability to chat with an episode. Yes. Of course, you can do the old style of searching through a transcript with a keyword search. But I think for me, this is this is how you used to do search and extracting knowledge in the in the past. Old school. And the A.I. Web. Way is is basically an LLM. So you can ask the LLM, hey, when do they talk about topic X? If you're interested in only a certain part of the episode, you can ask them to give a quick overview of the episode. Key takeaways afterwards also to create a note for you. So this is really like very open, open ended. And yeah. And then finally, the snipping feature that we mentioned just to reiterate. Yeah. I mean, here the the feature is that whenever you hear an amazing idea, you can triple tap your headphones or click a button in the app and the A.I. summarizes the insight you just heard and saves that together with the original transcript and audio in your knowledge library. I also noticed that you skip dynamic content. So dynamic content, we do not skip it automatically. Oh, sorry. You detect. But we detect it. Yeah.
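The books feature Kevin describes is a nice example of a two-stage LLM pipeline: one pass over the transcript to find mentions, then a web-search-backed call to enrich each hit. A rough sketch follows; the model names, prompts, and JSON shape are assumptions, and the enrichment step relies on Perplexity's OpenAI-compatible API.

```python
# Hypothetical two-stage book pipeline, not Snipd's actual code: one LLM pass
# finds book mentions, a web-search-backed model enriches each hit.
import json
from openai import OpenAI

client = OpenAI()

def find_books(transcript: str) -> list:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: a cheap model, since transcripts are long
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": 'Return JSON {"books": [{"title": "...", "mentioned_by": "..."}]} '
                        "listing every book explicitly mentioned in this transcript."},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(resp.choices[0].message.content)["books"]

def enrich_book(title: str) -> str:
    # Perplexity exposes an OpenAI-compatible endpoint; the model name is an assumption.
    search = OpenAI(api_key="YOUR_PPLX_KEY", base_url="https://api.perplexity.ai")
    resp = search.chat.completions.create(
        model="sonar",
        messages=[{"role": "user",
                   "content": f"Who wrote '{title}'? Give the author, a one-paragraph "
                              "description, and a cover image URL if available."}],
    )
    return resp.choices[0].message.content
```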
I mean, that's one of the thing that most people don't don't actually know that like the way that ads get inserted into podcasts or into most podcasts is actually that every time you listen. To a podcast, you actually get access to a different audio file and on the server, a different ad is inserted into the MP3 file automatically. Yeah. Based on IP. Exactly. And that's what that means is if we transcribe an episode and have a transcript with timestamps like words, word specific timestamps, if you suddenly get a different audio file, like the whole timestamps are messed up and that's like a huge issue. And for that, we actually had to build another algorithm that would dynamically, on the fly, re-sync the audio that you're listening to with the transcript that we have. Yeah. Which is a fascinating problem in and of itself.

swyx [00:27:24]: You sync by matching up the sound waves? Or like, or do you sync by matching up words like you basically do partial transcription?

Kevin [00:27:33]: We are not matching up words. It's happening on the basically a bytes level matching. Yeah. Okay.

swyx [00:27:40]: It relies on this. It relies on the exact match at some point.

Kevin [00:27:46]: So it's actually. We're actually not doing exact matches, but we're doing fuzzy matches to identify the moment. It's basically, we basically built Shazam for podcasts. Just as a little side project to solve this issue.

swyx [00:28:02]: Actually, fun fact, apparently the Shazam algorithm is open. They published the paper, it's talked about it. I haven't really dived into the paper. I thought it was kind of interesting that basically no one else has built Shazam.

Kevin [00:28:16]: Yeah, I mean, well, the one thing is the algorithm. If you now talk about Shazam, the other thing is also having the database behind it and having the user mindset that if they have this problem, they come to you, right?

swyx [00:28:29]: Yeah, I'm very interested in the tech stack. There's a big data pipeline. Could you share what is the tech stack?

Kevin [00:28:35]: What are the most interesting or challenging pieces of it? So the general tech stack is our entire backend is, or 90% of our backend is written in Python. Okay. Hosting everything on Google Cloud Platform. And our front end is written with, well, we're using the Flutter framework. So it's written in Dart and then compiled natively. So we have one code base that handles both Android and iOS. You think that was a good decision? It's something that a lot of people are exploring. So up until now, yes. Okay. Look, it has its pros and cons. Some of the, you know, for example, earlier, I mentioned we have a Apple Watch app. Yeah. I mean, there's no Flutter for that, right? So that you build native. And then of course you have to sort of like sync these things together. I mean, I'm not the front end engineer, so I'm just relaying this information, but our front end engineers are very happy with it. It's enabled us to be quite fast and be on both platforms from the very beginning. And when I talk with people and they hear that we are using Flutter, usually they think like, ah, it's not performant. It's super junk, janky and everything. And then they use it. They use our app and they're always super surprised. Or if they've already used our app, I couldn't tell them. They're like, what? Yeah. Um, so there is actually a lot that you can do with it.

swyx [00:29:51]: The danger, the concern, there's a few concerns, right? One, it's Google. So when were they, when are they going to abandon it?
Two, you know, they're optimized for Android first. So iOS is like a second, second thought, or like you can feel that it is not a native iOS app. Uh, but you guys put a lot of care into it. And then maybe three, from my point of view, JavaScript, as a JavaScript guy, React Native was supposed to be there. And I think that it hasn't really fulfilled that dream. Um, maybe Expo is trying to do that, but, um, again, it is not, does not feel as productive as Flutter. And I've, I spent a week on Flutter and Dart, and I'm an investor in FlutterFlow, which is the local, uh, Flutter, Flutter startup. That's doing very, very well. I think a lot of people are still Flutter skeptics. Yeah. Wait. So are you moving away from Flutter?

Kevin [00:30:41]: I don't know. We don't have plans to do that. Yeah.

swyx [00:30:43]: You're just saying about that. What? Yeah. Watch out. Okay. Let's go back to the stack.

Kevin [00:30:47]: You know, that was just to give you a bit of an overview. I think the more interesting things are, of course, on the AI side. So we, like, as I mentioned earlier, when we started out, it was before ChatGPT, before the ChatGPT moment, before there was the GPT-3.5 Turbo, uh, API. So in the beginning, we actually were running everything ourselves, open source models, try to fine tune them. They worked. There was us, but let's, let's be honest. They weren't. What was the sort of? Before Whisper, the transcription. Yeah, we were using wav2vec, like, um, there was a Google one, right? No, it was a Facebook, Facebook one. That was actually one of the papers. Like when that came out for me, that was one of the reasons why I said we, we should try something to start a startup in the audio space. For me, it was a bit like before that I had been following the NLP space, uh, quite closely. And as, as I mentioned earlier, we, we did some stuff at the startup as well, that I was working at. But before, and wav2vec was the first paper that I had at least seen where the whole transformer architecture moved over to audio and bit more general way of saying it is like, it was the first time that I saw the transformer architecture being applied to continuous data instead of discrete tokens. Okay. And it worked amazingly. Ah, and like the transformer architecture plus self-supervised learning, like these two things moved over. And then for me, it was like, Hey, this is now going to take off similarly. It's the text space has taken off. And with these two things in place, even if some features that we want to build are not possible yet, they will be possible in the near term, uh, with this, uh, trajectory. So that was a little side, side note. No, it's in the meantime. Yeah. We're using Whisper. We're still hosting some of the models ourselves. So for example, the whole transcription speaker diarization pipeline, uh,

swyx [00:32:38]: You need it to be as cheap as possible.

Kevin [00:32:40]: Yeah, exactly. I mean, we're doing this at scale where we have a lot of audio.

swyx [00:32:44]: What numbers can you disclose? Like what, what are just to give people an idea because it's a lot. So we have more than a million podcasts that we've already processed when you say a million. So processing is basically, you have some kind of list of podcasts that you will auto process and others where a paying member can choose to press the button and transcribe it. Right. Is that the rough idea? Yeah, exactly.

Kevin [00:33:08]: Yeah. And if, when you press that button or we also transcribe it. Yeah.
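Before the pipeline walkthrough, one aside on the ad re-syncing problem from a moment ago. Snipd's production matcher does fuzzy matching at the byte level; as a toy illustration of the core alignment idea, you can fingerprint two versions of the same episode with a coarse energy envelope and cross-correlate them to find the offset. Everything below is a simplification under that assumption, not their algorithm.

```python
# Toy illustration of aligning two versions of the same episode (e.g. with
# different ads stitched in). Snipd's real matcher is fuzzy and byte-level;
# this only shows the shape of the problem.
import numpy as np

def energy_envelope(samples: np.ndarray, frame: int = 1024) -> np.ndarray:
    """Coarse fingerprint: RMS energy per fixed-size frame."""
    n = len(samples) // frame
    frames = samples[: n * frame].astype(np.float64).reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def find_offset(reference: np.ndarray, heard: np.ndarray) -> int:
    """Frame offset of `heard` within the longer `reference`, via cross-correlation."""
    ref = energy_envelope(reference)
    qry = energy_envelope(heard)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)  # normalize so loudness cancels out
    qry = (qry - qry.mean()) / (qry.std() + 1e-9)
    corr = np.correlate(ref, qry, mode="valid")    # assumes len(ref) >= len(qry)
    return int(corr.argmax())  # multiply by frame / sample_rate to get seconds
```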
So first we do the, we do the transcription. We do the. The, the speaker diarization. So basically you identify speech blocks that belong to the same speaker. This is then all orchestrated within, within LLM to identify which speech speech block belongs to which speaker together with, you know, we identify, as I mentioned earlier, we identify the guest name and the bio. So all of that comes together with an LLM to actually then assign assigned speaker names to, to each block. Yeah. And then most of the rest of the, the pipeline we've now used, we've now migrated to LLM. So we use mainly OpenAI, Google models, so the Gemini models and the OpenAI models, and we use some Perplexity basically for those things where we need, where we need web search. Yeah. That's something I'm still hoping, especially OpenAI will also provide us an API. Oh, why? Well, basically for us as a consumer, the more providers there are.

swyx [00:34:07]: The more downtime.

Kevin [00:34:08]: The more competition and it will lead to better, better results. And, um, lower costs over time. I don't, I don't see Perplexity as expensive. If you use the web search, the price is like $5 per a thousand queries. Okay. Which is affordable. But, uh, if you compare that to just a normal LLM call, um, it's, it's, uh, much more expensive. Have you tried Exa? We've, uh, looked into it, but we haven't really tried it. Um, I mean, we, we started with Perplexity and, uh, it works, it works well. And if I remember. Correctly, Exa is also a bit more expensive.

swyx [00:34:45]: I don't know. I don't know. They seem to focus on the search thing as a search API, whereas Perplexity, maybe more consumer-y business that is higher, higher margin. Like I'll put it like Perplexity is trying to be a product, Exa is trying to be infrastructure. Yeah. So that, that'll be my distinction there. And then the other thing I will mention is Google has a search grounding feature. Yeah. Which you, which you might want. Yeah.

Kevin [00:35:07]: Yeah. We've, uh, we've also tried that out. Um, not as good. So we, we didn't, we didn't go into. Too much detail in like really comparing it, like quality wise, because we actually already had the Perplexity one and it, and it's, and it's working. Yeah. Um, I think also there, the price is actually higher than Perplexity. Yeah. Really? Yeah.

swyx [00:35:26]: Google should cut their prices.

Kevin [00:35:29]: Maybe it was the same price. I don't want to say something incorrect, but it wasn't cheaper. It wasn't like compelling. And then, then there was no reason to switch. So, I mean, maybe like in general, like for us, given that we do work with a lot of content, price is actually something that we do look at. Like for us, it's not just about taking the best model for every task, but it's really getting the best, like identifying what kind of intelligence level you need and then getting the best price for that to be able to really scale this and, and provide us, um, yeah, let our users use these features with as many podcasts as possible. Yeah.

swyx [00:36:03]: I wanted to double, double click on diarization. Yeah. Uh, it's something that I don't think people do very well. So you know, I'm, I'm a, I'm a Bee user. I don't have it right now. And, and they were supposed to speak, but they dropped out last minute. Um, but, uh, we've had them on the podcast before and it's not great yet.
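That name-assignment step (diarization gives anonymous labels like SPEAKER_00; an LLM maps them to the host and guests using the episode metadata) might look roughly like this. The prompt, model, and 30-block sample are assumptions, and as Kevin notes below, the real pipeline layers heuristics on top.

```python
# Hypothetical sketch of assigning names to diarized speech blocks.
# blocks: [{"speaker": "SPEAKER_00", "text": "..."}, ...]
import json
from openai import OpenAI

client = OpenAI()

def assign_speaker_names(blocks, host, guests):
    sample = json.dumps(blocks[:30])  # a prefix is usually enough to tell speakers apart
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": f"Host: {host}. Guests: {', '.join(guests)}. "
                        "Map each diarization label to one of these people. "
                        'Return JSON like {"SPEAKER_00": "name", ...}.'},
            {"role": "user", "content": sample},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```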
Do you use just pyannote, the default stuff, or do you find any tricks for diarization?

Kevin [00:36:27]: So we do use the, the open source packages, but we have tweaked it a bit here and there. For example, if you mentioned the Bee AI guys, I actually listened to the podcast episode, was super nice. Thank you. And when you started talking about speaker diarization, and I just have to think about, uh, I don't know.

Kevin [00:36:49]: Is it possible? I don't know. I don't know. F**k this. Yeah, no, I don't know.

Kevin [00:36:55]: Yeah. We are the best. This is a.

swyx [00:37:07]: I don't know. This is the best. I don't know. This is the best. Yeah. Yeah. Yeah. You're doing good.

Kevin [00:37:12]: So, so yeah. This is great. This is good. Yeah. No, so that of course helps us. Another thing that helps us is that we know certain structural aspects of the podcast. For example, how often does someone speak? Like if someone, like let's say there's a one hour episode and someone speaks for 30 seconds, that person is most probably not the guest and not the host. It's probably some ad, like some speaker from an ad. So we have like certain of these heuristics that we can use and we leverage to improve things. And in the past, we've also changed the clustering algorithm. So basically how a lot of the speaker diarization works is you basically create an embedding for the speech that's happening. And then you try to somehow cluster these embeddings and then find out this is all one speaker. This is all another speaker. And there we've also tweaked a couple of things where we again used heuristics that we could apply from knowing how podcasts function. And that's also actually why I was feeling so much with the Bee AI guys, because like all of these heuristics, like for them, it's probably almost impossible to use any heuristics because it can just be any situation, anything.

Kevin [00:38:34]: So that's one thing that we do. Yeah, another thing is that we actually combine it with LLM. So the transcript, LLMs and the speaker diarization, like bringing all of these together to recalibrate some of the switching points. Like when does the speaker stop? When does the next one start?

swyx [00:38:51]: The LLMs can add errors as well. You know, I wouldn't feel safe using them to be so precise.

Kevin [00:38:58]: I mean, at the end of the day, like also just to not give a wrong impression, like the speaker diarization is also not perfect that we're doing, right? I basically don't really notice it.

swyx [00:39:08]: Like I use it for search.

Kevin [00:39:09]: Yeah, it's not perfect yet, but it's gotten quite good. Like, especially if you compare, if you look at some of the, like if you take a latest episode and you compare it to an episode that came out a year ago, we've improved it quite a bit.

swyx [00:39:23]: Well, it's beautifully presented. Oh, I love that I can click on the transcript and it goes to the timestamp. So simple, but you know, it should exist. Yeah, I agree. I agree. So this, I'm loading a two hour episode of Detect Me Right Home, where there's a lot of different guests calling in and you've identified the guest name. And yeah, so these are all LLM based. Yeah, it's really nice.

Kevin [00:39:49]: Yeah, like the speaker names.

swyx [00:39:50]: I would say that, you know, obviously I'm a power user of all these tools. You have done a better job than Descript. Okay, wow. Descript is so much funding. They had OpenAI invested in them and they still suck. So I don't know, like, you know, keep going.
You're doing great. Yeah, thanks. Thanks.

Kevin [00:40:12]: I mean, I would, I would say that, especially for anyone listening who's interested in building a consumer app with AI, I think the, like, especially if your background is in AI and you love working with AI and doing all of that, I think the most important thing is just to keep reminding yourself of what's actually the job to be done here. Like, what does actually the consumer want? Like, for example, you now were just delighted by the ability to click on this word and it jumps there. Yeah. Like, this is not, this is not rocket science. This is, like, you don't have to be, like, I don't know, Andrej Karpathy to come up with that and build that, right? And I think that's, that's something that's super important to keep in mind.

swyx [00:40:52]: Yeah, yeah. Amazing. I mean, there's so many features, right? It's, it's so packed. There's quotes that you pick up. There's summarization. Oh, by the way, I'm going to use this as my official feature request. I want to customize what, how it's summarized. I want to, I want to have a custom prompt. Yeah. Because your summarization is good, but, you know, I have different preferences, right? Like, you know.

Kevin [00:41:14]: So one thing that you can already do today, I completely get your feature request. And I think it just.

swyx [00:41:18]: I'm sure people have asked it.

Kevin [00:41:19]: I mean, maybe just in general as a, as a, how I see the future, you know, like in the future, I think all, everything will be personalized. Yeah, yeah. Like, not, this is not specific to us. Yeah. And today we're still in a, in a phase where the cost of LLMs, at least if you're working with, like, such long context windows. As us, I mean, there's a lot of tokens in, if you take an entire podcast, so you still have to take that cost into consideration. So if for every single user, we regenerate it entirely, it gets expensive. But in the future, this, you know, cost will continue to go down and then it will just be personalized. So that being said, you can already today, if you go to the player screen. Okay. And open up the chat. Yeah. You can go to the, to the chat. Yes. And just ask for a summary in your style.

swyx [00:42:13]: Yeah. Okay. I mean, I, I listen to consume, you know? Yeah. Yeah. I, I've never really used this feature. I don't know. I think that's, that's me being a slow adopter. No, no. I mean, that's. It has, when does the conversation start? Okay.

Kevin [00:42:26]: I mean, you can just type anything. I think what you're, what you're describing, I mean, maybe that is also an interesting topic to talk about. Yes. Where, like, basically I told you, like, look, we have this chat. You can just ask for it. Yeah. And this is, this is how ChatGPT works today. But if you're building a consumer app, you have to move beyond the chat box. People do not want to always type out what they want. So your feature request was, even though theoretically it's already possible, what you are actually asking for is, hey, I just want to open up the app and it should just be there in a nicely formatted way. Beautiful way such that I can read it or consume it without any issues. Interesting. And I think that's in general where a lot of the, the opportunities lie currently in the market.
If you want to build a consumer app, taking the capability and the intelligence, but finding out what the actual user interface is the best way how a user can engage with this intelligence in a natural way.

swyx [00:43:24]: Is this something I've been thinking about as kind of like AI that's not in your face? Because right now, you know, we like to say like, oh, use Notion has Notion AI. And we have the little thing there. And there's, or like some other. Any other platform has like the sparkle magic wand emoji, like that's our AI feature. Use this. And it's like really in your face. A lot of people don't like it. You know, it should just kind of become invisible, kind of like an invisible AI.

Kevin [00:43:49]: 100%. I mean, the, the way I see it as AI is, is the electricity of, of the future. And like no one, like, like we don't talk about, I don't know, this, this microphone uses electricity, this phone, you don't think about it that way. It's just in there, right? It's not an electricity enabled product. No, it's just a product. Yeah. It will be the same with AI. I mean, now. It's still a, something that you use to market your product. I mean, we do, we do the same, right? Because it's still something that people realize, ah, they're doing something new, but at some point, no, it'll just be a podcast app and it will be normal that it has all of this AI in there.

swyx [00:44:24]: I noticed you do something interesting in your chat where you source the timestamps. Yeah. Is that part of this prompt? Is there a separate pipeline that adds sources?

Kevin [00:44:33]: This is, uh, actually part of the prompt. Um, so this is all prompt engineering, um, uh, you should be able to click on it. Yeah, I clicked on it. Um, this is all prompt engineering with how to provide the, the context, you know, we, because we provide all of the transcript, how to provide the context and then, yeah, I get them all to respond in a correct way with a certain format and then rendering that on the front end. This is one of the examples where I would say it's so easy to create like a quick demo of this. I mean, you can just go to ChatGPT, paste this thing in and say like, yeah, do this. Okay. Like 15 minutes and you're done. Yeah. But getting this to like then production level that it actually works 99% of the time. Okay. This is then where, where the difference lies. Yeah. So, um, for this specific feature, like we actually also have like countless regexes that they're just there to correct certain things that the LLM is doing because it doesn't always adhere to the format correctly. And then it looks super ugly on the front end. So yeah, we have certain regexes that correct that. And maybe you'd ask like, why don't you use an LLM for that? Because that's sort of the, again, the AI native way, like who uses regexes anymore. But with the chat for user experience, it's very important that you have the streaming because otherwise you need to wait so long until your message has arrived. So we're streaming live the, like, just like ChatGPT, right? You get the answer and it's streaming the text. So if you're streaming the text and something is like incorrect. It's currently not easy to just like pipe, like stream this into another stream, stream this into another stream and get the stream back, which corrects it, that would be amazing. I don't know, maybe you can answer that. Do you know of any?

swyx [00:46:19]: There's no API that does this. Yeah. Like you cannot stream in.
If you own the models, you can, uh, you know, whatever token sequence has, has been emitted, start loading that into the next one. If you fully own the models, uh, I don't, it's probably not worth it. That's what you do. It's better. Yeah. I think. Yeah. Most engineers who are new to AI research and benchmarking actually don't know how much regexing there is that goes on in normal benchmarks. It's just like this ugly list of like a hundred different, you know, matches for some criteria that you're looking for. No, it's very cool. I think it's, it's, it's an example of like real world engineering. Yeah. Do you have a tooling that you're proud of that you've developed for yourself?

Kevin [00:47:02]: Is it just a test script or is it, you know? I think it's a bit more, I guess the term that has come up is, uh, vibe coding, uh, vibe coding, some, no, sorry, that's actually something else in this case, but, uh, no, no, yes, um, vibe evals was a term that in one of the talks actually on, on, um, I think it might've been the first, the first or the first day at the conference, someone brought that up. Yeah. Uh, because yeah, a lot of the talks were about evals, right. Which is so important. And yeah, I think for us, it's a bit more vibe evals, you know, that's also part of, you know, being a startup, we can take risks, like we can take the cost of maybe sometimes it failing a little bit or being a little bit off and our users know that and they appreciate that in return, like we're moving fast and iterating and building, building amazing things, but you know, a Spotify or something like that, half of our features will probably be in a six month review through legal or I don't know what, uh, before they could roll them out.

swyx [00:48:04]: Let's just say Spotify is not very good at podcasting. Um, I have a documented, uh, dislike for, for their podcast features, just overall, really, really well integrated any other like sort of LLM focused engineering challenges or problems that you, that you want to highlight.

Kevin [00:48:20]: I think it's not unique to us, but it goes again in the direction of handling the uncertainty of LLMs. So for example, with last year, at the end of the year, we did sort of a Snipd Wrapped. And one of the things we thought it would be fun to, just to do something with, uh, with an LLM and something with the snips that, that a user has. And, uh, three, let's say unique LLM features were that we assigned a personality to you based on the, the snips that, that you have. It was, I mean, it was just all, I guess, a bit of a fun, playful way. I'm going to look up mine. I forgot mine already.

swyx [00:48:57]: Um, yeah, I don't know whether it's actually still in the, in the, we all took screenshots of it.

Kevin [00:49:01]: Ah, we posted it in the, in the discord. And the, the second one, it was, uh, we had a learning scorecard where we identified the topics that you snipped on the most, and you got like a little score for that. And the third one was a, a quote that stood out. And the quote is actually a very good example of where we would run that for user. And most of the time it was an interesting quote, but every now and then it was like a super boring quotes that you think like, like how, like, why did you select that? Like, come on for there. The solution was actually just to say, Hey, give me five.
So it extracted five quotes as a candidate, and then we piped it into a different model as a judge, LLM as a judge, and there we use a, um, a much better model because with the, the initial model, again, as, as I mentioned also earlier, we do have to look at the, like the, the costs because it's like, we have so much text that goes into it. So we, there we use a bit more cheaper model, but then the judge can be like a really good model to then just choose one out of five. This is a practical example.

swyx [00:50:03]: I can't find it. Bad search in discord. Yeah. Um, so, so you do recommend having a much smarter model as a judge, uh, and that works for you. Yeah. Yeah. Interesting. I think this year I'm very interested in LLM as a judge being more developed as a concept, I think for things like, you know, Snipd Wrapped, like it's, it's fine. Like, you know, it's, it's, it's, it's entertaining. There's no right answer.

Kevin [00:50:29]: I mean, we also have it. Um, we also use the same concept for our books feature where we identify the, the mentioned books. Yeah. Because there it's the same thing, like 90% of the time it, it works perfectly out of the box one shot and every now and then it just, uh, starts identifying books that were not really mentioned or that are not books or made, yeah, starting to make up books. And, uh, they are basically, we have the same thing of like another LLM challenging it. Um, yeah. And actually with the speakers, we do the same now that I think about it. Yeah. Um, so I'm, I think it's a, it's a great technique. Interesting.

swyx [00:51:05]: You run a lot of calls.

Kevin [00:51:07]: Yeah.

swyx [00:51:08]: Okay. You know, you mentioned costs. You move from self hosting a lot of models to the, to the, you know, big lab models, OpenAI, uh, and Google, uh, no Anthropic.

Kevin [00:51:18]: Um, no, we love Claude. Like in my opinion, Claude is the, the best one when it comes to the way it formulates things. The personality. Yeah. The personality. Okay. I actually really love it. But yeah, the cost is. It's still high.

swyx [00:51:36]: So you cannot, you tried Haiku, but you're, you're like, you have to have Sonnet.

Kevin [00:51:40]: Uh, like basically we like with Haiku, we haven't experimented too much. We obviously work a lot with 3.5 Sonnet. Uh, also, you know, coding. Yeah. For coding, like in Cursor, just in general, also brainstorming. We use it a lot. Um, I think it's a great brainstorm partner, but yeah, with, uh, with, with a lot of things that we've done done, we opted for different models.

swyx [00:52:00]: What I'm trying to drive at is how much cheaper can you get if you go from closed models to open models. And maybe it's like 0% cheaper, maybe it's 5% cheaper, or maybe it's like 50% cheaper. Do you have a sense?

Kevin [00:52:13]: It's very difficult to, to judge that. I don't really have a sense, but I can, I can give you a couple of thoughts that have gone through our minds over the time, because obviously we do realize like, given that we, we have a couple of tasks where there are just so many tokens going in, um, at some point it will make sense to, to offload some of that. Uh, to an open source model, but going back to like, we're, we're a startup, right? Like we're not an AI lab or whatever, like for us, actually the most important thing is to iterate fast because we need to learn from our users, improve that. And yeah, just this velocity of this, these iterations.
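The extract-then-judge pattern above is simple to sketch: a cheap model reads the long transcript and proposes candidates, and a stronger judge, which only ever sees the short candidate list, picks the winner. The model names below are assumptions.

```python
# Sketch of the extract-then-judge pattern described above. The expensive
# judge only sees five short quotes, so the second call stays cheap.
from openai import OpenAI

client = OpenAI()

def best_quote(transcript: str) -> str:
    candidates = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: cheap model handles the long input
        messages=[{"role": "user",
                   "content": "Extract the 5 most striking short quotes from this "
                              f"podcast transcript, one per line:\n\n{transcript}"}],
    ).choices[0].message.content
    return client.chat.completions.create(
        model="gpt-4o",  # assumption: stronger model acts as the judge
        messages=[{"role": "user",
                   "content": "Pick the single most interesting, self-contained quote "
                              f"from these candidates and return it verbatim:\n\n{candidates}"}],
    ).choices[0].message.content
```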
And for that, the closed models hosted by OpenAI, Google and Anthropic are just unbeatable because you just, it's just an API call. Yeah. Um, so you don't need to worry about. Yeah. So much complexity behind that. So this is, I would say the biggest reason why we're not doing more in this space, but there are other thoughts, uh, also for the future. Like I see two different, like we basically have two different usage patterns of LLMs where one is this, this pre-processing of a podcast episode, like this initial processing, like the transcription, speaker diarization, chapterization. We do that once. And this, this usage pattern it's, it's quite predictable. Because we know how many podcasts get released when, um, so we can sort of have a certain capacity and we can, we, we're running that 24/7, it's one big queue running 24/7.

swyx [00:53:44]: What's the queue job runner? Uh, is it a Django, just like the Python one?

Kevin [00:53:49]: No, that, that's just our own, like our database and the backend talking to the database, picking up jobs, finding it back. I'm just curious in orchestration and queues. I mean, we, we of course have like, uh, a lot of other orchestration where we're, we're, where we use, uh, the Google Pub/Sub, uh, thing, but okay. So we have this, this, this usage pattern of like very predictable, uh, usage, and we can max out the, the usage. And then there's this other pattern where it's, for example, the snippet where it's like a user, it's a user action that triggers an LLM call and it has to be real time. And there can be moments where it's high usage and there can be moments when there's very little usage for that. There. So that's, that's basically where these LLM API calls are just perfect because you don't need to worry about scaling this up, scaling this down, um, handling, handling these issues. Serverless versus serverful.

swyx [00:54:44]: Yeah, exactly. Okay.

Kevin [00:54:45]: Like I see them a bit, like I see OpenAI and all of these other providers, I see them a bit as the, like as the Amazon, sorry, AWS of, of AI. So it's a bit similar how like back before AWS, you would have to have your, your servers and buy new servers or get rid of servers. And then with AWS, it just became so much easier to just ramp stuff up and down. Yeah. And this is like the taking it even, even, uh, to the next level for AI. Yeah.

swyx [00:55:18]: I am a big believer in this. Basically it's, you know, intelligence on demand. Yeah. We're probably not using it enough in our daily lives to do things. I should, we should be able to spin up a hundred things at once and go through things and then, you know, stop. And I feel like we're still trying to figure out how to use LLMs in our lives effectively. Yeah. Yeah.

Kevin [00:55:38]: 100%. I think that goes back to the whole, like that, that's for me where the big opportunity is for, if you want to do a startup, um, it's not about, but you can let the big labs handle

swyx [00:55:48]: the challenge of more intelligence, but, um, it's the... Existing intelligence. How do you integrate? How do you actually incorporate it into your life? AI engineering. Okay, cool. Cool. Cool. Cool. Um, the one, one other thing I wanted to touch on was multimodality in frontier models. Dwarkesh had an interesting application of Gemini recently where he just fed raw audio in and got diarized transcription out or timestamps out. And I think that will come.
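The "database as queue" pattern Kevin mentions for the predictable 24/7 preprocessing load is a well-known one. A minimal Postgres version is sketched below; the table and column names are invented, and SKIP LOCKED is the standard trick that lets many workers poll one table without claiming the same job twice.

```python
# Minimal sketch of a database-backed job queue: workers poll a jobs table
# and claim work atomically. Schema and the Postgres choice are assumptions.
import time
import psycopg2

def run_worker(dsn: str):
    conn = psycopg2.connect(dsn)
    while True:
        with conn, conn.cursor() as cur:  # `with conn` commits on success
            cur.execute("""
                UPDATE jobs SET status = 'running'
                WHERE id = (
                    SELECT id FROM jobs WHERE status = 'pending'
                    ORDER BY created_at
                    FOR UPDATE SKIP LOCKED
                    LIMIT 1
                )
                RETURNING id, episode_url;
            """)
            row = cur.fetchone()
        if row is None:
            time.sleep(5)  # queue is empty; back off before polling again
            continue
        job_id, episode_url = row
        process_episode(episode_url)  # transcribe, diarize, chapterize, ...
        with conn, conn.cursor() as cur:
            cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))

def process_episode(url: str):
    ...  # placeholder for the preprocessing pipeline
```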
So basically what we're saying here is another wave of transformers eating things because right now models are pretty much single modality things. You know, you have Whisper, you have a pipeline and everything. Yeah. You can't just say, Oh, no, no, no, we only feed like the raw, the raw files. Do you think that will be realistic for you? I 100% agree. Okay.

Kevin [00:56:38]: Basically everything that we talked about earlier with like the speaker diarization and heuristics and everything, I completely agree. Like in the, in the future that would just be put everything into a big multimodal LLM. Okay. And it will output, uh, everything that you want. Yeah. So I've also experimented with that. Like just... With, with Gemini 2? With Gemini 2.0 Flash. Yeah. Just for fun. Yeah. Yeah. Because the big difference right now is still like the cost difference of doing speaker diarization this way or doing transcription this way is a huge difference to the pipeline that we've built up. Huh. Okay.

swyx [00:57:15]: I need to figure out what, what that cost is because in my mind 2.0 Flash is so cheap. Yeah. But maybe not cheap enough for you.

Kevin [00:57:23]: Uh, no, I mean, if you compare it to, yeah, Whisper and speaker diarization and especially self-hosting it and... Yeah. Yeah. Yeah.

swyx [00:57:30]: Yeah.

Kevin [00:57:30]: Okay. But we will get there, right? Like this is just a question of time.

swyx [00:57:33]: And, um, at some point, as soon as that happens, we'll be the first ones to switch. Yeah. Awesome. Anything else that you're like sort of eyeing on the horizon as like, we are thinking about this feature, we're thinking about incorporating this new functionality of AI into our, into our app? Yeah.

Kevin [00:57:50]: I mean, we, there's so many areas that we're thinking about, like our challenge is a bit more... Choosing. Yeah. Choosing. Yeah. So, I mean, I think for me, like looking into like the next couple of years, like the big areas that interest us a lot, basically four areas, like one is content. Um, right now it's, it's podcasts. I mean, you did mention, I think you mentioned like you can also upload audio books and YouTube videos. YouTube. I actually use the YouTube one a fair amount. But in the future, we, we want to also have audio books natively in the app. And, uh, we want to enable AI generated content. Like just think of, take Deep Research and NotebookLM. Like put these together. That should be, that should be in our app. The second area is discovery. I think in general. Yeah.

swyx [00:58:38]: I noticed that you don't have, so you
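For anyone wanting to reproduce the Gemini 2.0 Flash experiment Kevin mentions (raw audio in, diarized transcript out), the google-generativeai SDK makes the call itself short. The prompt and output format here are assumptions; cost and quality versus a dedicated pipeline are exactly what you would be evaluating.

```python
# Sketch of the raw-audio-in, diarized-transcript-out experiment, using the
# google-generativeai SDK. Prompt and output format are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")

audio = genai.upload_file("episode.mp3")            # File API handles large audio
model = genai.GenerativeModel("gemini-2.0-flash")
resp = model.generate_content([
    audio,
    "Transcribe this podcast with speaker diarization. "
    "Format each utterance as: [mm:ss] SPEAKER_NAME: text",
])
print(resp.text)
```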

The New Quantum Era
Quantum imaginary time evolution with Zoe Holmes

The New Quantum Era

Play Episode Listen Later Mar 6, 2025 35:02 Transcription Available


Professor Zoe Holmes from EPFL in Lausanne, Switzerland, discusses her work on quantum imaginary time evolution and variational techniques for near-term quantum computers. With a background from Imperial College London and Oxford, Holmes explores the limits of what can be achieved with NISQ (Noisy Intermediate-Scale Quantum) devices. Key topics covered:
Quantum Imaginary Time Evolution (QITE) as a cooling-inspired algorithm for finding ground states (see the formula sketched below)
Comparison of QITE to Variational Quantum Eigensolver (VQE) approaches
Challenges in variational methods, including barren plateaus and expressivity concerns
Trade-offs between circuit depth, fidelity, and practical implementation on current hardware
Potential for scientific value from NISQ-era devices in physics and chemistry applications
The interplay between classical and quantum methods in advancing our understanding of quantum systems
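As a hedged aside for readers new to the idea, the "cooling" intuition behind imaginary time evolution can be stated in one standard textbook formula (not specific to Holmes's algorithms): replacing real time with imaginary time τ damps every excited state exponentially faster than the ground state.

```latex
|\psi(\tau)\rangle
  = \frac{e^{-\hat H \tau}\,|\psi(0)\rangle}
         {\bigl\lVert e^{-\hat H \tau}\,|\psi(0)\rangle \bigr\rVert},
\qquad
e^{-\hat H \tau}\,|\psi(0)\rangle
  = \sum_n c_n\, e^{-E_n \tau}\, |E_n\rangle .
```

For large τ the n = 0 term dominates, so the normalized state approaches the ground state |E_0⟩ whenever the initial overlap c_0 is nonzero. Because e^{-Ĥτ} is not unitary, QITE-style algorithms must approximate it with unitary circuits, which is where the circuit-depth and variational trade-offs discussed in the episode come in.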

45 Graus
Hugo Penedones: How AI is revolutionizing science (and the incredible story of AlphaFold)

45 Graus

Play Episode Listen Later Mar 5, 2025 98:10


Hugo Penedones holds a degree in Informatics and Computing Engineering from the University of Porto and is co-founder and currently CTO of Inductiva.AI, an Artificial Intelligence company for science and engineering. Previously he was at Google DeepMind, where he was a founding member of the AlphaFold project, a protein structure prediction algorithm that went on to revolutionize science in this field and led to the 2024 Nobel Prize in Chemistry being awarded to Demis Hassabis and John M. Jumper (David Baker was the third Nobel laureate). Over his career he has worked in several areas, including computer vision, web search, bioinformatics and reinforcement learning, at research institutions such as Idiap and EPFL in Switzerland. _______________ Index: (0:00) Start (3:30) Ad (3:54) AI applied to science | The AlphaFold project (Google DeepMind) | Paper co-authored by the guest (14:01) AlphaFold vs LLMs (e.g. ChatGPT) | AlphaGo (22:20) How AlphaFold began with a hackathon by Hugo and two colleagues | Demis Hassabis (CEO of DeepMind) (28:31) Other applications of AI in science: nuclear fusion, weather forecasting (41:14) AI in materials engineering: discovering new materials and the potential of superconductors (46:35) AI as scientist: could AI formulate scientific hypotheses in the future? | Mathematics | P vs NP (57:10) Are machine learning models black boxes? (1:03:12) Inductiva, the guest's startup dedicated to numerical simulations with machine learning (1:13:47) The promise of quantum computing (1:16:03) Challenges of data quality in AI-driven science | Will we be able to simulate a cell? (1:24:44) What progress can we expect from AI in science over the next 10 years? | AlphaCell ______________ This conversation was edited by: João Ribeiro. See omnystudio.com/listener for privacy information.

Tribu - La 1ere
Why men choose the music at home

Tribu - La 1ere

Play Episode Listen Later Feb 24, 2025 26:14


Guest: Fiona Del Puppo. Domestic space is often described as the woman's domain. Yet when it comes to listening to music in the home, it is very often the man who imposes his choices. How can this masculine sonic ascendancy be explained? Tribu welcomes Fiona Del Puppo, co-author with Garance Clément of a study entitled "Un pouvoir domestique masculin. L'écoute de musique comme instrument de la domination genrée au sein du foyer" ("A masculine domestic power: music listening as an instrument of gendered domination within the household"). Both are affiliated with the Urban Sociology Laboratory at EPFL.

Vacarme - La 1ere
Les Échos de Vacarme - The car still holds the road

Vacarme - La 1ere

Play Episode Listen Later Feb 23, 2025 56:15


The car remains by far the most widely used means of transport in Switzerland: we get behind the wheel for an average of 9,400 km each year. The statistics could go on: three quarters of our shopping trips are under 5 kilometres; the size of our cars keeps growing, yet more than one household in five in Switzerland doesn't own one; the occupancy rate for commuting is only 1.14 people per vehicle. How can this attachment be explained? What about the promise of a future car that is fully electric and autonomous? Are we all equal before it? Can we collectively move away from this mode of transport, and is that desirable? Production: Raphaële Bouchet. Réalisation: Didier Rossat. Guests: Tiphaine Robert, historian, substitute senior lecturer at the Faculty of Social and Political Sciences, UNIL, and Prof. Vincent Kaufmann, sociologist and director of LaSUR (Urban Sociology Laboratory), EPFL.

Kontext
Kultur-Talk: The media and artificial intelligence

Kontext

Play Episode Listen Later Feb 14, 2025 28:04


Artificial intelligence has long since arrived in everyday life, for instance whenever we type a term into a search engine. What does machines' ability to learn mean for the media? How do they use artificial intelligence today? What dangers and opportunities does AI offer in everyday editorial work? Media companies are considering how AI can be used in newsrooms in an ethically sound way: as a tool. But control and responsibility should always remain with humans. What can AI do? What can it not do? What does AI mean for media users? The discussion features computer scientist Sabine Süsstrunk, professor in the School of Computer and Communication Sciences at EPFL in Lausanne and member of the SRG board of directors, and Alexandra Stark, who advises CH Media on AI, among other roles, and is a member of the Federal Media Commission.

Vertigo - La 1ere
"Formes: Les motifs dans l'art et la science", une exposition à voir à lʹEPFL

Vertigo - La 1ere

Play Episode Listen Later Feb 11, 2025 6:36


"Shapes: Patterns in Art and Science" qui veut dire "Formes: Les motifs dans l'art et la science" est une exposition qui veut montrer les géométries qui nous entourent par lʹart et la science. A voir en ce moment au Pavillon A à la place Cosandey dans le campus de lʹEPFL jusquʹau 9 mars 2025. Hugo Parlier professeur de mathématiques et curateur est au micro de Layla Shlonsky.

Six heures - Neuf heures, le samedi - La 1ere
What shall we do this weekend? – Musica ex Machina at the EPFL Pavilion

Six heures - Neuf heures, le samedi - La 1ere

Play Episode Listen Later Feb 8, 2025 8:27


Laurent Dormond takes us to the EPFL Pavilion to discover Musica ex Machina, an exhibition exploring the history of computational and algorithmic thinking in music. It shows how technological advances and human creativity keep redrawing the contours of musical expression.

Monumental - La 1ere
Colossal housing

Monumental - La 1ere

Play Episode Listen Later Jan 26, 2025 56:04


For a long time, housing was a practical, almost banal matter. Then architects began drawing inspiration from monuments to create another type of dwelling. The Circus in Bath, the Espaces d'Abraxas and the Cité Radieuse in Marseille are cases in point. To discuss this, Johanne Dussez welcomes Bruno Marchand, architect and professor emeritus at EPFL.

Six heures - Neuf heures, le samedi - La 1ere
The guest - Olivia Csiky Trnka, stage director and playwright

Six heures - Neuf heures, le samedi - La 1ere

Play Episode Listen Later Jan 25, 2025 21:20


In her latest creation, "Cœur Colère", Olivia Csiky Trnka explores our relationship to emotions and to systems of oppression. She draws a parallel between female anger and a nuclear reactor. On stage, three women embody different facets of anger, opening the way to a new world. To feed this work, the director surrounded herself with scientists from UNIL and EPFL, drawing on her visits to a nuclear power plant and on how laboratories operate. Karine Vasarino spoke with Laurence Kaufmann, a social-science researcher and specialist in collective emotions, who closely followed Olivia Csiky Trnka's work.

Tribu - La 1ere
Switzerland's mid-sized towns

Tribu - La 1ere

Play Episode Listen Later Jan 7, 2025 26:45


Guest: Maxime Felder. When Swiss cities come up, it is usually the largest that get mentioned: Geneva, Lausanne, Basel, Zurich, Bern. Yet our country has a great many mid-sized urban municipalities that research has largely overlooked: towns like Bienne, La Chaux-de-Fonds, Martigny, Schaffhouse or Thoune. A research team from the Urban Sociology Laboratory (LASUR) at EPFL studied twelve mid-sized Swiss towns and turned the work into a book: "La Suisse de A(rbon) à Z(oug). Portrait en 12 villes" (EPFL Press). Tribu welcomes Maxime Felder, sociologist and researcher at LASUR, who co-edited the book with Renate Albrecher, Vincent Kaufmann and Yves Pedrazzini.

New Books Network
Charles Foster, "Being a Human: Adventures in Forty Thousand Years of Consciousness" (Metropolitan Books, 2021)

New Books Network

Play Episode Listen Later Dec 29, 2024 62:18


How did humans come to be who we are? In his marvelous, eccentric, and widely lauded book Being a Beast, legal scholar, veterinary surgeon, and naturalist extraordinaire Charles Foster set out to understand the consciousness of animal species by living as a badger, otter, fox, deer, and swift. Now, he inhabits three crucial periods of human development to understand the consciousness of perhaps the strangest animal of all—the human being. To experience the Upper Paleolithic era—a turning point when humans became behaviorally modern, painting caves and telling stories, Foster learns what it feels like to be a Cro-Magnon hunter-gatherer by living in makeshift shelters without amenities in the rural woods of England. He tests his five impoverished senses to forage for berries and roadkill and he undertakes shamanic journeys to explore the connection of wakeful dreaming to religion. For the Neolithic period, when humans stayed in one place and domesticated plants and animals, forever altering our connection to the natural world, he moves to a reconstructed Neolithic settlement. Finally, to explore the Enlightenment—the age of reason and the end of the soul—Foster inspects Oxford colleges, dissecting rooms, cafes, and art galleries. He finds his world and himself bizarre and disembodied, and he rues the atrophy of our senses, the cause for much of what ails us. Drawing on psychology, neuroscience, natural history, agriculture, medical law and ethics, Being a Human: Adventures in Forty Thousand Years of Consciousness (Metropolitan Books, 2021) is one man's audacious attempt to feel a connection with 45,000 years of human history. This glorious, fiercely imaginative journey from our origins to a possible future ultimately shows how we might best live on earth—and thrive. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

New Books in History
Charles Foster, "Being a Human: Adventures in Forty Thousand Years of Consciousness" (Metropolitan Books, 2021)

New Books in History

Play Episode Listen Later Dec 29, 2024 62:18


How did humans come to be who we are? In his marvelous, eccentric, and widely lauded book Being a Beast, legal scholar, veterinary surgeon, and naturalist extraordinaire Charles Foster set out to understand the consciousness of animal species by living as a badger, otter, fox, deer, and swift. Now, he inhabits three crucial periods of human development to understand the consciousness of perhaps the strangest animal of all—the human being. To experience the Upper Paleolithic era—a turning point when humans became behaviorally modern, painting caves and telling stories, Foster learns what it feels like to be a Cro-Magnon hunter-gatherer by living in makeshift shelters without amenities in the rural woods of England. He tests his five impoverished senses to forage for berries and roadkill and he undertakes shamanic journeys to explore the connection of wakeful dreaming to religion. For the Neolithic period, when humans stayed in one place and domesticated plants and animals, forever altering our connection to the natural world, he moves to a reconstructed Neolithic settlement. Finally, to explore the Enlightenment—the age of reason and the end of the soul—Foster inspects Oxford colleges, dissecting rooms, cafes, and art galleries. He finds his world and himself bizarre and disembodied, and he rues the atrophy of our senses, the cause for much of what ails us. Drawing on psychology, neuroscience, natural history, agriculture, medical law and ethics, Being a Human: Adventures in Forty Thousand Years of Consciousness (Metropolitan Books, 2021) is one man's audacious attempt to feel a connection with 45,000 years of human history. This glorious, fiercely imaginative journey from our origins to a possible future ultimately shows how we might best live on earth—and thrive. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/history

New Books in Psychology
Charles Foster, "Being a Human: Adventures in Forty Thousand Years of Consciousness" (Metropolitan Books, 2021)

New Books in Psychology

Play Episode Listen Later Dec 29, 2024 62:18


How did humans come to be who we are? In his marvelous, eccentric, and widely lauded book Being a Beast, legal scholar, veterinary surgeon, and naturalist extraordinaire Charles Foster set out to understand the consciousness of animal species by living as a badger, otter, fox, deer, and swift. Now, he inhabits three crucial periods of human development to understand the consciousness of perhaps the strangest animal of all—the human being. To experience the Upper Paleolithic era—a turning point when humans became behaviorally modern, painting caves and telling stories, Foster learns what it feels like to be a Cro-Magnon hunter-gatherer by living in makeshift shelters without amenities in the rural woods of England. He tests his five impoverished senses to forage for berries and roadkill and he undertakes shamanic journeys to explore the connection of wakeful dreaming to religion. For the Neolithic period, when humans stayed in one place and domesticated plants and animals, forever altering our connection to the natural world, he moves to a reconstructed Neolithic settlement. Finally, to explore the Enlightenment—the age of reason and the end of the soul—Foster inspects Oxford colleges, dissecting rooms, cafes, and art galleries. He finds his world and himself bizarre and disembodied, and he rues the atrophy of our senses, the cause for much of what ails us. Drawing on psychology, neuroscience, natural history, agriculture, medical law and ethics, Being a Human: Adventures in Forty Thousand Years of Consciousness (Metropolitan Books, 2021) is one man's audacious attempt to feel a connection with 45,000 years of human history. This glorious, fiercely imaginative journey from our origins to a possible future ultimately shows how we might best live on earth—and thrive. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/psychology

New Books in Science, Technology, and Society
Charles Foster, "Being a Human: Adventures in Forty Thousand Years of Consciousness" (Metropolitan Books, 2021)

New Books in Science, Technology, and Society

Play Episode Listen Later Dec 29, 2024 62:18


How did humans come to be who we are? In his marvelous, eccentric, and widely lauded book Being a Beast, legal scholar, veterinary surgeon, and naturalist extraordinaire Charles Foster set out to understand the consciousness of animal species by living as a badger, otter, fox, deer, and swift. Now, he inhabits three crucial periods of human development to understand the consciousness of perhaps the strangest animal of all—the human being. To experience the Upper Paleolithic era—a turning point when humans became behaviorally modern, painting caves and telling stories, Foster learns what it feels like to be a Cro-Magnon hunter-gatherer by living in makeshift shelters without amenities in the rural woods of England. He tests his five impoverished senses to forage for berries and roadkill and he undertakes shamanic journeys to explore the connection of wakeful dreaming to religion. For the Neolithic period, when humans stayed in one place and domesticated plants and animals, forever altering our connection to the natural world, he moves to a reconstructed Neolithic settlement. Finally, to explore the Enlightenment—the age of reason and the end of the soul—Foster inspects Oxford colleges, dissecting rooms, cafes, and art galleries. He finds his world and himself bizarre and disembodied, and he rues the atrophy of our senses, the cause for much of what ails us. Drawing on psychology, neuroscience, natural history, agriculture, medical law and ethics, Being a Human: Adventures in Forty Thousand Years of Consciousness (Metropolitan Books, 2021) is one man's audacious attempt to feel a connection with 45,000 years of human history. This glorious, fiercely imaginative journey from our origins to a possible future ultimately shows how we might best live on earth—and thrive. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/science-technology-and-society

New Books in Neuroscience
Charles Foster, "Being a Human: Adventures in Forty Thousand Years of Consciousness" (Metropolitan Books, 2021)

New Books in Neuroscience

Play Episode Listen Later Dec 29, 2024 62:18


How did humans come to be who we are? In his marvelous, eccentric, and widely lauded book Being a Beast, legal scholar, veterinary surgeon, and naturalist extraordinaire Charles Foster set out to understand the consciousness of animal species by living as a badger, otter, fox, deer, and swift. Now, he inhabits three crucial periods of human development to understand the consciousness of perhaps the strangest animal of all—the human being. To experience the Upper Paleolithic era—a turning point when humans became behaviorally modern, painting caves and telling stories, Foster learns what it feels like to be a Cro-Magnon hunter-gatherer by living in makeshift shelters without amenities in the rural woods of England. He tests his five impoverished senses to forage for berries and roadkill and he undertakes shamanic journeys to explore the connection of wakeful dreaming to religion. For the Neolithic period, when humans stayed in one place and domesticated plants and animals, forever altering our connection to the natural world, he moves to a reconstructed Neolithic settlement. Finally, to explore the Enlightenment—the age of reason and the end of the soul—Foster inspects Oxford colleges, dissecting rooms, cafes, and art galleries. He finds his world and himself bizarre and disembodied, and he rues the atrophy of our senses, the cause for much of what ails us. Drawing on psychology, neuroscience, natural history, agriculture, medical law and ethics, Being a Human: Adventures in Forty Thousand Years of Consciousness (Metropolitan Books, 2021) is one man's audacious attempt to feel a connection with 45,000 years of human history. This glorious, fiercely imaginative journey from our origins to a possible future ultimately shows how we might best live on earth—and thrive. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/neuroscience

New Books in Biology and Evolution
Charles Foster, "Being a Human: Adventures in Forty Thousand Years of Consciousness" (Metropolitan Books, 2021)

New Books in Biology and Evolution

Play Episode Listen Later Dec 29, 2024 62:18


How did humans come to be who we are? In his marvelous, eccentric, and widely lauded book Being a Beast, legal scholar, veterinary surgeon, and naturalist extraordinaire Charles Foster set out to understand the consciousness of animal species by living as a badger, otter, fox, deer, and swift. Now, he inhabits three crucial periods of human development to understand the consciousness of perhaps the strangest animal of all—the human being. To experience the Upper Paleolithic era—a turning point when humans became behaviorally modern, painting caves and telling stories, Foster learns what it feels like to be a Cro-Magnon hunter-gatherer by living in makeshift shelters without amenities in the rural woods of England. He tests his five impoverished senses to forage for berries and roadkill and he undertakes shamanic journeys to explore the connection of wakeful dreaming to religion. For the Neolithic period, when humans stayed in one place and domesticated plants and animals, forever altering our connection to the natural world, he moves to a reconstructed Neolithic settlement. Finally, to explore the Enlightenment—the age of reason and the end of the soul—Foster inspects Oxford colleges, dissecting rooms, cafes, and art galleries. He finds his world and himself bizarre and disembodied, and he rues the atrophy of our senses, the cause for much of what ails us. Drawing on psychology, neuroscience, natural history, agriculture, medical law and ethics, Being a Human: Adventures in Forty Thousand Years of Consciousness (Metropolitan Books, 2021) is one man's audacious attempt to feel a connection with 45,000 years of human history. This glorious, fiercely imaginative journey from our origins to a possible future ultimately shows how we might best live on earth—and thrive. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices

Swisspreneur Show
EP #463 - Frédéric Loizeau: How Photonic Integrated Circuits Will Supercharge AI

Swisspreneur Show

Play Episode Listen Later Dec 22, 2024 34:31


Timestamps: 4:35 - Similarities between entrepreneurship and academia 9:02 - What are photonic integrated circuits? 16:07 - Approaching customers from different industries 17:09 - Why Lightium is still in beta stage 18:51 - What business model do they plan on adopting? This episode was produced in collaboration with startup days, taking place next year on May 14th 2025. Click here to purchase your ticket. About Frédéric Loizeau: Frédéric Loizeau is the co-founder and CRO of Lightium, a Swiss startup enabling the next generation of photonic integrated circuits. He holds a PhD in Microsystems from EPFL and worked as a researcher at Stanford, as a key account manager at Sensirion, and as Business & Technology Development Manager at CSEM before starting Lightium in 2023. Lightium provides production-grade TFLN PIC foundry services to customers in the datacom, telecom, AI, quantum, and aerospace industries. What does this mean? It means they manufacture photonic integrated circuits, which are essential to making the transfer of information faster. This is relevant for fiber optic internet in the telecom industry, but also quite relevant for space satellites. Lastly, it's crucial for AI: the recent developments in this field have increased our need not only for computing power (which NVIDIA addressed by providing its GPUs to data centers), but also for greater bandwidth — that's where Lightium's tech comes in. They frame their technology as a service, through which people from different industries can design their own photonic integrated circuits that will then be manufactured by the Lightium team. Lightium has had great success since the very beginning: when they were still working on their idea at the CSEM research center and tentatively reaching out to customers asking for letters of recommendation, they got cheques instead. Nowadays their success continues: they recently raised a CHF 7M seed round. The cover portrait was edited by www.smartportrait.io. Don't forget to give us a follow on Twitter, Instagram, Facebook and Linkedin, so you can always stay up to date with our latest initiatives. That way, there's no excuse for missing out on live shows, weekly giveaways or founders' dinners.

BlockHash: Exploring the Blockchain
Ep. 464 Vincent Gramoli | Compliant Asset Tokenization with Redbelly Network

BlockHash: Exploring the Blockchain

Play Episode Listen Later Dec 9, 2024 41:23


For episode 464, Founder & CTO Vincent Gramoli joins Brandon Zemp to discuss the Redbelly Network, which enables asset issuers to tokenize and trade compliant on-chain structured products. Vincent Gramoli has chaired the Cybersecurity Committee for the Computing Research and Education Association of Australasia (CORE) and the Blockchain Technical Committee for the Australian Computer Society. He received the Digital National Facilities & Collections Award from CSIRO, the Best Paper Awards at ICDCS'21, IPDPS'22, ICDCS'22 and DSN'24 for his research on blockchains, the Education Leader of the Year Award from Blockchain Australia, and the Future Fellowship from the Australian Research Council. In the past, Gramoli has been affiliated with INRIA, Cornell, Data61 and EPFL. ⏳ Timestamps: 0:00 | Introduction 1:00 | Who is Vincent Gramoli? 4:52 | What is the Redbelly Network? 9:41 | Redbelly Network global partnerships 11:15 | Process of Tokenization 15:06 | Real world assets on Redbelly Network 17:24 | Double spending & Finality 22:43 | 3rd party audits for Redbelly Network 25:48 | Scalability on Redbelly Network 31:20 | Use-cases on Redbelly Network 33:56 | How can asset issuers start Tokenizing today 36:38 | Redbelly Network 2025 Roadmap 40:45 | Redbelly Community

CQFD - La 1ere
Filariasis, the Arctic and an atlas of the sky

CQFD - La 1ere

Play Episode Listen Later Dec 1, 2024 56:15


Rebroadcast: 1. French Polynesia mobilized against filariasis. Though one of the neglected tropical diseases, filariasis affects nearly 50 million people worldwide. In Polynesia, the island of Moorea is currently the focus of a campaign against filariasis called "Moorea Pod". Laure Philiber discusses it with Laurence Rochat Stettler, infectious disease specialist, Jérémie Bouchut, head of the POD campaign in Moorea, and Frédérique Roofthooft, senior health officer at Moorea Hospital. 2. Scientific expedition to the Arctic: analysing the air and the fjords in the face of global warming. In the Arctic, climate change is showing up with heightened intensity. To study its impacts and understand the region's role in global warming, two teams of scientists from EPFL went into the field. With Julia Schmale, assistant professor of atmospheric sciences at EPFL. A report by Lucia Sillig. 3. A historical atlas of the sky: understanding the Universe, a quest shared by all human societies. "L'Atlas historique du ciel" (2024), published by Éditions Les Arènes, retraces 6,000 years of discoveries: from the first naked-eye observations to today's sophisticated telescopes, from Ptolemy to Einstein, and from the sun revolving around the Earth to exoplanets. Stéphane Délétroz welcomes its two authors, geohistorian Christian Grataloup and astrophysicist Pierre Léna.

Irish Tech News Audio Articles
TU RISE Launch Promotes Research Collaboration With Focus on Digital Transformation and Sustainability

Irish Tech News Audio Articles

Play Episode Listen Later Nov 22, 2024 2:54


South East Technological University (SETU) proudly hosted the highly anticipated launch of TU RISE (TU Research and Innovation Supporting Enterprise) on Wednesday, 20 November, at SETU's Cork Road Campus. The event brought together industry leaders, academics, and policymakers to celebrate the transformative impact of TU RISE and its role in driving regional development. SETU President Professor Veronica Campbell's welcoming address highlighted the University's commitment to fostering innovation and collaboration through TU RISE. Prof. Marie Claire Van Hout, SETU's Vice President for Research, Innovation and Impact, followed with a strategic overview of the initiative, underlining its importance in enhancing SETU's engagement with regional enterprises. Prof. Campbell said, "The launch of TU RISE today represents not just a new initiative but a bold step forward in our collective journey toward excellence, opportunity, and impact. "The launch of TU RISE is a defining moment for SETU, but it is also an invitation: an invitation to all of us to step forward and be part of something greater. As we continue to build and grow this initiative, we must remember that it is only through collaboration, curiosity, and bold thinking that we will achieve the transformative impact we seek." Dr Geraldine Canny provided attendees with insights into TU RISE offerings, outlining the opportunities it presents for businesses and researchers. A highlight of the event was a round-table discussion on regional development and the critical role research plays in driving economic growth. Facilitated by Prof. Felicity Kelliher, the discussion featured leading experts, including Prof. Dominique Foray from EPFL, Dr James O'Sullivan, Head of Innovation and Commercialisation at SETU, and Louise Grubb, entrepreneur and director at Trivium Vet. Attendees also benefitted from thought-provoking presentations, including:
• Dr Patrick Lynch's introduction to the TU RISE Digital Masterclass.
• Ed Murphy of Greentech HQ, who shared actionable insights on adopting sustainability in regional companies.
• Dr Laurence Fitzhenry and the OTRG team, who detailed their successful collaboration with Bausch + Lomb.
• Michael Flynn of FLI Global, who spoke about his experience working with SETU on collaborative projects.
The event was attended by a wide range of regional companies, SETU academics and staff, as well as representatives from regional development agencies and the Southern Regional Assembly. TU RISE is co-financed by the Government of Ireland and the European Union through the ERDF Southern, Eastern & Midland Regional Programme 2021-27.

CQFD - La 1ere
Supercomputer, aphasia and plants

CQFD - La 1ere

Play Episode Listen Later Nov 21, 2024 55:51


A greener supercomputer at EPFL? The day's science briefs. The song of people with aphasia. A look at the plant world: the MEP exhibition.

Tribu - La 1ere
What has changed in 10 years: the explosion of home deliveries.

Tribu - La 1ere

Play Episode Listen Later Nov 13, 2024 25:52


Guest: Luca Pattaroni, recorded before a live audience. To celebrate its tenth anniversary, Tribu looks at ten societal themes that have changed over the past decade. Today: the very sharp rise in home deliveries. Clothes, shoes, meals, household appliances: our online purchases have exploded in a decade. Why this enthusiasm for ordering online? What are the consequences for the way we live, but also for cities, which are traditionally organized around physical shops? To discuss this, Tribu welcomes Luca Pattaroni, professor of urban sociology at EPFL.

Monumental - La 1ere
A portrait of Jean Prouvé

Monumental - La 1ere

Play Episode Listen Later Nov 10, 2024 55:44


Considered one of the most important designers of the 20th century, Jean Prouvé was at once an entrepreneur, a researcher, a designer, an engineer and an architect. To discuss his career, Johanne Dussez welcomes Giulia Marino, architect and professor at EPFL and at the Université catholique de Louvain.

Demystifying Science
Entropic Gravity + Atomic Interconnectome - Dr. Andreas Schlatter - DS Pod #294

Demystifying Science

Play Episode Listen Later Oct 27, 2024 128:44


Dr. Andreas Schlatter is a classically trained physicist (EPFL, Princeton) with a decidedly heretical approach to physics. Though deeply mathematical in his methods, he dispenses with the purely field-based approach to understanding the building blocks of nature, and asks far deeper questions about what the mathematics is telling us about the hidden structures of nature. Rather than take the positivist approach, which suggests that anything that cannot be experimentally encountered is not worth considering, Schlatter follows in the tradition of Gödel and the other mid-20th-century logicians, who believed that a layer of the universe beyond the visible is available to us if we can reason our way to it. By following this path, Schlatter has reached the conclusion that the only viable interpretation of quantum mechanics is the transactional one. Unlike the other transactional theorists we've had on the show, Schlatter has gone one step further to propose that there is a transactional interpretation of gravity just as there is for quantum mechanics. He calls it entropic gravity, and in this episode we explore the convoluted path he took to physics, how he found the transactionalists, and how he and Ruth Kastner formulated an entropic explanation for spacetime. PATREON: get episodes early + join our weekly Patron Chat https://bit.ly/3lcAasB MERCH: Rock some DemystifySci gear: https://demystifysci.myspreadshop.com/ AMAZON: Do your shopping through this link for Carver Mead's Collective Electrodynamics: https://amzn.to/4e01Slj (00:00) Go! (00:05:28) Andreas Schlatter's Academic Journey (00:10:39) Exploration of Mathematics in Physics (00:25:51) The Vienna Circle and Logical Positivism (00:30:04) Einstein's Transition in Theoretical Approach (00:37:37) Philosophical Inquiry in Physics Education (00:41:08) The Quest for Understanding in Logic and Set Theory (00:48:02) Transition from Academia to Finance (00:56:02) Challenges of Financial Modeling (01:09:59) Trust and Economic Stability (01:16:10) Light and Gravity Intersect (01:23:02) Entropy and Information Theory (01:31:07) Absorption and Entropy Dynamics (01:37:22) Exploration of Quantum Transactions (01:46:30) Transactional Approach to Gravity (01:56:31) Light Clocks and the Nature of Time (02:04:13) Multiverses and Quantum Realms #Physics, #QuantumMechanics, #Mathematics, #PhilosophyOfScience, #LogicalPositivism, #EmpiricalScience, #TheoreticalPhysics, #Einstein, #Newton, #QuantumReality, #Entropy, #Cosmology, #Multiverse, #GravityTheory, #EconomicStability, #TransactionalInterpretation, #ScienceEducation, #Philosophy, #QuantumGravity, #FinanceAndPhysics, #ScientificUnderstanding #sciencepodcast, #longformpodcast Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience AND our material science investigations of atomics, @MaterialAtomics https://www.youtube.com/@MaterialAtomics Join our mailing list https://bit.ly/3v3kz2S PODCAST INFO: Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.
- Blog: http://DemystifySci.com/blog - RSS: https://anchor.fm/s/2be66934/podcast/rss - Donate: https://bit.ly/3wkPqaD - Swag: https://bit.ly/2PXdC2y SOCIAL: - Discord: https://discord.gg/MJzKT8CQub - Facebook: https://www.facebook.com/groups/DemystifySci - Instagram: https://www.instagram.com/DemystifySci/ - Twitter: https://twitter.com/DemystifySci MUSIC: -Shilo Delay: https://g.co/kgs/oty671

Wissenschaftsmagazin
Embellished publications in neuroscience?

Wissenschaftsmagazin

Play Episode Listen Later Oct 5, 2024 26:03


A brain researcher may have manipulated his studies. Also: how Swiss universities are pushing artificial intelligence forward. 00:00 Headlines 00:48 AI: Swiss universities step on the gas. Artificial intelligence is keeping Switzerland's universities intensely busy. ETH Zurich and EPFL have just announced that they want to strengthen their collaboration in the field further: they have founded the Swiss National AI Institute. What exactly do they have planned? 07:26 News in brief: Mushrooms grow better with white-noise sounds. Who is most vulnerable to death from heat? Comet Tsuchinshan-ATLAS comes close to Earth. Embellished Parkinson's studies? Research by the journal Science suggests that a widely respected US researcher may have manipulated figures in his studies for years. Mainly affected are studies on Parkinson's disease, including some whose results led to clinical trials of novel compounds in humans. Since 2016 the researcher has headed brain research at the National Institute on Aging, which awards substantial amounts of research funding. More on the Wissenschaftsmagazin and links to studies: https://www.srf.ch/wissenschaftsmagazin

Swisspreneur Show
EP #436 - Samantha Anderson: Creating a Circular Economy for Plastics

Swisspreneur Show

Play Episode Listen Later Sep 11, 2024 36:08


Timestamps: 7:13 - The problem with plastic recycling 10:19 - Making plastic recycling cheaper 14:15 - Typical customers for DePoly 18:45 - Going from research to startup 22:12 - Patent strategy and picking your partners About Samantha Anderson: Samantha Anderson is the co-founder and CEO at DePoly, a cleantech startup recycling hard-to-recycle plastic. She's originally from Canada and holds a PhD in Carbon Capture and Storage from EPFL (her studies being the reason she moved to Switzerland). Sam worked as a researcher for some years before starting DePoly in 2019. DePoly tackles a very pressing issue: out of all plastics produced, 90% are not recycled, but instead get incinerated (resulting in atmospheric pollution), end up in the ocean (which is how we get microplastics in our food), or simply become litter (and take 500 years to decompose). Only "easy to recycle" plastic items such as bottles or clean packaging are actually recycled. DePoly's cutting-edge recycling process converts plastics into high-quality raw materials without compromising their quality. Not only is it energy-efficient, but it also has the remarkable ability to handle even the most challenging streams of PET plastic and textiles, including mixed, dirty post-consumer and post-industrial waste that is traditionally considered unrecyclable. Don't forget to give us a follow on Twitter, Instagram, Facebook and Linkedin, so you can always stay up to date with our latest initiatives. That way, there's no excuse for missing out on live shows, weekly giveaways or founders' dinners.

New Books Network
Brian Clegg, "Ten Patterns That Explain the Universe" (MIT Press, 2021)

New Books Network

Play Episode Listen Later Sep 2, 2024 52:08


Our universe might appear chaotic, but deep down it's simply a myriad of rules working independently to create patterns of action, force, and consequence. In Ten Patterns That Explain the Universe (MIT Press, 2021), Brian Clegg explores the phenomena that make up the very fabric of our world by examining ten essential sequenced systems. From diagrams that show the deep relationships between space and time to the quantum behaviors that rule the way that matter and light interact, Clegg shows how these patterns provide a unique view of the physical world and its fundamental workings. Guiding readers on a tour of our world and the universe beyond, Clegg describes the cosmic microwave background, sometimes called the "echo of the big bang," and how it offers clues to the universe's beginnings; the diagrams that illustrate Einstein's revelation of the intertwined nature of space and time; the particle trail patterns revealed by the Large Hadron Collider and other accelerators; and the simple-looking patterns that predict quantum behavior (and decorated Richard Feynman's van). Clegg explains how the periodic table reflects the underlying pattern of the configuration of atoms, discusses the power of the number line, demonstrates the explanatory uses of tree diagrams, and more. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

New Books in Mathematics
Brian Clegg, "Ten Patterns That Explain the Universe" (MIT Press, 2021)

New Books in Mathematics

Play Episode Listen Later Sep 2, 2024 52:08


Our universe might appear chaotic, but deep down it's simply a myriad of rules working independently to create patterns of action, force, and consequence. In Ten Patterns That Explain the Universe (MIT Press, 2021), Brian Clegg explores the phenomena that make up the very fabric of our world by examining ten essential sequenced systems. From diagrams that show the deep relationships between space and time to the quantum behaviors that rule the way that matter and light interact, Clegg shows how these patterns provide a unique view of the physical world and its fundamental workings. Guiding readers on a tour of our world and the universe beyond, Clegg describes the cosmic microwave background, sometimes called the "echo of the big bang," and how it offers clues to the universe's beginnings; the diagrams that illustrate Einstein's revelation of the intertwined nature of space and time; the particle trail patterns revealed by the Large Hadron Collider and other accelerators; and the simple-looking patterns that predict quantum behavior (and decorated Richard Feynman's van). Clegg explains how the periodic table reflects the underlying pattern of the configuration of atoms, discusses the power of the number line, demonstrates the explanatory uses of tree diagrams, and more. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/mathematics

New Books in the History of Science
Brian Clegg, "Ten Patterns That Explain the Universe" (MIT Press, 2021)

New Books in the History of Science

Play Episode Listen Later Sep 2, 2024 52:08


Our universe might appear chaotic, but deep down it's simply a myriad of rules working independently to create patterns of action, force, and consequence. In Ten Patterns That Explain the Universe (MIT Press, 2021), Brian Clegg explores the phenomena that make up the very fabric of our world by examining ten essential sequenced systems. From diagrams that show the deep relationships between space and time to the quantum behaviors that rule the way that matter and light interact, Clegg shows how these patterns provide a unique view of the physical world and its fundamental workings. Guiding readers on a tour of our world and the universe beyond, Clegg describes the cosmic microwave background, sometimes called the "echo of the big bang," and how it offers clues to the universe's beginnings; the diagrams that illustrate Einstein's revelation of the intertwined nature of space and time; the particle trail patterns revealed by the Large Hadron Collider and other accelerators; and the simple-looking patterns that predict quantum behavior (and decorated Richard Feynman's van). Clegg explains how the periodic table reflects the underlying pattern of the configuration of atoms, discusses the power of the number line, demonstrates the explanatory uses of tree diagrams, and more. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices

New Books in Science, Technology, and Society
Brian Clegg, "Ten Patterns That Explain the Universe" (MIT Press, 2021)

New Books in Science, Technology, and Society

Play Episode Listen Later Sep 2, 2024 52:08


Our universe might appear chaotic, but deep down it's simply a myriad of rules working independently to create patterns of action, force, and consequence. In Ten Patterns That Explain the Universe (MIT Press, 2021), Brian Clegg explores the phenomena that make up the very fabric of our world by examining ten essential sequenced systems. From diagrams that show the deep relationships between space and time to the quantum behaviors that rule the way that matter and light interact, Clegg shows how these patterns provide a unique view of the physical world and its fundamental workings. Guiding readers on a tour of our world and the universe beyond, Clegg describes the cosmic microwave background, sometimes called the "echo of the big bang," and how it offers clues to the universe's beginnings; the diagrams that illustrate Einstein's revelation of the intertwined nature of space and time; the particle trail patterns revealed by the Large Hadron Collider and other accelerators; and the simple-looking patterns that predict quantum behavior (and decorated Richard Feynman's van). Clegg explains how the periodic table reflects the underlying pattern of the configuration of atoms, discusses the power of the number line, demonstrates the explanatory uses of tree diagrams, and more. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/science-technology-and-society

New Books in Physics and Chemistry
Brian Clegg, "Ten Patterns That Explain the Universe" (MIT Press, 2021)

New Books in Physics and Chemistry

Play Episode Listen Later Sep 2, 2024 52:08


Our universe might appear chaotic, but deep down it's simply a myriad of rules working independently to create patterns of action, force, and consequence. In Ten Patterns That Explain the Universe (MIT Press, 2021), Brian Clegg explores the phenomena that make up the very fabric of our world by examining ten essential sequenced systems. From diagrams that show the deep relationships between space and time to the quantum behaviors that rule the way that matter and light interact, Clegg shows how these patterns provide a unique view of the physical world and its fundamental workings. Guiding readers on a tour of our world and the universe beyond, Clegg describes the cosmic microwave background, sometimes called the "echo of the big bang," and how it offers clues to the universe's beginnings; the diagrams that illustrate Einstein's revelation of the intertwined nature of space and time; the particle trail patterns revealed by the Large Hadron Collider and other accelerators; and the simple-looking patterns that predict quantum behavior (and decorated Richard Feynman's van). Clegg explains how the periodic table reflects the underlying pattern of the configuration of atoms, discusses the power of the number line, demonstrates the explanatory uses of tree diagrams, and more. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices

NBN Book of the Day
Brian Clegg, "Ten Patterns That Explain the Universe" (MIT Press, 2021)

NBN Book of the Day

Play Episode Listen Later Sep 2, 2024 52:08


Our universe might appear chaotic, but deep down it's simply a myriad of rules working independently to create patterns of action, force, and consequence. In Ten Patterns That Explain the Universe (MIT Press, 2021), Brian Clegg explores the phenomena that make up the very fabric of our world by examining ten essential sequenced systems. From diagrams that show the deep relationships between space and time to the quantum behaviors that rule the way that matter and light interact, Clegg shows how these patterns provide a unique view of the physical world and its fundamental workings. Guiding readers on a tour of our world and the universe beyond, Clegg describes the cosmic microwave background, sometimes called the "echo of the big bang," and how it offers clues to the universe's beginnings; the diagrams that illustrate Einstein's revelation of the intertwined nature of space and time; the particle trail patterns revealed by the Large Hadron Collider and other accelerators; and the simple-looking patterns that predict quantum behavior (and decorated Richard Feynman's van). Clegg explains how the periodic table reflects the underlying pattern of the configuration of atoms, discusses the power of the number line, demonstrates the explanatory uses of tree diagrams, and more. Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/book-of-the-day

Tribu - La 1ere
Border markers

Tribu - La 1ere

Play Episode Listen Later Aug 29, 2024 25:47


Guest: Olivier Cavaleri. Nearly 7,000 border markers trace the perimeter of Switzerland. They bear witness to the formation of today's states. Walking the line to catalogue them is a way of travelling back in time: along the way you discover, for example, the era when the Kingdom of Sardinia bordered the Republic of Valais. To take us through time, Tribu welcomes Olivier Cavaleri, an EPFL engineer and a historian with a degree from UNIL. He has devoted his master's thesis and six books to the subject of border markers. The latest is entitled "Histoires de bornes. La frontière entre le canton du Jura et la France" (Ed. Slatkine).

Echo der Zeit
Dozens dead after Israeli strike on school building in Gaza

Echo der Zeit

Play Episode Listen Later Aug 10, 2024 28:46


The Israeli army carried out an airstrike on a school in Gaza City, killing dozens. According to the Israeli army, Hamas was using the building as a command centre and hiding place; Hamas denies this. Further topics: (01:28) Dozens dead after Israeli strike on school building in Gaza (09:08) After a murder in Basel: does the prison system need reform? (13:32) Thailand's new opposition party (16:39) Venezuela: police crack down hard on protesters (22:56) First woman at the helm of EPFL

Mind & Matter
Epigenetics, Chromatin Plasticity & the Neural Basis of Memory | Giulia Santoni | #169

Mind & Matter

Play Episode Listen Later Aug 9, 2024 83:45 Transcription Available


Send us a Text Message. About the guest: Giulia Santoni, PhD is a neuroscientist who obtained her PhD at EPFL in Switzerland, where she studied epigenetic influences on memory formation. Episode summary: Nick and Dr. Santoni discuss: transcription & gene regulation; synaptic plasticity; learning & associative memory; epigenetics, histones, DNA methylation, and mechanisms of gene regulation; chromatin plasticity & the neural basis of memory formation; and more. Related episodes:
Emotion, Cognition, Consciousness, Behavior & Brain Evolution | Joseph LeDoux | #73
Cognitive Neuroscience, Cognitive Flexibility & Control, Attention, Working Memory, Multitasking & Behavior | Tobias Egner | #130
*This content is never meant to serve as medical advice. Support the Show. All episodes (audio & video), show notes, transcripts, and more at the M&M Substack. Try Athletic Greens: Comprehensive & convenient daily nutrition. Free 1-year supply of vitamin D with purchase. Try SiPhox Health—Affordable, at-home bloodwork w/ a comprehensive set of key health markers. Use code TRIKOMES for a 10% discount. Try the Lumen device to optimize your metabolism for weight loss or athletic performance. Use code MIND for 10% off. Learn all the ways you can support my efforts

Swisspreneur Show
EP #424 - Marcel Salathé: Universities Are the Key to Switzerland's Success

Swisspreneur Show

Play Episode Listen Later Jul 31, 2024 48:57


Timestamps: 6:59 - Being humbled by Y Combinator 12:20 - EPFL's extension school 15:47 - What needs fixing in Switzerland 29:25 - Budget cuts for Swiss universities 39:40 - EPFL AI center This episode was sponsored by NordPass. Use code “swisspreneur” at checkout to get 30% off Business and Teams plans. About Marcel Salathé: Marcel Salathé is a professor at EPFL, startup founder and investor, as well as a digital epidemiologist. He holds a PhD in Biology and Environmental Sciences from ETH, and has taught at Stanford and Penn State. He is the founder of the EPFL Extension School and AIcrowd, an AI challenge platform at EPFL. When founding the EPFL Extension School, Marcel's goal was to fulfill EPFL's mandate of not only educating its students but the Swiss population at large — he believes the current educational system, which restricts education to childhood and adolescence, is outdated, and that Swiss people need to be learning continuously, especially considering the rapid acceleration of technological development. Self-learning has played a big role in Marcel's career, who originally studied Biology but has since branched out to machine learning and generative AI. During his chat with Silvan, he further argues that although Switzerland has done a great job in connecting innovation centers (such as universities) with the private sector, it's done terribly at channeling this innovation towards the public sector as well, which is why this sphere seems to be 20 years behind the curve. Marcel thinks there must be a structural push towards this, just as there was a concerted political effort before to connect universities with the private sector. He is strongly against the recent budget cuts for education and innovation, because he considers this to be the “magic sauce” of Switzerland's prosperity.

Zero Knowledge
Episode 329: Building Cryptographic Proofs from Hash Functions with Alessandro Chiesa and Eylon Yogev

Zero Knowledge

Play Episode Listen Later Jun 26, 2024 70:38


Summary: In this week's episode Anna (https://x.com/AnnaRRose) and Nico (https://x.com/nico_mnbl) chat with Alessandro Chiesa (https://ic-people.epfl.ch/~achiesa/), Associate Professor at EPFL, and Eylon Yogev (https://eylonyogev.com/), Professor at Bar-Ilan University. They discuss their recent publication, Building Cryptographic Proofs from Hash Functions (https://snargsbook.org/), which provides a comprehensive and rigorous treatment of cryptographic proofs and goes on to analyze notable constructions of SNARGs based on ideal hash functions. Here are some additional links for this episode: Building Cryptographic Proofs from Hash Functions by Chiesa and Yogev (https://snargsbook.org/) Episode 200: SNARK Research & Pedagogy with Alessandro Chiesa (https://zeroknowledge.fm/episode-200-snark-research-pedagogy-with-alessandro-chiesa/) Barriers for Succinct Arguments in the Random Oracle Model by Chiesa and Eylon Yogev (https://eprint.iacr.org/2020/1427.pdf) STIR: Reed–Solomon Proximity Testing with Fewer Queries by Arnon, Chiesa, Fenzi and Eylon Yogev (https://eprint.iacr.org/2024/390.pdf) ZK Podcast Episode 321: STIR with Gal Arnon & Giacomo Fenzi (https://zeroknowledge.fm/321-2/) Computationally Sound Proofs by Micali (https://people.csail.mit.edu/silvio/Selected%20Scientific%20Papers/Proof%20Systems/Computationally_Sound_Proofs.pdf) Tight Security Bounds for Micali's SNARGs by Chiesa and Yogev (https://eprint.iacr.org/2021/188.pdf) Interactive Oracle Proofs by Ben-Sasson, Chiesa, and Spooner (https://eprint.iacr.org/2016/116.pdf) Summer School on Probabilistic Proofs: Foundations and Frontiers of Probabilistic Proofs in Zürich, Switzerland (https://www.slmath.org/summer-schools/1037) Proofs, Arguments, and Zero-Knowledge by Thaler (https://people.cs.georgetown.edu/jthaler/ProofsArgsAndZK.pdf) ZK HACK Discord and Justin Thaler Study Club (https://discord.gg/Nw7PKJ7e) Justin Thaler Study Club by ZK HACK on YouTube (https://www.youtube.com/playlist?list=PLj80z0cJm8QEmZkGgSOLpr_8B08SCWVQ7) Subquadratic SNARGs in the Random Oracle Model by Chiesa and Yogev (https://eprint.iacr.org/2021/281.pdf) ZK Learning Course (https://zk-learning.org/) ZK Hack Montreal has been announced for Aug 9 - 11! Apply to join the hackathon here (https://zk-hack-montreal.devfolio.co/). Episode Sponsors Launching soon, Namada (https://namada.net/) is a proof-of-stake L1 blockchain focused on multichain, asset-agnostic privacy, via a unified shielded set. Namada is natively interoperable with fast-finality chains via IBC, and with Ethereum using a trust-minimized bridge. Follow Namada on Twitter @namada (https://twitter.com/namada) for more information and join the community on Discord (http://discord.gg/namada). Aleo (http://aleo.org/) is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. As Aleo is gearing up for their mainnet launch in Q1, this is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at http://aleo.org/ (http://aleo.org/). If you like what we do: * Find all our links here! @ZeroKnowledge | Linktree (https://linktr.ee/zeroknowledge) * Subscribe to our podcast newsletter (https://zeroknowledge.substack.com) * Follow us on Twitter @zeroknowledgefm (https://twitter.com/zeroknowledgefm) * Join us on Telegram (https://zeroknowledge.fm/telegram) * Catch us on YouTube (www.youtube.com/channel/UCYWsYz5cKw4wZ9Mpe4kuM_g)

Tribu - La 1ere
How Switzerland Became an Economic Power

Tribu - La 1ere

Play Episode Listen Later Jun 11, 2024 26:02


Guest: Cédric Humair. Despite its small size, Switzerland has established itself as a leading economic power. How can this be explained? What are the key elements of this commercial expansion? How did Switzerland manage to conquer markets without conquering any territory? Tribu welcomes Cédric Humair, senior lecturer and researcher at the University of Lausanne and lecturer at EPFL. He is the author of "La Suisse et les Empires. Affirmation d'une puissance économique (1857-1914)", published by Éditions Livreo-Alphil.

Vacarme - La 1ere
Les Echos de Vacarme - Dioxin pollution: the great smokescreen?

Vacarme - La 1ere

Play Episode Listen Later May 19, 2024 56:15


Dioxins, substances created by combustion, notably of plastics, are a highly toxic and persistent soil pollutant that takes several decades to break down. They are found mainly around household waste incinerators. Present in the soil down to a depth of 80 centimeters, they contaminate vegetable gardens and animals that feed off the ground. In 2021, the City of Lausanne was stunned to discover pollution at levels never recorded before. Since then, traces of dioxins in the soil have turned up more and more frequently. These potentially carcinogenic substances are causing fear and concern among the population, which feels poorly informed by the authorities. Raphaële Bouchet welcomes: Nathalie Chèvre, ecotoxicologist at the University of Lausanne, and Alexandre Elsig, historian and lecturer at EPFL's College of Humanities, co-author of the study "La plus vieille usine du monde. Socio-histoire de l'incinérateur du Vallon (1958-2005)"

Forum
Neurotech Roundtable: Anikeeva, Courtine, and Bloch

Forum

Play Episode Listen Later May 15, 2024 39:55


On this episode, Chief Editor Barbara Cheifet speaks with Polina Anikeeva from MIT, Grégoire Courtine from EPFL, and Jocelyn Bloch, a neurosurgeon at Lausanne University Hospital. These three leaders in the field of neurotechnologies discuss new devices that help us learn how our brain works, implantable brain-computer interfaces that are helping patients with neurological disorders walk again, and why this field is so exciting today. Hosted on Acast. See acast.com/privacy for more information.

Zero Knowledge
Episode 321: STIR with Gal Arnon & Giacomo Fenzi

Zero Knowledge

Play Episode Listen Later Apr 24, 2024 60:22


In this week's episode, Anna (https://twitter.com/annarrose) and Kobi (https://twitter.com/kobigurk) chat with Gal Arnon (https://galarnon42.github.io/), Ph.D. student at the Weizmann Institute of Science (https://weizmann.ac.il/pages/), and Giacomo Fenzi (https://twitter.com/GiacomoFenzi), Ph.D. student in the COMPSEC Lab (https://compsec.epfl.ch/) at EPFL (https://epfl.ch/). Gal and Giacomo are among the co-authors of ‘STIR: Reed–Solomon Proximity Testing with Fewer Queries' (https://eprint.iacr.org/2024/390), and in this conversation they discuss how their research led them to work on these topics and where the idea for this particular work came from. They set the stage by exploring the history of FRI and discussing some hidden nuances in how FRI works, and then introduce STIR, a system that can be used in place of FRI and that incorporates various optimisations to improve performance. Here are some additional links for this episode: FRIDA: Data Availability Sampling from FRI by Hall-Andersen, Simkin and Wagner (https://eprint.iacr.org/2024/248.pdf) Lattice-Based Polynomial Commitments: Towards Asymptotic and Concrete Efficiency by Fenzi, Moghaddas and Nguyen (https://eprint.iacr.org/2023/846.pdf) DEEP-FRI: Sampling Outside the Box Improves Soundness by Ben-Sasson, Goldberg, Kopparty and Saraf (https://eprint.iacr.org/2019/336.pdf) Proximity Gaps for Reed–Solomon Codes by Ben-Sasson, Carmon, Ishai, Kopparty and Saraf (https://eprint.iacr.org/2020/654.pdf) IOPs with Inverse Polynomial Soundness Error by Arnon, Chiesa and Yogev (https://eprint.iacr.org/2023/1062.pdf) Episode 293: Exploring Security of ZK Systems with Nethermind's Michał & Albert (https://zeroknowledge.fm/293-2/) Circle STARKs by Haböck, Levit and Papini (https://eprint.iacr.org/2024/278.pdf) Episode 304: Exploring FRI, LogUp and using M31 for STARKs with Ulrich Haböck (https://zeroknowledge.fm/304-2/) FRI-Binius: Improved Polynomial Commitments for Binary Towers (https://www.ulvetanna.io/news/fri-binius) The next ZK Hack IRL is happening May 17-19 in Kraków, apply to join now at zkkrakow.com (https://www.zkkrakow.com/) Aleo (http://aleo.org/) is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. Dive deeper and discover more about Aleo at http://aleo.org/ (http://aleo.org/) If you like what we do: * Find all our links here! @ZeroKnowledge | Linktree (https://linktr.ee/zeroknowledge) * Subscribe to our podcast newsletter (https://zeroknowledge.substack.com) * Follow us on Twitter @zeroknowledgefm (https://twitter.com/zeroknowledgefm) * Join us on Telegram (https://zeroknowledge.fm/telegram) * Catch us on YouTube (https://zeroknowledge.fm/)