Dr. Paul Hanona and Dr. Arturo Loaiza-Bonilla discuss how to safely and smartly integrate AI into the clinical workflow and tap its potential to improve patient-centered care, drug development, and access to clinical trials.

TRANSCRIPT

Dr. Paul Hanona: Hello, I'm Dr. Paul Hanona, your guest host of the ASCO Daily News Podcast today. I am a medical oncologist as well as a content creator @DoctorDiscover, and I'm delighted to be joined today by Dr. Arturo Loaiza-Bonilla, the chief of hematology and oncology at St. Luke's University Health Network. Dr. Bonilla is also the co-founder and chief medical officer at Massive Bio, an AI-driven platform that matches patients with clinical trials and novel therapies. Dr. Loaiza-Bonilla will share his unique perspective on the potential of artificial intelligence to advance precision oncology, especially through clinical trials and research, and other key advancements in AI that are transforming the oncology field. Our full disclosures are available in the transcript of this episode. Dr. Bonilla, it's great to be speaking with you today. Thanks for being here.

Dr. Arturo Loaiza-Bonilla: Oh, thank you so much, Dr. Hanona. Paul, it's always great to have a conversation. Looking forward to a great one today.

Dr. Paul Hanona: Absolutely. Let's just jump right into it. Let's talk about the way that we see AI being embedded in our clinical workflow as oncologists. What are some practical ways to use AI?

Dr. Arturo Loaiza-Bonilla: To me, responsible AI integration in oncology is focused on one principle: clinical purpose comes first, not the algorithm or whatever technology we're going to be using. Even the best models in the world are really irrelevant unless they solve a real day-to-day challenge, either when we're talking to patients in the clinic or in the infusion chair, or in decision support. Currently, what I'm doing the most is focusing on solutions that save us time so we can be more productive and spend more time with our patients. So, for example, we're using ambient AI for appropriate documentation in real time with our patients. We're also leveraging certain tools to assess the risk of admission or readmission for patients who have certain conditions. And it's all about combining the input of physicians like ourselves who are end users, those who create those algorithms, data scientists, patient advocates, and even regulators, before anyone writes a single line of code. I've felt that in my own entrepreneurial work, but I think it's an ethos that we should all follow. AI shouldn't just be bolted on later. We always have to look at workflows and, for example, at clinical trial matching, which is something I'm very passionate about. We need to make sure, first, that it's easier for patients to access, and that oncologists like myself can go into the interface and pull the data in real time when we really need it, without all the alert fatigue. To me, that's the responsible way of doing it. Those are the opportunities, right? The challenge is how we make this happen in a meaningful way, so we're not just reacting to a black-box suggestion or something we have no idea how it came to be. So, in terms of successes – and I can tell you probably two stories of things we're seeing succeed – we all work closely with radiation oncologists, right?
So, there are now these tools, for example, for automated contouring in radiation oncology, and some of these solutions were presented at different meetings, including the last ASCO meeting. Overall, we know that transformer-based segmentation tools – transformer being the specific machine learning architecture – have dramatically reduced the time colleagues spend delineating targets for radiation oncology. Separating the target from normal tissue sometimes takes many hours; now we can cut that time by over 60%, sometimes down to minutes. So, this is not just responsible; it's also an efficiency win and a precision win, and we're using it to adapt even mid-course in response to tumor shrinkage. Another success that I think is relevant is, for example, on the clinical trial matching side. We've been working on that, and I don't want to preach to the choir here, but having the ability to structure data in real time using these tools, to extract information on biomarkers, and then to show that multi-agentic AI is superior to what we call zero-shot – just throwing the question into ChatGPT or any other model – using the same tools but fine-tuned to the point that we can be efficient and reliable at almost the level of a research coordinator, is not just theory. It can change lives, because we can get patients enrolled in clinical trials and activated in different places, wherever the patient may be. I know that's a long answer, but as we talk about responsible AI, that's important. In terms of what keeps me up at night on this: data drift and biases, right? Imaging protocols change, labs switch between vendors, or a patient has new emerging data points. And health systems serve vastly different populations. So, if our models are trained in one context and deployed in another, the output can be really inaccurate. The idea is to take a collaborative approach where we use federated learning and patient-centricity, so we can be much more efficient in developing models that account for all populations, and any retraining data is diverse enough to represent all of us, so everyone can be treated appropriately. And if a clinician doesn't understand why a recommendation is made, they probably won't trust it, and we shouldn't expect them to. So, I think this is the next wave of the future. We need to make sure that we account for all those things.
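To make the trial-matching idea above concrete, here is a minimal, rule-based sketch in Python. It is illustrative only: the trial IDs, biomarker names, and regex patterns are hypothetical, and real systems use fine-tuned NLP/LLM pipelines rather than regexes, but the shape of the pipeline – extract structured biomarkers from free text, then filter trials by criteria – is the same.

```python
import re
from dataclasses import dataclass

# Hypothetical biomarker patterns; a production system would use a fine-tuned
# NLP model for extraction, not regexes.
BIOMARKER_PATTERNS = {
    "KRAS_G12C": re.compile(r"KRAS\s+G12C", re.I),
    "MSI_HIGH": re.compile(r"MSI[-\s]?(high|H)", re.I),
    "HER2_POS": re.compile(r"HER2[-\s]?(positive|pos|\+)", re.I),
}

@dataclass
class Trial:
    nct_id: str                # hypothetical identifiers, not real trials
    required_markers: set
    lines_of_therapy: range    # eligible number of prior lines

def extract_markers(note: str) -> set:
    """Pull structured biomarkers out of a free-text clinical note."""
    return {m for m, pat in BIOMARKER_PATTERNS.items() if pat.search(note)}

def match(note: str, prior_lines: int, trials: list[Trial]) -> list[str]:
    """Return IDs of trials whose criteria the extracted profile satisfies."""
    markers = extract_markers(note)
    return [t.nct_id for t in trials
            if t.required_markers <= markers and prior_lines in t.lines_of_therapy]

trials = [
    Trial("NCT-EXAMPLE-1", {"KRAS_G12C"}, range(1, 4)),
    Trial("NCT-EXAMPLE-2", {"MSI_HIGH"}, range(0, 3)),
]
note = "Metastatic CRC, MSI-high by IHC, progressed on one prior line."
print(match(note, prior_lines=1, trials=trials))  # ['NCT-EXAMPLE-2']
```

The multi-agentic systems described in the conversation would layer several such steps (extraction, eligibility checking, site lookup, logistics) as cooperating agents; this sketch shows only the core match step.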
Dr. Paul Hanona: Absolutely. And the part about clinical trials I want to dive into a little more in a few questions. I just wanted to make a quick comment. Like you said, some of the prevalent things that I see are the ambient scribes. It seems like that's really taken off in the last year, and it's improving at a pretty dramatic speed as well. I wonder how quickly that will get adopted by the majority of physicians or practitioners in general throughout the country. You also mentioned AI tools helping regulators move things quicker, and even helping radiation oncologists with contouring and the rest of their workflow. And again, the clinical trials piece will be quite interesting to get into. My first question after that concerns large datasets, and it pertains to two things. The paper you published recently on different ways to use AI in oncology referred to drug development. The way that we design drugs, specifically anticancer drugs, is pretty cumbersome. The steps you have to take to design something, to make sure one chemical fits the structure of the target molecule, take a lot of time to tinker with. What are your thoughts on AI tools to help accelerate drug development?

Dr. Arturo Loaiza-Bonilla: Yes, that's the Holy Grail, and something I feel we should dedicate as much time and effort to as possible, because it relies on multimodality. It cannot be solved by looking at patient histories alone. It cannot be solved by looking at the tissue alone. It's about combining all these different datasets and understanding how the microenvironment, the patient's condition and prior treatments, the dynamic changes we cause through interventions, and also the exposome – the things that happen outside of the patient's own control – can be leveraged to determine the best next step in terms of drugs. The one we heard about most in the news is AlphaFold, the AI system that predicts protein structures, for which the Nobel Prize in Chemistry was awarded to Demis Hassabis and John Jumper, right? It solved this very interesting problem of protein folding where, in the past, it would have taken longer than the history of the known universe – what's called Levinthal's paradox – to predict, from the amino acid sequence alone, the way a protein will fold in three dimensions. With that problem solved and the Nobel Prize won, the next step is: "Okay, now that we know the protein's structure just from its sequence, how can we evaluate any new drug candidate and leverage all the data from many years of testing against a specific protein or gene, or knockouts, and whatnot?" So, this is the future of oncology, and it's where we're probably seeing a lot of investment. The key challenge is not just looking at pathology, but leveraging digital pathology with whole slide imaging and identifying the microenvironment of that specific tissue. There are a number of efforts currently underway. It isn't just H&E (hematoxylin and eosin) slides alone; with whole slide imaging, we can now combine expression profiles, spatial transcriptomics, and whole exome sequencing in the same space, and use this transformer technology in a multimodal approach. We already have the slide and the pathology; can we use that to understand, if I knock out this gene, how the microenvironment will change, to see whether an immunotherapy might work better – if we can make the microenvironment more reactive toward a cytotoxic T-cell profile, for example? That is the way the field is really moving forward: using multimodality for drug discovery. The FDA now seems very eager to support those initiatives, so that's of course welcome.
And now the key thing is the investment to do this in a meaningful way, so the candidates we're seeing from different companies can be leveraged for rare diseases – for which it's almost impossible to collect enough data – and made efficient by using these algorithms that work with multiple masking. Basically, what they do is mask features and force the algorithm to find solutions based on the specific inputs or prompts we give. So, I'm very excited about that, and I think we're going to be seeing that in the future.
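The masking idea described above is the core of masked (self-supervised) training: hide parts of the input and train a model to fill them in. Below is a deliberately tiny sketch of that idea on a synthetic feature matrix, using only NumPy and a linear reconstruction model. It is a toy stand-in under stated assumptions – the real systems referenced here use transformer architectures over multimodal pathology and genomic data – but the training loop (corrupt, predict, compare against the uncorrupted original) is the same pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "multimodal" feature matrix: rows = samples (e.g., tumor profiles),
# columns = features. Low-rank structure stands in for correlated biology.
n, d = 500, 20
latent = rng.normal(size=(n, 4))
W_true = rng.normal(size=(4, d))
X = latent @ W_true + 0.1 * rng.normal(size=(n, d))

def corrupt(X, rng, frac=0.3):
    """Randomly mask (zero out) a fraction of entries; return corrupted copy and mask."""
    M = rng.random(X.shape) < frac
    Xc = X.copy()
    Xc[M] = 0.0
    return Xc, M

# Linear reconstruction model: learn B so corrupted inputs predict the originals.
B = np.zeros((d, d))
lr = 1e-3
for _ in range(2000):
    Xc, M = corrupt(X, rng)
    err = Xc @ B - X          # reconstruction error on all entries
    B -= lr * (Xc.T @ err) / n  # gradient step on squared error

# Evaluate: how well are the *masked* entries recovered from the visible ones?
Xc, M = corrupt(X, rng)
recon = Xc @ B
print(f"MSE on masked entries: {np.mean((recon[M] - X[M]) ** 2):.3f}")
print(f"variance of those entries (baseline): {X[M].var():.3f}")
```

The point of the exercise is the one made in the conversation: by forcing the model to reconstruct hidden features from visible ones, it learns the dependency structure of the data without any labels, which is what makes the approach attractive for rare diseases where labeled data is scarce.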
Dr. Paul Hanona: So, essentially, in a nutshell, we're saying we have the cancer, which is maybe a dandelion in a field of grass, and we want to see the grass surrounding the dandelion, which is the pathology slides. The problem is, to the human eye, it's almost impossible to look at every single piece of grass surrounding the dandelion. And so, with tools like AI, we can greatly accelerate our study of the microenvironment – the grass surrounding the dandelion – and better tailor therapy. Otherwise, like you said, to truly generate a drug would take years and years. We just don't have the throughput to get to answers like that unless we have something like AI to help us.

Dr. Arturo Loaiza-Bonilla: Correct.

Dr. Paul Hanona: And then, clinical trials. Now, this is an interesting conversation, because if you ever look up our national guidelines as oncologists, there's always a mention of, if treatment fails, consider clinical trials. Or in the really aggressive cancers, sometimes you might just start out with clinical trials; you don't even give the standard first-line therapy because of how ineffective it is. There are a few issues with clinical trials that people might not be aware of – for one, the majority of patients who should be on clinical trials are never given the chance to be on them, whether because of proximity, right, they might live somewhere far from the institution, or because for whatever reason they don't qualify; they don't meet the strict inclusion criteria. But a reason you mentioned early on is that it's simply impossible for someone to be aware of every single clinical trial that's out there. And even if you are aware of those trials, actually finding the sites and putting in the time could take hours. So, how is AI going to revolutionize that? Because in my mind, it's not that we're inventing a new tool. Clinical trials have always been available; we just can't access them. So, if we have a tool that helps with access, wouldn't that be huge?

Dr. Arturo Loaiza-Bonilla: Correct. And that has been one of my passions. For those who know me and follow me, we've spoken about it in different settings, and it's something I think we can solve: this other paradox, the clinical trial enrollment paradox. We have tens of thousands of clinical trials available and millions of patients eager to learn about trials, but we don't enroll enough, and many trials close because of lack of accrual. It is completely paradoxical, and it comes from misalignment: patients don't know where to go for trials, and sites don't know which patients they can help because those patients haven't reached their doors yet. So, the solution has to be patient-centric, right? We have to put the patient at the center of the equation. And that was precisely what we had been discussing during the ASCO meeting. There was an ASCO Education Session where we talked about digital prescreening hubs. In a patient-centric manner – the same way we use Uber, Instacart, or any solution that works in real time – we can use real-world data streams, directly from the patient, from hospitals, from pathology labs, from genomics companies, to continuously screen patients against the inclusion/exclusion criteria of individual trials. So, when the patient walks into the clinic, the system already knows whether there's a trial and alerts the site proactively. There's also decentralization. A number of decentralized clinical trial solutions use what I call the "click and mortar" approach, which is basically the patient checking in digitally and then going to the site to activate. We can also run click and mortar in the other direction, where the patient is engaged in person and then given a solution like the ones we're offering at Massive Bio and beyond, which lets the patient access all that information and then make decisions and enroll when the time is right. As I mentioned earlier, there is this concept drift where clinical trials open and close, the patient's line of therapy changes, new approvals come in and out, and sites may not be available at a given time but may be later. By having real-time alerts, using tools that can already extract data from the summarizations we have in different settings and do this natural language ingestion, we can move from manual chart review – which is extremely cumbersome, takes forever, and leads to a lot of one-time assessments with very high screen failure rates – to a real-time, dynamic approach where patients are engaged as they get closer to meeting the eligibility criteria. And those tools can be built to activate trials, audit trials, and make them better and more accessible to patients. Something we know, for example, is that 91%-plus of Americans live close to either a pharmacy or an imaging center. So, imagine that we could activate some of those trials in those locations. A number of pharmacies – specialty pharmacies, Walgreens, and sometimes CVS – are trying to make some of those efforts. So, I think the sky's the limit in terms of us working together. And we've been talking with cooperative groups – they're all interested in these efforts as well – about getting patients digitally enabled and then activating trials the same way we activate the NCTN network of the cooperative groups, which is almost just-in-time: you can activate a trial the patient is eligible for, and with all these breakthroughs from the NIH and NCI, activate it at my site within a week or so, as long as we understand the protocol. So, using clinical trial matching in a digitally enabled way, and then activating in that same fashion – not only for NCTN studies but for all the studies we have available – will be the key to the future, through those prescreening hubs. So, I think we're now at this very important time where collaboration is the essential part. Having this silo-breaking approach, with interoperability so we can leverage data from any data source and from any electronic medical record and whatnot, is going to be essential for us to move forward, because now we have the tools to do so – with our phones, with our interfaces, and with the multiple clinical trials that are coming into the pipelines.
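The prescreening hub described above is, at its core, an event-driven loop: every time a patient's record updates or a trial opens or closes (the "concept drift" mentioned), eligibility is recomputed and the site is alerted. A minimal sketch of that loop, with hypothetical trial IDs and deliberately simplified criteria:

```python
from dataclasses import dataclass, field

@dataclass
class TrialSite:
    trial_id: str              # hypothetical identifiers
    open_for_enrollment: bool
    eligible_markers: set

@dataclass
class PatientRecord:
    patient_id: str
    markers: set = field(default_factory=set)

def rescreen(patient: PatientRecord, registry: list[TrialSite]) -> list[str]:
    """Re-run matching; in a real hub this would fire on every data-stream update."""
    return [s.trial_id for s in registry
            if s.open_for_enrollment and s.eligible_markers & patient.markers]

registry = [TrialSite("TRIAL-A", True, {"EGFR_EX19DEL"}),
            TrialSite("TRIAL-B", False, {"BRAF_V600E"})]
pt = PatientRecord("pt-001")

# Event 1: a genomics report arrives -> the record updates -> re-screen -> alert.
pt.markers.add("EGFR_EX19DEL")
print("alerts:", rescreen(pt, registry))   # ['TRIAL-A']

# Event 2: concept drift -- TRIAL-A closes and TRIAL-B opens; re-screen again.
registry[0].open_for_enrollment = False
registry[1].open_for_enrollment = True
print("alerts:", rescreen(pt, registry))   # [] (no marker overlap with TRIAL-B)
```

The design choice this illustrates is the shift the conversation describes: from one-time manual chart review to continuous re-evaluation, where the matching function is cheap enough to run on every update.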
Dr. Paul Hanona: I just want to point out that the way you described the process involves several variables that practitioners often don't think about. We don't realize the 15 steps happening in the background. But just as a clarifier, how much time does it take now to get one patient enrolled on a clinical trial? Is it on the order of maybe 5 to 10 hours for one patient, by the time the manual chart review happens, the matching happens, the calls go out, the sign-up, all of this? And how much time do you think a tool that could match those trials quicker and get you enrolled quicker could save? Would it be maybe an hour instead of 15 hours? What's your thought process on that?

Dr. Arturo Loaiza-Bonilla: Yeah, exactly. So one part is the matching; the other is the enrollment, which, as you mentioned, is very important. It can take, as you said, probably between 4 days and sometimes 30 days for all the logistics to be worked out – things that could now be done agentically. We can use agents to handle the different steps that would otherwise take multiple individuals, treating it like a supply chain where all those steps are done simultaneously, and then things get much faster. With an AI-based solution using these frontier models and multi-agentic AI – and we presented some of this data at ASCO as well – you can match 5,000 patients in an hour, right? So the matching could take about an hour, and at most, enrollment could take 7 days for those 5,000 patients, if it were done at scale in a multi-level approach with all the trials available.

Dr. Paul Hanona: No, definitely a very exciting aspect of our future as oncologists. It's one thing to have really neat, novel mechanisms of treatment, but what good is it if we can't actually get it to the people who need it? I'm very much looking forward to that future. One of the last questions I want to ask you: another prevalent use of AI is simply looking up questions, right? Traditionally, the workflow for oncologists is maybe going to the national guidelines, looking up the stage of the cancer, seeing what treatments are available, then referencing the papers and looking at who was included, who wasn't, the side effects to be aware of, and coming to a decision about how to treat a cancer patient. But just in the last few years, several tools have become available that make asking questions and getting answers easier, whether that's something like OpenAI's tools or Perplexity or Doximity or OpenEvidence – and even ASCO has a Guidelines Assistant that draws on its own guidelines for how to treat different cancers. Do you see these replacing traditional sources? Do you see them saving us a lot more time so that we can be more productive in clinic? What role do you think they're going to play in patient care?
Dr. Arturo Loaiza-Bonilla: Such a relevant question, particularly at this time, because these AI-enabled query tools are coming left and right and becoming increasingly common in our daily workflows. Traditionally, we go and look at the national guidelines, try to understand the context ourselves, and then make treatment decisions accordingly. That's a laborious process that AI is now helping us with. At face value it seems like an efficiency win, but I personally evaluate platforms as the chief of hem/onc at St. Luke's, and having led digital engagement efforts through Massive Bio, I can tell you this: not all tools are created equal. In cancer care, each data point can mean the difference between cure and progression, so we cannot take shortcuts or accept unverified output. The tools are helpful, but they have to be grounded in truth, in trusted data sources, and they need to be continuously updated with ASCO, NCCN, and others. The reason the ASCO Guidelines Assistant, for instance, works is that it builds on all these recommendations and is assessed by end users like ourselves. That kind of verification is critical, right? We're entering a phase where even the source material may be AI-generated, so the role of human expert validation is actually more important, not less. Generalist LLMs, even when fine-tuned, may not be enough. You can pull a few API calls from PubMed, etc., but what we need now are specialized, context-aware, agentic tools that can interpret multimodal and real-time clinical inputs. So, it's something we continue to keep an eye on, and it's very relevant to have entities and bodies like ASCO looking into this, so they can help us be really efficient and really help our patients.

Dr. Paul Hanona: Dr. Bonilla, what do you want to leave the listener with in terms of the future direction of AI – things that we should be cautious about and things that we should be optimistic about?

Dr. Arturo Loaiza-Bonilla: Looking 5 years ahead, I think there's enormous promise. As you know, I'm an AI enthusiast, but there are a few priorities – three of them, I think – that we need to tackle head-on. First is algorithmic equity. Most AI tools today are trained on data from academic medical centers, but not necessarily from community practices or underrepresented populations, particularly in radiology, pathology, and whatnot. Those blind spots need to be filled so we can eliminate a lot of disparities in cancer care. Frameworks that incentivize data sharing while protecting it, using federated models and other things we can optimize, are key. The second is governance over the lifecycle. AI is not static. Unlike a drug that, once approved, always works the same way, AI changes. So we need tools that can be retrained or recalled when performance degrades or models drift. We need up-to-date AI for clinical practice, which means constant revalidation, and we need to make that really easy to do. And lastly, the human-AI interface. Clinicians don't need more noise, and we don't need more black boxes. We need decision support that is clear, interpretable, and actionable. "Why are you using this? Why did we choose this drug? Why this dose? Why now?"
So, all these things are going to help us, and that allows us to trace evidence with a single click. I always call back to Moravec's paradox: evolution invested so much in our sensory perception and dexterity, and that's what we use when taking care of patients. We can use AI as a force to help us be better clinicians, not to replace us. If we get this right and commit to transparency, trust, inclusion, and so on, it will never replace our work, which is so important; instead, we can take care of patients in a way that is personalized, timely, and equitable. All of those things are what get me excited every single day about these conversations on AI.

Dr. Paul Hanona: All great thoughts, Dr. Bonilla. I'm very excited to see how this field evolves. I'm excited to see how oncologists come around to this field. With technology, there's always a bit of a lag in adoption, but I think if we jump on board and grow with it, we can do amazing things for the field of oncology in general. Thank you for the advancements you've made in your own career at the intersection of AI and oncology, ultimately with the hope of improving patient care, especially for cancer patients.

Dr. Arturo Loaiza-Bonilla: Thank you so much, Dr. Hanona.

Dr. Paul Hanona: Thanks to our listeners for your time today. If you value the insights that you hear on the ASCO Daily News Podcast, please take a moment to rate, review, and subscribe wherever you get your podcasts.

Disclaimer: The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement.

More on today's speakers:
Dr. Arturo Loaiza-Bonilla @DrBonillaOnc
Dr. Paul Hanona @DoctorDiscover on YouTube

Follow ASCO on social media:
@ASCO on Twitter
ASCO on Facebook
ASCO on LinkedIn
ASCO on BlueSky

Disclosures:
Paul Hanona: No relationships to disclose.
Dr. Arturo Loaiza-Bonilla:
Leadership: Massive Bio
Stock & Other Ownership Interests: Massive Bio
Consulting or Advisory Role: Massive Bio, Bayer, PSI, BrightInsight, Cardinal Health, Pfizer, AstraZeneca, Medscape
Speakers' Bureau: Guardant Health, Ipsen, AstraZeneca/Daiichi Sankyo, Natera
“To navigate proof, we must reach into a thicket of errors and biases. We must confront monsters and embrace uncertainty, balancing — and rebalancing — our beliefs. We must seek out every useful fragment of data, gather every relevant tool, searching wider and climbing further. Finding the good foundations among the bad. Dodging dogma and falsehoods. Questioning. Measuring. Triangulating. Convincing. Then perhaps, just perhaps, we'll reach the truth in time.” —Adam Kucharski

My conversation with Professor Kucharski on what constitutes certainty and proof in science (and other domains), with emphasis on many of the learnings from Covid. Given the politicization of science and A.I.'s deepfakes and power to blur truth, it's hard to think of a topic more important right now.

Audio file (Ground Truths can also be downloaded on Apple Podcasts and Spotify)

Eric Topol (00:06): Hello, it's Eric Topol from Ground Truths, and I am really delighted to welcome Adam Kucharski, who is the author of a new book, Proof: The Art and Science of Certainty. He's a distinguished mathematician – by the way, the first mathematician we've had on Ground Truths – and a person who I had the real privilege of getting to know a bit through the Covid pandemic. So welcome, Adam.

Adam Kucharski (00:28): Thanks for having me.

Eric Topol (00:30): Yeah, I mean, I think just to let everybody know, you're a Professor at the London School of Hygiene and Tropical Medicine, and also noteworthy, you won the Adams Prize, which is one of the most impressive recognitions in the field of mathematics. This is the book, it's a winner, Proof, and there's so much to talk about. So Adam, maybe where I'd start off is the quote in the book that captivates in the beginning: “life is full of situations that can reveal remarkably large gaps in our understanding of what is true and why it's true. This is a book about those gaps.” So what was the motivation when you undertook this very big endeavor?

Adam Kucharski (01:17): I think a lot of it comes from the work I do in my day job, where we have to deal with a lot of evidence under pressure, particularly if you work on outbreaks or emerging health concerns. It often really pushes the limits of our methodology and how we converge on what's true, subject to potential revision in the future. Particularly having a background in maths, you kind of grow up with this idea that you can get to these concrete, almost immovable truths, and then, even just looking through the history, you realize that often isn't the case – there are these very human dynamics that play out around them. And it's something I think everyone in science can reflect on: sometimes what convinces us doesn't convince other people, and particularly when you have that urgency of time pressure, working out how to navigate that.

Eric Topol (02:05): Yeah. Well, I mean, I think these times of course have really gotten us to appreciate, particularly during Covid, the importance of understanding uncertainty. And I think one of the ways we can dispel what people assume they know is the famous Monty Hall problem, which you get into a bit in the book. I think everybody here is familiar with that show, Let's Make a Deal; maybe you can just take us through what happens when one of the doors is unveiled and how that changes the mathematics.

Adam Kucharski (02:50): Yeah, sure. So it's a problem that's been around for a while, and it's based on this game show. So you've got three doors that are closed.
Behind two of the doors there is a goat, and behind one of the doors is a luxury car. So obviously, you want to win the car. The host asks you to pick a door, so you point to one, maybe door number two. Then the host, who knows what's behind the doors, opens another door to reveal a goat, and asks: do you want to change your mind? Do you want to switch doors? And a lot of the intuition people have – certainly mine when I first came across this problem many years ago – is, well, you've got two doors left, right? You've picked one, there's another one, it's 50-50. And even some quite well-respected mathematicians had that reaction.

Adam Kucharski (03:27): People like Paul Erdős, who published more papers than almost anyone else – that was his initial gut reaction. But if you work through all of the combinations – you pick this door, then the host does this, and you switch or don't switch – and work through all of those options, you actually double your chances if you switch versus sticking with the door. So it's counterintuitive, but one of the things that really struck me, even over the years of trying to explain it, is that convincing myself of the answer – which I did quite quickly when I first came across it as a teenager – is very different from convincing someone else. Even with Paul Erdős, one of his colleagues showed him what I'd call proof by exhaustion: go through every combination. That didn't really convince him. So then he started to simulate: well, let's do a computer simulation of the game a hundred thousand times. And again, switching was the optimal strategy, but Erdős still wasn't satisfied – he accepted that it was the case, but he wasn't really satisfied with it. And I think that encapsulates, for a lot of people, their experience of proof and evidence. It's a fact and you have to take it as given, but there's often quite a big bridge to really understanding why it's true and feeling convinced by it.
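Both forms of argument Kucharski mentions – the proof by exhaustion shown to Erdős and the hundred-thousand-game simulation – fit in a few lines of Python. A minimal sketch (the deterministic host choice in the enumeration is harmless, since when your first pick is the car, switching loses whichever goat door the host opens):

```python
import random
from itertools import product

# 1) Proof by exhaustion: enumerate every (car position, first pick) pair.
#    The host always opens a goat door that isn't the player's pick.
wins_switch = wins_stick = 0
for car, pick in product(range(3), range(3)):
    host_opens = next(d for d in range(3) if d != pick and d != car)
    switched = next(d for d in range(3) if d != pick and d != host_opens)
    wins_switch += (switched == car)
    wins_stick += (pick == car)
print(f"exhaustive: stick wins {wins_stick}/9, switch wins {wins_switch}/9")

# 2) Simulation, as Erdős's colleague ran: play the game 100,000 times.
trials = 100_000
switch_wins = 0
for _ in range(trials):
    car = random.randrange(3)
    pick = random.randrange(3)
    host_opens = random.choice([d for d in range(3) if d != pick and d != car])
    switched = next(d for d in range(3) if d != pick and d != host_opens)
    switch_wins += (switched == car)
print(f"simulated P(win | switch) = {switch_wins / trials:.3f}")  # about 2/3
```

The enumeration gives stick 3/9 versus switch 6/9, and the simulation converges to about 0.667 – which, as the anecdote illustrates, is exactly the kind of result that can establish a fact without producing the feeling of understanding it.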
Eric Topol (04:41): Yeah, I think it's a fabulous example, because I think everyone would naturally assume it's 50-50, and it isn't. And I think that gets us to the topic at hand. One of the many things I love about this book is that you don't just get into science and medicine; you cut across all the domains – law, mathematics, AI. So it's a very comprehensive sweep of everything about proof and truth, and it couldn't come at a better time, as we'll get into. Maybe just starting off with math, there's a term I love: mathematical monsters. Can you tell us a little bit more about that?

Adam Kucharski (05:25): Yeah, this was a fascinating situation that emerged in the late 19th century. A lot of maths, certainly in Europe, had been derived from geometry, because of the ancient Greek influence on how we shaped things, and then Newton and his work on rates of change and calculus. It was really the natural world that provided a lot of the inspiration – these kinds of tangible objects, tangible movements. And as mathematicians started to build out the theory around rates of change and how we tackle these kinds of situations, they sometimes took that intuition a bit too seriously. There were some theorems that some of these French mathematicians said were intuitively obvious – for example, this idea of how things change smoothly over time and how you do those calculations. But what happened was some mathematicians came along and showed that when you have things that can be infinitely small, that intuition didn't necessarily hold in the same way.

Adam Kucharski (06:26): And they came up with these examples that broke a lot of these theorems, and a lot of the establishment at the time called these things monsters. They called them aberrations against common sense, with this idea that if Newton had known about them, he never would have made all of his discoveries, because they're just nuisances and we need to get rid of them. There's this real tension at the core of mathematics in the late 1800s, where some people just wanted to disregard this and say, look, it works most of the time, that's good enough, and others really weren't happy with this quite vague logic – they wanted to put it on much sturdier ground. And what was remarkable, actually, is if you trace this into the 20th century, a lot of these monsters – in some cases functions exhibiting almost constant motion, rather than our intuitive concept of movement as something smooth (if you drop an apple, it accelerates at a very smooth rate) – would become foundational in our understanding of things like probability and Einstein's work on atomic theory. A lot of these concepts where geometry breaks down would be really important in relativity. So actually, these things we thought were monsters were all around us all the time, and science couldn't advance without them. I think it's just this remarkable example of this tension within a field that's supposedly concrete, where the things that were going to be shunned actually turned out to be quite important.

Eric Topol (07:53): It's great how you convey how nature isn't so neat and tidy – things like Brownian motion, understanding that – I mean, so many things fit into that general category. The legal domain we won't get into too much, because that's not so much the audience of Ground Truths, but the classic ideas of innocent until proven guilty and proof beyond reasonable doubt are obviously really important parts of that overall sense of proof and truth; we'll get into one thing I'm fascinated about related to that subsequently. And then in science: before we get into the different types of proof, obviously the pandemic is still fresh in our minds – we're in an endemic phase with Covid now – and there are so many things we got wrong along the way about uncertainty; we didn't convey that science is an always-evolving search for what is true. There's no shortage of uncertainty at any moment. So can you recap – you did so much work during the pandemic, and obviously some of it's in the book – what were some of the major lessons about proof and truth you took from the pandemic?

Adam Kucharski (09:14): I think it was almost a story of two halves, because on the one hand, science was the thing that got us where we are today. The reason that so much normality could resume and so much risk was reduced was the development of vaccines and the understanding of treatments and of variants and their characteristics as they emerged. So it was this amazing opportunity to see things happen faster than they ever had in history – and, I think, ever in science. It certainly shifted a lot of my thinking about what's possible and even how we should think about these kinds of problems.
But on the other hand, where people might have been more familiar with seeing science progress more slowly and reach consensus around health issues, having that emerge very rapidly can present challenges, as we found with some of the work we did on the Alpha and then Delta variants and their early quantification.

Adam Kucharski (10:08): So really the big question is: is this thing more transmissible? Because at the time, countries were thinking about control measures, thinking about relaxing things, and you've got this enormous social, economic, and health decision-making based around, essentially, is it a lot more spreadable or is it not? And you only had these fragments of evidence. So for me, that was really an illustration of the sharp end. What we ended up doing with some of those, rather than arguing over a precise number for something like Delta, was to look at what range matters. Arguing over whether it's 40% or 50% or 30% more transmissible is perhaps less important than establishing that it's substantially more transmissible and it's going to start going up. Is it going to go up extremely fast or just very fast?

Adam Kucharski (10:59): That's still a very useful conclusion. What often created more of the challenges – the things that, on reflection, people looking back pick up on – is where there was probably overstated certainty. We saw that around airborne spread, for example, with things stated as fact, in some cases, by some organizations. In some situations as well, governments had a constraint and presented it as scientific. The UK, for example, would say testing isn't useful, when what was happening at the time was that there weren't enough tests. It was more a case of: we can't test at that volume. There was a blurring between what the science was saying and what the decision-making required. One thing we did in the UK was make a lot of the epidemiological evidence available, which I think was important.

Adam Kucharski (11:51): I found it a lot easier to communicate, if talking to the media, to be able to say: look, this is the paper that's out, this is what it means, this is the evidence. I always found it quite uncomfortable having to communicate things where you knew there were reports behind the scenes but you couldn't actually articulate them. But what that did is create the impression that epidemiology was driving the decision-making a lot more than it perhaps was in reality, because so much of that evidence was being made public, while a lot more of the evidence around education or economics was being handled behind the scenes. I think that created an asymmetry in public perception about how it was all feeding in. And it happens – it is really hard, as a scientist, when you've got journalists asking you how to run the country, to work out those steps: am I describing the evidence behind what we're seeing? Am I describing the evidence about different interventions? Or am I imposing, to some extent, my value system on what we do? All of that, in very intense times, can easily get blurred together in public communication.
We saw a few examples of that with the "follow the science" angle on policy, where actually, once you get into what you're prioritizing within a society, quite rightly, you've got other things beyond just the epidemiology driving that.

Eric Topol (13:09): Yeah, I mean, that term you just used, follow the science, is such an important term because it tells us about the dynamic aspect. It isn't just a snapshot; it's constantly being revised. But during the pandemic we had things like the six-foot rule, which was never supported by data, and yet still today, if I walk around my hospital, there are still the footprints of the six-foot rule – not paying attention to the fact that this was airborne; it took years before some of these things were accepted. The flatten-the-curve stuff with lockdowns, which I was never supportive of – though perhaps at the worst point, the idea that hospitals would get overrun was an issue – got carried away with school shutdowns for prolonged periods and, in some parts of the world, especially stringent lockdowns. But anyway, we learned a lot.

Eric Topol (14:10): But perhaps one of the greatest lessons is about people's expectations of science: that it's absolute, that somehow you have this truth, when it's not there. It's getting revised; it's kind of on-the-job training – in this case, revision on the pandemic as it unfolded. But very interesting. And that gets us to, I think, the next topic, which is a fundamental part of the book, distributed throughout it: the different types of proof in biomedicine and, of course, across all these domains. You take us through things like randomized trials, p-values, 95 percent confidence intervals, counterfactuals, causation and correlation, peer review, the works, which is great, because a lot of people have misconceptions about these things. For example, randomized trials – the temple of randomized trials – are not as great as a lot of people think. Yes, they can help us establish cause and effect, but they're skewed by the people who come into the trial, so they may not be a representative sample at all. What are your thoughts about over-deference to randomized trials?

Adam Kucharski (15:31): Yeah, I think the story of how we rank evidence in medicine is a fascinating one – even just how long it took for people to think about these elements of randomization. Fundamentally, what we're trying to do when we gather evidence, in medicine or science, is prevent ourselves from confusing randomness for a signal – we don't want to think something is going on when it's not. And the challenge, particularly with any intervention, is that you only get to see one version of reality. You can't give someone a drug, follow them, rewind history, not give them the drug, and then follow them again. So one of the things randomization allows us to do is this: if you have two groups, one randomized to treatment and one not, then on average, the difference in outcomes between those groups is going to be down to the treatment effect.

Adam Kucharski (16:20): It doesn't necessarily mean that in any one instance that will be the case, but on average, that's the expectation you'd have. And it's interesting, actually, that the first modern randomized controlled trial (RCT) in medicine, in 1947, was for TB and streptomycin.
The randomization element actually wasn't so much statistical as behavioral. If you have people coming to hospital, you could to some extent just say: we'll alternate. We're not going to randomize; the first patient is a control, the second patient gets the treatment. But what they found in a lot of previous studies was that doctors have biases – maybe that patient looks a little bit ill, or that one is borderline for eligibility – and you often got quite striking imbalances when you allowed for human judgment. So it was really about shielding against those behavioral elements. It's a really powerful tool for a lot of these questions, but as you mentioned, one issue is that you have the population you study, and then, in reality, the question of how that translates elsewhere.

Adam Kucharski (17:17): Things like flu vaccines are a good example; they're very dependent on immunity and evolution and what goes on in different populations. Sometimes you've had a result on a vaccine in one place, and the effectiveness doesn't translate in the same way somewhere else. The other really important thing to bear in mind is, as I said, the averaging – you're getting an average effect between two different groups. And we're seeing a lot of development around things like personalized medicine, where actually you're much more interested in the outcome for the individual. What a trial can give you evidence on is: on average, across a group, this is the effect I can expect this intervention to have. But we've now seen the emergence of things like N=1 studies, where you can actually look at those kinds of interventions in the same individual, particularly for chronic conditions.

Adam Kucharski (18:05): And there are also these extreme examples where you're ethically not going to run a trial – there's never been a trial of whether it's a good idea to have intensive care units in hospitals, and there are a lot of historical treatments that are just so overwhelmingly effective that we're not going to run a trial. So you can see this hierarchy getting shifted over time, because there are situations where other forms of evidence can get you closer to what you need, or more feasibly to an answer, where it's just not ethical or practical to do an RCT.
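The point about alternation versus true randomization is easy to demonstrate numerically. Below is a toy simulation, with made-up numbers, of the behavior Kucharski describes: under alternation, a well-meaning doctor nudges sicker, borderline patients away from the treatment arm, which inflates the apparent treatment effect; a coin flip does not.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each patient has a latent "severity"; sicker patients have worse outcomes.
# The treatment improves outcomes by a fixed amount (the true effect).
n = 10_000
true_effect = 1.0
severity = rng.normal(size=n)

def outcome(treated, severity, rng):
    return true_effect * treated - 2.0 * severity + rng.normal(size=severity.shape)

# 1) Alternation with a biased doctor: borderline (sicker) patients who were
#    due for treatment get nudged into the control arm instead.
alternate = np.tile([0, 1], n // 2).astype(float)
biased_arm = alternate.copy()
biased_arm[(severity > 0.5) & (alternate == 1)] = 0.0
y = outcome(biased_arm, severity, rng)
est_biased = y[biased_arm == 1].mean() - y[biased_arm == 0].mean()

# 2) True randomization: a coin flip, independent of severity.
random_arm = rng.integers(0, 2, size=n).astype(float)
y = outcome(random_arm, severity, rng)
est_random = y[random_arm == 1].mean() - y[random_arm == 0].mean()

print(f"true effect:             {true_effect:.2f}")
print(f"biased alternation est.: {est_biased:.2f}")   # noticeably inflated
print(f"randomized est.:         {est_random:.2f}")   # close to 1.0
```

Because the treated group ends up systematically healthier under biased alternation, the naive difference in means roughly doubles the true effect in this setup, while randomization recovers it – the "shielding against behavioral elements" in miniature.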
Eric Topol (18:37): And that brings us to natural experiments, which I just wrote about recently – the one with shingles, where there are two big natural experiments suggesting that the shingles vaccine might reduce the risk of Alzheimer's, an added benefit beyond the shingles protection that was not anticipated. Your thoughts about natural experiments? Because here you're getting a much different type of population assessment – again, not at the individual level, but not necessarily restricted by some potentially skewed enrollment criteria.

Adam Kucharski (19:14): I think this has emerged as a really valuable tool. It's kind of interesting – for the book I talked to economists like Josh Angrist – a lot of these ideas emerged in epidemiology but were really then taken up by economists, particularly as they wanted to add more credibility to a lot of these policy questions. Ultimately, it comes down to this issue that for a lot of problems, we can't necessarily intervene and randomize, but there might be a situation that has done it for us to some extent. The classic example is the Vietnam draft, where birthdays were drawn out of a lottery essentially at random. There have been a lot of studies subsequently about the effect of serving in the military on different lifetime outcomes, because broadly those people were randomized – it was for a different reason, but you've got that element of randomization driving it.

Adam Kucharski (20:02): And so, with some of the recent shingles data and other studies, you might have a situation, for example, where there's been an intervention that's somewhat arbitrary in terms of timing – a cutoff on a birth date, for example. Under certain assumptions you could think, well, actually, there's no real reason for the person born on this day and the person born on that day to be fundamentally different. Perhaps there might be cohort effects if it's school years or that sort of thing, but generally this isn't the same as comparing people of very different ages and very different characteristics. It's just that nature – or in this case a policy intervention introduced for a different reason – has given you that randomization, or pseudo-randomization, which allows you to look at the effect of an intervention in a way you couldn't as reliably if you were just digging into yes/no data on who received a vaccine.

Eric Topol (20:52): Yeah, no, I think it's really valuable. And I think natural experiments are now increasingly given priority. They're not always so abundant to extrapolate from, but when they are there, they're phenomenal. Causation versus correlation is such a big issue – I mean, Judea Pearl's The Book of Why – and you give so many great examples throughout Proof. I wonder if you could comment on that a bit more, because this is where associations are somehow confused with a direct effect, and we unfortunately make these jumps all too frequently. Perhaps it's the most common problem in the way we interpret medical research data.

Adam Kucharski (21:52): Yeah, it's an issue that gets drilled into a lot of people in their training: just because two things are correlated doesn't mean that one causes the other. But it really struck me, as I talked to people researching the book, that in research practice there's actually a bit more to it in how it plays out. First of all, a correlation between things generally doesn't tell you much that's useful for intervention. If two things are correlated, it doesn't mean that changing one is going to have an effect on the other; there might be something influencing both of them. When you have more ice cream sales, you see more heat stroke cases; that doesn't mean changing ice cream sales is going to change heat stroke. But correlation does potentially allow you to make predictions, because if you can identify consistent patterns, you can say: okay, if this thing is going up, I'm going to predict that this other thing is going up.
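The ice cream and heat stroke example can be made concrete in a few lines. In the sketch below (synthetic data, illustrative numbers), temperature drives both variables, so they are strongly correlated; adjusting for the confounder by correlating the residuals after regressing each on temperature makes the association largely vanish, which is exactly why the correlation carries no license for intervention.

```python
import numpy as np

rng = np.random.default_rng(7)

# Temperature is the confounder: it drives both ice cream sales and heat stroke.
days = 365
temperature = rng.normal(25, 7, size=days)
ice_cream_sales = 50 + 3.0 * temperature + rng.normal(0, 10, size=days)
heat_strokes = 0.5 * np.maximum(temperature - 30, 0) + rng.normal(0, 0.3, size=days)

r = np.corrcoef(ice_cream_sales, heat_strokes)[0, 1]
print(f"correlation(sales, heat strokes) = {r:.2f}")  # clearly positive

# Adjust for the confounder: regress each variable on temperature and
# correlate the residuals. The association largely disappears.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_adj = np.corrcoef(residuals(ice_cream_sales, temperature),
                    residuals(heat_strokes, temperature))[0, 1]
print(f"correlation after adjusting for temperature = {r_adj:.2f}")  # near zero
```

The first number would support a prediction (sales up, expect more heat stroke); only the causal structure, exposed by the adjustment, tells you that intervening on sales would do nothing.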
Adam Kucharski (22:37): One thing I found quite striking, actually, talking to researchers in different fields, is how many fields choose to focus on prediction because it avoids having to deal with this cause-and-effect problem. Even in fields like psychology, it was interesting that there's a lot of focus on predicting things like relationship outcomes. But actually, people don't want a prediction about their relationship; they want to know, well, how can I do something about it? You don't just want someone to tell you your relationship's going to go downhill. So part of the challenge is that people get stuck on prediction because it's an easier field of work, whereas actually some of these problems require intervention. The other thing that really stood out for me is that in epidemiology and a lot of other fields, rightly, people are very cautious not to get this mixed up.

Adam Kucharski (23:24): They don't want to mix up correlations or associations with causation, but you've got this weird situation where a lot of papers go out of their way not to use causal language – it's an association, it's just an association, you can't say anything about causality – and then at the end of the paper they'll say, well, we should think about introducing more of this thing or restricting that thing. So really the whole paper and its purpose is framed around a causal intervention, but it's extremely careful throughout not to frame it as a causal claim. By skirting the issue that much, we actually avoid the problems people care about. A lot of the nice work going on in causal inference is trying to get people to confront this more head-on, rather than saying, okay, you can just stay in this prediction world, and then later make a policy suggestion off the back of it.

Eric Topol (24:20): Yeah, I think cause and effect is a very alluring concept to support proof, as you so nicely go through in the book. But of course, one of the things that we use to help us is the biological mechanism. So here you have, let's say, for example, you're trying to get a new drug approved by the Food and Drug Administration (FDA), and the request is: we want two independent randomized trials, we want p-values that are significant, and we want to know the biological mechanism, ideally with the dose response of the drug. But there are many drugs, as you review, that have no established biological mechanism. Even when the tobacco problems were mounting, the actual mechanism of how tobacco use caused cancer wasn't known. So how important is the biological mechanism, especially now that we're well into the AI world, where explainability is demanded and we don't know the mechanism – but then, we don't know the mechanism for lots of things in medicine either, like anesthetics, and even things as simple as aspirin, how it works, and many others. So how do we deal with this quest for the biological mechanism?

Adam Kucharski (25:42): I think that's a really good point. It shows a lot of the transition I think we're going through currently. Particularly for things like smoking and cancer, where it's very hard to run a trial – you can't make people randomly take up smoking – having those additional pieces of evidence, whether it's an analogy with a similar carcinogen or a biological mechanism, can help give you more support for the argument that there's a cause and effect going on.
But what I found quite striking – and I realized it's something that had bothered me a bit, and I'd be interested to hear whether it bothers you – is that with the emergence of AI, there's almost a bit of a loss of scientific satisfaction. You grow up learning about how the world works and why things do what they do.

Adam Kucharski (26:26): I talked, for example, with some of the people involved with AlphaFold and some of the subsequent work building on those predictions about structures. And they'd almost made peace with it, which I found interesting, because I think they'd started off a bit uncomfortable: you've got these remarkable AI models making these predictions, but we still don't understand biologically what's happening here. But they've settled on: well, biology is really complex on some of these problems, and if we can have a tool that gives us this extremely valuable information, maybe that's okay. It was just interesting that they'd really gone through that process, which I think a lot of people are still grappling with – that discomfort of using AI, and what's going to convince you that it's a useful, reliable prediction, whether it's something like predicting protein folding or getting in a self-driving car. What's the evidence you need to be convinced that it's reliable?

Eric Topol (27:26): Yeah, no, I'm so glad you brought that up, because when Demis Hassabis and John Jumper won the Nobel Prize, the point I made was that maybe there should be an asterisk with AI, because they don't know how it works. I mean, they had all the rich data from the Protein Data Bank, and they got the transformer model to make 200 million protein structure predictions, but to this day they still don't fully understand how the model was really working. So it reinforces what you're just saying. And of course, it cuts across so many types of AI. It's just that we tend to hold AI to different standards in medicine, not realizing that there's a lot of lack of explainability in routine medical treatments today. Now, one of the things I found fascinating in your book – because there are different levels of proof, different types of proof – is the part about seemingly solid logical systems.

Eric Topol (28:26): On page 60 of the book – and this is especially pertinent to the US right now – there is a bit about Kurt Gödel. There was a question about dictatorship in the US: could it ever occur? And Gödel says, "oh, yes, I can prove it." And he's using the Constitution itself to prove it, which I found fascinating, because of course we're seeing that emerge right now. Can you give us a little bit more about this? Because this business with the Constitution's amendment mechanism is fascinating – I mean, I never thought the Constitution would allow for a dictatorship to emerge.

Adam Kucharski (29:23): It is a fascinating story. Kurt Gödel was one of the greatest logical minds of the 20th century and did a lot of work, particularly in the early 20th century, around systems of rules – particularly things like mathematics and whether they can ever be really fully satisfying. In mathematics, he showed there's a problem: it's very hard to have a set of rules for something like arithmetic that is both complete, covering every situation, and also free of contradictions.
And I think a lot of countries, if you go back, things like the Napoleonic code and these attempts to almost write down every possible legal situation that could be imaginable always just descended into either needing amendments or having contradictions. I think Gödel's work really summed it up, and there's a story, this is in the late forties, when he had his citizenship interview, and Einstein and Oskar Morgenstern went along as witnesses for him.

Adam Kucharski (30:17): And it's always told as kind of a lighthearted story, as this logical mind, this academic, just saying something silly in front of the judge. And actually, by my own admission, I've in the past given talks and mentioned it in this slightly lighthearted way, but for the book I got talking to a few people who'd taken it more seriously. I realized actually he was this extremely logically focused mind at the time, and maybe there should have been something more to it. And people who have dug more into the possibilities were asking, well, what could he have spotted that bothered him? And a lot of the work that he did about consistency in maths was particularly around self-referential statements. So if I say this sentence is false, it's self-referential, and if it is false, then it's true, but if it's true, then it's false, and you get these kind of weird self-referential contradictions.

Adam Kucharski (31:13): And so, one of the theories about Gödel was that in the Constitution, it wasn't that there was a kind of rule for someone to become a dictator, but rather that people can use the mechanisms within the Constitution to make it easier to make further amendments, the kind of downward cycle of amendment that he had seen happening in Europe in the run-up to the war. And again, because it was never fully documented exactly what he thought, but it's one of the theories: that it wouldn't just be outright, that it would be this cyclical process of weakening and weakening and weakening and making it easier to amend. And actually, when I wrote that, it was among the earlier bits of the book that I drafted, and I did sort of debate whether to include it. I thought, is this actually just a bit in the weeds of American history? And here we are. Yeah, it's remarkable.

Eric Topol (32:00): Yeah, yeah. No, it struck me when I was reading this, because here back in 1947, there was somebody predicting that this could happen based on some loopholes, if you will, or the ability to change things, even though you would've thought otherwise, that there wasn't any possible capability for that to happen. Now, one of the things I thought was a bit contradictory is two parts here. One is from Angus Deaton, who wrote, "Gold standard thinking is magical thinking." And then the other is what you basically are concluding in many respects: "To navigate proof, we must reach into a thicket of errors and biases. We must confront monsters and embrace uncertainty, balancing — and rebalancing — our beliefs. We must seek out every useful fragment of data, gather every relevant tool, searching wider and climbing further. Finding the good foundations among the bad. Dodging dogma and falsehoods. Questioning. Measuring. Triangulating. Convincing. Then perhaps, just perhaps, we'll reach the truth in time." So here you have on the one hand your search for the truth, proof, which I think that little paragraph says it all.
In many respects, it sums up the work that you review here, and on the other you have this Nobel laureate saying, you don't have to go to extremes here. The enemy of good is perfect, perhaps. I mean, how do you reconcile this sense that you shouldn't go so far? Don't search for absolute perfection of proof.

Adam Kucharski (33:58): Yeah, I think that encapsulates a lot of what the book is about, that search for certainty and how far you have to go. There's a lot of interesting discussion, some fascinating papers, around at what point do you use these studies? What are their flaws? But I think one of the things that does stand out, across fields, across science, medicine, even law and AI, is having these kind of cookie-cutter rules: this is the definitive way of doing it, and if you just follow this simple rule, if you do your p-value, you'll get there and you'll be fine. And I think that's where a lot of the danger is. And I think that's what we've seen over time. In certain sciences, people chasing certain targets and all the behaviors that come around that, or in certain situations disregarding valuable evidence because you've got this kind of gold standard and nothing else will do.

Adam Kucharski (34:56): And I think particularly in a crisis, it's very dangerous to have that, because you might have a low level of evidence that demands a certain action, and you almost bias yourself towards inaction if you have these kind of very simple thresholds. So I think for me, across all of these stories and across the whole book, William Gosset, who did a lot of pioneering work on statistical experiments at Guinness in the early 20th century, had this nice question he sort of framed: how much do we lose? And if we're thinking about the problems, there are always more studies we can do, there's always more confidence we can have, but whether it's a patient we want to treat or a crisis we need to deal with, we need to work out how to get the level of proof that's really appropriate for where we are currently.

Eric Topol (35:49): I think it's exceptionally important that there's this kind of spectrum or continuum in following science and the search for truth, and that distinction, I think, really nails it. Now, one of the things that's unique in the book is you don't just go through all the different types of how you would get to proof, but you also talk about how the evidence is acted on. And for example, you quote, "they spent a lot of time misinforming themselves." This is the whole idea of taking data and torturing it, dredging it however you want, to support either conspiracy theories or alternative facts. Basically manipulating, sometimes even emasculating, what evidence and data we have. And one of the sentences, or I guess this is from Sir Francis Bacon, is "truth is a daughter of time", but the added part is "not authority". So here we have our president, who repeats things that are fabricated or wrong, and he keeps repeating them to the point that people believe they're true. But on the other hand, you could say truth is a daughter of time because you like to not accept any truth immediately. You like to see it get replicated and further supported, backed up. So in that one sentence, truth is a daughter of time, not authority, there's the whole ball of wax here. Can you take us through that? Because I just think that people don't understand truth being tested over time, but also manipulated by its repetition.
This is a part of the big problem that we live in right now.

Adam Kucharski (37:51): And I think it's something that writing the book, and actually just reflecting on it subsequently, has made me think about a lot, in just how people approach these kinds of problems. I think there's an idea that conspiracy theorists are just lazy and have maybe just fallen for a random thing, but talking to people, they really think about these things a lot more. And actually, the more I've ended up engaging with people who believe things that are just outright unevidenced, around vaccines, around health issues, they often have this mountain of papers and data to hand, and often they will be peer-reviewed papers. It won't necessarily be supporting the point that they think it supports.

Adam Kucharski (38:35): But it's not something where you can just say everything you're saying is false; there's actually often a lot of things that have been put together, and it's just that leap to the conclusion. I think you also see a lot of scientific language borrowed. So I gave a talk earlier this year and it got posted on YouTube. It had conspiracy theories in it, and there were a lot of conspiracy theory supporters who piled into the comments, and one of the points they made is that skepticism is good. It's the kind of Royal Society motto, take no one's word for it, you need this. We are the ones that are kind of doing science, and people who just assume that science is settled are in the wrong. And again, you also mentioned that repetition. There's this phenomenon, the illusory truth effect: if you repeatedly tell someone something that's false, it'll increase their belief in it, even if it's something quite outrageous.

Adam Kucharski (39:27): And that mimics that scientific repetition, because people kind of say, okay, well, if I've heard it again and again, it's almost like treating these as mini experiments: I'm just accumulating evidence that this thing is true. So it made me think a lot about how you've got essentially a lot of mimicry of the scientific method, the amount of data and how you present it, and this kind of skepticism being good. But I think a lot of it comes down to, as well as just looking at the logical flaws, the ability to be wrong, and not just seeking out things that confirm. It's something that I've certainly tried to do a lot working on emergencies, and on one of the scientific advisory groups that I worked on, it almost became a catchphrase: whenever someone presented something, they finished by saying, tell me why I'm wrong.

Adam Kucharski (40:14): And if you've got a variant that's more transmissible, I don't want to be right about that, really. And it is something that is quite hard to do, I've found, particularly for something that's quite high pressure: trying to get a policymaker or someone to write, even just non-publicly, by themselves, what they think is going to happen, or what would convince them that they are wrong about something. I think particularly on contentious issues, where someone's got perhaps a lot of public persona wrapped up in something, that's really hard to do. But I think it's those kinds of elements that distinguish between getting sucked into a conspiracy theory, really seeking out evidence that supports it and trying to just get your theory stronger and stronger, and actually seeking out things that might overturn your belief about the world. And it's often those things that we don't want overturned.
I think those are the views that we all have, politically or in other ways, and that's often where the problems lie.

Eric Topol (41:11): Yeah, I think this is perhaps one of, if not the most essential part here: how to deal with different views. We have biases, as you emphasized throughout, but if you can use these different types of proof to have a sound discussion, conversation, refutation, whereby you don't summarily dismiss another view, which may be skewed and maybe spurious or just absolutely wrong, maybe fabricated, whatever, but instead you can engage and say, here's why these are my proof points, or this is why there's some extent of certainty you can have regarding this view of the data. I think this is so fundamental, because unfortunately, as we saw during the pandemic, the strident minority, the anti-science, anti-vaxxers, were summarily dismissed as being kooks adopting conspiracy theories, without the right engagement and the right debates. And I think this might've helped along the way, not least because a lot of scientists didn't really want to engage in the first place and adopt this methodical proof that you've advocated in the book, so many different ways to support a hypothesis or an assertion. Now, we've covered a lot here, Adam. Have I missed some central parts of the book and the effort? Because it's really quite extraordinary. I know it's your third book, but it's certainly a standout, not just among your books, but among books on this topic.

Adam Kucharski (43:13): Thanks. It's much appreciated. It was not an easy book to write. I think at times I kind of wondered if I should have taken on the topic, and I think a core thing, your last point, speaks to that. I think a core thing is that gap, often, between what convinces us and what convinces someone else. It's often very tempting as a scientist to say the evidence is clear or the science has proved this. But even on something like the vaccines, you do get the loud minority who perhaps think they're putting microchips in people and have outlandish views, but you actually get a lot more people who might just have some skepticism of pharmaceutical companies. My wife was pregnant at the time during Covid, and we weighed it up because there wasn't much data on pregnancy and the vaccine. And I think it's just finding what is convincing. Is it having more studies from other countries? Is it understanding more about the biology? Is it understanding how you evaluate some of those safety signals? And I think it's just really important to not just think about what convinces us and assume it's going to be obvious to other people, but actually think about where they are coming from. Because ultimately, having proof isn't that good unless it leads to the action that can make lives better.

Eric Topol (44:24): Yeah. Well, look, you've inculcated my mind with this book, Adam, called Proof. Anytime I think of the word proof, I'm going to be thinking about you. So thank you. Thanks for taking the time to have a conversation about your book, your work, and I know we're going to count on you for the astute mathematics and analysis of outbreaks in the future, which we will see, unfortunately. We are seeing them now, in fact, already in this country with measles and whatnot.
So thank you, and we'll continue to follow your great work.

**************************************

Thanks for listening, watching or reading this Ground Truths podcast/post. If you found this interesting, please share it! That makes the work involved in putting these together especially worthwhile.

I'm also appreciative for your subscribing to Ground Truths. All content — its newsletters, analyses, and podcasts — is free, open-access. I'm fortunate to get help from my producer Jessica Nguyen and Sinjun Balabanoff for audio/video tech support to pull these podcasts together for Scripps Research. Paid subscriptions are voluntary, and all proceeds from them go to support Scripps Research. They do allow for posting comments and questions, which I do my best to respond to. Please don't hesitate to post comments and give me feedback. Many thanks to those who have contributed — they have greatly helped fund our summer internship programs for the past two years.

A bit of an update on SUPER AGERS: My book has been selected as a Next Big Idea Club winner for Season 26 by Adam Grant, Malcolm Gladwell, Susan Cain, and Daniel Pink. This club has spotlighted the most groundbreaking nonfiction books for over a decade. As a winning title, my book will be shipped to thousands of thoughtful readers like you, featured alongside a reading guide, a "Book Bite," a Next Big Idea Podcast episode, as well as a live virtual Q&A with me in the club's vibrant online community. If you're interested in joining the club, here's a promo code, SEASON26, for 20% off at the website. SUPER AGERS reached #3 for all books on Amazon this week. This was in part related to the segment on the book on the TODAY SHOW, which you can see here. Also at Amazon there is a remarkable sale on the hardcover book, $10.10 at the moment for up to 4 copies. Not sure how long it will last or what prompted it.

The journalist Paul von Zielbauer has a Substack, "Aging With Strength," and did an extensive interview with me on the biology of aging and how we can prevent the major age-related diseases. Here's the link.

Get full access to Ground Truths at erictopol.substack.com/subscribe
”I really love the notion of contributing something to physics.” — Chemistry laureate John Jumper has always been passionate about science and understanding the world. With the AI tool AlphaFold, he and his co-laureate Demis Hassabis have made it possible to predict protein structures. In this podcast conversation, Jumper speaks about the excitement of seeing how AI can help us even more in the future. Jumper also shares his scientific journey and how he ended up working on AlphaFold. He describes a special memory from the 2018 CASP conference where AlphaFold was presented for the first time. Another life-changing moment was the announcement of the Nobel Prize in Chemistry in October 2024; Jumper tells us how his life has changed since then. Through their lives and work, failures and successes, get to know the individuals who have been awarded the Nobel Prize on the Nobel Prize Conversations podcast. Find it on Acast, or wherever you listen to pods. https://linktr.ee/NobelPrizeConversations © Nobel Prize Outreach.
Interview with Stephen Witt
Altman's Gentle Singularity
Sutskever video: start at 5:50-6:40
Paris on Apple Glass
OpenAI slams court order to save all ChatGPT logs, including deleted chats
Disney and Universal Sue A.I. Firm for Copyright Infringement
Apple paper: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Futurism on the paper
Could AI make a Scorsese movie? Demis Hassabis and Darren Aronofsky discuss
YouTube Loosens Rules Guiding the Moderation of Videos
Meta Is Creating a New A.I. Lab to Pursue 'Superintelligence'
Meta and Yandex are de-anonymizing Android users' web browsing identifiers
Amazon 'testing humanoid robots to deliver packages'
Google battling 'fox infestation' on roof of £1bn London office
23andMe's Former CEO Pushes Purchase Price Nearly $50 Million Higher
Code to control vocal production with hands
Warner Bros. Discovery to split into two public companies by next year
Social media creators to overtake traditional media in ad revenue this year
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Stephen Witt
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: agntcy.org smarty.com/twit monarchmoney.com with code TWIT spaceship.com/twit
Jason Howell and Jeff Jarvis return for a deep dive into the week's AI news. We cover Apple's new research paper exposing the illusion of AI reasoning, industry leaders' superintelligence hype and hubris, Altman's "Gentle Singularity" vision, Ilya Sutskever's brain-as-computer analogy, Meta's massive superintelligence lab, LeCun and Pichai's call for new AGI ideas, Apple's on-device AI framework, NotebookLM's new sharing features, pairing NotebookLM with Perplexity, Hollywood's awkward embrace of AI tools, and the creative collision of AI and filmmaking. Subscribe to the YouTube channel! https://www.youtube.com/@aiinsideshow Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice! Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
CHAPTERS:
0:00:00 - Podcast begins
0:02:27 - Apple paper: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
0:05:50 - Sinofsky on the costs of anthropomorphizing LLMs
0:07:34 - Nate Jones: Let's Talk THAT Apple AI Paper—Here's the Takeaway Everyone is Ignoring
0:13:46 - Altman's latest manifesto might be worth mention in comparison
0:19:33 - Ilya Sutskever, a leader in AI and its responsible development, receives U of T honorary degree
0:25:52 - Meta Is Creating a New A.I. Lab to Pursue 'Superintelligence'
0:29:05 - Google CEO says AGI is impossible with today's tech
0:33:17 - WWDC: Apple opens its AI to developers but keeps its broader ambitions modest
0:39:57 - NotebookLM is adding a new way to share your own notebooks publicly
0:42:01 - I paired NotebookLM with Perplexity for a week, and it feels like they're meant to work together
0:45:26 - The Googlers behind NotebookLM are launching their own AI audio startup. Here's a sneak peek.
0:50:48 - Could AI make a Scorsese movie? Demis Hassabis and Darren Aronofsky discuss
0:55:05 - Luca Guadagnino to Direct True-Life OpenAI Movie 'Artificial' for Amazon MGM
0:59:19 - Everyone Is Already Using AI (And Hiding It) "We can say, 'Do it in anime, make it PG-13.' Three hours later, I'll have the movie."
Should we expect a "white-collar bloodbath," in the words of Dario Amodei, the founder of Anthropic, who fears a wipeout of cognitive and artistic professions? For other researchers, though, like Demis Hassabis, AI does not mark the end of work but the beginning of a profound transformation, provided people train themselves ambitiously. In the meantime, with the short film "Ancestra," filmmaker Darren Aronofsky is already testing new narratives blending technology and emotion.
Thu, 29 May 2025 16:00:00 GMT http://relay.fm/material/518
Us Against the Robots 518
Andy Ihnatko and Florence Ion
Let's talk about what developers were promised at last week's Google I/O. Plus, what are Sam and Jony cooking up?
This episode of Material is sponsored by: Vitally: A new era for customer success productivity. Get a free pair of AirPods Pro when you book a qualified meeting. Yawn Email: Tame your inbox with intelligent daily summaries. Start your 14-day free trial today.
Links and Show Notes:
Google I/O 2025 developer keynote
Sam and Jony introduce io
Google DeepMind's Demis Hassabis on AGI, Innovation and More
# TOPIC: THE FRONTIERS OF AI: Analysis of the Google I/O 2025 conversation with Demis Hassabis
# PRESENTED AND HOSTED BY
This week, we take a field trip to Google and report back about everything the company announced at its biggest show of the year, Google I/O. Then, we sit down with Google DeepMind's chief executive and co-founder, Demis Hassabis, to discuss what his A.I. lab is building, the future of education, and what life could look like in 2030.
Guest: Demis Hassabis, co-founder and chief executive of Google DeepMind
Additional Reading:
At Google I/O, everything is changing and normal and scary and chill
Google Unveils A.I. Chatbot, Signaling a New Era for Search
Google DeepMind C.E.O. Demis Hassabis on the Path From Chatbots to A.G.I.
We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Demis Hassabis is the CEO of Google DeepMind. Sergey Brin is the co-founder of Google. The two leading tech executives join Alex Kantrowitz for a live interview at Google's IO developer conference to discuss the frontiers of AI research. Tune in to hear their perspective on whether scaling is tapped out, how reasoning techniques have performed, what AGI actually means, the potential for an intelligence explosion, and much more. Tune in for a deep look into AI's cutting edge featuring two executives building it. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
OpenAI just pitched "OpenAI for Countries," offering democracies a turnkey AI infrastructure while some of the world's richest quietly stockpile bunkers and provisions. We'll dig into billionaire Paul Tudor Jones's revelations about AI as an imminent security threat, and why top insiders are buying land and livestock to ride out the next catastrophe. Plus, a wild theory that Gavin has hatched regarding OpenAI's non-profit designation. Then, we break down the updated Google Gemini 2.5 Pro's leap forward in coding… just 15 minutes to a working game prototype… and how this could put game creation in every kid's hands. Plus, Suno's 4.5 music model that finally brings human-quality vocals, and robots gone wild in Chinese warehouses. AND OpenAI drops $3 billion on Windsurf, HeyGen's avatar model achieving flawless lip sync from any angle, the rise of blazing-fast open source video engines, UCSD's whole-body ambulatory robots shaking like nervous toddlers, and even Game of Thrones Muppet mashups with bizarre glitch art. STOCK YOUR PROVISIONS. THE ROBOT CLEANUP CREWS ARE NEXT. #ai #ainews #openai
Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/
// Show Links //
Does AI Pose an "Imminent Threat"? Paul Tudor Jones 'Heard' About It Conference https://x.com/AndrewCurran_/status/1919759495129137572
Terrifying Robot Goes Crazy https://www.reddit.com/r/oddlyterrifying/comments/1kcbkfe/robot_on_hook_went_berserk_all_of_a_sudden/
Cleaner Robots To Pick Up After The Apocalypse https://x.com/kimmonismus/status/1919510163112779777 https://x.com/loki_robotics/status/1919325768984715652
OpenAI For Countries https://openai.com/global-affairs/openai-for-countries/
OpenAI Goes Non-Profit For Real This Time https://openai.com/index/evolving-our-structure/
New Google Gemini 2.5 Pro Model https://blog.google/products/gemini/gemini-2-5-pro-updates/
Demis Hassabis on the coding upgrade (good video of drawing an app) https://x.com/demishassabis/status/1919779362980692364
New Minecraft Bench looks good https://x.com/adonis_singh/status/1919864163137957915
Gavin's Bear Jumping Game (in Gemini Window) https://gemini.google.com/app/d0b6762f2786d8d2
OpenAI Buys Windsurf https://www.reuters.com/business/openai-agrees-buy-windsurf-about-3-billion-bloomberg-news-reports-2025-05-06/
Suno v4.5 https://x.com/SunoMusic/status/1917979468699931113
HeyGen Avatar v4 https://x.com/joshua_xu_/status/1919844622135627858
Voice Mirroring https://x.com/EHuanglu/status/1919696421625987220
New OpenSource Video Model From LTX https://x.com/LTXStudio/status/1919751150888239374
Using Runway References with 3D Models https://x.com/runwayml/status/1919376580922552753
Amo Introduces Whole Body Movements To Robotics (and looks a bit shaky rn) https://x.com/TheHumanoidHub/status/1919833230368235967 https://x.com/xuxin_cheng/status/1919722367817023779
Realistic Street Fighter Continue Screens https://x.com/StutteringCraig/status/1918372417615085804
Wandering Worlds - Runway Gen48 Finalist https://runwayml.com/gen48?film=wandering-woods
Centaur Skipping Rope https://x.com/CaptainHaHaa/status/1919377295137005586
The Met Gala for Aliens https://x.com/AIForHumansShow/status/1919566617031393608
The Met Gala for Nathan Fielder & Sully https://x.com/AIForHumansShow/status/1919600216870637996
Loosening of Sora Rules https://x.com/AIForHumansShow/status/1919956025244860864
Lately, there's been growing pushback against the idea that AI will transform geroscience in the short term. When Nobel laureate Demis Hassabis told 60 Minutes that AI could help cure every disease within 5–10 years, many in the longevity and biotech communities scoffed. Leading aging biologists called it wishful thinking - or outright fantasy. They argue that we still lack crucial biological data to train AI models, and that experiments and clinical trials move too slowly to change the timeline. Our guest in this episode, Professor Derya Unutmaz, knows these objections well. But he's firmly on Team Hassabis. In fact, Unutmaz goes even further. He says we won't just cure diseases - we'll solve aging itself within the next 20 years. And best of all, he offers a surprisingly detailed, concrete explanation of how it will happen: building virtual cells, modeling entire biological systems *in silico*, and dramatically accelerating drug discovery — powered by next-generation AI reasoning engines.
Google says we're not ready for AGI and honestly, they might be right. DeepMind's Demis Hassabis warns we could be just five years away from artificial general intelligence, and society isn't prepared. Um, yikes? VISIT OUR SPONSOR https://molku.ai/ In this episode, we break down Google's new "Era of Experience" paper and what it means for how AIs will learn from the real world. We talk agentic systems, long-term memory, and why this shift might be the key to creating truly intelligent machines. Plus, a real AI vending machine running on Claude, a half-marathon of robots in Beijing, and Cluely, the tool that lets you lie better with AI. We also cover new AI video tools from Minimax and Character.AI, Runway's 48-hour film contest, and Dia, the open-source voice model that can scream and cough better than most humans. Plus: AI Logan Paul, AI marketing scams, and one very cursed Shrek feet idea. AGI IS ALMOST HERE BUT THE ROBOTS, THEY STILL RUN. #ai #ainews #agi
Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/
// Show Links //
Demis Hassabis on 60 Minutes https://www.cbsnews.com/news/artificial-intelligence-google-deepmind-ceo-demis-hassabis-60-minutes-transcript/
We're Not Ready For AGI: From Time Interview with Hassabis https://x.com/vitrupo/status/1915006240134234608
Google DeepMind's "Era of Experience" Paper https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf
ChatGPT Explainer of Era of Experience https://chatgpt.com/share/680918d5-cde4-8003-8cf4-fb1740a56222
Podcast with David Silver, VP Reinforcement Learning, Google DeepMind https://x.com/GoogleDeepMind/status/1910363683215008227
IntuiCell Robot Learning on its own https://youtu.be/CBqBTEYSEmA?si=U51P_R49Mv6cp6Zv
Agentic AI "Moore's Law" Chart https://theaidigest.org/time-horizons
AI Movies Can Win Oscars https://www.nytimes.com/2025/04/21/business/oscars-rules-ai.html?unlocked_article_code=1.B08.E7es.8Qnj7MeFBLwQ&smid=url-share
Runway CEO on Oscars + AI https://x.com/c_valenzuelab/status/1914694666642956345
Gen48 Film Contest This Weekend - Friday 12p EST deadline https://x.com/runwayml/status/1915028383336931346
Descript AI Editor https://x.com/andrewmason/status/1914705701357937140
Character AI's New Lipsync / Video Tool https://x.com/character_ai/status/1914728332916384062
Hailuo Character Reference Tool https://x.com/Hailuo_AI/status/1914845649704772043
Dia Open Source Voice Model https://x.com/_doyeob_/status/1914464970764628033
Dia on Hugging Face https://huggingface.co/nari-labs/Dia-1.6B
Cluely: New Start-up From Student Who Was Caught Cheating on Tech Interviews https://x.com/im_roy_lee/status/1914061483149001132
AI Agent Writes Reddit Comments Looking To "Convert" https://x.com/SavannahFeder/status/1914704498485842297
Deepfake Logan Paul AI Ad https://x.com/apollonator3000/status/1914658502519202259
The Humanoid Half-Marathon https://apnews.com/article/china-robot-half-marathon-153c6823bd628625106ed26267874d21
Video From Reddit of Robot Marathon https://www.reddit.com/r/singularity/comments/1k2mzyu/the_humanoid_robot_halfmarathon_in_beijing_today/
Vending Bench (AI Agents Run Vending Machines) https://andonlabs.com/evals/vending-bench
Turning Kids Drawings Into AI Video https://x.com/venturetwins/status/1914382708152910263
Geriatric Meltdown https://www.reddit.com/r/aivideo/comments/1k3q62k/geriatric_meltdown_2000/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Demis Hassabis, CEO of Google DeepMind, sparked excitement with his 60 Minutes interview, outlining AI's potential to end all diseases within a decade. Drawing parallels to AlphaFold's revolutionary protein folding solution, Hassabis envisions AI drastically accelerating drug discovery, compressing timelines from years and billions of dollars to mere months by rapidly analyzing vast datasets. He highlights the astonishing discovery by DeepMind's AI of millions of new materials, far surpassing traditional research and showcasing AI's power to "blaze through solutions." We delve into this ambitious vision, considering its feasibility and comparing it to futuristic scenarios, while also exploring AI's growing impact in cybersecurity, fraud prevention, and diagnostics. Beyond healthcare, we touch upon Will Manidis's intriguing observations on unexpected "miracle cures" linked to LLMs and a humorous take from Sam Altman on ChatGPT etiquette. We also spotlight a compelling custom ChatGPT prompt shared by @andrewchen (https://x.com/andrewchen/status/1914168705228882105). Join us for a thought-provoking discussion on the transformative power of AI and its potential to revolutionize our future.
Mentioned: @GoogleDeepMind @demishassabis @WillManidis @andrewchen
Bird flu, which has long been an emerging threat, took a significant turn in 2024 with the discovery that the virus had jumped from a wild bird to a cow. In just over a year, the pathogen has spread through dairy herds and poultry flocks across the United States. It has also infected people, resulting in 70 confirmed cases, including one fatality. Correspondent Bill Whitaker spoke with veterinarians and virologists who warn that, if unchecked, this outbreak could lead to a new pandemic. They also raise concerns about the Biden administration's slow response in 2024 and now the Trump administration's decision to lay off over 100 key scientists. Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain. One of the most awe-inspiring and mysterious migrations in the natural world is currently taking place, stretching from Mexico to the United States and Canada. This incredible spectacle involves millions of monarch butterflies embarking on a monumental aerial journey. Correspondent Anderson Cooper reports from the mountains of Mexico, where the monarchs spent the winter months sheltering in trees before emerging from their slumber to take flight.
Parmy Olsen, author of “Supremacy - AI, ChatGPT and the race that will change the world,” joins us to discuss how tariffs might affect the AI race, and two of the central figures in the AI world, Demis Hassabis and Sam Altman.
How can AI help us understand and master deeply complex systems—from the game Go, which has 10 to the power 170 possible positions a player could pursue, or proteins, which, on average, can fold in 10 to the power 300 possible ways? This week, Reid and Aria are joined by Demis Hassabis. Demis is a British artificial intelligence researcher, co-founder, and CEO of the AI company DeepMind. Under his leadership, DeepMind developed AlphaGo, the first AI to defeat a human world champion in Go, and later created AlphaFold, which solved the 50-year-old protein folding problem. He's considered one of the most influential figures in AI. Demis, Reid, and Aria discuss game theory, medicine, multimodality, and the nature of innovation and creativity. For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/
Select mentions:
Hitchhiker's Guide to the Galaxy by Douglas Adams
AlphaGo documentary: https://www.youtube.com/watch?v=WXuK6gekU1Y
Nash equilibrium & US mathematician John Forbes Nash
Homo Ludens by Johan Huizinga
Veo 2, an advanced, AI-powered video creation platform from Google DeepMind
The Culture series by Iain Banks
Hartmut Neven, German-American computer scientist
Topics:
3:11 - Hellos and intros
5:20 - Brute force vs. self-learning systems
8:24 - How a learning approach helped develop new AI systems
11:29 - AlphaGo's Move 37
16:16 - What will the next Move 37 be?
19:42 - What makes an AI that can play the video game StarCraft impressive
22:32 - The importance of the act of play
26:24 - Data and synthetic data
28:33 - Midroll ad
28:39 - Is it important to have AI embedded in the world?
33:44 - The trade-off between thinking time and output quality
36:03 - Computer languages designed for AI
40:22 - The future of multimodality
43:27 - AI and geographic diversity
48:24 - AlphaFold and the future of medicine
51:18 - Rapid-fire Questions
Possible is an award-winning podcast that sketches out the brightest version of the future—and what it will take to get there. Most of all, it asks: what if, in the future, everything breaks humanity's way? Tune in for grounded and speculative takes on how technology—and, in particular, AI—is inspiring change and transforming the future. Hosted by Reid Hoffman and Aria Finger, each episode features an interview with an ambitious builder or deep thinker on a topic, from art to geopolitics and from healthcare to education. These conversations also showcase another kind of guest: AI. Each episode seeks to enhance and advance our discussion about what humanity could possibly get right if we leverage technology—and our collective effort—effectively.
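For a sense of where numbers like these come from, a rough back-of-the-envelope count (illustrative assumptions, not the podcast's own derivation): a Go board has 361 points, each empty, black, or white, so the number of board configurations is bounded by

\[
3^{361} \approx 1.7 \times 10^{172},
\]

of which roughly $2 \times 10^{170}$ turn out to be legal positions. Likewise, a Levinthal-style estimate for a protein of $N$ residues with about $k$ plausible conformations per residue gives $k^{N}$ candidate folds; assuming $k = 10$ and $N = 300$ already yields $10^{300}$, which is why brute-force enumeration is hopeless and learned structure prediction like AlphaFold matters.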
Two pioneering tech companies and their CEOs are competing over the development of artificial intelligence: Sam Altman of OpenAI and Demis Hassabis of DeepMind. Lost in this race for control are the threats their creators are ignoring. That's the story found in “Supremacy: AI, ChatGPT, and the Race that Will Change the World” by Parmy Olson.
This week Nick talks to Parmy Olson. Parmy Olson is a prominent technology journalist and author, currently a columnist for Bloomberg Opinion. She previously covered tech and innovation for The Wall Street Journal and Forbes, with a focus on AI, robotics, and emerging technologies. In 2012, she published We Are Anonymous, an acclaimed deep dive into the hacker groups Anonymous and LulzSec. Her 2024 book, Supremacy: AI, ChatGPT, and the Race That Will Change the World, explores the rivalry between tech giants like OpenAI and DeepMind in the pursuit of artificial general intelligence, earning the Financial Times Business Book of the Year Award. Nick and Parmy discuss the intense race to develop artificial general intelligence (AGI) and the far-reaching implications of that pursuit. Their conversation highlights the contrast between the idealistic visions of DeepMind's Demis Hassabis and OpenAI's Sam Altman—who saw AGI as a force for solving global challenges—and the reality that both ultimately became deeply tied to tech giants like Google and Microsoft to fund their ambitions. Parmy explains how this reliance shifted the focus away from social good and towards corporate interests. Together, they explore the broader consequences of this power shift, including the lack of meaningful regulation, ongoing ethical concerns around bias and safety in AI models, and the growing dominance of a few large tech firms. They also reflect on the social risks—from job losses and the disruption of traditional career paths to the emotional dependency people are beginning to form with chatbots—raising important questions about the kind of future society is heading towards.
Parmy's Book Choice was: Born to Run by Christopher McDougall
Parmy's Music Choice was: Rumours by Fleetwood Mac
This content is issued by Zeus Capital Limited ("Zeus") (Incorporated in England & Wales No. 4417845), which is authorised and regulated in the United Kingdom by the Financial Conduct Authority ("FCA") for designated investment business, (Reg No. 224621) and is a member firm of the London Stock Exchange. This content is for information purposes only and neither the information contained, nor the opinions expressed within, constitute or are to be construed as an offer or a solicitation of an offer to buy or sell the securities or other instruments mentioned in it. Zeus shall not be liable for any direct or indirect damages, including lost profits arising in any way from the information contained in this material. This material is for the use of intended recipients only.
Hugo Penedones holds a degree in Informatics and Computing Engineering from the University of Porto and is co-founder and currently CTO of Inductiva.AI, an artificial intelligence company for science and engineering. He previously worked at Google DeepMind, where he was a founding member of the AlphaFold project, a protein structure prediction algorithm that went on to revolutionize science in this area and led to the 2024 Nobel Prize in Chemistry being awarded to Demis Hassabis and John M. Jumper (David Baker was the third laureate). Over the course of his career, he has worked in several areas, including computer vision, web search, bioinformatics, and reinforcement learning, at research institutions such as Idiap and EPFL in Switzerland.
Index:
(0:00) Start
(3:30) AD
(3:54) AI applied to science | The AlphaFold project (Google DeepMind) | Paper co-authored by the guest
(14:01) AlphaFold vs LLMs (e.g., ChatGPT) | AlphaGo
(22:20) How AlphaFold began with a hackathon involving Hugo and two colleagues | Demis Hassabis (CEO of DeepMind)
(28:31) Other applications of AI in science: nuclear fusion, weather forecasting
(41:14) AI in materials engineering: discovery of new materials and the potential of superconductors
(46:35) AI as scientist: could AI formulate scientific hypotheses in the future? | Mathematics | P vs NP
(57:10) Are machine learning models black boxes?
(1:03:12) Inductiva, the guest's startup dedicated to numerical simulations with machine learning
(1:13:47) The promise of quantum computing
(1:16:03) Challenges of data quality in AI-driven science | Will we ever be able to simulate a cell?
(1:24:44) What progress can we expect from AI in science over the next 10 years? | Alphacell
This conversation was edited by: João Ribeiro
The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier
Shoot us a Text.
Today is our fearless leader Paul J Daly's birthday! So we gave him the morning off and tapped in producer Nathan Southwick. We're talking all about the new Canada and Mexico tariffs that put pressure on the automotive supply chains, plus the top depreciating cars and how Google is pushing to achieve artificial general intelligence.
Show Notes with links:
The U.S. has enacted 25% tariffs on imports from Canada and Mexico, throwing the highly integrated North American production network into turmoil.
The tariffs, effective today, March 4, apply to all imports except Canadian energy products, which face a lower 10% duty. Canada and Mexico both responded with their own tariffs.
Industry experts predict vehicle prices could rise between $4,000 and $10,000, with Ford CEO Jim Farley cautioning that prolonged tariffs could "blow a hole in the U.S. industry that we have never seen."
Flavio Volpe, president of the Automotive Parts Manufacturers' Association, said that there is potential for U.S. and Canadian auto production to revert to "2020 pandemic-level idling and temporary layoffs within the week."
Key auto models at risk include the Toyota RAV4, Ford Mustang Mach-E, Chevrolet Equinox and Blazer, and the Honda Civic and CR-V, while European automakers with manufacturing in Mexico, including Volkswagen, Stellantis, and BMW, saw their stocks drop sharply.
The STOXX Europe 600 Automobiles and Parts index fell 3.8%, and Continental AG, a major supplier, saw an 8.4% drop in shares.
Used Tesla Model 3 and Model Y vehicles saw the steepest depreciation of any cars in 2024, according to Fast Company's analysis of CarGurus data.
Model Y prices dropped 25.5%, while Model 3 prices fell 25% from January 2024 to January 2025.
Comparatively, the Nissan Maxima only dropped 5.2%, and the Ford Mustang declined 5%.
Full Top 10: Tesla Model Y, Tesla Model 3, Land Rover Range Rover, Jeep Wrangler 4xe, Chevrolet Express Cargo, Ford Transit Connect, RAM ProMaster, Land Rover Range Rover Sport, Chevrolet Bolt EV, and Ford Expedition, all with over 19% depreciation.
Google co-founder Sergey Brin is back and pushing Google DeepMind (GDM) teams to accelerate their progress toward Artificial General Intelligence (AGI). In a newly released memo, Brin outlines the urgency and expectations for Google's AI teams.
Brin emphasizes the need for 60-hour work weeks, daily office attendance, and faster execution by prioritizing simple solutions, code efficiency, and small-scale experiments for faster iteration.
He calls for a shift away from "nanny products" and urges teams to "trust our users" more.
Brin, who has no formal role at Google beyond a board seat, stepped in over the head of Google DeepMind, Demis Hassabis, signaling the urgency of the AGI race.
"I think we have all the ingredients to w
Hosts: Paul J Daly and Kyle Mountsier
Get the Daily Push Back email at https://www.asotu.com/
JOIN the conversation on LinkedIn at: https://www.linkedin.com/company/asotu/
Read our most recent email at: https://www.asotu.com/media/push-back-email
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Andrew Altschuler, a researcher, educator, and navigator at Tana, Inc., who also founded Tana Stack. Their conversation explores knowledge systems, complexity, and AI, touching on topics like network effects in social media, information warfare, mimetic armor, psychedelics, and the evolution of knowledge management. They also discuss the intersection of cognition, ontologies, and AI's role in redefining how we structure and retrieve information. For more on Andrew's work, check out his course and resources at altshuler.io and his YouTube channel.
Check out this GPT we trained on the conversation!
Timestamps
00:00 Introduction and Guest Background
00:33 The Demise of AirChat
00:50 Network Effects and Social Media Challenges
03:05 The Rise of Digital Warlords
03:50 Quora's Golden Age and Information Warfare
08:01 Building Limbic Armor
16:49 Knowledge Management and Cognitive Armor
18:43 Defining Knowledge: Secular vs. Ultimate
25:46 The Illusion of Insight
31:16 The Illusion of Insight
32:06 Philosophers of Science: Popper and Kuhn
32:35 Scientific Assumptions and Celestial Bodies
34:30 Debate on Non-Scientific Knowledge
36:47 Psychedelics and Cultural Context
44:45 Knowledge Management: First Brain vs. Second Brain
46:05 The Evolution of Knowledge Management
54:22 AI and the Future of Knowledge Management
58:29 Tana: The Next Step in Knowledge Management
59:20 Conclusion and Course Information
Key Insights
Network Effects Shape Online Communities – The conversation highlighted how platforms like Twitter, AirChat, and Quora demonstrate the power of network effects, where a critical mass of users is necessary for a platform to thrive. Without enough engaged participants, even well-designed social networks struggle to sustain themselves, and individuals migrate to spaces where meaningful conversations persist. This explains why Twitter remains dominant despite competition and why smaller, curated communities can be more rewarding but difficult to scale.
Information Warfare and the Need for Cognitive Armor – In today's digital landscape, engagement-driven algorithms create an arena of information warfare, where narratives are designed to hijack emotions and shape public perception. The only real defense is developing cognitive armor—critical thinking skills, pattern recognition, and the ability to deconstruct media. By analyzing how information is presented, from video editing techniques to linguistic framing, individuals can resist manipulation and maintain autonomy over their perspectives.
The Role of Ontologies in AI and Knowledge Management – Traditional knowledge management has long been overlooked as dull and bureaucratic, but AI is transforming the field into something dynamic and powerful. Systems like Tana and Palantir use ontologies—structured representations of concepts and their relationships—to enhance information retrieval and reasoning. AI models perform better when given structured data, making ontologies a crucial component of next-generation AI-assisted thinking.
The Danger of Illusions of Insight – Drawing from ideas by Balaji Srinivasan, the episode distinguished between genuine insight and the illusion of insight. While psychedelics, spiritual experiences, and intense emotional states can feel revelatory, they do not always produce knowledge that can be tested, shared, or used constructively.
The ability to distinguish between profound realizations and self-deceptive experiences is critical for anyone navigating personal and intellectual growth.
- AI as an Extension of Human Cognition, Not a Second Brain – While popular frameworks like the "second brain" suggest that digital tools can serve as externalized minds, the episode argued that AI and note-taking systems function more as extended cognition than true thinking machines. AI can assist with organizing and retrieving knowledge, but it does not replace human reasoning or creativity. Properly integrating AI into workflows requires understanding its strengths and limitations.
- The Relationship Between Personal and Collective Knowledge Management – Effective knowledge management is not just an individual challenge but also a collective one. While personal knowledge systems (like note-taking and research practices) help individuals retain and process information, organizations struggle to preserve and share institutional knowledge at scale. Companies like Tesla exemplify how knowledge isn't just stored in documents but embodied in skilled individuals who can rebuild complex systems from scratch.
- The Increasing Value of First-Principles Thinking – Whether in AI development, philosophy, or practical decision-making, the discussion emphasized the importance of grounding ideas in first principles. Great thinkers and innovators, from AI researchers like Demis Hassabis to physicists like David Deutsch, excel because they focus on fundamental truths rather than assumptions. As AI and digital tools reshape how we interact with knowledge, the ability to think critically and question foundational concepts will become even more essential.
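The ontology point above is easier to see with a concrete toy. Here is a minimal sketch in Python of typed concepts linked by named relations; the class, node names, and relations are invented for illustration and are not Tana's or Palantir's actual data model:

```python
# A minimal ontology sketch: typed concepts linked by named relations.
# All names and structure are illustrative only, not any product's real schema.
from collections import defaultdict

class Ontology:
    def __init__(self):
        self.types = {}                 # node -> type label
        self.edges = defaultdict(list)  # node -> [(relation, node)]

    def add(self, node, node_type):
        self.types[node] = node_type

    def relate(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def query(self, node, relation):
        """Return all nodes reachable from `node` via `relation`."""
        return [dst for rel, dst in self.edges[node] if rel == relation]

onto = Ontology()
onto.add("AlphaFold", "Model")
onto.add("DeepMind", "Organization")
onto.add("protein folding", "Problem")
onto.relate("AlphaFold", "developed_by", "DeepMind")
onto.relate("AlphaFold", "addresses", "protein folding")

print(onto.query("AlphaFold", "developed_by"))  # ['DeepMind']
```

Even this toy version shows why structured data helps retrieval: the query walks explicit, typed relations instead of guessing from free text.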
This week, we dive into the heart of the AI Action Summit, a major event held in Paris. The stakes of AI, the headline announcements, and the international context made it a red-hot news topic.

THE WEEK'S NEWS
- The AI Action Summit brought together 58 countries and generated numerous investment announcements, including 109 billion euros in France for data centers.
- Elon Musk made headlines with a proposed purchase of OpenAI, prompting humorous responses from Sam Altman.
- Announcement of the next version of ChatGPT, GPT-5, promising significant improvements.
- A complaint against Apple over privacy violations involving Siri.
-----------
Discover Frogans (https://www.f2r2), the French innovation reinventing the Web [Sponsored]
-----------
THE TRANSATLANTIC DEBRIEF with Bruno Guglielminetti
- A look back at the AI Summit

THE WEEK'S INTERVIEWS
- Xavier Niel, CEO of the Iliad group, insists that France is not behind on AI and is even making other European countries "panic".
- Aravind Srinivas, founder of Perplexity, explains how his chatbot differs from other generative AIs.
- Demis Hassabis, co-founder of Google DeepMind, discusses AGI, artificial general intelligence.
- Leroy Abiguime, of the Togolese company Ubanji, presents a translation tool that makes ChatGPT accessible in local languages.
- Laurence Devillers, AI researcher at Sorbonne Université, explains why AI matters for education, but not at any price.
- Mathilde Cerioli, of the organization Everyone.ai, discusses the impact of digital technology and AI on young brains up to age 25.
- Leïla Mörch, of Magic Lemp, presents the Pluralisme app, which lets you instantly retrieve and analyze statements by political leaders.
- André Loesekrug-Pietri, scientific director of JEDI, takes stock of the summit's economic outcomes, praising the initiative while lamenting its insufficient ambition.

To hear all the full interviews and support the podcast, subscribe to Monde Numérique Premium on Apple Podcasts or visit our website mondenumérique.info.
-----------
♥️ Support Monde Numérique: https://donorbox.org/monde-numerique
On the occasion of the AI Action Summit, two major figures in the field share their vision: Demis Hassabis, co-founder and CEO of DeepMind, and James Manyika, Google's vice president in charge of research.

Artificial intelligence is transforming the world, and its advances raise as many hopes as concerns. Google France recently brought these two leading figures together for a fascinating discussion:
- The positive applications of AI, notably in healthcare, where it is improving the diagnosis of diseases such as tuberculosis in developing countries.
- The advent of Artificial General Intelligence (AGI), capable of carrying out complex tasks and understanding its environment autonomously.
- The challenges of regulation and of preparing society for these technologies, particularly when it comes to workforce training.
- The potential risks of AGI, including its misuse for malicious ends and the need for a solid safety framework.
-----------
♥️ Support Monde Numérique: https://donorbox.org/monde-numerique
The 2024 Nobel Prize in Chemistry was shared by three researchers who used artificial intelligence to predict the three-dimensional shapes of proteins based on their amino acid sequences. In this episode, we hear from one of them, Demis Hassabis, CEO and co-founder of Google DeepMind. His AI program, AlphaFold, has predicted the 3D structures of more than 200 million naturally occurring proteins, collected in a database that is available to the public for free (https://alphafold.ebi.ac.uk/). After a brief introduction to the topic, we hear Dr. Hassabis's Nobel lecture on how he got involved in this groundbreaking research and how he sees AI impacting biology in the future. Here is the link to the full public-domain lecture (with slides and charts): https://www.nobelprize.org/prizes/chemistry/2024/hassabis/lecture/ 'Bench Talk: The Week in Science' is a weekly radio program that airs on WFMP Louisville FORward Radio 106.5 FM (forwardradio.org) every Monday at 7:30 pm, Tuesday at 11:30 am, and Wednesday at 7:30 am. Visit our Facebook page for links to the articles discussed in this episode: https://www.facebook.com/pg/BenchTalkRadio/posts/?ref=page_internal
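As a side note for readers who want to try the database mentioned above: predicted structures can be fetched programmatically. The sketch below is a minimal example in Python using the `requests` library; the endpoint path and response field names are assumptions based on the public EBI service and should be checked against the current documentation:

```python
# Minimal sketch: fetch an AlphaFold-predicted structure entry from the public
# AlphaFold Protein Structure Database. Endpoint and field names are assumed;
# verify against the current docs at https://alphafold.ebi.ac.uk/.
import requests

UNIPROT_ID = "P69905"  # example accession: human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
entry = resp.json()[0]  # the service returns a list of model entries

print(entry.get("uniprotDescription"))  # protein name, if the field is present
pdb_url = entry.get("pdbUrl")           # link to the predicted model file
if pdb_url:
    pdb = requests.get(pdb_url, timeout=30).text
    print(pdb.splitlines()[0])          # first line of the PDB file
```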
On the 49th episode of Enterprise AI Innovators, hosts Evan Reiser (Abnormal Security) and Saam Motamedi (Greylock Partners) talk with Vineet Khosla, Chief Technology Officer of The Washington Post. The Washington Post is the third-largest newspaper in the United States, with 135,000 print subscribers and 2.5 million digital subscribers. In this conversation, Vineet shares his thoughts on the mainstream integration of AI technology, the transformative impact of AI on journalism, and the future of personalized news delivery.

Quick hits from Vineet:

On proof that AI is having a true impact on our lives: "The Nobel Prize for Physics went to Geoffrey Hinton. The Nobel Prize for Chemistry went to Demis Hassabis of DeepMind. This is the first time we're seeing the top prize in physics and chemistry go to people who created an AI which solved a problem in that field. It is the AI they invented that did such a commendable job that other people were forced to recognize their achievement as being top notch."

On the impact AI has on human creative roles: "So when these AI models start to be creative, it is understandable everyone's afraid. Let's put that as the baseline and say this is not wrong. It doesn't make anybody bad. But slowly, and the way we're doing it with creative tools, is that we want AI to do the part of your job that you shouldn't have been doing anyway, and you start to see a change in people's behavior, their hearts and minds. And of course, some people will move faster than others. But when they see the actual benefit, the skeptics will come around and use it to their power."

On encouraging productivity and creativity through AI tools: "You give people these tools, let them be productive, let them go on their journey, and you encourage them. You obviously give really good use cases. Like I said, when I was writing code recently, I got the AI to write me most of my unit tests because, as an engineer, I hate that. And I know they're super important. There is no way I will check in code without it, but I hate writing them. Now that time gets freed up."

Recent book recommendation: Our Mathematical Universe by Max Tegmark
--
Like what you hear? Leave us a review and subscribe to the show on Apple, Google, Spotify, Stitcher, or wherever you listen to podcasts. Enterprise AI Innovators is a show where top technology executives share how AI is transforming the enterprise. Each episode covers the real-world applications of AI, from improving products and optimizing operations to redefining the customer experience. Find more great insights from technology leaders and enterprise software experts at https://www.enterprisesoftware.blog/ Enterprise AI Innovators is produced by Josh Meer.
Ioannis Antonoglou, founding engineer at DeepMind and co-founder of ReflectionAI, has seen the triumphs of reinforcement learning firsthand. From AlphaGo to AlphaZero and MuZero, Ioannis has built some of the most powerful agents in the world. Ioannis breaks down key moments in AlphaGo's games against Lee Sedol (Moves 37 and 78), the importance of self-play and the impact of scale, and reliability, planning, and in-context learning as core factors that will unlock the next level of progress in AI.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital

Mentioned in this episode:
- PPO: Proximal Policy Optimization, a reinforcement learning algorithm developed by OpenAI and later used for RLHF in ChatGPT
- MuJoCo: Open-source physics engine used in developing and benchmarking PPO
- Monte Carlo Tree Search: Heuristic search algorithm used in AlphaGo, as well as in video compression at YouTube and the self-driving system at Tesla
- AlphaZero: The DeepMind model that taught itself from scratch how to master the games of chess, shogi, and Go
- MuZero: The DeepMind follow-up to AlphaZero that mastered games without knowing the rules and was able to plan winning strategies in unknown environments
- AlphaChem: Chemical synthesis planning with tree search and deep neural network policies
- DQN: Deep Q-Network, introduced in the 2013 paper "Playing Atari with Deep Reinforcement Learning"
- AlphaFold: DeepMind model for predicting protein structures, for which Demis Hassabis, John Jumper, and David Baker won the 2024 Nobel Prize in Chemistry
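For listeners curious about the Monte Carlo Tree Search mentioned in the list above, here is a minimal sketch of its UCT selection rule in Python; the toy `Node` class and the numbers are invented for illustration, and real systems like AlphaGo pair this rule with learned policy and value networks:

```python
# Toy sketch of the UCT selection rule at the heart of Monte Carlo Tree Search.
# Illustrative only; production systems add learned networks and much more.
import math
import random

class Node:
    def __init__(self):
        self.visits = 0
        self.value_sum = 0.0  # sum of rollout results seen through this node

    def mean_value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def uct_score(child, parent_visits, c=1.4):
    """Balance exploiting a move's average result against exploring rare moves."""
    if child.visits == 0:
        return float("inf")  # always try unvisited moves first
    return child.mean_value() + c * math.sqrt(math.log(parent_visits) / child.visits)

# One selection step over a parent position with three candidate moves.
parent_visits = 30
children = {"move_a": Node(), "move_b": Node(), "move_c": Node()}
for node in children.values():
    node.visits = random.randint(1, 10)
    node.value_sum = random.uniform(0, node.visits)

best = max(children, key=lambda m: uct_score(children[m], parent_visits))
print("selected:", best)
```

The score's first term favors moves that have paid off so far; the second grows for rarely visited moves, which is what lets the search discover surprising lines like Move 37.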
Join Mike and Paul as they navigate through a week in tech that's too big for just one episode. They unpack Project Stargate and OpenAI's Operator, and explore SmarterX's ambitious push to democratize AI education. Plus, Trump's actions on AI in his first week in office, Perplexity Assistant, Zapier Agents, and more in our rapid-fire section. Access the show notes and show links here.

This episode is brought to you by our AI Mastery Membership; this 12-month membership gives you access to all the education, insights, and answers you need to master AI for your company and career. To learn more about the membership, go to www.smarterx.ai/ai-mastery. As a special thank you to our podcast audience, you can use the code POD100 to save $100 on a membership.

Timestamps:
00:05:57 — OpenAI Introduces Operator
00:18:39 — Project Stargate Announced
00:29:00 — The AI Literacy Project
00:40:50 — Trump Actions on AI in First Week
00:44:47 — Perplexity Assistant
00:48:22 — Zapier Agents
00:52:36 — Google Invests Another $1B in Anthropic
00:56:53 — Davos Conversations with OpenAI CPO
01:01:45 — Demis Hassabis on AI for Scientific Progress
01:13:35 — LeCun Predicts New AI Architecture Paradigm in 5 Years
01:17:17 — AI Apps Saw $1B+ in Consumer Spending in 2024

Visit our website. Receive our weekly newsletter. Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook. Looking for content and resources? Register for a free webinar, come to our next Marketing AI Conference, or enroll in AI Academy for Marketers.
Google DeepMind co-founder & CEO Demis Hassabis speaks with columnist Bina Venkataraman about AI's role in enabling scientific breakthroughs, why human-level intelligence is an “important benchmark” and the challenge of regulating AI globally. Conversation recorded in Davos, Switzerland on January 22, 2025.
Demis Hassabis is the CEO of Google DeepMind. He joins Big Technology Podcast to discuss the cutting edge of AI and where the research is heading. In this conversation, we cover the path to artificial general intelligence, how long it will take to get there, how to build world models, whether AIs can be creative, and how AIs are trying to deceive researchers. Stay tuned for the second half where we discuss Google's plan for smart glasses and Hassabis's vision for a virtual cell. Hit play for a fascinating discussion with an AI pioneer that will both break news and leave you deeply informed about the state of AI and its promising future. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
For years, artificial intelligence companies have heralded the coming of artificial general intelligence, or AGI. OpenAI, which makes the chatbot ChatGPT, has said that its founding goal was to build AGI that "benefits all of humanity" and "gives everyone incredible new capabilities." Google DeepMind cofounder Dr. Demis Hassabis has described AGI as a system that "should be able to do pretty much any cognitive task that humans can do." Last year, OpenAI CEO Sam Altman said AGI will arrive sooner than expected, but that it would matter much less than people think. And earlier this week, Altman said in a blog post that the company knows how to build AGI as we've "traditionally understood it."

But what is artificial general intelligence supposed to be, anyway?

Ira Flatow is joined by Dr. Melanie Mitchell, a professor at the Santa Fe Institute who studies cognition in artificial intelligence and machine systems. They talk about the history of AGI, how biologists study animal intelligence, and what could come next in the field.

Transcripts for each segment will be available after the show airs on sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
In 2016, the world held its breath as the AI model AlphaGo challenged the world champion of the game Go and won. Now Demis Hassabis, the brain behind the model, is being awarded a Nobel Prize, but for an entirely different discovery. Listen to all episodes in the Sveriges Radio Play app. The program was first broadcast on December 5, 2024. At just eight years old, Demis Hassabis bought his first computer with prize money from a chess tournament. As an adult, he developed the first computer system to outsmart a human world champion in a game more advanced than chess. Vetenskapsradion meets Demis Hassabis, one of the 2024 Nobel laureates in chemistry, for a personal conversation about the road from chess nerd to Google's elite and a Nobel Prize. Reporter: Annika Östman annika.ostman@sr.se Producer: Lars Broström lars.brostrom@sr.se
Bloomberg columnist Parmy Olson won the FT Business Book of the Year award for 2024 with Supremacy, her story of the race between Sam Altman's OpenAI and Demis Hassabis' Google DeepMind for control of the AI ecosystem. Given that Parmy Olson finished writing Supremacy at the end of 2023, I asked her what she would have added to her narrative with the hindsight of knowing what actually transpired in 2024. And what, exactly, does Olson expect to happen in 2025, a year which will, no doubt, rival 2024 in determining which multi-trillion-dollar Silicon Valley behemoth will control our collective AI fate?

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of "Supremacy: AI, ChatGPT and the Race That Will Change the World," which won the Financial Times best business book award for 2024.

Named one of the "100 most connected men" by GQ magazine, Andrew Keen is among the world's best-known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
We wrap up our review of the science Nobel Prizes, as always, with the Chemistry award, which this year was anything but a surprise. It went to three of the strongest candidates: David Baker, "for computational protein design," and Demis Hassabis and John Jumper, "for their methods to predict the three-dimensional structure of proteins." Jumper and Hassabis are the people behind AlphaFold, an artificial intelligence we have discussed more than once on La Brújula, and the first to predict the three-dimensional shape of a protein from its amino acid sequence. This has been a revolution for biochemistry, because we can "read" the amino acid sequences of proteins in DNA, and thanks to programs like this one we can now go from "the letter" to "the object." Baker, for his part, is one of the fathers of computational techniques for studying proteins, and is responsible for RoseTTAFold, AlphaFold's "competitor," which arrived a little later but is also part of this revolution. In today's program we quickly review the significance of this research, but if you want to learn more, you can listen again to episodes s08e16 and s10e17 of this podcast. You can also look up episode s05e10 of our sister podcast, Aparici en Órbita. In all of them we discuss these artificial intelligences in much more detail. This program originally aired on October 9, 2024. You can listen to the rest of La Brújula's audio in the Onda Cero app and on its website, ondacero.es
Join Professor Hannah Fry at the AI for Science Forum for a fascinating conversation with Google DeepMind CEO Demis Hassabis. They explore how AI is revolutionizing scientific discovery, delving into topics like the nuclear pore complex, plastic-eating enzymes, quantum computing, and the surprising power of Turing machines. The episode also features a special 'ask me anything' session with Nobel Laureates Sir Paul Nurse, Jennifer Doudna, and John Jumper, who answer audience questions about the future of AI in science.Watch the episode here, and catch up on all of the sessions from the AI for Science Forum here. Please subscribe on your preferred podcast platform. Want to share feedback? Why not leave a review? Have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
Parmy Olson is a Bloomberg Opinion columnist covering technology regulation, artificial intelligence, and social media. Her new book, Supremacy: AI, ChatGPT, and the Race that Will Change the World, tells a tale of rivalry and ambition as it chronicles the rush to exploit artificial intelligence. The book explores the trajectories of Sam Altman and Demis Hassabis and their roles in advancing artificial intelligence, the challenges posed by corporate power, and the extraordinary economic stakes of the current race to achieve technological supremacy.
On Oct. 9, the 2024 Nobel Prize for Chemistry was awarded to David Baker, Demis Hassabis, and John M. Jumper for their work in prediction and design of protein structures. C&EN's executive editor for life sciences, Laura Howes, joins a special episode of Stereo Chemistry to discuss why the trio won, the significance of their work around proteins, and how she accurately predicted the win in C&EN's annual “Who Will Win?” webinar. Stereo Chemistry offers a deeper look at subjects from recent stories pulled from the pages of Chemical & Engineering News. Check out Laura's story on how these computational chemists won this year's Nobel Prize in Chemistry at cenm.ag/chemnobel2024.
Rival CEOs. A race to build god-like machines. What could go wrong? In "Supremacy," Bloomberg's Parmy Olson shares the thrilling (and sometimes chilling) story of Sam Altman, Demis Hassabis, and the battle to create our future.
The Nobel Prize in chemistry went to three scientists for groundbreaking work using artificial intelligence to advance biomedical and protein research. AlphaFold uses databases of protein structures and sequences to predict and even design protein structures, speeding up a months- or years-long process to mere hours or minutes. Amna Nawaz discussed more with one of the winners, Demis Hassabis. PBS News is supported by - https://www.pbs.org/newshour/about/funders
Danny joins Katie in London for the Times Tech Summit, where the co-founder and boss of Google DeepMind, Sir Demis Hassabis, sets out his startling view that AI has the potential "to cure all diseases" and could "have general human cognitive abilities within ten years." But fundamentally, do we really understand what AI is? Professor Neil Lawrence, the inaugural DeepMind Professor of Machine Learning at Cambridge University, Faculty AI CEO Marc Warner, and Naila Murray, Director of AI Research at Meta, share their views. And Danny and Katie ponder whether AI mania could be more about money than the mind. Hosted on Acast. See acast.com/privacy for more information.
Parmy Olson of Bloomberg joins for our weekly discussion of the latest tech news. She's also the author of the new book, Supremacy: AI, ChatGPT, and the Race that Will Change the World. We cover 1) OpenAI's release of its new o1 model that can do reasoning, also known as Q* or Strawberry 2) o1's features and what makes it different 3) Businesses struggling to find uses for o1 4) Investor concerns over AI 5) A precursor to AI agents? 6) OpenAI now raising at a $150 billion valuation 7) Would people pay $2,000 per month for ChatGPT? 8) When will OpenAI have to return its investment? 9) Lessons about Sam Altman and Demis Hassabis from Parmy's book 10) AI news anchor avatars --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com