This episode discusses the responsible integration of AI into healthcare delivery. Our guest today is Dr. Yauheni Solad, a Managing Partner at Dalos Partners, where he leads healthcare AI strategy and validation. Dr. Solad is a research affiliate at Yale University and a board-certified physician in clinical informatics. He formerly led digital health innovation at UC Davis Health and Yale, advancing FHIR interoperability, telemedicine, and responsible AI standards.

Additional resources:
ReAligned Healthcare podcast: https://realignedhealthcare.com/
Journal of the American Medical Informatics Association: https://academic.oup.com/jamia
The Lancet Digital Health: https://www.thelancet.com/journals/landig/home
Deep Medicine: https://dl.acm.org/doi/10.5555/3350442
Request for Information on the Development of an AI Action Plan: https://www.federalregister.gov/documents/2025/02/06/2025-02305/request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan
WHO Harnessing AI for Health: https://www.who.int/teams/digital-health-and-innovation/harnessing-artificial-intelligence-for-health
National Academy of Medicine AI and Emerging Technology: https://nam.edu/our-work/key-issues/artificial-intelligence-and-emerging-technology/
CITI Program's course catalog: https://about.citiprogram.org/course-catalog
Artificial intelligence is revolutionizing healthcare and the new standards of clinical validation. It does not replace the specialist, but becomes a clinical ally that supports diagnosis and treatment. This creates a new relational triangle between physician, patient, and artificial intelligence. The ultimate goal is "Deep Medicine": freeing physicians from cognitive and administrative burden so they can devote more of themselves to the human relationship and to personalized patient care. Alberto Mattiello discusses this with Erik Lagolio, family physician and Contributing Physician for OpenAI.

See omnystudio.com/listener for privacy information.
From a recent SAND Community Gathering (Feb 2025) hosted by SAND co-founders Zaya and Maurizio Benazzo. Deep Medicine Circle (DMC), a collective of healers, farmers, artists, and storytellers, is challenging colonial structures by redefining health and wellbeing through practices that heal communities and restore connections to land. Led by Dr. Rupa Marya, Charlene Eigen-Vasquez, and Walter Riley, this visionary group is creating a holistic food and wellbeing model that nourishes both people and land, recognizing the profound interconnectedness of human health within social, environmental, and historical contexts.

Dr. Rupa Marya is a physician, activist, writer, mother, and composer. She is a Professor of Medicine at the University of California, San Francisco and a co-founder of the Do No Harm Coalition. Her work sits at the nexus of climate, health, and racial justice. She is the co-author, with Raj Patel, of the book Inflamed: Deep Medicine and the Anatomy of Injustice. She works to decolonize food and medicine in partnership with communities in Lakhota territory at the Mni Wiconi Health Circle and in Ohlone territory through the Deep Medicine Circle. She has toured twenty-nine countries with her band, Rupa and the April Fishes, whose music was described by the legend Gil Scott-Heron as "Liberation Music."

Charlene Eigen-Vasquez, J.D., is of Ohlone descent, from the village of Chitactac. She is dedicated to land back initiatives, land preservation, land restoration, cultural revitalization, and environmental justice, because she feels these initiatives have a direct impact on physical and mental health. As a mother and grandmother, she completed a law degree so that she might better serve Indigenous communities. Today her focus is on regenerative leadership strategies, leveraging her legal and mediation skills to advocate for Indigenous interests, negotiate agreements, and build relational bridges. She is an acknowledged peacemaker, trained by Tribal Supreme Court Justices. Charlene is the former CEO and Director of Self-Governance for the Healing and Reconciliation Institute. She also serves as Chairwoman of the Confederation of Ohlone People, Co-Chair of the Pajaro Valley Ohlone Indian Council, and Board Vice President for the Santa Clara Valley Indian Health Center. Charlene was recently brought into Planet Women's 100 Women Pathway, a cohort designed to increase the number of diverse women leaders at the helm of the environmental movement.

Walter Riley was born in 1944, the ninth of eleven children in a farming family in Durham County, North Carolina. His family farmed until he was about six years old. He grew up in the Jim Crow South, and in his early teens became active in the Civil Rights Movement, organizing voter registration, sit-ins, and jobs campaigns; in his late teens he became Field Secretary for CORE (Congress of Racial Equality), got married, and became a father. He moved to the Bay Area in the 1960s, where he became active in political and social justice movements. Walter is a long-time community activist and civil rights attorney.

Topics
00:00 Introduction and Greetings
00:47 Introducing Dr. Rupa Marya
01:46 Deep Medicine Circle and Board Members
02:36 Charlene's Introduction and Ancestral Tribute
07:33 Walter Riley's Introduction and Civil Rights Work
23:48 Connecting Food Systems and Colonial History
26:40 Healing Through Music and Cultural Awareness
27:43 Addressing Hunger and Malnutrition During COVID
28:06 Farming as a Path to Justice and Resilience
30:26 The Role of Historical Trauma in Land Restoration
30:51 Holistic Problem Solving and Cultural Stewardship
36:13 Youth and Community Engagement in Healing
41:28 The Importance of Ethnic Studies and Solidarity
43:08 Reflections on Historical Movements and Future Change
52:29 Concluding Thoughts on Healing and Unity

Resources
Farming is Medicine (film)
Do No Harm Coalition
Inflamed (Rupa Marya)
Rupa and the April Fishes
Boots Riley (filmmaker and musician)
"I'm a Virgo" (TV series by Boots Riley)
"Sorry to Bother You" (film by Boots Riley)
The Coup (Boots Riley's band)

Support the mission of SAND and the production of this podcast by becoming a SAND Member.
In this episode, I dive deep into the world of mushrooms, starting in the kitchen and exploring their powerful medicinal benefits for gut health and overall wellness. I'll share how I first fell in love with using mushrooms in my own healing journey, from culinary staples like lion's mane and shiitake to powerful allies like Amanita muscaria. Whether you're curious about incorporating more mushrooms into your diet or exploring their deeper medicinal properties, this episode offers insights to help you connect with the wisdom of these ancient fungi and nourish your body from the inside out.

Learning to identify mushrooms in the wild: Adam Hariton's YouTube channel
Alan Bergo's site, Forager Chef
Red Flower Apothecary: for mushroom tinctures
Fermented mushrooms from Premier Research
Foraging for Wild Plants & Mushrooms Facebook group
North Spore, to grow your own mushrooms
Reishi study: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10094145/

Other podcast episodes mentioned:
Chicken of the Woods episode
Turkey Tail episode
Brain Fog episode
Stay tuned for the Amanita episode

Learn more about how you can work with me HERE. Join my newsletter HERE.
Find me on Instagram: @Lydiajoy.me or @holisticmineralbalancing
Support the Show: Your Donations Are Greatly Appreciated! PAYPAL / VENMO
Full videos of all Ground Truths podcasts can be seen on YouTube here. The audios are also available on Apple and Spotify.

Thank you for reading Ground Truths. This post is public so feel free to share it.

Transcript with audio and external links

Eric Topol (00:05): Hello, it's Eric Topol with Ground Truths, and I am really thrilled to have with me Professor Faisal Mahmood, who is lighting it up in the field of pathology with AI. He is on the faculty at Harvard Medical School, also a pathologist at Mass General Brigham and with the Broad Institute, and he has been publishing at a pace that I just can't believe; we're going to review that in chronological order. So welcome, Faisal.

Faisal Mahmood (00:37): Thanks so much for having me, Eric. I do want to mention I'm not a pathologist. My background is in biomedical imaging and computer science. But yeah, I work very closely with pathologists, both at Mass General and at the Brigham.

Eric Topol (00:51): Okay. Well, you know so much about pathology, I just assumed that you were. But you are taking computational biology to new levels, and you're in the pathology department at Harvard, I take it, right?

Faisal Mahmood (01:08): Yeah, I'm in the pathology department at Mass General Brigham. So the two hospitals are now integrated, and I'm at the joint department.

Eric Topol (01:19): Good. Okay. Well, I'm glad to clarify that, because as far as I knew you were a hardcore pathologist. So you're changing the field in a way that is quite unique, I should say, because a number of years ago, deep learning was starting to get applied to pathology, just as it was in radiology and ophthalmology. And we saw some early studies with deep learning whereby you could find so much more on a slide that otherwise would not even be looked at or considered, or even that humans wouldn't be able to see. So maybe you could just take us back first to the deep learning phase, before these foundation models that you've been building, just to give us a flavor for what was the warmup in this field?

Faisal Mahmood (02:13): Yeah, so I think around 2016 and 2017, it was very clear to the computer vision community that deep learning was really the state of the art, where you could have abstract feature representations that were rich enough to solve some of these fundamental classification problems in conventional vision. And that's around the time when deep learning started to be applied to everything in medicine, including pathology. So we saw some early studies in 2016 and 2017, mostly in machine learning conferences, applying this to very basic patch-level pathology datasets. Then in 2018 and 2019, there were studies in major journals, including Nature Medicine, showing that you could take large amounts of pathology data and classify what's known to us, including predicting what's now commonly referred to as non-human-identifiable features, where you could take a label (which could come from molecular data, or other kinds of data like treatment response) and use that label to classify these images as responders versus non-responders, or as having a certain kind of mutation or not.

(03:34): And what that does is that if there is a morphologic signal within the image, the model would pick up on that morphologic signal even though humans may not have picked up on it. So it was a very exciting time of developing all of these supervised and weakly supervised models.
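(A note for technically minded readers: the weak-supervision setup Mahmood describes is usually implemented as attention-based multiple-instance learning, where a whole-slide image is treated as a bag of patches and only the slide-level label is known. Below is a minimal sketch in PyTorch; the dimensions and names are illustrative, not the CLAM codebase itself.)

```python
# Minimal attention-based multiple-instance learning (MIL) head.
# A slide is a "bag" of patch embeddings; only the slide-level label
# (e.g., mutation present / absent) is known. Learned attention weights
# let the model focus on the informative patches.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=256, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, 1),            # one attention score per patch
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):              # (n_patches, feat_dim)
        scores = self.attn(patch_feats)          # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)   # normalize over the bag
        slide_feat = (weights * patch_feats).sum(dim=0)  # weighted pooling
        return self.classifier(slide_feat), weights

model = AttentionMIL()
feats = torch.randn(5000, 1024)                  # e.g., 5,000 patch embeddings
logits, attn = model(feats)                      # one slide-level prediction
```

The attention weights double as a heat map: the patches the model leaned on most can be shown back to the pathologist.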
And then I started working in this area around 2019, and one of the first studies we did was to try to see if we could make this a little bit more data efficient. That's the CLAM method that we published in 2021. And then we took that method and applied it to the problem of cancers of unknown primary; that was also in 2021.

Eric Topol (04:17): So just to review, in the phase of deep learning, which was largely supervised with ground-truth images, there already was a sign that you could pick up things from the slide like the driver mutation, the prognosis of the patient, structural variations, the origin of the tumor, things that would never have been conceived by a pathologist. Now with that, I guess the question is, was all this confined to whole-slide imaging, or could you somehow take a conventional H&E slide and do these things without having to have a whole-slide image?

Faisal Mahmood (05:05): So at the time, most of the work was done on slides that were fully digital: taking a slide and then digitizing it to create a whole-slide image. But we did show in 2021 that you could put the slide under a microscope and just capture it with a camera, or with a cell phone coupled to a camera, and still make those predictions. So these models were quite robust to that kind of domain adaptation. And still, I think that even today the slide digitization rate in the US remains at around 4%, and the standard of care is just looking at a glass slide under a microscope. So it's very important to see how we can further democratize these models by just using the microscope, because most microscopes that pathologists use do have a camera attached to them. Can we somehow leverage that camera, so that a model trained on whole-slide images still works with the slide under a microscope?

Eric Topol (06:12): Well, what you just said is actually a profound point: only 4% of slides are being reviewed digitally, and that means we're still in an old pathology era, without the enlightenment of machine eyes, these digital eyes that can be trained, even without supervised learning, as we'll get to, to see things that we'll never see. And I know we'll be recalling that back in 2022, you and I wrote a Lancet piece about the work that you had done, which was very exciting, with cardiac biopsies to detect whether a heart transplant was undergoing rejection. This is a matter of life or death, because you have to give more immunosuppression drugs if it's a rejection; but if you do that and it's not a rejection, or if you miss it... and there's lots of disagreement among cardiac pathologists regarding whether there's rejection. So you had done some early work back then, and because much of what we're going to talk about relates more to cancer, though it's across the board in pathology, can you talk about the interobserver variability of pathologists when they look at regular slides?

Faisal Mahmood (07:36): Yeah. So when I first started working in this field, my thinking was that the slide digitization rate is very low, so how do we get people to embrace and adopt digital pathology and machine learning models that are trained on digital data if the data is not routinely digitized?
So one line of thinking was that if we focus on problems that are inherently so difficult that there isn't a good solution for them currently, and deep learning provides a tangible solution, people will be kind of forced to use these models. So along those lines, we started focusing on the cancers of unknown primary problem and the myocardial biopsy problem. We know that the Cohen's kappa, the interobserver agreement statistic that also accounts for agreement by chance, is around 0.22 for endomyocardial biopsies, so it's very, very low. That just means that there are a large number of patients who have a diagnosis that other pathologists might not agree with, and the downstream treatment regimen that's given is entirely based on that diagnosis. The same patient, diagnosed by a different cardiac pathologist, could receive a very different regimen and could have a very, very different outcome.

(09:14): So the goal for that study, published in Nature Medicine in 2022, was to see if we could use deep learning to standardize that and have it act as an assistive tool for cardiac pathologists, and to test whether they give more standardized responses when they're given a machine learning based response. So that's what we showed, and it was a pleasure to write that corresponding piece with you in the Lancet.

Eric Topol (09:43): Yeah, no, I mean that was two years ago, and so much has happened since then. So now I want to get into this. You've been on a tear, publishing major papers in leading journals every month, and I want to go back to March first; then we'll talk about April, May, and June. Back in March, you published two foundation models, UNI and CONCH, I believe, in back-to-back papers in Nature Medicine. So maybe first you could explain the foundation model, the principle, how that's different from a deep learning network, in terms of transformers, and also how these two mega models that you built contributed to advancing the field.

Faisal Mahmood (10:37): So a lot of the early work that we did relied on extracting features from a ResNet trained on real-world images. By having these features extracted, we didn't need to train these models end to end, which allowed us to train a lot of models and investigate a lot of different aspects. But those features were still based on real-world images. What foundation models let us do is leverage self-supervised learning and large amounts of essentially unlabeled data to extract rich feature representations from pathology images that can then be used for a variety of different downstream tasks. So we basically collected as much data as we could from the Brigham and MGH and some public sources, while trying to keep it as diverse as possible. The goal was to include infectious, inflammatory, and neoplastic tissue, everything across the pathology department, including normal tissue, while still being as diverse as possible.

(11:52): And the hypothesis there, which has just recently been confirmed, was that diversity would matter much more than the quantity of data. If you have lots and lots of screening biopsies and you use all of them to train the foundation model, there isn't enough diversity there for it to learn those fundamental feature representations that you would want it to learn.
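(Technical aside: self-supervised pretraining of the kind just described optimizes objectives that need no labels at all. Below is a minimal SimCLR-style contrastive loss as a simple stand-in for illustration; the UNI work used a more sophisticated self-distillation recipe, and the tensors here are random placeholders.)

```python
# Minimal SimCLR-style contrastive loss: two augmented views of the same
# unlabeled patch should map to nearby embeddings, views of different
# patches to distant ones. No labels required.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same patches."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim)
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))                    # ignore self-pairs
    # positives: the i-th view-1 row matches the i-th view-2 row, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)      # placeholder embeddings
loss = nt_xent(z1, z2)  # driven down with SGD over millions of unlabeled patches
```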
So we used all of this data to train the UNI model, and together with it an image-text model, which starts from UNI and then reinforces the feature representations using images and text. That sort of mimics how humans learn about pathology. A new resident, a new trainee learning pathology, has a lot of knowledge of the world but is perhaps looking at a pathology image for the first time; besides looking at the image, they're also being reinforced by all these language cues, whether from text or from audio. So the hope was that text would reinforce the representations and generate better features. The two studies were made available together; they were published in Nature Medicine back in March, and with that we made both models public. At the time we obviously had no idea that they would generate so much interest in this field. They've been downloaded 350,000 times on Hugging Face and used for all kinds of applications that I would never have thought of. So that's been very exciting to see.

Eric Topol (13:29): Can you give some examples of some of the things you wouldn't have thought of? Because it seems like you think of everything.

Faisal Mahmood (13:35): Yeah. People used it in a challenge for detecting tuberculosis, in a very, very different kind of dataset; it was from the Nightingale Foundation, and they have large datasets. So that was very interesting to see. People have used it to create newer datasets that can then be used for training additional foundation models. It's being used to extract rich feature representations from pathology images with corresponding spatial transcriptomic data, trying to predict spatial transcriptomics directly from histology. And there are a number of other applications.

Eric Topol (14:27): Well, yeah, that was March. Before we get to April, you slipped in the spatial omics thing, which is a big deal: the ability to look at human tissue over time and space. The spatiotemporal view will tell us so much, whether about the evolution of a cancer process or so many other things. Can you just comment? Because this is one of the major parts of this new era of applying AI to biology.

Faisal Mahmood (15:05): So I think there are a number of things we can do if we have spatially resolved omic data with histology images. The first thing that comes to my mind as a computer scientist is: can we train a joint foundation model where we use the spatially resolved transcriptomics to further reinforce the pathology signal as a ground truth, in a contrastive manner, similar to what we do with text, and can we use that to extract even richer feature representations? So we're doing that. In fact, we made publicly available a dataset of about a thousand pathology images with corresponding spatial transcriptomic information, curated both from public resources and from some internal data, so people could investigate that question further. We're interested in other aspects of this as well, because there is some indication, including a study from James Zou's group at Stanford, that we can predict the spatial transcriptomic signal directly from histology. And there are early indications that we might also be able to do that in three dimensions. So yeah, it's definitely very interesting. More and more of that data is becoming available, and how machine learning can augment it is very exciting.
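(Technical aside: predicting expression from histology can be framed as plain supervised regression from patch embeddings to spot-level measurements. The sketch below is a toy illustration with random placeholder data and an illustrative two-layer head, not any published model.)

```python
# Framing "predict spatial transcriptomics from histology" as regression:
# each spot has a patch embedding (e.g., from a pathology foundation model)
# and a measured expression vector for a panel of genes.
import torch
import torch.nn as nn

n_spots, feat_dim, n_genes = 2000, 1024, 250
patch_embs = torch.randn(n_spots, feat_dim)       # histology features per spot
expression = torch.randn(n_spots, n_genes)        # matched ST measurements

head = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, n_genes))
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

for step in range(100):                           # toy training loop
    pred = head(patch_embs)
    loss = nn.functional.mse_loss(pred, expression)
    opt.zero_grad(); loss.backward(); opt.step()
# At test time, the head imputes expression for new tissue from H&E alone.
```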
Eric Topol (16:37): Yeah, I mean, most spatial omics has been a product of single-cell sequencing, whether single nuclei and different omics, not just DNA but RNA and even methylation and whatnot. So the fact that you could try to impute that from the histology is pretty striking. Now, that was March, and then in April you published what was to me an extraordinary paper about demographic bias, showing that generative AI (we're in the generative AI year now, as we discussed with foundation models) could actually reduce biases and enhance fairness, which of course is so counterintuitive to everything that's been written to date. So maybe you can take us through how we can get a reduction in bias in pathology.

Faisal Mahmood (17:34): Yeah. This had been investigated in other fields, but what we tried to show in the study is that when a model trained on large, diverse, publicly available data is applied internally, and we stratify the results by demographic differences, race and so forth, we see very clear disparities and biases. We investigated a lot of the solutions that were out there: equalizing the distribution of the data, balancing the distribution using oversampling, and some of these simple techniques. None of them worked very well. And then we observed that using foundation models, or just having richer feature representations, eliminates some of those biases. In parallel, there was another study from Google where they used generative AI to synthesize additional images from underrepresented groups and then used those images to enhance the training signal, and they also showed that you could reduce those biases.

(18:49): So I think the common denominator is that richer feature representations contribute to reduced biases. The bias is not there because there is some inherent signal tied to these subgroups; it's essentially there because the feature representations are not strong enough. Another general observation is that there is often some kind of confounder that leads to the bias. One example would be that patients facing socioeconomic disparities might just be diagnosed late, so there might not be enough advanced cases in the training dataset. Quite often, when you go in and look at what your training distribution looks like, how it varies from your test distribution, and what that dataset shift is, you can figure out where the bias inherently comes from. But as a general principle, if you use the richest possible feature representation, or focus on making your feature representations richer by using better foundation models and so forth, you are able to reduce a lot of the bias.
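(Technical aside: the stratified evaluation described above can be sketched in a few lines; the subgroup labels and data below are random placeholders, purely for illustration.)

```python
# Auditing a model for demographic bias: stratify held-out predictions by
# subgroup and compare performance gaps across groups.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.random.randint(0, 2, 1000)           # ground-truth labels
y_score = np.random.rand(1000)                   # model probabilities
group = np.random.choice(["A", "B", "C"], 1000)  # e.g., self-reported race

aucs = {g: roc_auc_score(y_true[group == g], y_score[group == g])
        for g in np.unique(group)}
gap = max(aucs.values()) - min(aucs.values())
print(aucs, f"worst-case AUC gap: {gap:.3f}")
# The observation above: richer feature representations (better foundation
# models) shrink this gap more reliably than resampling tricks do.
```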
Eric Topol (19:58): Yeah, that's really another key point here, about richer features and the ability, counterintuitively, to actually reduce bias, and about what's important in interrogating data inputs, as you said, before you wind up with a bias problem. Now then comes May, since we've just done March and April. In May you published TriPath, which brings in the 3D world of pathology. So maybe you can give us a little skinny on that one.

Faisal Mahmood (20:36): Yeah. So just looking at the spectrum of where pathology is today, I think we all agree in the community that pathologists often look at extremely sparse samples of tissue. Human tissue is inherently three-dimensional, and by the time it gets to a pathologist, it's been sampled and cut so many times that it often lacks that signal. There are a number of studies showing that if you cut subsequent sections, you get to a different outcome; if you look at multiple slides for a prostate biopsy, you get to a different Gleason score. All of these studies have shown that 3D pathology is important. And with that, there's been a growing effort to build tools, microscopes, and imaging systems that can image tissue in 3D. There are about ten startups that have built these different technologies, open-top light-sheet microscopy, microCT, and so forth, that can image tissue really well in three dimensions, but none of them have had clinical adoption.

(21:39): And we think a key reason is that there isn't a good way for a pathologist to examine such a large volume of tissue. If they spent so much time examining this large volume of tissue, they would never be able to get through it all. So the goal here was to develop a computational tool that would look through the large volume and highlight key regions that a pathologist can then examine. The secondary goal was to ask whether using three-dimensional tissue actually improves patient stratification: whether 3D deep learning, using 3D convolutions to extract richer features from the three dimensions, can separate patients into distinct risk groups. So that's what we did in this particular case. The study relied on a lot of data from Jonathan Liu's group at the University of Washington, and also data that we collected at Harvard from tissue that came from the Brigham and Women's Hospital. So it was very exciting to show what the value of 3D pathology can be, and how it can actually translate into the clinic using some of these computational tools.
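(Technical aside: a tiny 3D-convolutional encoder, to make the volumetric idea concrete; the layer sizes and input shape are illustrative, not TriPath's actual architecture.)

```python
# A small 3D-convolutional feature extractor: the same idea as 2D patch
# encoders, but kernels span depth as well, so features capture volumetric
# context from 3D pathology volumes (e.g., open-top light-sheet stacks).
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),       # -> one 64-d feature vector
)

volume = torch.randn(1, 1, 64, 128, 128)         # (batch, ch, depth, H, W)
feat = encoder(volume)                           # (1, 64) volumetric feature
# Risk stratification then pools such features over many sub-volumes of a
# biopsy, analogous to the 2D MIL pooling sketched earlier.
```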
Eric Topol (22:58): Do you think that will ultimately become the standard someday, a 3D assessment of a biopsy sample?

Faisal Mahmood (23:06): Yeah, I'm really convinced that ultimately 3D will become the standard, because the technology to image this tissue is getting better every year, and it's getting close to the point where the imaging can be fast enough for clinical deployment. And then on the computational end, we're making a lot of progress as well.

Eric Topol (23:32): And again, it's something human eyes couldn't do, because you'd have to look at hundreds of slides to get even a loose sense of what's going on in a 3D piece of tissue, whereas here you're again exploiting the digital eyes. Now this culminates in your big June paper on PathChat in Nature, which was the culmination of a lot of work you've been doing. I don't know if you or your team ever sleep, but you published a real landmark paper. Can you take us through that?

Faisal Mahmood (24:12): Yeah. So I think that with the foundation models, we could extract very rich feature representations. To us, the obvious next step was to take those feature representations and link them with language, so a human could start to communicate with a generative AI model: we could ask questions about what's going on in a pathology image, and it would be capable of making a diagnosis, capable of writing a report, all of those things. And the reason we thought this was really possible is that pathology knowledge is a subset of the world's knowledge. Companies like OpenAI are trying to build singular, multimodal, large language models that would harbor the world's knowledge, and pathology is much, much more finite. If we have the right kind of training data, we should be able to build a multimodal large language model that, given any pathology image, can interpret what's going on in it: it can make a diagnosis, it can run through grading, prognosis, everything that's currently done, but it can also be an assistant for research, analyzing lots of images to see if there's anything common across them, cohorts of responders versus non-responders, and so forth.

(25:35): So we started by collecting a lot of instruction data. We started with our strong pathology image foundation models, and then we collected instruction data: images, questions, and corresponding answers. We really leveraged a lot of the data that we have here at the Brigham and MGH; we're obviously teaching hospitals, so we have questions and existing teaching and training materials, and we worked closely with pathologists at multiple institutions to collect that data. Then we finally trained a multimodal large language model where we could give it a whole-slide image and start asking questions about what was in the image, and it started generating all these interesting morphologic descriptions. But the challenge, of course, is how you validate this. So we created validation datasets and evaluated it on multiple-choice questions and on free-flowing questions, where a panel of seven pathologists looked through every response from our model as well as from more generic, publicly available models like OpenAI's GPT-4 and BiomedCLIP, and then compared how well this pathology-specific model does against those other models.

(26:58): And we found that it was very good at morphologic description.

Eric Topol (27:05): It's striking to think that you now have this large language model where you're basically interacting with the slide. This is rich, but in another way, just to ask you: we talk about multimodal, but what if you have the electronic health record, the person's genome, gut microbiome, immune status, social demographic factors, environmental exposures, all these layers of data, and the pathology? Are we going to get to that point eventually?

Faisal Mahmood (27:45): Yeah, absolutely. So that's what we're trying to do now, one step at a time. There are some data types that we can very easily integrate, and we're trying to integrate those and really have PathChat be a binder for all of that data. Pathology is a very good binder, because pathology is medicine's ground truth: a lot of the fundamental decisions around diagnosis, prognosis, and treatment trajectory are essentially made in pathology. So having everything else bind around the pathology makes a lot of sense. For some of the data types you just mentioned, like electronic medical records and radiology, we could very easily go that next step and build integrative models, both in terms of building the foundation model and then linking it with language and getting it to generate responses and so forth. For other data types, we might need to do some more specific training; there are data types where we don't have enough data to build foundation models and so forth. So we're trying to expand out to other data types and see how pathology can act as a binder.
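(Technical aside: visual instruction tuning of this kind pairs an image with question-and-answer turns. The record below shows the general shape of such training data; the field names and file name are hypothetical, not PathChat's actual schema.)

```python
# The shape of visual instruction-tuning data for a pathology assistant:
# an image reference plus conversational turns. Field names are illustrative.
import json

example = {
    "image": "slide_00423_region_07.png",        # hypothetical file name
    "conversations": [
        {"role": "user",
         "content": "Describe the morphology and give a likely diagnosis."},
        {"role": "assistant",
         "content": "Nests of atypical melanocytes with prominent nucleoli..."},
    ],
}
print(json.dumps(example, indent=2))
# Training pairs each image (encoded by the vision foundation model) with
# text, so the language model learns to ground its answers in the slide.
```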
Eric Topol (28:57): Well, if anybody's going to build it, I'm betting on you and your team there, Faisal. Now, what this gets us to is the point that 95% or 96% of pathologists in this country are basically in an old era, not eking out the information from slides that they could, and here you're in another orbit, another world, whereby you're coming up with information I never thought possible: the prognosis of a patient over an extended period of time, the sensitivity of the tumor to drugs, even the driver mutations, all from the slide, so you wouldn't necessarily even have to send the cancer out for mutation testing because you get it from the slide. There's so much there that isn't being used; it's just, to me, unfathomable. Can you help me understand, now that I know you're not actually a pathologist but are trying to bring pathologists along, what is the reason for this resistance? Because there's just so much information here.

Faisal Mahmood (30:16): So there are a number of different reasons, if you go into the details of why digital pathology is not actively happening. Digitizing an entire department is expensive, and retaining large amounts of slide data is expensive. The value proposition in terms of patient care is definitely there, but the financial incentives, the reimbursement around AI, are not quite there yet. It's slowly getting there, but it's not quite there yet. In the meantime, what we can really focus on, and what my group is thinking a lot about, is how we can democratize these models by using what pathologists already have: they all have a microscope, and most of them have a microscope with a camera attached to it. Can we train these models on whole-slide images, as we have them, and adapt them to just a camera coupled to a microscope? That's what we have done for PathChat2.

(31:23): One of the demos we showed after the article came out was that you could use PathChat on your computer with a whole-slide image, but you can also use it with a microscope coupled to a camera, with a glass slide underneath. And in an extreme low-resource setting, you can also use it with just a cell phone coupled to a microscope. We're also building a lighter-weight version that wouldn't require internet, so it would be deployed completely locally. It could then be used in low-resource settings, where sending a consult can take a really long time, and where it's often not easy for hospitals to track down a patient again once they've left, because they might have traveled a long distance to get to the clinic. So the value of having PathChat deployed in a low-resource setting, where it can run locally without internet, is huge, because it can accelerate diagnosis so much, particularly for the simpler cases, where it is very, very good at making a diagnosis.
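(Technical aside: the camera-on-a-microscope deployment path can be sketched in a few lines; the exported model file below is a hypothetical placeholder, and the preprocessing is generic rather than any product's actual pipeline.)

```python
# Minimal sketch of local, offline inference on a microscope camera frame:
# grab one field of view with OpenCV and run a locally stored classifier,
# no slide scanner or internet connection required.
import cv2
import torch

model = torch.jit.load("path_model_local.pt")    # hypothetical exported model
model.eval()

cap = cv2.VideoCapture(0)                        # the microscope's USB camera
ok, frame = cap.read()                           # one field of view (BGR)
cap.release()
assert ok, "no frame captured"

rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
x = torch.nn.functional.interpolate(x, size=(224, 224))
with torch.no_grad():
    probs = model(x).softmax(dim=-1)             # per-class probabilities
```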
Eric Topol (32:33): Oh, sure. And it can help bridge inequities; all sorts of things could be an outgrowth of that. But here's what I'm still having a problem with, from the work that you've done and from others working assiduously in this field: if I had a biopsy, I would want all the information. I wouldn't want just the old reading, and I assume you feel the same way. We're not helping patients by not providing the information that's there, with just a little help from AI. If it's going to take years for this transformation to occur, a lot of patients are going to miss out because their pathologists are not coming along.

Faisal Mahmood (33:28): I think one way to solve this, of course, would be to have it congressionally mandated, like we had for electronic medical records. And there are other arguments to be made; a number of hospitals have been sued for losing slides, and if you digitize all your slides, you're not going to lose them. But I think it will take time. A lot of hospitals are making these large investments, including here at the Brigham and MGH, but it will take time for all the scanners, all the storage solutions, everything, to be in place, and then it will also take time for pathologists to adapt. A lot of pathologists are very excited about the new technology, but there are also a lot of pathologists who feel that their entire career has been diagnosing cases using a microscope and glass slides, so it's too big of a transition for them. So there will obviously be some transition period where both coexist, and that's happening at a lot of different institutions.

Eric Topol (34:44): Yeah, I get what you're saying, Faisal, but when I wrote Deep Medicine and was studying the pathology uptake of deep learning then, it was about 2%, and now, five years later, it's 4% or 5% or whatever. This is a glacial evolution; it's not keeping up with the progress that's been made. Now, the other thing I just want to ask you before finishing up: there are some AI pathology companies, like PathAI, and I think you have a startup, Modella AI. What can the companies do when there's just so much reluctance to go into the digital era of pathology?

Faisal Mahmood (35:31): So I think this has been a big barrier for most pathology startups, because around seven to eight years ago, when most of these companies started, the hope was that digital pathology would happen much faster than it actually has. One thing that we're doing at Modella is recognizing that the adoption of digital pathology is slow, so everything we're building, we're trying to enable to work with the solutions that currently exist. A pathologist can capture images from a pathology slide right in their office, with a camera on a microscope, and PathChat, for example, works with that. The next series of tools that we're developing around generative AI will also be built so that it's possible to use just a camera coupled to a microscope. I do feel that all of these pathology AI companies would be doing much, much better if everything were digital, because adopting the tools they developed would then be very straightforward. Right now, the barrier is that even if you want to deploy an AI-driven solution, if your hospital is not entirely digital, it's not possible to do that. So it requires this huge upfront investment.

Eric Topol (37:06): Yeah, no, it's extraordinary to me. This is such an exciting time, and it's just not getting actualized like it could.
Now, if somebody who's listening to our conversation has a relative, or a patient, or whoever, who has a biopsy and would like to get an enlightened interpretation, with all the things that could be found that are not currently being detected, is there a way to send that to a center that is facile with this? Or is that a no-go right now?

Faisal Mahmood (37:51): So I think at the moment it's not possible, and the reason is that a lot of the generic AI tools are not ready for this. The models are very, very specific for specific purposes; the generalist models are just getting started. But I think that in the years to come, this will be a competitive edge for institutions that do adopt AI; they will definitely have an edge over those that do not. We do, from time to time, receive requests from patients who want us to run their slides through the cancers-of-unknown-primary tool that we built. Whether we can depends on whether we're allowed to do so, because it has to go through a regular diagnostic workup first, and on how much information we can get from the patient. But it's on a case-by-case basis.

Eric Topol (38:52): Well, I hope that's going to change soon, because you and your team have been working so hard to eke out all that we can learn from a path slide, and it's extraordinary. It made me think about what we knew five years ago, which was already exciting, and you've taken that to the fifth power now, or whatever. So anyway, just to congratulate you for your efforts: I just hope that it will get translated, Faisal. I'm very frustrated to learn how little this is being adopted here in this country, a rich country, which is ignoring the benefits it could provide for patients.

Faisal Mahmood (39:40): Yeah. That's our goal over the next five years. The hope really is to take everything that we have developed so far, get it aligned with where the technology currently is, and then eventually deploy it, both at our institution and across the country. So we're working hard to do that.

Eric Topol (40:03): Well, maybe patients and consumers can get active about this and demand that their medical centers go digital instead of living in an analog glass-slide world, right? Yeah, maybe that's the route. Anyway, thank you so much for reviewing all this. The pace of your publications is pretty much unparalleled, not just in pathology AI but in many parts of life science. So kudos to you, Richard Chen, and your group, and so many others who have been working so hard to enlighten us. So thanks. I'll be checking in with you again on whatever the next model is that you build, because I know it will be another really important contribution.

Faisal Mahmood (40:49): Thank you so much, Eric. Thanks.

**************************

Thanks for listening, reading or watching!

The Ground Truths newsletters and podcasts are all free, open-access, without ads. Please share this post/podcast with your friends and network if you found it informative.

Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that; they greatly helped fund our summer internship programs for 2023 and 2024.

Thanks to my producer Jessica Nguyen and Sinjun Balabanoff for audio and video support at Scripps Research.

Side note: My X/twitter account @erictopol was hacked yesterday, 27 July, with no help from the platform to regain access despite many attempts. Please don't get scammed!

Get full access to Ground Truths at erictopol.substack.com/subscribe
For years now, even as headlines about the development of AI have become more frequent and more dire, I never really worried about it much, because I couldn't think of anything in scripture that sounded a great deal like a superintelligent machine. I'd read the end of the book (Revelation), I knew how it ended, and it didn't end in a robot apocalypse... so all the fears surrounding that possibility must therefore be much ado about nothing. (I did write a fictional trilogy for young adults back in 2017 about how I imagined a near-miss robot apocalypse might look, though, because I found the topic fascinating enough to research at the time. It's called the "Uncanny Valley" trilogy, where the "uncanny valley" refers to the "creepy" factor as a synthetic humanoid creature approaches human likeness.)

When I finished the trilogy, I more or less forgot about advancing AI, until some of the later iterations of ChatGPT and similar Large Language Models (LLMs). Full disclosure: I've never used any LLMs myself, mostly because (last I checked) you had to create an account with your email address before you started asking questions. (In the third book of my series, the superintelligent bot Jaguar kept track of everyone via facial recognition cameras, recording literally everything they did in enormous data processing centers across the globe that synced with one another many times per day. Though at that point I doubt it would make any difference, I'd rather not voluntarily give Jaguar's real-life analog any data on me if I can help it!) Particularly the recent release of GPT-4o ("o" for "omni") gave me pause, though, and I had to stop and ask myself why the idea that it could be approaching actual Artificial General Intelligence (AGI) made the hairs on the back of my neck stand up.

I recently read a book called "Deep Medicine" by Eric Topol on the integration of AI into the medical field, which helped allay some potential concerns--that book contended that AGI would likely never be realized, largely because AGI inherently requires experience in the real world, and a robot can never have lived experiences in the way that humans can. It painted a mostly rosy picture of narrow (specialized) AI engaging in pattern recognition (reading radiology images, or recognizing pathology samples or dermatological lesions, for instance), and thus vastly improving the diagnostic capabilities of physicians. Other uses might include parsing a given individual's years of medical records and offering a synopsis and recommendations, or consolidating PubMed studies and offering relevant suggestions. Topol did not seem to think that the AI would ever replace the doctor, though. Rather, the author contended, at the rate that data is currently exploding, doctors are drowning in the attempt to document and keep up with it all, and empathic patient care suffers as a result. AI, he argues, will actually give the doctor time to spend with the patient again, to make judgment calls with a summary of all the data at his fingertips, and to put it together into an integrated whole with his uniquely human common sense.

Synthetic Empathy and Emotions?

But "Deep Medicine" was written in 2019, which (in the world of AI) is already potentially obsolete. I'm told that GPT-4o is better than most humans at anything involving either logic or creativity, and it does a terrific approximation of empathy, too.
Even "Deep Medicine" cited statistics to suggest that most humans would prefer a machine for a therapist than a person (!!), largely due to the fear that the human might judge them for some of their most secret or shameful thoughts or feelings. And if the machine makes you feel like it understands you, does it really matter whether its empathy is "real" or not? What does "real" empathy mean, anyway? In "Uncanny Valley," my main character, as a teenager, inherited a "companion bot" who was programmed with mirror neurons (the seat of empathy in the human brain.) In the wake of her father's death, she came to regard her companion bot as her best friend. It was only as she got older that she started to ask questions like whether its 'love' for her was genuine, if it was programmed. This is essentially the theological argument for free will, too. Could God have made a world without sin? Sure, but in order to do it, we'd all have to be automatons--programmed to do His will, programmed to love Him and to love one another. Would there be any value in the love of a creature who could not do anything else? (The Calvinists might say that's the way the world actually is, for those who are predestined, but everyone else would vehemently disagree.) It certainly seems that God thought it was worth all the misery He endured since creation, for the chance that some of us might freely choose Him. I daresay that same logic is self-evident to all of us. Freedom is an inherent good--possibly the highest good. So, back to AI: real empathy requires not just real emotion, but memories of one's own real emotions, so that we can truly imagine that we are in another person's shoes. How can a robot, without its own lived memories, experience real empathy? Can it even experience real emotion? It might have goals or motives that can be programmed, but emotion at minimum requires biochemistry and a nervous system, at least in the way we understand it. We know from psychology research on brain lesions as well as from psychiatric and recreational medications and experiences with those suffering from neurodegenerative conditions that mood, affect, and personality can drastically change from physiologic tampering, as well. Does it follow that emotions are 'mere' biochemistry, though? This is at least part of the age-old question of materialism versus vitalism, or (to put it another way), reductionism versus holism. Modern medicine is inherently materialistic, believing that the entirety of a living entity can be explained by its physical makeup, and reductionistic, believing that one can reduce the 'whole' of the living system to a sum of its parts. Vitalism, on the other hand, argues that there is something else, something outside the physical body of the creature, that animates it and gives it life. At the moment just before death and just after, all the same biochemical machinery exists... but anyone who has seen the death of a loved one can attest that the body doesn't look the same. It becomes almost like clay. Some key essence is missing. I recently read "The Rainbow and the Worm" by Mae-Wan Ho, which described fascinating experiments on living worms viewed under electron microscopes. The structured water in the living tissue of the worm exhibited coherence, refracting visible light in a beautiful rainbow pattern. At the moment of death, though, the coherence vanished, and the rainbow was gone--even though all of the same physical components remained. 
The change is immaterial; the shift between death and life is inherently energetic. There was an animus, a vital force--qi, as Chinese medicine would call it, or prana, as Ayurvedic medicine would describe it, or (as we're now discovering in alternative Western medicine) voltage carried through structured water via our collagen. That hydrated collagen appears to function in our bodies very much like a semiconductor, animating our tissues with electrons, the literal energy of life. At the moment of death, it's there, and then it's not--like someone pulled the plug. What's left is only the shell of the machine, the hardware. But where is that plug, such that it can be connected and then, abruptly, not? The materialist, who believes that everything should be explainable on the physical level, can have no answer.

The Bible tells us, though, that we are body, soul, and spirit (1 Thess 5:23)--which inherently makes a distinction between body and soul (implying that the soul is not a mere product of the chemistry of the body). The spirit is what was dead without Jesus, and what gets born again when we are saved, and it's perfect, identical with Jesus' spirit (2 Cor 5:17, Eph 4:24). It's God's "seal" on us, vacuum-packed as it were, so that no sin can contaminate it. It's the down payment, a promise that complete and total restoration is coming (Eph 1:13-14). But there's no physical outlet connecting the spirit and the body; the connection between them is the soul. With our souls, we can see what's ours in the Spirit through scripture, and scripture can train our souls to conform more and more to the spirit (Romans 12:2, Phil 2:12-13). No one would ever argue that a machine could have a spirit, obviously, but the materialists wouldn't believe there is such a thing anyway. What about the soul, though? What is a soul, anyway? Can it be explained entirely through materialistic means?

Before God made Adam, He explicitly stated that He intended to make man after His own image (Gen 1:26-27). God is spirit (John 4:24), though, so the resemblance can't be physical, per se, at least not exclusively or even primarily. After forming his body, God breathed into him the breath of life (Genesis 2:7)--the same thing Jesus did to the disciples after His resurrection when He said "Receive the Holy Spirit" (John 20:22). So it must be in our spirits that we resemble God. Adam and Eve died spiritually when they sinned (Genesis 3:3), but something continued to animate their bodies for another 930 years. This is the non-corporeal part of us that gets "unplugged" at physical death. Since it can be neither body nor spirit, it must be the soul.

Andrew Wommack defines the soul as the mind, will, and emotions. I can't think of a single scripture that defines the soul this way; I think it's just an extrapolation, based on what's otherwise unaccounted for. But in our mind, will, and emotions, even before redemption, mankind continued to reflect God's image, in that he continued to possess the ability to reason, to choose, to create, to love, and to discern right from wrong.

The materialists would argue that emotion, like everything else, must have its roots purely in the physical realm. Yet they do acknowledge that because there are so many possible emotional states, and relatively few physiologic expressions of them, many emotions necessarily share a physiologic expression. It's up to our minds to translate the meaning of a physiologic state, based on context.
In "How Emotions are Made," author Lisa Barrett gave a memorable example of this: once, a colleague to whom she didn't think she was particularly attracted asked her for a date. She went, felt various strange things in her gut that felt a little like “butterflies”, and assumed during the date that perhaps she was attracted to him after all… only to later learn that she was actually in the early stages of gastroenteritis! This example illustrates how the biochemistry and physiologic expressions of emotion are merely the blunt downstream instruments that translate an emotion from the non-corporeal soul into physical perception--and in some cases, as in that one, the emotional perception might originate from the body entirely. This also might be why some people (children especially) can mistake hunger or fatigue for irritability, or why erratic blood sugar in uncontrolled diabetics can manifest as rage, etc. In those cases, the emotional response really does correspond to the materialist's worldview, originating far downstream in the "circuit," as it were. But people who experience these things as adults will say things like, "That's not me." I think they're right--when we think of our true selves, none of us think of our bodies--those are just our "tents" (2 Cor 5:1), to be put off eventually when we die. When we refer to our true selves, we mean our souls: our mind, will, and emotions. It's certainly possible for many of us to feel "hijacked" by our emotions, as if they're in control and not "us," though (Romans 7:15-20). Most of us recognize a certain distinction there, too, between the real "us" and our emotions. The examples of physiologic states influencing emotions are what scripture would call "carnal" responses. If we're "carnal," ruled by our flesh, then physiologic states will have a great deal of influence over our emotions-- a kind of small scale anarchy. The "government" is supposed to be our born-again spirits, governing our souls, which in turn controls our bodies, rather than allowing our flesh to control our souls (Romans 8:1-17) - though this is of course possible if we don't enforce order. With respect to AI, my point is, where does "true" emotion originate? There is a version of it produced downstream, in our flesh, yes. It can either originate from the flesh itself, or it can originate upstream, from the non-corporeal soul, what we think of us "the real us." That's inherently a philosophical and not a scientific argument, though, as science by definition is "the observation, identification, description, experimental investigation, and theoretical explanation of phenomena." Any question pertaining to something outside the physical world cannot fall under the purview of science. But even for those who do not accept scripture as authority, our own inner experience testifies to the truth of the argument. We all know that we have free will; we all know we can reason, and feel emotions. We can also tell the difference between an emotion that is "us" and an emotion that feels like it originates from outside of "our real selves". As C.S. Lewis said in "Mere Christianity," if there is a world outside of the one we can experimentally observe, the only place in which we could possibly expect to have any evidence of it is in our own internal experience. And there, we find it's true. Without a soul, then, a robot (such as an LLM) would of course exist entirely on the physical plane, unlike us. 
It therefore might have physical experiences that it might translate as emotion, the same way that we sometimes interpret physical experiences as emotion--but it cannot have true emotions. Empathy, therefore, can likewise be nothing more than programmed pattern recognition: this facial expression, or these words or phrases, tend to mean that the person is experiencing these feelings, and here is the appropriate way to respond. Many interactions with many different humans over a long period of time will refine the LLM's learning, such that its pattern recognition and responses get closer and closer to the mark... but that's not empathy, not really. It's fake. Does that matter, though, if the person "feels" heard and understood? Well, does truth matter? If a man who is locked up in an insane asylum believes himself to be a great king, and believes that all the doctors and nurses around him are really his servants and subjects, would you trade places with him? I suspect that all of us would say no. Like the protagonists in "The Matrix," we all agree that it's better to be awakened to a desperate truth than to be deceived by a happy lie.

The Emotional Uncanny Valley

Even aside from that issue, is it likely that mere pattern recognition could simulate empathy well enough to satisfy us--or is it likely that this, too, would fall into the "uncanny valley"? Most of us have had the experience of meeting a person who seems pleasant enough on the surface, and yet something about them just seems 'off.' (The Bible calls this discernment, 1 Corinthians 12:10.) When I was in a psychology course in college, the professor flashed images of several clean-cut, smiling men in a PowerPoint, out of context, and asked us to raise our hands if we would trust each of them. I don't remember who most of them were--probably red herrings to disguise the point--but one of them was Ted Bundy, the serial killer of the 1970s. I didn't recognize him, but I did feel a prickling sense of unease as I gazed at his smiling face. Something just wasn't right. Granted, a violent psychopath is not quite the same thing, but isn't the idea of creating a robot possessed of emotional intelligence (in the sense that it can read others well) but without real empathy essentially like creating an artificial sociopath? Isn't the lack of true empathy the very definition? (Knowing this, would we really want jobs like social worker, nurse, or even elementary school teacher to be assumed by robots--no matter how good the empathic pattern recognition became?)

An analogy is the 1958 Harlow experiment on infant monkeys (https://www.simplypsychology.org/harlow-monkey.html), in which the monkeys were given a choice between two simulated mothers: one made of wire that provided milk, and one made of cloth that provided none. The study showed that the monkeys would go to the wire mother only when hungry; the rest of the day they would spend in the company of the cloth mother. My point is that emotional support matters to all living creatures, far more than objective physical needs (provided those needs are also met). If we just want a logical problem solved, we may well go to the robot. But most of our problems are not just questions of logic; they involve emotions, too. As Leonard Mlodinow, author of "Emotional," writes, emotions are not mere extraneous data that color an experience but can otherwise be ignored at will. In many cases, the emotions actually serve to motivate a course of action.
Every major decision I've ever made in my life involved not just logic, but also emotion, or in some cases intuition (which I assume is a conscious prompting when the unconscious reasoning is present but unknown to me), or else a leading of the Holy Spirit (which "feels" like intuition, only without the presumed unconscious underpinning: He knows the reason, but I don't, even subconsciously). Obviously AI, with synthetic emotion or not, would have no way to advise us on matters of intuition, or especially promptings from the Holy Spirit. Those won't usually *seem* logical, based on the available information, but He has a perspective that we don't have. Neither will a machine, even if it could simultaneously process all known data available on earth. There was a time when Newtonian physicists believed that, with access to that level of data in the present, the entire future would become deterministic, making true omniscience in this world theoretically possible. Then we discovered quantum physics, and all of that went out the window. Heisenberg's Uncertainty Principle eliminates the possibility that any creature or machine, no matter how powerful, can in our own dimension ever truly achieve omniscience. In other words, even a perfectly logical machine with access to all available knowledge will fail to guide us into appropriate decisions much of the time--precisely because it must lack true emotion, intuition, and especially the guidance of the Holy Spirit.

Knowledge vs Wisdom

None of us will be able to compete with the level of knowledge an AI can process in a split second. But does that mean the application of that knowledge will always be appropriate? I think there are several levels to this question. The first has to do with the data sets on which AI has been trained. It can only learn from the patterns it's seen, and it will (like a teenager who draws sweeping conclusions based on very limited life experience) assume that it has the whole picture. In this way, AI may be part of the great deception mentioned by both Jesus (Matt 24:24) and the Apostle Paul (2 Thess 2:11) in the last days. How many of us already abdicate our own reasoning to those in positions of authority, blindly following them because we assume they must know more than we do on their subject? How much more will many of us fail to question the edicts of a purportedly "omniscient" machine, which must know more than we do on every subject? That machine may have only superficial knowledge of a subject, based on the data set it's been given, and may thus draw an inappropriate conclusion. (Also, my understanding is that current LLMs continue learning only until they are released into the world; from that point, they can no longer learn anything new, because of the risk that in storing new information, they could accidentally overwrite an older memory.) A human may draw an inappropriate conclusion too, of course, and if that person has enough credentials behind his name, it may be just as deceptive to many. But at least one individual will not command such blind obedience on absolutely every subject. AGI might. So who controls the data from which that machine learns? That's a tremendous responsibility... and, potentially, a tremendous amount of power, to deceive, if possible, "even the elect." For the sake of argument, let's say that the AGI is exposed only to real and complete data, though--not cherry-picked, and not "misinformation."
In this scenario, some believe that (if appropriate safeguards are in place, to keep the AGI from deciding to save the planet by killing all the humans, for example, akin to science fiction author Isaac Asimov's Three Laws of Robotics), utopia will result. The only way this is possible, though, is if the machine not only learns on a full, accurate, and complete set of collective human knowledge, but also has a depth of understanding of how to apply that knowledge. This is the difference between knowledge and wisdom. The dictionary definition of wisdom is "the ability to discern or judge what is true, right, or lasting," versus knowledge, defined as "information gained through experience, reasoning, or acquaintance." Wisdom has to do with one's worldview, in other words: the lens through which one sees and interprets a set of facts. It is inextricably tied to morality. (So, who is programming these LLMs again? Even without AI, since postmodernism and beyond, there's been a crisis among many intellectuals as to whether or not there's such a thing as "truth," even going so far as to question objective physical reality. That's certainly a major potential hazard right there.) Both words of wisdom and discernment are listed as explicit supernatural gifts of the Spirit (1 Cor 12:8, 10). God says that He is the source of wisdom, as well as of knowledge and understanding (Prov 2:6), and that if we lack wisdom, we should ask Him for it (James 1:5). Wisdom is personified in the book of Proverbs as a person, with God at creation (Prov 8:29-30)--which means, unless it's simply a poetic construct, that wisdom and the Holy Spirit must be synonymous (Gen 1:2). Jesus did say that it was the Holy Spirit who would guide us into all truth, as He is the Spirit of truth (John 16:13). The Apostle Paul contrasts the wisdom of this world as foolishness compared to the wisdom of God (1 Cor 1:18-30)--because if God is truth (John 14:6), then no one can get to true wisdom without Him. That's not to say that no human (or robot) can make a true statement without an understanding of God, of course--but when he does so, he's borrowing from a worldview not his own. The statement may be true, but almost by accident--on some level, if you go down deep enough to bedrock beliefs, there is an inherent inconsistency between the statement of truth and the person's general worldview, if that worldview does not recognize a Creator. (Jason Lisle explains this well and in great detail in "The Ultimate Proof of Creation.") Can you see the danger of trusting a machine to discern what is right, then, simply because in terms of sheer facts and computing power, it's vastly "smarter" than we are? Anyone who does so is almost guaranteed to be deceived, unless he also filters the machine's response through his own discernment afterwards. (We should all be doing this with statements from any human authority on any subject, too, by the way. Never subjugate your own reasoning to anyone else's, even if they do know the Lord, but especially if they don't. You have the mind of Christ! 1 Cor 2:16).

Would Eliminating Emotion from the Workplace Actually Be a Good Thing?

I can see how replacing a human being with a machine that optimizes logic but strips away everything else might seem a good trade, on the surface. After all, we humans (especially these days) aren't very logical, on the whole. Our emotions and desires are usually corrupted by sin.
We're motivated by selfishness, greed, pride, and petty jealousies, when we're not actively being renewed by the Holy Spirit (and most of us aren't; even most believers are more carnal than not, most of the time. I don't know if that's always been the case, but it seems to be now). We also are subject to the normal human frailties: we get sick, or tired, or cranky, or hungry, or overwhelmed. We need vacations. We might be distracted by our own problems, or apathetic about the task we've been paid to accomplish. Machines would have none of these drawbacks. But do we really understand the trade-off we're making? We humans have a tendency to take a sliver of information, assume it's the whole picture, and run with it--eliminating everything we think is extraneous, simply because we don't understand it. In our hubris, we don't stop to consider that all the elements we've discarded might actually be critical to function.

This seems to me sort of like processed food. We've taken the real thing the way God made it, and tweaked it in a laboratory to make it sweeter, crunchier, more savory, and with better "mouth feel." It's even still got the same number of macronutrients and calories that it had before. But we didn't understand that processing not only stripped away necessary micronutrients, but also added synthetic fats that contaminated our cell membranes, and chemicals that can overwhelm our livers, making us overweight and simultaneously nutrient depleted. We just didn't know what we didn't know. We've done the same thing with genetically engineered foods. God's instructions in scripture were to let the land lie fallow, and to rotate crops, because the soil itself is the source of micronutrition for the plant. If you plant the same crop in the same soil repeatedly and without a break, you will deplete the soil, and the plants will no longer be as nutritious, or as healthy... and an unhealthy plant is easy prey for pests. But the agriculture industry ignored this; it didn't seem efficient or profitable enough, presumably. Synthetic fertilizer is the equivalent of macronutrients only for plants, so they grow bigger than ever before (much like humans do if they subsist on nothing but fast food), but they're still nutrient depleted and unhealthy, and thus, easy prey for pests. So we engineered the plants to tolerate glyphosate, the active ingredient in RoundUp, so the fields could be doused with it. Only glyphosate itself turns out to be incredibly toxic to humans, lo and behold... There are many, many more examples I can think of just in the realm of science, health, and nutrition, to say nothing of our approach to economics, or climate, or many other complex systems. We tend to isolate the “active ingredient,” and eliminate everything we consider to be extraneous… only to learn of the side effects decades later. So what will the consequences be to society if most workers in most professions eventually lack true emotion, empathy, wisdom, and intuition?

Finding Purpose in Work

There's also a growing concern that AI will take over nearly all jobs, putting almost everyone out of work. At this point, it seems that information-based positions are most at risk, and especially anything involving repetitive, computer-based tasks. I also understand that AI is better than most humans at writing essays and poetry and at producing art. Current robotics is far behind AI technology, though...
Elon Musk has been promising self-driving cars in the imminent future for some time, yet they don't seem any closer to ubiquitous adoption now than they were five years ago. "A Brief History of Intelligence" by Max Bennett, published in fall 2023, said that as of the time of writing, AI can diagnose tumors from radiographic imaging better than most radiologists, yet robots are still incapable of simple physical tasks such as loading a dishwasher without breaking things. (I suspect this is because the former involves intellectual pattern recognition, which seems to be their forte, while the latter involves movements that are subconscious for most of us, requiring integration of spatial recognition, balance, distal fine motor skills, etc. We're still a very long way from understanding the intricacies of the human brain... but then again, the pace at which knowledge is doubling is anywhere from every three to thirteen months, depending on the source. Either way, that's fast.) On the assumption that we'll soon be able to automate nearly everything a human can do physically or intellectually, then, the world's elite have proposed a Universal Basic Income--essentially welfare for all, since we would in theory be incapable of supporting ourselves. Leaving aside the many catastrophically failed historical examples of socialism and communism, it's pretty clear that God made us for good work (Eph 2:10, 2 Cor 9:8), and He expects us to work (2 Thess 3:10). Idleness while machines run the world is certainly not a biblical solution. That said, technology in and of itself is morally neutral. It's a tool, like money, time, or influence, and can be used for good or for evil. Both the Industrial Revolution and the Information Revolution led to plenty of unforeseen consequences and social upheaval. Many jobs became obsolete, while new jobs were created that had never existed before. Work creates wealth, and due to increased efficiency, the world as a whole became wealthier than ever before, particularly in nations where these revolutions took hold. In the US, after the Industrial Revolution, the previously stagnant average standard of living suddenly doubled every 36 years. At the same time, though, the vast majority of the wealth created was in the hands of the few owners of the technology, and there was a greater disparity between the rich and the poor than ever before. This disparity has only grown more pronounced since the Information Revolution--and we have a clue in Revelation 6:5-6 that in the end times, it will be worse than ever. Will another AI-driven economic revolution have anything to do with this? It's certainly possible. Whether or not another economic revolution should happen has little bearing on whether or not it will, though. But one thing for those of us who follow the Lord to remember is that we don't have to participate in the world's economy, if we trust Him to meet our needs. He is able to make us abound for every good work (2 Cor 9:8)--which I believe means we will also have some form of work, no matter what is going on in the world around us. He will bless the work of our hands, whatever we find for them to do (Deut 12:7). He will give us the ability to produce wealth (Deut 8:18), even if it seems impossible. He will meet all our needs as we seek His kingdom first (Luke 12:31-32)--and one of our deepest needs is undoubtedly a sense of purpose (Phil 4:19). We are designed to fulfill a purpose.

What about the AI Apocalyptic Fears?
The world's elite seem to fall into two camps on how an AI revolution might affect our world--those who think it will usher in utopia (Isaac Asimov's “The Last Question” essentially depicts this), and those who think AI will decide that humans are the problem, and destroy us all. I feel pretty confident the latter won't occur, at least not completely, since neither Revelation nor any of the rest of the prophetic books seem to imply domination of humanity by machine overlords. Most, if not all of the actors involved certainly appear to be human (and angelic, and demonic). That said, there are several biblical references that the end times will be "as in the days of Noah" (Matt 24:37, Luke 17:26). What could that mean? Genesis 6 states that the thoughts in the minds of men were only evil all the time, so it may simply mean that in the end times, mankind will have achieved the same level of corruption as in the antediluvian world. But that might not be all. In Gen 6:1-4, we're told that the "sons of God" came down to the "daughters of men," and had children by them--the Nephilim. This mingling of human and non-human corrupted the genetic line, compromising God's ability to bring the promised seed of Eve to redeem mankind. Daniel 2:43 also reads, "As you saw iron mixed with ceramic clay, they (in the end times) will mingle with the seed of men; but they will not adhere to one another, just as iron does not mix with clay." What is "they," if not the seed of men? It appears to be humanity, plus something else. Chuck Missler and many others have speculated that this could refer to transhumanism, the merging of human and machine. Revelation 13:14-15 is probably the closest description of AI I can think of in scripture, describing the image of the beast that speaks, knows whether or not people worship the beast (AI facial recognition, possibly embedded into the "internet of things"?), and turns in anyone who refuses to do so. The mark of the beast sure sounds like a computer chip of some kind, with an internet connection (Bluetooth or something like it--Rev 13:17). Joel 2:4-9 describes evil beings "like mighty men" that can "climb upon a wall" and "when they fall upon the sword, they shall not be wounded," and they "enter in at the windows like a thief." These could be demonic and thus extra-dimensional, but don't they also sound like “The Terminator,” if robotics ever manages to advance that far? Jeremiah 50:9 says, "their arrows shall be like those of an expert warrior; none shall return in vain." This sounds like it could be AI-guided missiles. But the main evil actors of Revelation--the antichrist, the false prophet, the kings of the east, etc.--all certainly appear to refer to humans. And from the time that the "earth lease" to humanity is up (Revelation 11), God Himself is the One cleansing the earth of all evil influences. I doubt He uses AI to do it. So, depending upon where we are on the prophetic timeline, I can certainly imagine AI playing a role in how the events of Revelation unfold, but I can't see how it will take center stage. For whatever reason, it doesn't look to me like it will ever get that far.

The Bottom Line

We know that in the end times, deception will come. We don't know if AI will be a part of it, but it could be. It's important for us to know the truth, to meditate on the truth, to keep our eyes focused on the truth--on things above, and not on things beneath (Col 3:2).
Don't outsource your thinking to a machine; no matter how "smart" machines become, they will never have true wisdom; they can't. That doesn't mean don't use them at all, but if you do, do so cautiously: check the information you receive, and listen to the Holy Spirit in the process, trusting Him to guide you into all truth (John 16:13). Regardless of how rapidly or dramatically the economic landscape and the world around us may change, God has not given us a spirit of fear, but of power, love, and a sound mind (2 Tim 1:7). Perfect love casts out fear (1 John 4:18), and faith works through love (Gal 5:6). If we know how much God loves us, it becomes easy to not be anxious about anything, but in everything, by prayer and petition, with thanksgiving, to present our requests to God... and then to fix our minds on whatever is true, noble, just, pure, lovely, of good report, praiseworthy, or virtuous (Phil 4:6-8). He knows the end from the beginning. He's not surprised, and He'll absolutely take care of you in every way, if you trust Him to do it (Matt 6:33-34). Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
Inflammatory diseases are on the rise around the world; when left unaddressed, they can turn chronic. Now, doctors are finally starting to pay more attention. But why and when does a beneficial part of our immune system turn against us? Raj Patel and Rupa Marya think it has a lot to do with the world we live in. They talk about climate change, ecological devastation, the collapse of our planet and what all that has to do with inflammation. Their thesis: our bodies are a mirror of a deeper disease in society and the environment. But there's still hope. They point a way back to health via Deep Medicine, which is the quest to reignite our commitment to the web of life and our place in it. GUESTS: Tré Vasquez, Co-director/collective member at Movement Generation Justice & Ecology Project Raj Patel, author, academic, journalist, activist Rupa Marya, author, Professor of Medicine at the University of California, San Francisco, and a co-founder of the Do No Harm Coalition The post Inflamed: Deep Medicine and the Anatomy of Injustice (encore) appeared first on KPFA.
Inflammatory diseases are on the rise around the world, and when left unaddressed can turn chronic. Now, doctors are finally starting to pay more attention. But why & when does a beneficial part of our immune system turn against us? Raj Patel & Rupa Marya think it has a lot to do with the world we live in. They talk about climate change, ecological devastation, & the collapse of our planet & what that has to do with inflammation. Their thesis: our bodies are a mirror of a deeper disease in society & the environment. But there's still hope. They point a way back to health via Deep Medicine, which is the quest to reignite our commitment to the web of life and our place in it. Learn more about the story and find the transcript on radioproject.org. Making Contact digs into the story beneath the story—contextualizing the narratives that shape our culture. Featuring narrative storytelling and thought-provoking interviews. We cover the most urgent issues of our time and the people on the ground building a more just world. EPISODE FEATURES: This episode features Tré Vasquez, Co-director/collective member at Movement Generation Justice & Ecology Project; Raj Patel, author, academic, journalist, activist; & Rupa Marya, author, Professor of Medicine at the University of California, San Francisco, and a co-founder of the Do No Harm Coalition. MAKING CONTACT: This episode is hosted by Salima Hamirani. It is produced by Anita Johnson, Lucy Kang, Salima Hamirani, and Amy Gastelum. Our executive director is Jina Chung. MUSIC: This episode includes music “Cenote” & “Lithosphere” from Frequency Decree; “Anto” by Blear Moon, & “Juniper” by Broke For Free. Learn More: Inflamed: Deep Medicine and the Anatomy of Injustice Movement Generation Justice and Ecology Project
Dr. Rupa Marya discusses her work at the intersection of medicine, health, land, and justice. She explains the concept of deep medicine, which looks at the health impacts of colonialism and colonial capitalism and emphasizes the need to address the root causes of illness. Dr. Rupa Marya is a physician, activist, writer, and composer at UC San Francisco. Her work intersects climate, health, and racial justice. As founder of the Deep Medicine Circle and co-founder of the Do No Harm Coalition, she's committed to healing colonialism's wounds and addressing disease through structural change. Recognized with the Women Leaders in Medicine Award, Dr. Marya was a reviewer for the AMA's plan to embed racial justice. Governor Newsom appointed her to the Healthy California for All Commission to advance universal healthcare. Also a musician, she's toured 29 countries with her band, creating what Gil Scott-Heron called "Liberation Music”. Together with Raj Patel, she co-authored the international bestseller, Inflamed: Deep Medicine and the Anatomy of Injustice. Links and Resources: RupaMarya.org Deep Medicine Circle Inflamed: Deep Medicine and the Anatomy of Injustice by Raj Patel & Rupa Marya “Discourse on Colonialism” by Aimé Césaire “The Deep Medicine of Rehumanizing Palestinians” by Dr. Rupa Marya & Ghassan Abu-Sitta Where Olive Trees Weep (film) Where Olive Trees Weep - Conversations on Palestine “Work for Peace” by Gil Scott-Heron Topics: 00:00 - Introduction 02:01 - Meeting Dr. Marya 06:31 - Shallow vs Deep Medicine 11:58 - Balancing Deep Medicine and Immediate Health Crises 15:28 - Essential & Integrative Medicine 19:48 - Media Narratives Around Health 25:32 - Colonialism & Healthcare 30:51 - Dehumanization 36:16 - The Power Mind Virus 40:19 - Imagining What's Possible 44:16 - Narratives Supporting Genocide 50:46 - Heaviness, Hopefulness & Listening 53:37 - Protest Music in the Era of Big Media 56:01 - Closing Support the mission of SAND and the production of this podcast by becoming a SAND Member.
Think about the last time you felt let down by the health care system. You probably don't have to go back far. In wealthy countries around the world, medical systems that were once robust are now crumbling. Doctors and nurses, tasked with an ever expanding range of responsibilities, are busier than ever, which means they have less and less time for patients. In the United States, the average doctor's appointment lasts seven minutes. In South Korea, it's only two. Without sufficient time and attention, patients are suffering. There are 12 million significant misdiagnoses in the US every year, and 800,000 of those result in death or disability. (While the same kind of data isn't available in Canada, similar trends are almost certainly happening here as well). Eric Topol says medicine has become decidedly inhuman – and the consequences have been disastrous. Topol is a cardiologist and one of the most widely cited medical researchers in the world. In his latest book, Deep Medicine, he argues that the best way to make health care human again is to embrace the inhuman, in the form of artificial intelligence. Mentioned: “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again” by Eric Topol; “The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations” by H. Singh, A. Meyer, E. Thomas; “Burden of serious harms from diagnostic error in the USA” by David Newman-Toker, et al.; “How Expert Clinicians Intuitively Recognize a Medical Diagnosis” by J. Brush Jr, J. Sherbino, G. Norman; “A Randomized Controlled Study of Art Observation Training to Improve Medical Student Ophthalmology Skills” by Jaclyn Gurwin, et al.; “Abridge becomes Epic's First Pal, bringing generative AI to more providers and patients, including those at Emory Healthcare”; “Why Doctors Should Organize” by Eric Topol; “How This Rural Health System Is Outdoing Silicon Valley” by Erika Fry. Further Reading: "The Importance Of Being" by Abraham Verghese
Inflamed by Rupa Marya and Raj Patel takes us on a medical tour through the human body and illuminates the hidden relationships between our biological systems and the profound injustices of our political and economic systems. Inflammation is connected to the food we eat, the air we breathe, and the diversity of the microbes living inside us, which regulate everything from our brain's development to our immune system's functioning. Deep Medicine and the Anatomy of Injustice "Inflamed" by Rupa Marya and Raj Patel - Book Preview Book of the Week - BOTW - Season 7 Book 11 Buy the book on Amazon https://amzn.to/3Tekbem GET IT. READ :) #inflammation #injustice #healing FIND OUT which HUMAN NEED is driving all of your behavior http://6-human-needs.sfwalker.com/ Human Needs Psychology + Emotional Intelligence + Universal Laws of Nature = MASTER OF LIFE AWARENESS https://www.sfwalker.com/master-life-awareness --- Send in a voice message: https://podcasters.spotify.com/pod/show/sfwalker/message Support this podcast: https://podcasters.spotify.com/pod/show/sfwalker/support
"How Artificial Intelligence Can Make Healthcare Human Again"
In the discussion, Peter Fairfield, renowned acupuncturist and medical intuitive, is characterized by his ability to blend traditional wisdom with contemporary relevance. The conversation examines the complexity of medicine and healing, emphasizing the significance of emotional, spiritual, and personal development in becoming a successful healer. Fairfield explains the essential nature of empathy and understanding in building a healer-patient relationship rather than simply relying on facts and figures. Furthermore, the dialogue underlines the importance of acknowledging ancestral lineages in shaping the healer's work. Fairfield shares insights about the human heart's role in health and intuition's integral role in the healing practice. He stresses the importance of authenticity and self-compassion among healers while unearthing the importance of intuition in their practice. 00:00 Introduction to the Healer's Journey 00:59 The Power of Ancestral Influence 01:51 Guest Introduction: Peter Fairfield L.Ac. 06:36 Peter's Journey into Healing 09:59 The Role of Spirituality in Healing 20:37 The Impact of Ancestral Lineage on Healing 23:38 The Importance of Self-Understanding for Healers 26:34 The Influence of Modern Medicine on Healing 29:52 The Role of Emotion and Connection in Healing 34:24 The Need for Personal Discovery in Healing 35:10 Expressing Discontent and the Basis of Healing 35:48 The Power of Heart and Kitten Therapy 36:34 The Healer's Council and the Concept of Heart 38:18 The Human Heart Beyond a Pump 38:39 The Heart in Chinese Medicine 41:28 Understanding and Forgiving the Pain in Others 44:27 The Role of Healers in Shifting Pain Cycles 48:14 The Importance of Self-Compassion for Healers 50:31 The Power of Pulse Diagnosis 59:20 The Role of Intuition in Healing 01:07:17 The Impact of Healers on Patients
New Moon Capricorn begins the prosecution of Israel for Genocide at the International Court of Justice. Caroline welcomes Dr Rupa Marya, co-author with Raj Patel of “Inflamed: Deep Medicine and the Anatomy of Injustice.” Medicine and Metaphor…who is especially deft at weaving all the stories together…. Settler Colonialism, all over, crimes against land – brought to accountability… Dr. Marya founded the Deep Medicine Circle, a women of color-led organization committed to healing the wounds of colonialism through food, medicine, story, restoration and learning; Farming is Medicine. The grieving Earth, herself, brings this prosecution… on behalf of Flora Fauna Fungi link to full “Application instituting proceedings” document…. https://www.icj-cij.org/sites/default/files/case-related/192/192-20231228-app-01-00-en.pdf “South Africa … is making the present application to establish Israel's responsibility for violations of the Genocide Convention; to hold it fully accountable under international law for those violations; and — most immediately — to have recourse to this Court to ensure the urgent and fullest possible protection for Palestinians in Gaza who remain at grave and immediate risk of continuing and further acts of genocide.” Support The Visionary Activist Show on Patreon for weekly Chart & Themes ($4/month) and more… *Woof*Woof*Wanna*Play?!?* The post The Visionary Activist Show – Medicine for the Earth, Justice for the People appeared first on KPFA.
This is one of the most enthralling and fun interviews I've ever done (in 2 decades of doing them) and I hope that you'll find it stimulating and provocative. If you did, please share with your network. And thanks for listening, reading, and subscribing to Ground Truths. Recorded 4 December 2023. Transcript below with external links to relevant material along with links to the audio.

ERIC TOPOL (00:00): This is for me a real delight to have the chance to have a conversation with Geoffrey Hinton. I've followed his work for years, but this is the first time we've actually had a chance to meet. And so this is for me one of the real highlights of our Ground Truths podcast. So welcome Geoff.

GEOFFREY HINTON (00:21): Thank you very much. It's a real opportunity for me too. You're an expert in one area, I'm an expert in another, and it's great to meet up.

ERIC TOPOL (00:29): Well, this is a real point of conversion if there ever was one. And I guess maybe I'd start off with, you've been in the news a lot lately, of course, but what piqued my interest to connect with you was your interview on 60 Minutes with Scott Pelley. You said: “An obvious area where there's huge benefits is healthcare. AI is already comparable with radiologists understanding what's going on in medical images. It's going to be very good at designing drugs. It already is designing drugs. So that's an area where it's almost entirely going to do good. I like that area.” I love that quote Geoff, and I thought maybe we could start with that.

GEOFFREY HINTON (01:14): Yeah. Back in 2012, one of my graduate students called George Dahl, who did speech recognition in 2009 and made a big difference there, entered a competition by Merck Frost to predict how well particular chemicals would bind to something. He knew nothing about the science of it. All he had was a few thousand descriptors of each of these chemicals and 15 targets that things might bind to. And he used the same network as we used for speech recognition. So he treated the 2000 descriptors of chemicals as if they were things in a spectrogram for speech. And he won the competition. And after he'd won the competition, he wasn't allowed to collect the $20,000 prize until he told Merck how he did it. And one of their questions was, what QSAR did you use? So, he said, what's QSAR? Now QSAR is a field; it has a journal, it's had a conference, it's been going for many years, and it's the field of quantitative structure-activity relationships. And that's the field that tries to predict whether some chemical is going to bind to something. And basically he'd wiped out that field without knowing its name.

ERIC TOPOL (02:46): Well, it's striking how healthcare, medicine, and life science have had somewhat of a separate path in recent AI with transformer models, also going back of course to the phenomenal work you did in the era of bringing in deep learning and deep neural networks. But I guess what I thought I'd start with here is that healthcare may have a special edge versus its use in other areas because, of course, there are concerns, which you and others have raised, regarding safety--not just hallucinations (confabulation is of course a better term), but the negative consequences of where AI is headed. But would you say that medicine and life science--AlphaFold2 is another example, from your colleagues Demis Hassabis and others at Google DeepMind--is something that has a much more optimistic look?

GEOFFREY HINTON (04:00): Absolutely.
I mean, I always pivot to medicine as an example of all the good it can do because almost everything it's going to do there is going to be good. There are some bad uses, like trying to figure out who to not insure, but they're relatively limited; almost certainly it's going to be extremely helpful. We're going to have a family doctor who's seen a hundred million patients, and they're going to be a much better family doctor.

ERIC TOPOL (04:27): Well, that's really an important note. And that gets us to a paper preprint that was just published yesterday, on arXiv, which interestingly isn't usually the one that publishes a lot of medical preprints, but it was done by folks at Google, who later informed me it was a large language model that hadn't yet been publicized. They wouldn't disclose the name, and it wasn't MedPaLM2. But nonetheless, it was a very unique study, because it randomized their LLM against 20 internists with about nine years of experience in medical practice for answering over 300 clinical pathologic conferences of the New England Journal. These are the case reports where the master clinician is brought in to try to come up with a differential diagnosis. And the striking thing in that report, which is perhaps the best yet about medical diagnoses, and it gets back, Geoff, to your hundred million visits, is that the LLM exceeded the clinicians in this randomized study for coming up with a differential diagnosis. I wonder what your thoughts are on this.

GEOFFREY HINTON (05:59): So in 2016, I made a daring and incorrect prediction: that within five years, the neural nets were going to be better than radiologists at interpreting medical scans. It was sometimes taken out of context--I meant it for interpreting medical scans, not for doing everything a radiologist does--and I was wrong about that. But at the present time, they're comparable. This is like seven years later. They're comparable with radiologists for many different kinds of medical scans. And I believe that in 10 years they'll be routinely used to give a second opinion, and maybe in 15 years they'll be so good at giving second opinions that the doctor's opinion will be the second one. And so I think I was off by about a factor of three, but I'm still convinced I was completely right in the long term.

(06:55): So this paper that you're referring to, there are actually two people from the Toronto Google lab as authors of that paper. And like you say, it was based on the large language PaLM2 model that was then fine-tuned. It was fine-tuned slightly differently from MedPaLM2, I believe, but the LLM [large language model] by itself seemed to be better than the internists. But what was more interesting was that the LLMs, when used by the internists, made the internists much better. If I remember right, they were like 15% better when they used the LLMs and only 8% better when they used Google search and the medical literature. So it's certainly the case that as a second opinion, they're really already extremely useful.

ERIC TOPOL (07:48): It gets, again, to your point about that corpus of knowledge that is incorporated in the LLM providing a differential diagnosis that might not come to the mind of the physician. And this is of course the edge of having ingested so much and being able to play back those possibilities in the differential diagnosis. If it isn't in your list, it's certainly not going to be your final diagnosis.
I do want to get back to the radiologists, because we're talking just after the annual massive Chicago Radiological Society of North America (RSNA) meeting. And at those meetings--I wasn't there, but talking to my radiology colleagues--they say that your projection is already happening. Now, that is the ability to not just read the scans, but make the report. I mean, the whole works. So it may not have been five years when you said that, which is one of the most frequent quotes in all of AI and medicine of course, as you probably know, but it's approximating your prognosis. Even now.

GEOFFREY HINTON (09:02): I've learned one thing about medicine, which is, just like other academics, doctors have egos, and saying this stuff is going to replace them is not the right move. The right move is to say it's going to be very good at giving second opinions, but the doctor's still going to be in charge. And that's clearly the way to sell things. And that's fine; it's just that I actually believe that after a while of that, you'll be listening to the AI system, not the doctors. And of course there's dangers in that. So we've seen the dangers in face recognition, where if you train on a database that contains very few black people, you'll get something that's very good at recognizing faces, and the people who use it, the police, will think this is good at recognizing faces. And when it gives you the wrong identity for a person of color, then the policemen are going to believe it. And that's a disaster. And we might get the same with medicine. If there's some small minority group that has distinctly different probabilities of different diseases, it's quite dangerous for doctors to get to trust these things if they haven't been very carefully controlled for the training data.

ERIC TOPOL (10:17): Right. And actually I did want to ask you: is it possible that the reason the LLMs did so well in this new report is that some of these case studies from the New England Journal were part of the pre-training?

GEOFFREY HINTON (10:32): That is always a big worry. It's worried me a lot, and it's worried other people a lot, because these things have pulled in so much data. There is now a way round that, at least for showing that the LLMs are genuinely creative. There's a very good computer science theorist at Princeton called Sanjeev Arora, and I'm going to attribute all this to him, but of course all the work was done by his students and postdocs and collaborators. And the idea is you can get these language models to generate stuff, but you can then put constraints on what they generate. So I tried an example recently: I took two Toronto newspapers and said, compare these two newspapers using three or four sentences, and in your answer demonstrate sarcasm, a red herring, empathy, and there's something else, but I forget what. Metaphor.

ERIC TOPOL (11:29): Oh yeah.

GEOFFREY HINTON (11:29): And it gave a brilliant comparison of the two newspapers exhibiting all those things. And the point of Sanjeev Arora's work is that if you have a large number of topics and a large number of different things you might demonstrate in the text, then if I give a topic and I say, demonstrate these five things, it's very unlikely anything in the training data will be on that topic demonstrating those five skills. And so when it does it, you can be pretty confident that it's original. It's not something it saw in the training data. That seems to me a much more rigorous test of whether it generates new stuff. And what's interesting is some of the LLMs, the weaker ones, don't really pass the test, but something like GPT-4 passes the test with flying colors; it definitely generates original stuff that almost certainly was not in the training data.
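To get a feel for the combinatorial point Hinton attributes to Arora's group, a rough back-of-the-envelope count may help. This is a minimal sketch; the specific numbers and names are illustrative assumptions, not figures from the interview:

import math

# Hypothetical pool sizes (assumptions): topics a prompt could name, and
# nameable skills (sarcasm, metaphor, red herring, ...) it could demand.
topics = 1000
skills = 100
skills_per_prompt = 5

# A prompt fixes one topic and demands one particular 5-skill subset.
distinct_prompts = topics * math.comb(skills, skills_per_prompt)
print(f"{distinct_prompts:,}")  # 75,287,520,000

# With ~7.5e10 distinct topic/skill combinations, a coherent answer to a
# randomly chosen one is very unlikely to appear verbatim in any training
# corpus--so producing it is evidence of composition, not recall.

The force of the argument is that the number of combinations grows multiplicatively while any training corpus grows only additively, so demanding enough simultaneous constraints pushes the task outside anything that could have been memorized.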
ERIC TOPOL (12:25): Yeah. Well, that's such an important tool to ferret out the influence of pre-training. I'm glad you reviewed that. Now, the other question that most people argue about, particularly in the medical sphere, is: does the large language model really understand? What are your thoughts about that? We're talking about what's been framed as the stochastic parrot versus a level of understanding or enhanced intelligence, whatever you want to call it. And this debate goes on; where do you fall on that?

GEOFFREY HINTON (13:07): I fall on the sensible side. They really do understand. And if you give them quizzes which involve a little bit of reasoning--it's much harder to do now, because of course now GPT-4 can look at what's on the web, so you're worried if I mention a quiz now, someone else may have given it to GPT-4--but a few months ago, before it could see the web, you could give it quizzes for things that it had never seen before, and it can do reasoning. So let me give you my favorite example, which was given to me by someone who believed in symbolic reasoning--a very honest guy who believed in symbolic reasoning and was very puzzled about whether GPT-4 could do symbolic reasoning. And so he gave me a problem, and I made it a bit more complicated.

(14:00): And the problem is this: the rooms in my house are painted white or yellow or blue. Yellow paint fades to white within a year. In two years' time, I would like all the rooms to be white. What should I do and why? And it says: you don't need to paint the white rooms; you don't need to paint the yellow rooms, because they'll fade to white anyway; you need to paint the blue rooms white. Now, I'm pretty convinced that when I first gave it that problem, it had never seen that problem before. And that problem involves a certain amount of just basic common sense reasoning. Like, you have to understand that if yellow fades to white within a year and you're interested in the state of things in two years' time, two years is more than one year, and so on. When I first gave it the problem and didn't ask it to explain why, it actually came up with a solution that involved painting the blue rooms yellow. That's more of a mathematician's solution, because it reduces it to a solved problem. But that'll work too. So I'm convinced it can do reasoning. There are people, friends of mine like Yann LeCun, who is convinced it can't do reasoning. I'm just waiting for him to come to his senses.

ERIC TOPOL (15:18): Well, I've noticed the back and forth with you and Yann (LeCun) [see above on X]. I know it's friendly banter, and you, of course, had a big influence on his career, as on so many others who are now in the front leadership lines of AI, whether it's Ilya Sutskever at OpenAI, who's certainly been in the news lately with the turmoil there, or others. I mean, actually it seems like all the people who did some training with you are really in leadership positions at various AI companies and academic groups around the world. And so it says a lot about your influence, which goes well beyond deep neural networks.
And I guess I wanted to ask you, because you're frequently regarded as the godfather of AI: what do you think of being called that?

GEOFFREY HINTON (16:10): I think originally it wasn't meant entirely beneficially. I remember Andrew Ng actually made up that phrase at a small workshop in the town of Windsor in Britain, and it was after a session where I'd been interrupting everybody. I was the kind of leader of the organization that ran the workshop, and I think it was meant as a nod to the way I would interrupt everybody. It wasn't meant entirely nicely, I think, but I'm happy with it.

ERIC TOPOL (16:45): That's great.

GEOFFREY HINTON (16:47): Now that I'm retired and I'm spending some of my time on charity work, I refer to myself as the fairy godfather.

ERIC TOPOL (16:57): That's great. Well, I really enjoyed the New Yorker profile by Josh Rothman, who I've worked with in the past, where he actually spent time with you at your place up in Canada. And I mean, it got into all sorts of depth about your life that I wasn't aware of, and I had no idea about the suffering that you've had with the cancer of your wives and all sorts of things that were just extraordinary. And I wonder, as you see the path of medicine and AI's influence, and you look back at your own medical experiences in your family, do you see where, if the timing had been different, things could have been different?

GEOFFREY HINTON (17:47): Yeah, I see lots of things. So first, Joshua is a very good writer, and it was nice of him to do that.

(17:59): So one thing that occurs to me is actually going to be a good use of LLMs, maybe fine-tuned somewhat differently to produce a different kind of language: helping the relatives of people with cancer. Cancer goes on a long time--I mean, it's one of the things that goes on for longest--and it's complicated, and most people can't really get to understand what the true options are and what's going to happen and what their loved one's actually going to die of and stuff like that. I've been extremely fortunate, because in that respect, I had a wife who died of ovarian cancer, and I had a former graduate student who had been a radiologist and gave me advice on what was happening. And more recently, when my wife--a different wife--died of pancreatic cancer, David Naylor, who you know--

ERIC TOPOL (18:54): Oh yes.

GEOFFREY HINTON (18:55): --was extremely kind. He gave me lots and lots of time to explain to me what was happening and what the options were and whether some apparently rather flaky kind of treatment was worth doing. What was interesting was he concluded there's not much evidence in favor of it, but if it was him, he'd do it. So we did it. That's where you electrocute the tumor, being careful not to stop the heart. If you electrocute the tumor with two electrodes and it's a compact tumor, all the energy is going into the tumor rather than most of the energy going into the rest of your tissue, and then it breaks up the membranes and then the cells die. We don't know whether that helped, but it's extremely useful to have someone very knowledgeable to give advice to the relatives. That's just so helpful. And that's an application in which it's not kind of life or death, in the sense that if you happen to explain it to me a bit wrong, it's not determining the treatment, it's not going to kill the patient.

(19:57): So you can actually tolerate a little bit of error there.
And I think relatives would be much better off if they could talk to an LLM and consult with an LLM about what the hell's going on, because the doctors never have time to explain it properly. In rare cases where you happen to know a very good doctor, like I do, you get it explained properly, but for most people it won't be explained properly, and it won't be explained in the right language. But you can imagine an LLM just for helping the relatives. That would be extremely useful. It'd be a fringe use, but I think it'd be a very helpful use.

ERIC TOPOL (20:29): No, I think you're bringing up an important point, and I'm glad you mentioned my friend David Naylor, who's such an outstanding physician. And that brings us to the idea of the sense of intuition, human intuition, versus what an LLM can do. Don't you think those would be complementary features?

GEOFFREY HINTON (20:53): Yes and no. That is, I think these chatbots have intuition: what they're doing is taking strings of symbols and converting each symbol into a big bunch of features that they invent, and then they're learning interactions between the features of different symbols so that they can predict the features of the next symbol. And I think that's what people do too. So I think actually they're working pretty much the same way as us. There are lots of people who say they're not like us at all, they don't understand, but there are actually not many people who have theories of how the brain works and also theories of how these things work. Mostly the people who say they don't work like us don't actually have any model of how we work. And it might interest them to know that these language models were actually introduced as a theory of how our brain works.

(21:44): So there was something called what I now call a little language model, which was tiny; I introduced it in 1985, and it was what actually got Nature to accept our paper on back propagation. And what it was doing was predicting the next word in a three-word string, but the whole mechanism of it was broadly the same as these models. Now the models are more complicated--they use attention--but it was basically: you get it to invent features for words and interactions between features, so that it can predict the features of the next word. And it was introduced as a way of trying to understand what the brain was doing. And at the point at which it was introduced, the symbolic AI people didn't say, oh, this doesn't understand. They were perfectly happy to admit that this did learn the structure in the tiny toy domain it was working on. They just argued that it would be better to learn that structure by searching through the space of symbolic rules rather than through the space of neural network weights. But they didn't say it wasn't understanding. It was only when it really worked that people had to say, well, it doesn't count.
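As a concrete illustration of the mechanism Hinton describes--learned feature vectors for words, combined to predict the features of the next word--here is a minimal sketch in Python. It is a toy reconstruction under stated assumptions: the vocabulary, sizes, and training data are invented for illustration, not taken from the 1985 paper:

import numpy as np

# Toy next-word predictor in the spirit of a "little language model":
# each word gets a learned feature vector, and the concatenated features
# of a two-word context are mapped to scores over the next word.
rng = np.random.default_rng(1)
vocab = ["mary", "john", "likes", "pizza", "music"]
idx = {w: i for i, w in enumerate(vocab)}
V, d = len(vocab), 8

E = rng.normal(0.0, 0.1, size=(V, d))      # word feature vectors
W = rng.normal(0.0, 0.1, size=(2 * d, V))  # context features -> next-word scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

corpus = [("mary", "likes", "pizza"), ("john", "likes", "music")]

for _ in range(500):                         # plain SGD on cross-entropy
    for w1, w2, w3 in corpus:
        x = np.concatenate([E[idx[w1]], E[idx[w2]]])
        p = softmax(x @ W)
        err = p.copy(); err[idx[w3]] -= 1.0  # gradient of loss w.r.t. scores
        E[idx[w1]] -= 0.1 * (W @ err)[:d]    # backpropagate into the features
        E[idx[w2]] -= 0.1 * (W @ err)[d:]
        W -= 0.1 * np.outer(x, err)

x = np.concatenate([E[idx["mary"]], E[idx["likes"]]])
print(vocab[int(np.argmax(x @ W))])          # -> "pizza"

The design point carries over to today's models: the words' feature vectors are learned, not hand-coded, and prediction of the next word is what forces them to become meaningful; attention layers elaborate the "interactions between features" step rather than replacing it.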
ERIC TOPOL (22:53): Well, that's also something that I was surprised about, and I'm interested in your thoughts. I had anticipated in the Deep Medicine book the gift of time, all these things that we've been talking about, like the front door that could be used by the model coming up with the diagnoses, even the ambient conversations made into synthetic notes. The thing I didn't think was that machines could promote empathy. And what I have been seeing now, not just from the notes that are now digitized, these synthetic notes from the conversation of a clinic visit, but the coaching that's occurring by the LLM to say, well, Dr. Jones, you interrupted the patient so quickly, you didn't listen to their concerns, you didn't show sensitivity or compassion or empathy. That is, it's remarkable. Obviously the machine doesn't necessarily feel or know what empathy is, but it can promote it. What are your thoughts about that?

GEOFFREY HINTON (24:05): Okay, my thoughts about that are a bit complicated. Obviously, if you train it on text that exhibits empathy, it will produce text that exhibits empathy. But the question is: does it really have empathy? And I think that's an open issue. I am inclined to say it does.

ERIC TOPOL (24:26): Wow, wow.

GEOFFREY HINTON (24:27): So I'm actually inclined to say these big chatbots, particularly the multimodal ones, have subjective experience. And that's something that most people think is entirely crazy. But I'm quite happy being in a position where most people think I'm entirely crazy. So let me give you a reason for thinking they have subjective experience. Suppose I take a chatbot that has a camera and an arm and it's been trained already, and I put an object in front of it and say, point at the object. So it points at the object. And then I put a prism in front of its camera that bends the light rays, but it doesn't know that. Now I put an object in front of it, say, point at the object, and it points off to one side, even though the object's straight ahead. And I say, no, the object isn't actually there; the object is straight ahead; I put a prism in front of your camera. And imagine if the chatbot says, oh, I see, the object's actually straight ahead, but I had the subjective experience that it was off to one side. Now, if the chatbot said that, I think it would be using the phrase subjective experience in exactly the same way as people do.

(25:38): Its perceptual system told it it was off to one side. So what its perceptual system was telling it would have been correct if the object had been off to one side. And that's what we mean by subjective experience. When I say I've got the subjective experience of little pink elephants floating in front of me, I don't mean that there's some inner theater with little pink elephants in it. What I really mean is: if, in the real world, there were little pink elephants floating in front of me, then my perceptual system would be telling me the truth. So I think what's funny about subjective experience is not that it's some weird stuff made of spooky qualia in an inner theater; I think subjective experience is a hypothetical statement about a possible world. And if the world were like that, then your perceptual system would be working properly. That's how we use subjective experience. And I think chatbots can use it like that too. So I think there's a lot of philosophy that needs to be done here and got straight, and I don't think we can leave it to the philosophers. It's too urgent now.

ERIC TOPOL (26:44): Well, that's actually a fascinating response, and added to your perception of understanding, it gets us to perhaps where you were when you left Google in May this year, where you saw that this was a new level of whatever you want to call it--not AGI [artificial general intelligence], but something that was enhanced from prior AI.
And you basically, in some respects--I wouldn't say sounded alarms, but you've expressed concern consistently since then that we're in a new phase, heading in a new direction with AI. Could you elaborate a bit more about where you were and where your mind was in May, and where you think things are headed now?

GEOFFREY HINTON (27:36): Okay, let's get the story straight. It's a great story that the news media put out there, but actually I left Google because I was 75 and I couldn't program any longer, because I kept forgetting what the variables stood for. Also, I wanted to watch a lot of Netflix. I took the opportunity that I was leaving Google anyway to start making public statements about AI safety. And I got very concerned about AI safety a couple of months before. What happened was I was working on trying to figure out analog ways to do the computation, so you could do these large language models for much less energy. And I suddenly realized that actually the digital way of doing the computation is probably hugely better. And it's hugely better because you can have thousands of different copies of exactly the same digital model running on different hardware, and each copy can look at a different bit of the internet and learn from it.

(28:38): And they can all combine what they learned instantly, by sharing weights or by sharing weight gradients. And so you can get 10,000 things to share their experience really efficiently. And you can't do that with people. If 10,000 people go off and learn 10,000 different skills, you can't say, okay, let's all average our weights so now all of us know all of those skills. It doesn't work like that. You have to go to university and try and understand what on earth the other person's talking about. It's a very slow process where you have to get sentences from the other person and say, how do I change my brain so I might have produced that sentence? And it's very inefficient compared with what these digital models can do by just sharing weights. So I had this kind of epiphany: the digital models are probably much better. Also, they can use the back propagation algorithm quite easily, and it's very hard to see how the brain can do it efficiently. And nobody's managed to come up with anything that works in real neural nets that is comparable to back propagation at scale. So I had this sort of epiphany, which made me give up on the analog research: digital computers are actually just better. And since I was retiring anyway, I took the opportunity to say, hey, they're just better. And so we'd better watch out.
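The weight-sharing mechanism Hinton describes is, in effect, synchronous data-parallel training: many copies of one model compute gradients on different data and average them. A minimal sketch, assuming a toy linear model in plain Python/NumPy (all names, sizes, and data here are illustrative, not from the interview):

import numpy as np

# Two "copies" of the same model look at different shards of data, compute
# gradients in parallel, and share what they learned by averaging--each
# copy's experience immediately updates the single shared set of weights.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])   # relationship the copies must learn
w = np.zeros(3)                       # the one shared weight vector

def gradient(w, X, y):
    # gradient of mean squared error for the linear model y ~ X @ w
    return 2.0 * X.T @ (X @ w - y) / len(y)

shards = []
for _ in range(2):                    # each copy's private "bit of the internet"
    X = rng.normal(size=(64, 3))
    shards.append((X, X @ true_w))

for step in range(200):
    grads = [gradient(w, X, y) for X, y in shards]  # computable in parallel
    w -= 0.05 * np.mean(grads, axis=0)              # averaging = instant sharing
print(w)  # converges toward true_w; both copies' experience is in one model

Ten thousand people cannot average their synapses this way; identical digital replicas can, which is the efficiency gap Hinton is pointing at.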
there could be this machine that, with far fewer connections, could outperform us, or of course, as we've, I think, emphasized in our conversation, work in concert with humans to take it to yet another level. But is that tension, the potential for machines outdoing people, part of the problem, that it's hard for people to accept this notion?

GEOFFREY HINTON (31:33): Yes, I think so. Particularly philosophers: they want to say there's something very special about people, to do with consciousness and subjective experience and sentience and qualia, and these machines are just machines. Well, if you're a sort of scientific materialist, as most of us are, the brain's just a machine. It's wrong to say it's just a machine, because it's a wonderfully complex machine that does incredible things that are very important to people, but it is a machine, and there's no reason in principle why there shouldn't be better machines and better ways of doing computation, as I now believe there are. So I think people have a very long history of thinking they're special.

(32:19): They think God made them in his image and put them at the center of the universe. A lot of people have gotten over that, and a lot of people haven't. But for the people who've gotten over that, I don't think there's any reason in principle to think that we are the pinnacle of intelligence. And I think it may be quite soon that these machines are smarter than us. I still hope that we can reach an agreement with the machines where they act like benevolent parents, so they're looking out for us; we've managed to motivate them so the most important thing for them is our success, like it is with a mother and child, not so much for men. And I would really like that solution. I'm just fearful we won't get it.

ERIC TOPOL (33:15): Well, that would be a good way for us to go forward. Of course, the doomsayers, and the people at an even higher level of alarm, tend to think that that's not possible. But we'll see, obviously, over time. Now, one thing I just wanted to get a quick read from you on before we close: recently, Demis Hassabis and John Jumper got the Lasker Award, like a pre-Nobel award, for AlphaFold2. But this transformer model, which of course has helped us understand the 3D structure of 200 million proteins, they don't understand how it works, like most models, unlike the understanding we were talking about earlier on the LLM side. I wrote that I think that with this award, an asterisk should have been given to the AI model. What are your thoughts about that idea?

GEOFFREY HINTON (34:28): It's like this: I want people to take what I say seriously, and there's a whole direction you could go in, and I think Larry Page, one of the founders of Google, has gone in this direction, which is to say there are these superintelligences and why shouldn't they have rights? If you start going in that direction, you are going to lose people. People are not going to accept that these things should have political rights, for example, and being a co-author is the beginning of political rights. So I avoid talking about that; I'm sort of quite ambivalent and agnostic about whether they should. But I think it's best to steer clear of that issue, just because the great majority of people will stop listening to you if you say machines should have rights.

ERIC TOPOL (35:28): Yeah.
Well, that gets us back, of course, to what we just talked about: the hard struggle between humans and machines, rather than the thought of humans plus machines and the symbiosis that can be achieved. But Geoff, this has been great; we've packed a lot in. Of course, we could go on for hours, but I thoroughly enjoyed hearing your perspective firsthand and your wisdom, and just to reinforce the point about how many of the people leading the field now derive a lot of their roots from your teaching and prodding and challenging and all that: we're indebted to you. So thanks so much for all you've done, and will continue to do, to help us and guide us through this very rapid, dynamic phase as AI moves ahead.

GEOFFREY HINTON (36:19): Thanks, and good luck with getting AI to really make a big difference in medicine.

ERIC TOPOL (36:25): Hopefully we will, and I'll be consulting with you from time to time to get some of that wisdom to help us.

GEOFFREY HINTON (36:32): Anytime.

Get full access to Ground Truths at erictopol.substack.com/subscribe
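Hinton's description of thousands of identical digital copies pooling what they learn by "sharing weights or sharing weight gradients" is, at bottom, ordinary data-parallel training. A minimal Python sketch of the idea, on a toy linear model (all names and numbers here are illustrative, not anything from the interview): each replica computes a gradient on its own example, the gradients are averaged, and every replica applies the same update, so the copies stay identical while learning from everything any of them saw.

```python
import numpy as np

# Toy sketch of Hinton's point: N identical copies of one model each see
# different data, then pool experience by averaging their weight gradients.
rng = np.random.default_rng(0)
num_replicas = 4
target = np.array([1.0, -2.0, 0.5])   # hidden function the data follows
weights = rng.normal(size=3)          # one shared model, copied everywhere

def gradient(w, x, y):
    """Gradient of squared error (x @ w - y)**2 with respect to w."""
    return 2 * (x @ w - y) * x

for step in range(500):
    grads = []
    for _ in range(num_replicas):     # each replica gets its own example
        x = rng.normal(size=3)
        grads.append(gradient(weights, x, x @ target))
    avg_grad = np.mean(grads, axis=0)  # "share weight gradients"
    weights -= 0.05 * avg_grad         # every copy applies the same update

print(weights)                         # converges toward [1.0, -2.0, 0.5]
```

Because every replica applies the identical averaged update, the copies never diverge, which is exactly the trick Hinton says brains lack: 10,000 digital copies can pool experience instantly, where 10,000 people cannot average their synapses.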
“A.I. is not the problem; it's the solution.”—Andrew Ng at TED, 17 October 2023

Recorded 21 November 2023

Transcript with relevant links and links to audio file

Eric Topol (00:00): Hello, it's Eric Topol with Ground Truths, and I'm really delighted to have with me Andrew Ng, a giant in AI whom I've gotten to know over the years and hold in the highest regard. So Andrew, welcome.

Andrew Ng (00:14): Hey, thanks Eric. It's always a pleasure to see you.

Eric Topol (00:16): Yeah, we've had some intersections in multiple areas of AI. The one I wanted to start with is that you've had some direct healthcare nurturing, and we've had the pleasure of working with Woebot Health, particularly with Alison Darcy, where the AI chatbot has been tested in randomized trials to help people with depression and anxiety. And, of course, that was a chatbot in the pre-transformer or pre-LLM era. I wonder if you could just comment about that, as well as your outlook for current AI models in healthcare.

Andrew Ng (01:05): So Alison Darcy is brilliant. It's been such a privilege to work with her over the years. One of the exciting things about AI is it's a general purpose technology; it's not useful for just one thing. And I think in healthcare, and more broadly across the world, we're seeing many creative people use AI for many different applications. So I was in Singapore a couple months ago chatting with some folks, Dean Chang and one of his doctors, Dr. M, about how they're using AI to read EHRs in a hospital in Singapore to try to estimate how long a patient's going to be in the hospital because of pneumonia or something. And it was actually triggering helpful conversations, where a doctor says, oh, I think this patient will be in for three days, but the AI says, no, I'm guessing 15 days, and this triggers a conversation where the doctor takes a more careful look. And I thought that was incredible. So all around the world, many innovators everywhere are finding very creative ways to apply AI to lots of different problems. I think that's super exciting.

Eric Topol (02:06): Oh, it's extraordinary to me. I think Geoff Hinton has thought that the most important application of current AI is in the healthcare/medical sphere. But I think that the range here is quite extraordinary. And one of the other things you've been into for all these years, with starting Coursera and all the courses from DeepLearning.AI, is the democratization of knowledge and education in AI. Since this is something all patients would want, to look up their symptoms on whatever GPT-X, different of course from a current Google search, what's your sense of the ability to use generative AI in this way?

Andrew Ng (02:59): I think that instead of seeing a doctor, asking a large language model what's up with my symptoms, people are definitely doing it. And there have been anecdotes of this maybe even saving a few people's lives. And I think in the United States we're privileged to have a, some would say terrible, but certainly better than many other countries', healthcare system. And I feel like a lot of the early go-to-market for AI-enabled healthcare may end up being in countries, or just places, with less access to doctors. There are definitely countries where, if someone falls sick, you have to decide: you can either send your kid to a doctor, or you can have your family eat for the next two weeks, pick one.
With families making these impossible decisions, I wish we could give everyone in the world access to a great doctor, and sometimes the alternatives that people face are pretty harsh. I think any hope, even the very imperfect hope of an LLM, I know it sounds terrible, it will hallucinate, it will give bad medical advice sometimes, but is that better than no medical advice? I think there are really some tough ethical questions being debated around the world right now.

Eric Topol (04:18): Those hallucinations, or confabulations, won't they get better over time?

Andrew Ng (04:24): Yes, I think LLM technology has advanced rapidly. They still do hallucinate, they do still mix stuff up, but it turns out that I think people still have an impression of LLM technology from six months ago, and so much has changed in the last six months. Even in the last six months, it has become actually much harder to get an LLM, at least many of the public ones offered by the large companies, it's much harder now compared to six months ago to get it to give you deliberately harmful advice, or, if you ask it, detailed instructions on how to commit a crime. Six months ago it was actually pretty easy, so that was not good. But now it's actually pretty hard. It's not impossible, and I actually ask LLMs for strange things all the time just to test them. And yes, sometimes I can get them, when I really try, to do something inappropriate, but it's actually pretty difficult.

(05:13): But hallucination is just a different thing, where LLMs do mix stuff up, and you definitely don't want that when it comes to medical advice. So it'll be an interesting balance, I think, of when we should use web search for trusted, authoritative sources. If I have a sprained ankle, hey, let me just find a webpage from a trusted medical authority on how to deal with a sprained ankle. But there are also a lot of things where there is no one webpage that just gives me an answer, and then this is an alternative for generating a novel answer that's suited to my situation. In non-healthcare cases, this has clearly been very valuable. In healthcare, given the criticality of human health and human life, I think people are wrestling with some challenging questions, but hallucinations are slowly going down.

Eric Topol (05:59): Well, hopefully they'll continue to improve on that, and as you pointed out, the other guardrails will help. Now that gets me to a little over a month ago, when we were at the TED AI program and you gave the opening talk, which was very inspirational, and you basically challenged the critics of the negativism on AI with three basic issues: amplifying our worst impulses, taking our jobs, and wiping out humanity. It was very compelling, and I hope that it will be posted soon, and of course we'll link it. But can you give us the skinny on your antidote to the doomerism about AI?

Andrew Ng (06:46): Yeah, so I think AI is a very beneficial technology on average. I think it comes down to: do we think the world is better off or worse off with more intelligence in it, be it human intelligence or artificial intelligence? And yes, intelligence can be used for nefarious purposes, and it has been in history. But I think a lot of humanity's progress has come through humans getting smarter and better trained and more educated. And so I think on average the world is better off with more intelligence in it. And as for AI wiping out humanity, I just don't get it.
I've spoken with some of the people with this concern, but their arguments for how AI could wipe out humanity are so vague that they boil down to: it could happen. And I can't prove it won't happen, any more than I can prove a negative like that. I can't prove that radio waves being emitted from Earth won't cause space aliens to find us and wipe us out. But I'm not very alarmed about space aliens; maybe I should be, I don't know. And I find that there are real harms being created by the alarmist narrative on AI. One thing that's quite sad: there are now high school students I've chatted with who are reluctant to enter AI because they heard it could lead to human extinction, and they don't want any of that. And that's just tragic, that we're causing high school students to make a decision that's bad for themselves and bad for humanity because of really unmerited alarms about human extinction.

Eric Topol (08:24): Yeah, no question about that. You had, I think, a very important quote during that talk: "AI is not the problem, it's the solution." And I think that gets us to the recent flap, if you will, with OpenAI that's happened in recent days, whereby it appears to be the same tension between the techno-optimists, like you and, I would say, me, versus the effective altruism (EA) camp. And I wonder what your thoughts are; obviously we don't know all the inside dynamics of this, probably the most publicized interaction in AI that I can remember in terms of intensity, and it's not over yet. What were your thoughts as this has been unfolding, which is, of course, still in process?

Andrew Ng (09:19): Yeah, honestly, a lot of my thoughts have been with all the employees of OpenAI. These are hundreds of hardworking, well-meaning people. They want to build tech, make it available to others, make the world better off, and out of the blue, overnight, their jobs, livelihoods, and their levers to make a very positive impact on the world were disrupted, for reasons that seem vague; at least from the silence of the board, I'm not aware of any good reasons for all these wonderful people's work and livelihoods being disrupted. So I feel sad that that just happened. And then, I feel like OpenAI is not perfect, no organization in the world is, but frankly they're really moving AI forward, and I think a lot of people have benefited from the work of OpenAI. And I think the disruption of that is also quite tragic. And this may be, we will see if this turns out to be, one of the most dramatic impacts of unwarranted doomsaying narratives causing a lot of harm to a lot of people. But we'll see what continues to emerge from the situation.

Eric Topol (10:43): Yeah, I mean, I think this whole concept of AGI, artificial general intelligence, gets down to this fundamental assertion that we're at AGI, the digital brain, or approximating it, or the whole idea that machine understanding is at unprecedented levels. I wonder your thoughts, because obviously there still is the camp that says this is a stochastic parrot: anything that suggests understanding is basically because of pre-training or other matters, and trying to assign it any real intelligence at the level of a human, even for a particular task, no less beyond human, is unfounded.
What is your sense about this tension and this ongoing debate, which seemed to be part of the OpenAI board issues?

Andrew Ng (11:50): So I'm not sure what's happening with the OpenAI board, but the most widely accepted definition of AGI is AI that can do any intellectual task that a human can. And I do see many companies redefining AGI to other definitions. For the original definition, I think we're decades away. We're very clearly not there, but many companies have, let's say, alternative definitions, and yeah, with an alternative definition, maybe we're there already. One of my e-commerce friends looked at one of the alternative definitions and said, well, for that definition, I think we got AGI 30 years ago.

(12:29): And looking at the more positive side: I think one of the signs that a company has reached AGI, frankly, would be that, as a rational economic player, it should maybe let go all of its employees that do intellectual work. So until that happens, I just don't see it, and not to joke about it, that would be a serious thing. But I think we're still many decades away from that original definition of AGI. On the more positive side, in healthcare and other sectors, I feel like there's a recipe for using AI that I find fruitful and exciting, which is: it turns out that jobs are made out of tasks, and I think of AI as automating tasks rather than jobs. A few years ago, Geoff Hinton made some strong statements about AI replacing radiologists. I think those predictions have really not come true today, and Eric, I enjoyed your book, which is very thoughtful about AI as well.

(13:34): And I think if you look at, say, the job of radiologists, they do many, many different things, one of which is read X-rays, but they also do patient intakes, they operate X-ray machines. And I find that when we look at the healthcare sector or other sectors, look at what people are doing, and break jobs down into tasks, then there can often be a subset of tasks that are amenable to AI automation, and that recipe is helping a lot of businesses create value and also, in some cases, make healthcare better. So I'm actually excited, and because healthcare has so many people doing such a diverse range of tasks, I would love to see more organizations do this type of analysis.

(14:22): The interesting thing about that is we can often automate, I'm going to make up a number, 20% or 30% or whatever, of a lot of different jobs' tasks. So one, there's a strong sign we're far from AGI, because we can't automate a hundred percent of the intellectual tasks; but second, many people's jobs are safe, because when we automate 20% of someone's job, they can focus on the other 80% and maybe even be more productive, which causes the marginal value of labor, and therefore maybe even salaries, to go up rather than down. Actually, a few weeks ago I released a new course on Coursera, "Generative AI for Everyone," where I go deeper into this recipe for finding opportunities, and I'm really excited about working with partners to go find these opportunities and go build them.

Eric Topol (15:15): Yeah, I commend you for that, because throughout your career you have been democratizing the knowledge of AI, and this is so important; that new course is just one more example. Everyone could benefit from it.
Getting back to your earlier point: in the clinician world, there's the burdensome data-clerk function of having to be a slave to keyboards, entering the visit data and then all the post-visit things. Now, of course, we're seeing synthetic notes, and all this can be driven through an automated note that involves no keyboard work. And just as you say, that comprises maybe 20 or 30% of a typical doctor's day, if not more. And the fact is that that change could bring the patient and doctor together again, a relationship that has suffered because of electronic records and all the data-clerk functions. That's just, I think, a great example of what you just pointed out. I love "Letters from Andrew," which you publish; as you mentioned, one of your recent posts was about Generative AI for Everyone. And in those letters you recently addressed loneliness, which is associated with all sorts of bad health outcomes. I wonder if you could talk about how AI could help loneliness.

Andrew Ng (16:48): So this is a fascinating case study. At AI Fund, we had wanted to do something on AI and relationships, kind of romantic relationships. And I'm an AI guy; I feel like, what do I know about romance? And if you don't believe me, you can ask my wife, she'll confirm I know nothing about romance. But we were privileged to partner with the former CEO of Tinder, Renata Nyborg, who knows about relationships in a very systematic way, far more than anyone I know. So working with her deep expertise about relationships, and it turns out she actually knows a lot about AI too, plus my team's knowledge about AI, we were able to build something very unique that she launched, that she announced, called Meeno. I've been playing around with it on my phone, and it's actually a remarkably good relationship mentor. Frankly, I wish I had had Meeno back when I was single, to ask my dumb questions to. And I'm excited that maybe AI, I feel like tech has maybe contributed to loneliness; I know the data is mixed on whether social media contributes to social isolation, there are different opinions and different types of data, but this is one case where hopefully AI can clearly not be the problem, but be part of the solution, to help people gain the skills to build better relationships.

Eric Topol (18:17): Yeah, it's really interesting, here again, the counterintuitive idea that technology could enhance the human bonds we want to strengthen. Of course, you've had an incredible multidimensional career. We talked a little bit about your role in education, with the founding of the massive open online courses (MOOCs), but also with Baidu and Google. And then of course at Stanford you've seen the academic side; you've seen the leading tech titan side; and the entrepreneurial side, with the various ventures of trying to get behind companies that have promise. You have the whole package of experience and portfolio. How do you use that going forward? You're still so young, and the field is so exciting. Do you try to just cover all the bases, or do you see yourself changing gears in some way? You've had a foot in every aspect.

Andrew Ng (19:28): Oh, I really like what I do. I think these days I spend a lot of time at AI Fund, which builds new companies using AI; DeepLearning.AI is an educational arm. And one of the companies that AI Fund has helped incubate, Landing AI, does computer vision work.
We actually have a lot of healthcare users as well using it. I feel like, with the recent advances in AI at the technology layer, things like large language models, a lot of the work that lies ahead of the entire field is to build applications on top of that. In fact, a lot of the media buzz has been on the technology layer, and this happens with every technology change: when the iPhone came out, when we shifted to the cloud, it's interesting for the media to talk about the technology. But it turns out the only way for the technology suppliers to be successful is if the application builders are even more successful.

(20:26): They've got to generate enough revenue to pay the technology suppliers. So I've been spending a lot of my time thinking about the application layer and how to help, either myself or by supporting others, to build more applications. And the annoying and exciting thing about AI is that, as a general purpose technology, there's just so much to do, so many applications to build. It's kind of like, what is electricity good for? Or what is the cloud good for? It's just so many different things. So it is going to take us, frankly, longer than we wish, but it will be exciting and meaningful work to go to all the corners of healthcare and all the corners of education and finance and industry, and go find these applications and go help them.

Eric Topol (21:14): Well, I mean, you have such broad and diverse experience, and you predicted much of this. I mean, you knew, somehow or other, what might happen when graphics processing units (GPUs) went from a very low number to tens of thousands of them. And you were there, I think, before perhaps anyone else. One of the things, of course, that this whole field now gets us to is potential tech dominance. And what I mean there is that you've got a limited number of companies, like Microsoft and Google and Meta and maybe Inflection AI and a few others, that have capabilities of 30,000, 40,000, whatever number of GPUs. And then you have academic centers, like your adjunct appointment at Stanford, which maybe has a few hundred, or here at Scripps Research, which has 150. And so we don't have the computing power to do base models, and what can we do? How do you see the struggle between the entities that have what appears to be, if not unlimited, then massive computing power, versus academics who want to advance the field? They have different interests, of course, but they don't have that power base. Where is this headed?

Andrew Ng (22:46): Yeah, so I think the biggest danger to that concentration is regulatory capture. I've been quite alarmed over moves that various entities, some companies, but also governments here in the US and in Europe, especially the US and Europe, less so other places, have been making: contemplating regulations that I think place a very high regulatory compliance burden that big tech companies have the capacity to satisfy, but that smaller players will not have the capacity to satisfy. And in particular, some companies would definitely rather not have to compete with open source. When you take a smaller, say 7-billion-parameter, model and fine-tune it for a specific task, it works remarkably well for many specific tasks. So for a lot of applications, you don't need a giant model. And actually, I routinely run a 7- or 13-billion-parameter model on my laptop, more inference than fine-tuning.
But it's within the realm of what a lot of players can do.

(23:51): But if inconvenient laws are passed, and they've certainly been proposed in Europe under the EU AI Act and also in the White House Executive Order, I think we've taken some dangerous steps toward putting in place very burdensome compliance requirements that would make it very difficult for small startups, and potentially very difficult for smaller organizations, to even release open source software. Open source software has been one of the most important building blocks for everyone in tech. I mean, if you use a computer or a smartphone, that's built on top of open source software; TCP/IP, just how the internet works, a lot of that is built on top of open source software. So regulations that hamper people just wanting to release open source, that would be very destructive for innovation.

Eric Topol (24:48): Right. In keeping with what we've been talking about, the doomsday prophecies and the regulations and things that would slow up the whole progress of the field: we are obviously in touch with both sides and the tension there, but the potential hazards of overregulation are perhaps not adequately emphasized. Another one of your letters (Letters from Andrew), which you just got to there, was about AI at the edge, and the fact that, in contrast to the centralized computing power at a limited number of entities, as I think you were just getting at, there's increasing potential for being able to do things on a phone or a laptop. Can you comment on that?

Andrew Ng (25:43): Yeah, I feel like I'm going against many trends; it sounds like I'm off in a very weird direction, but I'm bullish about AI at the edge. I feel like, if I want to do grammar checking using a large language model, why do I need to send all my data to a cloud provider when a small language model can do it just fine on my laptop? Or, one of my collaborators at Stanford was training a large language model to work on electronic health records; this was actually work done by one of the PhD students I've been working with. So Yseem wound up fine-tuning a large language model at Stanford so that he could run inference there and not have to ship EHRs, private medical records, to a cloud provider. And I think that was an important thing to do; if open source were shut down, I think someone like Yseem would have had a much harder time doing this type of work.

Eric Topol (27:04): I totally follow your point there. Now, the last thing I wanted to get to was multimodal AI in healthcare. When we spoke five years ago, when I was working on the Deep Medicine book, multimodal AI wasn't really possible, and the idea was that someday we'd have the models to do it. The idea here is that each of us has all these layers of data: our various electronic health records, our genome, our gut microbiome, our sensors and environmental data, social determinants of health, our immunome, it just goes on and on. And there's also the corpus of medical knowledge. So right now, no one has really done multimodal. They've done bimodal AI in healthcare, where they take the electronic health records and the genome, or, usually, electronic health records and a medical scan. No one has done more than a couple of layers yet.

(28:07): And the question I have is, it seems like that's imminently going to be accomplished.
And then let's get to: will there be a virtual health coach? Unlike today's single-purpose virtual coaches, like Woebot and the diabetes coaches and the hypertension coaches, will we ultimately have, with multimodal AI, and this is your forecast I'm asking for, the ability to give feedback to any given individual to promote their health, to prevent conditions they might be at risk for later in life, or to help manage all the conditions they have already had declared? What's your sense about where we are with multimodal AI?

Andrew Ng (28:56): I think there's a lot of work to be done still at unimodal: a lot of work to be done in text, in LLM AI, and a lot of work on images. And maybe not to talk about Chang's work all the time, but just this morning I was chatting with him about how he's trying to train a large transformer on some time series other than text or images. And then some collaborators at Stanford, Jeremy Irvin, Jose, are kind of poking at the corners of this. But I think a lot of people feel, appropriately, that there's a lot of work to be done still in unimodal, so I'm cheering that on. But then there's also a lot of work to be done in multimodal, and I see work beyond text and images: maybe genome, maybe some of the time series things, maybe some of the EHR-specific things, which are maybe kind of text and kind of not. I think it was just about a year ago that ChatGPT was announced. So who knows? Just one more year of progress, who knows where it will be.

Eric Topol (29:55): Yeah. Well, we know there will be continued progress, that's for sure. And hopefully, as we've been discussing, there won't be significant obstacles to that, and hopefully there will be a truce between the two camps of doomerism and optimism, or somehow we'll meet in the middle. But Andrew, it's been a delight to get your views on all this. I don't know how the OpenAI affair will settle out, but it does seem to be representative of the times we live in, because at the same TED AI event that you and I spoke at, Ilya spoke about AGI, and that was followed only a matter of days later by Sam Altman talking about AGI and how OpenAI was approaching AGI capabilities. And it seems, even though, as you said, there are a lot of different definitions for AGI, the progress being made right now is extraordinary.

(30:57): And grappling with the idea that there are certain tasks, at least certain understandings, certain intelligence, that may be superhuman via machines is more than provocative. And I know you are asked to comment on this all the time, and it's great, because in many respects you're an expert, neutral observer. You're not in one of these companies that's trying to assert it has sparks of AGI, or actual AGI, or whatever. So in closing, I think we look to you as not just an expert, but one who has had such broad experience in this field, who has predicted so much of its progress and warned of the reasons we might not continue to make that type of extraordinary progress. So I want to thank you for that. I'll keep reading Letters from Andrew, and I hope everybody does; as many people as possible should attend your "Generative AI for Everyone" course. And thank you for what you've done for the field, Andrew; we're all indebted to you.

Andrew Ng (32:17): Thank you, Eric. You're always so gracious. It's always such a pleasure to see you and collaborate with you.

Thanks for listening and reading Ground Truths. Please share this podcast if you found it informative.
Get full access to Ground Truths at erictopol.substack.com/subscribe
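Ng's point about routinely running a 7- or 13-billion-parameter model on a laptop, and the Stanford example of fine-tuning locally so private medical records never reach a cloud provider, is easy to sketch. Here is a minimal, hedged example using the Hugging Face transformers library; the model name is an illustrative assumption (any small open-weights model you are licensed to run would do), and it assumes a machine with enough memory plus the accelerate package installed for device placement.

```python
# Minimal local-inference sketch: the model weights and the prompt both stay
# on your own hardware, so nothing (e.g., EHR text) is sent to a cloud API.
# The model name below is an illustrative assumption, not from the interview.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize this note: patient admitted with community-acquired pneumonia..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The design point is the one the Stanford EHR example makes: because inference happens entirely on local hardware, private records never transit a third-party API, and a 7B-class model is often accurate enough for a narrow, fine-tuned task.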
Ayurveda is a five-thousand-year-old system of healing with origins in the Vedic culture of ancient India. The Sanskrit word Ayurveda is derived from the root words ayuh, meaning "life" or "longevity," and veda, meaning "science" or "sacred knowledge." Ayurveda therefore translates as "the sacred knowledge of life." Today Niv dives into three reasons, and associated ways, for you to infuse Pranayama into your Ayurvedic wellness practices and guidance. Find Niv on IG: @yourhealthcompass STEP BOLDLY INTO YOUR MISSION OF FACILITATING LASTING CHANGE IN THIS WORLD. COACH CLIENTS AND LOVED ONES TOWARDS 10X MORE EFFECTIVE AND IMPACTFUL RESULTS, PERFORMANCE AND MIND-BODY WELLNESS. FEEL THE CONFIDENCE AND COURAGE TO OWN AND CULTIVATE YOUR PRESENCE AS A GO-TO HOLISTIC AYURVEDIC GUIDE. https://nivrajendra.com/embodied-ayurveda-certification
Creative Vision Factory director Michael Kalmbach joins Rob in the bunker to talk about the latest updates on what the peer-run program is up to, the importance of harm reduction and other holistic strategies, and the importance of self-care and understanding in difficult work. Show Notes: Inflamed: Deep Medicine and the Anatomy of Injustice; Creative Vision Factory
Emily speaks with cardiologist Eric Topol about his 2019 book Deep Medicine, which explores the potential for AI to enhance medical decision-making, improve patient outcomes, and restore the doctor-patient relationship. Find show notes, transcript, and more at thenocturnists.com.
Drs. Douglas Flora and Shaalan Beg discuss the use of artificial intelligence in oncology, its potential to revolutionize cancer care, from early detection to precision medicine, and its limitations in some aspects of care.

TRANSCRIPT

Dr. Shaalan Beg: Hello and welcome to the ASCO Daily News Podcast. I'm Dr. Shaalan Beg, your guest host of the podcast today. I'm the vice president of oncology at Science37 and an adjunct associate professor at the UT Southwestern Medical Center in Dallas. On today's episode, we'll be discussing the use of artificial intelligence in oncology, its potential to revolutionize cancer care from early detection to precision medicine, and we'll also go over its limitations in some aspects of care. I'm joined by Dr. Douglas Flora, the executive medical director of oncology services at St. Elizabeth Healthcare in northern Kentucky and the founding editor-in-chief of AI in Precision Oncology, the first peer-reviewed academic medical journal dedicated specifically to advancing the applications of AI in oncology. The journal will launch early next year. You'll find our full disclosures in the transcript of this episode, and disclosures of all guests on the podcast are available at asco.org/DNpod. Doug, it's great to have you on the podcast today.

Dr. Douglas Flora: I'm glad to be here. Thanks for having me.

Dr. Shaalan Beg: First of all, Doug, congrats on the upcoming launch of the journal. There has been a lot of excitement about the role of AI in oncology and medicine, and also some concern over the ethical implications of some of these applications. So, it's great to have you here to address some of these issues. Can you talk about how you got into this space and what motivated you to pursue this endeavor?

Dr. Douglas Flora: I think, Shaalan, I've embraced my inner nerd. I think that's pretty obvious. This is right on brand for me, along with my love of tech. And so, I started reading about this maybe 5, 6, 7 years ago, and I was struck by how little I understood and how much was going on in our field, and then it really accelerated when I read a book that the brilliant Eric Topol wrote in 2019. I don't know if you've seen it, but everything he writes is brilliant. This one was called Deep Medicine, and it touched on how we might embrace these new technologies, as they're rapidly accelerating, to ultimately make our care more human. And that really resonated with me. You know, I've been in clinical practice for almost 20 years now, on the same treadmill many medical oncologists are on, as we run from room to room to room and wish we had more time to spend in the depths of the caves with our patients. And this technology has maybe lit me up again, at my now 50-year-old age, to say, wow, wouldn't it be great if we could use this stuff to provide softer, better, smarter care?

Dr. Shaalan Beg: When I think about different applications in oncology specifically, my mind goes to precision oncology. There are many challenges in the precision oncology space, from the discovery of new targets, to finding people to enroll on clinical trials, to ensuring the right person is started on the right treatment at the right time. And we've been talking, reading, and hearing a lot about how artificial intelligence can affect various aspects of the entire spectrum of precision medicine.
And I was hoping that you could help our listeners identify which of those efforts you find are closest to impacting the care we deliver for our patients come Monday morning in our clinics, and which have the highest clinical impact in terms of maturity.

Dr. Douglas Flora: You know, I think the things that are here today, presently, the products that exist, the industry partners that have validated their instruments, it's in 2 things. One is certainly image recognition, right? Pattern-recognition doctors, like dermatologists and people who read eye grounds and radiologists, are seeing increasing levels of accuracy that now are starting to eclipse even specialists in chest radiology and CT. Or digital pathology with pixelated images, where companies like PathAI and others are publishing peer-reviewed data suggesting the accuracy can be higher than that of a board-certified pathologist. We're all seeing stuff in USA Today and the New York Times about passing medical boards and passing the bar. I think image recognition is actually right here, right now. So that's number 1. Number 2, I think, is less sexy but more important, and that is getting rid of all the rote, mechanical, mundane tasks that pollute your days as a doc. And I mean specifically time spent on the keyboard, pajama time, documenting the vast amounts of material we need for payers and for medical documentation. That can be corrected in hours with the right programming. And so, I think as these large language models start to make their way into clinic, we're going to give doctors back 3, 4, 6 hours a day that they currently spend documenting their care, and let them pay attention to their patients again, face to face, eye to eye.

Dr. Shaalan Beg: I love the concept of pajama time. It's sort of become normalized for many folks that the time to do your charting is when you're at home with your family, or in your bedroom in your pajamas, cleaning up notes, and that's not normal behavior. But it has been normalized in clinical care for many reasons, some necessary and some maybe not so much. We hear about some of the applications that are coming into electronic medical records. It's been many years since I saw this one demo, which one of the vendors had staged, where the doctor talks to the patient and then asks the electronic medical record to sum up the visit in a note, and voila, you have a note, and you have the orders, and you have the billing, all tied up. It's been at least 4 years since I've seen that, and I'm not seeing the applications in the clinic, or maybe something's turning around the corner, because for a lot of people, AI and machine learning was just an idea, pie in the sky, until ChatGPT dropped and everybody got to put their hands on it and see what it can produce. And that's literally scratching the surface of what's possible. So, when you think about giving doctors their pajama time back, and you think about decision support, trial matching, documentation, which of those applications are you most excited about as an oncologist?

Dr. Douglas Flora: I'm still in the trenches. I just finished my Wednesday clinic notes Friday afternoon at 4:30 pm. So I think medical documentation is such a burden, and it's so tedious and so unnecessary to redouble the efforts again and again to copy a note that four other doctors have already written on rounds. It's silly.
So, I think that's going to be one of the early salvos that hospital systems recognize, because there's a higher ROI if you can give 400 doctors back two hours a day. It's also satisfying because the notes will be better. The notes will be carefully curated. They may bring in order sets for the MRI with gadolinium that you forgot you wanted to order; the digital personal assistant will get that. It will set a reminder on your calendar to call the patient back with their test results. It will order the next set of labs. And then you're going into the next room, and you're going to be watching that patient in the room. And I've talked to other colleagues about this earlier today: you'll be able to see the daughter getting hives, because you're watching her, or the look that flits across the husband's face when you go a little bit too far and give out too much information when they're not quite ready for that. And I think that's the art of oncology that we're missing when we're flying into a room, and we've got our face on a screen and a keyboard, and we're buried in our own tasks, and we're not there to be present for our patients. So, I'm hopeful that that's going to be one of the easy and early wins for oncologists.

Dr. Shaalan Beg: Fantastic. And when we think about the spectrum of cancer care for the people we care for, a lot happens before they walk into their medical oncologist's office, in terms of early identification of cancer, the diagnosis of cancer itself, the challenges around tissue acquisition, imaging acquisition. You mentioned a couple of the tools around radiomics, which are being implemented right now. Again, same question, separate fact from fiction: which ones are we going to see in 2023 or 2024 in the clinical practice that we have? We've been hearing that pathologists and radiologists are going to be out of their jobs if AI takes off, right? Of course, there is a lot of hyperbole there. But how do you view that space, and how do you see it impacting the overall burden of care that people receive, and the burden of care that physicians are experiencing?

Dr. Douglas Flora: I'm an eternal optimist, an almost infuriating optimist to my partners and colleagues. So, I'm going to lean into this and say burdens are going to be reduced all over the place. We're going to have personal digital navigators to help our patients from the first touch, so that they're going to have honest and empathetic questions answered within an hour of diagnosis. The information that they're going to have at their fingertips, with GPT-4 or Med-PaLM 2 from Google, which is about to be released as a medical generative AI, these are going to give sensitive and empathetic answers that don't leave our patients on the cliff, you know, falling off while waiting for a doctor's visit 10 days down the road. So, I think the emotional burdens will be improved with better access to better information. I think the physicians will also have access to that, giving us reassurance that we're going down the right path with really complicated patients: taking very, very large datasets and saying a digital twin of this patient would have been more successful with this approach, and those sorts of things. And those are probably 3 to 5 years down the road, but being tested heavily right now in academic settings, with good data coming.

Dr. Shaalan Beg: Robotic empathy sounds like an oxymoron.

Dr. Douglas Flora: Yeah, look at the published studies.
Dr. Shaalan Beg: We've all seen the data on how a chatbot can outperform physicians in terms of empathy. I really find that hard to stomach. Help me out.

Dr. Douglas Flora: Yeah, we say that, and we say that to be provocative, but no, there's no substitute for a clinician laying a hand on a patient. We talked about how you need to see that fleeting glance, or the hives on the daughter's chest, and that you've gone too far and shared too much too soon, before that family is ready for it. I have no doubt in my mind these tools can make us more efficient at our care, but don't get me wrong: there's no chance that these will replace us in the room, giving a hug to a patient or a scared daughter. They're going to remember every word you say; I just want it to be the right words, delivered carefully, and I don't want us to rush it. So ultimately, as we make our care more human, these tools might actually give us time back in the room to repair that doctor-patient relationship that's been so transactional for the last 4 or 5 or 10 years. And my hope is we're going to go back to doing what we went into oncology to do: to care deeply about the patients in our care and let the computers handle the rote mechanical stuff; let me be the doctor again and deserve that patient's attention and give it right back in return.

Dr. Shaalan Beg: And I think we're hearing a lot of themes here: AI helping the existing clinical enterprise and making it better. It's not Deep Blue versus Kasparov, where one side is going to win. It's the co-pilot. It's reducing burden. It's making the work more meaningful, so that the actual time spent with our patients is more meaningful, and hopefully can help us make deeper connections. Let's talk about challenges. What are some of the challenges that worry you? There've been many innovations that have come and gone, and health systems and hospitals have resisted change. And we all remember saying during COVID that we would never go back to the old ways; and here we are in 2023, and we are back to the old ways for a lot of things. So, what are the major limitations of AI, even at its peak success, that our listeners should be aware of, and which may worry you at times?

Dr. Douglas Flora: Well, you've actually spoken to why I started this journal. I want to make sure that clinicians are guiding some of those conversations, making sure that guardrails are up so that we're ethical, and making sure that we are policing bias. It's no secret, you've seen these things: a lot of the language models, a lot of the deep learning, was programmed by people who look like me, and did not include things that were culturally competent. You can look at data that's been published on Amazon's facial recognition software, and Facebook's and Instagram's and others': they can identify me out of a crowd as a middle-aged white guy, but 60% of the time they will not recognize Oprah Winfrey or Serena Williams or Michelle Obama. I mean, global icons. With darker skin, with different facial features than my white, Caucasian, Eurocentric features, this recognition software is not as good. And I'm worried about that for clinical trial selection and screening. I'm really, really worried about building databases that don't represent the patients in our charge. So bias is a big deal, and that's got to be transparent; it's got to be published how you arrived at this decision. So that would be number 1.
Number 2 is probably that we don't have as much visibility into how decisions are made, the so-called black box in AI. And that's vexing for doctors, especially conservative oncologists who need three published randomized, phase 3, blinded, placebo-controlled trials before we move an inch. So, there must be more transparency. And that, again, is in publications, it's in peer review; that is to say, we need real scientific rigor. And not to belabor this, but our industry partners are well ahead of us. We're not generally inclined to believe them until we see it, because I've got 150 AI companies coming to my hospital system as vendors; some of them are worthy, great partners, and some of them are a little bit over their skis, selling more than they can actually deliver yet. So, I'd like to have the opportunity to see the papers. There are about 300 produced a day on AI in medicine. Let's give them a forum, and we'll duke it out with letters to the editor and careful review.

Dr. Shaalan Beg: I will say, Doug, it is becoming hard to separate fact from fiction. There is so much information coming at us in medical journals, through our email, through our professional social media accounts, that I sometimes worry people will just start tuning it all out, because they can't separate the high-impact discoveries from the more pie-in-the-sky ideas. So, tell us more about how we got here, and how you see this curve of enthusiasm shifting in the next 6 months or year.

Dr. Douglas Flora: Yeah, it's a great question, and it's rapidly accelerating, isn't it? We can't escape this. It's entering our hourly lives, much like the iPhone did before, or me having to switch from my BlackBerry to a smartphone that didn't have buttons. I felt like I was adapting. And maybe this is what people felt like when Henry Ford was out there and all the buggy drivers were getting fired. The reality is, it's here, and it was here 6 months ago. And maybe we're feeling that urgency, and maybe it's starting to catch on in general society, because the advent of generative AI is easier to understand. These aren't complicated mathematical models with stacking diagrams and high-tech stuff that's just happening in Palo Alto. It's Siri, it's Cortana, it's my Google digital assistant notifying me that it's time to get on for my next meeting. And those things have been infiltrating our daily lives and our minds quietly for some time. Around November 30th, when ChatGPT came out from OpenAI and we started toying with it, you started to see the power. It can be creative, it can be funny, it can articulate your thoughts better than you can articulate them on paper, immediately. English students have figured it out. People in marketing and people writing legal briefs have figured it out, and it's coming to medicine now. It is actually here, and this might be one instance where I think the hype is legit, and these tools will probably reshape our lives. There have been estimates by Accenture that 70% of jobs in medicine are going to be altered irretrievably by generative AI. And so, I think it's incumbent upon those of us who are leaders in healthcare systems to at least assemble the team that can help make sense of it and separate, like you said, the signal from the noise. I know we're doing that here at St. Elizabeth Healthcare. We've got a whole team being formed around this. We have 5 or 6 different products we bought that we're using to help read mammograms and read lung nodules and read urinalyses, etc.
You need a construct to do that appropriately. You need a team of people who are well-read and well-studied, able to separate that fact from fiction. I think we're all going to have to work toward that in the next 6 to 12 months.

Dr. Shaalan Beg: Tell me about that construct. What is the framework that you use to evaluate opportunities as they come through the door?

Dr. Douglas Flora: It's something I think we're all struggling with. As I mentioned, we've got all of these fantastic industry partners, but you can't buy 200 products off the shelf, as Epic add-ons or third-party software, to solve 200 problems. So, it's interesting, you've just said this: I just shared a piece on LinkedIn that I loved, "Don't pave the cow's path." It's a really thoughtful thing to say: before you build an AI solution, let's make sure we're solving the correct problem. And the author of that piece on Substack said: let's not use AI to figure out how to have more efficient meetings by capturing our minutes and transcribing them immediately; let's first assess how many of these meetings are absolutely necessary. What's the real job to be done, and why would you have 50% of your leadership team in meetings all day long and capture those in yet another form? Let's take a look first at the structure around the meetings and ask, are these necessary in 2023, and are these productive? So, my thought would be, as we're starting this: we're going to get other smart people who are well-read, who are studying, who are listening to experts that did it six months ahead of us, and really take a careful, contemplative look at this as a team before we dive in with both feet. There are absolutely tools that are going to be useful, but the idea is: how do we figure this out without having 200 members of my medical staff coming to me saying you've got to purchase all 200 of these products, and have a way to vet them scientifically, with the same rigor you would for a journal, before you commit that kind of outlay?

Dr. Shaalan Beg: Doug, thanks for coming on the podcast today and sharing your valuable insights with us on the ASCO Daily News Podcast. We'll be looking out for your journal, AI in Precision Oncology, early next year. Tell our listeners where they can learn more about your journal.

Dr. Douglas Flora: I really appreciate you guys having me. I love this topic; obviously, I'm excited about it. So, this journal will be ready for launch in early October as a preview, and then our premier issue will come out in January. We're about to invite manuscripts in mid-August. I guess parties that are interested right now should go to Doug Flora's LinkedIn page, because that's where I'm sharing most of this, and I'll put links in there that will lead you to Liebert's site and our formal page, and I think we can probably put it in the transcript here for interested parties.

Dr. Shaalan Beg: Wonderful. Thank you very much, and thank you to our listeners for your time today. Finally, if you value the insights that you hear on the podcast, please take a moment to rate, review, and subscribe wherever you get your podcasts.

Disclaimer: The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions.
Guests on this podcast express their own opinions, experiences, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement. Find out more about today's speakers: Dr. Shaalan Beg @ShaalanBeg Dr. Douglas Flora St. Elizabeth Healthcare Follow ASCO on social media: @ASCO on Twitter ASCO on Facebook ASCO on LinkedIn Disclosures: Dr. Shaalan Beg: Employment: Science 37 Consulting or Advisory Role: Ipsen, Array BioPharma, AstraZeneca/MedImmune, Cancer Commons, Legend Biotech, Foundation Medicine Research Funding (Inst.): Bristol-Myers Squibb, AstraZeneca/MedImmune, Merck Serono, Five Prime Therapeutics, MedImmune, Genentech, Immunesensor, Tolero Pharmaceuticals Dr. Douglas Flora: Honoraria: Flatiron Health
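The ambient-documentation pipeline Flora describes, where the visit is recorded, a speech model transcribes it, and a large language model drafts the note for the clinician to review, is simple to outline. A hedged sketch using OpenAI's Python SDK follows; the model names and the SOAP-note prompt are illustrative assumptions, not anything the speakers specified, and a real deployment would need a HIPAA-eligible endpoint before any patient audio touched it.

```python
# Hedged sketch of an ambient clinical-documentation pipeline: transcribe a
# recorded visit, then ask an LLM to draft a note for clinician review.
# Model choices and the prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Speech-to-text on the recorded clinic visit.
with open("clinic_visit.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Draft a structured note from the raw transcript.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Draft a SOAP note from this clinic-visit transcript. "
                    "Flag anything uncertain for clinician review."},
        {"role": "user", "content": transcript.text},
    ],
)
print(response.choices[0].message.content)  # a draft only; a clinician must verify it
```

As both speakers stress, the output is a draft: the hours saved only materialize safely if the clinician reviews, corrects, and signs the note.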
From pediatrician to Chief Medical Officer of Athenahealth, Dr. Nele Jessel is on a mission to leverage technology to restore the human touch to healthcare and alleviate the administrative burden for doctors. She shares her approach to transforming EMRs from physician foe to friend.

In this episode, you will be able to:
Discover the remarkable potential of AI in easing physicians' documentation workload.
Learn how virtual care is shaping a better work-life balance for medical professionals.
Uncover Athenahealth's essential role in supporting physician-driven startups.
Recognize the crucial impact of mid-career physicians on healthcare innovation.
Determine the importance of promoting responsible AI development within the healthcare sector.

My special guest is Dr. Nele Jessel. Meet Dr. Nele Jessel, a pediatrician who has made a career out of advancing healthcare through technology. Her expertise in implementing electronic health records, combined with her insights into the daily challenges physicians face, has led her to become Chief Medical Officer at Athenahealth. Dr. Jessel's focus on the intersection of medicine and technology drives her to explore the potential of tech in improving doctoring, building more efficient healthcare systems, and enhancing patient care. Get ready to be inspired by her unique perspective on the future of healthcare.

Dr. Jessel recommends the book Deep Medicine by Eric Topol to better understand how AI will influence medicine.

The key moments in this episode are: 00:00:00 - Introduction, 00:03:15 - Road to Informatics, 00:07:10 - EMRs and AI, 00:13:48 - Female Leadership in Healthcare Technology, 00:17:40 - Work-Life Balance, 00:15:53 - The Role of AI in Physician Documentation, 00:17:22 - Leveraging Generative AI for Triage Nurses, 00:19:36 - Legacy EHR Challenges, 00:22:04 - The Role of Clinical Informatics, 00:28:43 - The Value of Clinical Informatics, 00:31:41 - Importance of Keeping Mid-Career Physicians in Medicine, 00:35:26 - Athenahealth's Services for Start-up Companies and Virtual Practices, 00:39:10 - Optimism Around Advancements in Medicine and AI, 00:40:06 - Call to Action for Physicians to Get Involved in AI, 00:42:00 - Invitation to Connect with Dr. Jessel

Support the show. Connect with us: Twitter: https://twitter.com/RevitalizeWomen LinkedIn: https://www.linkedin.com/company/revitalize-womens-mastermind-group Website: https://www.peoplealwayshcc.com/revitalize
Link to the book: The AI Revolution in Medicine. Link to my review of the book. Link to the Sparks of Artificial General Intelligence preprint we discussed. Link to Peter's paper on GPT-4 in NEJM. Transcript (with a few highlights in bold of many parts that could be bolded!) Eric Topol (00:00): Hello, I'm Eric Topol, and I'm really delighted to have with me Peter Lee, who's the director of Microsoft Research and who is the author, along with a couple of colleagues, of an incredible book called The AI Revolution in Medicine: GPT-4 and Beyond. Welcome, Peter. Peter Lee (00:20): Hello, Eric. And thanks so much for having me on. This is a real honor to be here. Eric Topol (00:24): Well, I think you are in the enviable position of having spent now more than seven months looking at GPT-4's capability, particularly in the health and medicine space. And it was great that you recorded that in a book for everyone else to learn from, because you had such a nice head start. I guess what I wanted to start with is, I mean, it's a phenomenal book. I [holding the book up] can't resist this prop. Eric Topol (00:53): When I got it, I stayed up most of the night because I couldn't put it down. It is so engrossing. But when you first got your hands on this and started testing it, what were your initial thoughts? Peter Lee (01:09): Yeah. Let me first start by saying thank you for the nice words about the book, but really, so much of the credit goes to the co-authors, Carey Goldberg and Zach Kohane. Carey in particular took my overly academic writing, and I suspect you have the same kind of writing style, as well as Zach's pretty academic writing, and helped turn it into something that would be approachable to non-computer scientists and, as she put it, as much as possible a page turner. So I'm glad that her work helped make the book an easy read. Eric Topol (01:54): I want to just say you're very humble, because the first three chapters that you wrote yourself were clearly the best ones for me. Anyway, I don't mean to interrupt, but it is an exceptional book, really. Peter Lee (02:06): Oh, thank you very much. It means a lot hearing that from you. You know, my own view is that the best writing and the best analyses and the best ideas for applications, or not, of this type of technology in medicine are yet to come. But you're right that I did benefit from this seven-month head start, and so, you know, I think the timing is very good, but I'm hoping that much better books and much better writings and ideas will come. You know, when you start with something like this, I suspect, Eric, you had the same thing: you start off with a lot of skepticism. In fact, I sort of now make light of this; I talk about the nine stages of grief that you have to go through. (02:55): I was extremely skeptical. Of course, I was very aware of GPT-2, GPT-3 and GPT-3.5. I understand, you know, what goes into those models really deeply, and so some of the claims, when I was exposed to the early development of GPT-4, just seemed outlandish and impossible. So I was, you know, skeptical, somewhat quietly skeptical. We've all been around the block before and, you know, we've heard lots of AI claims. And I was in that state for maybe more than two weeks. And then, in those two weeks, I started to become annoyed, because I saw some of my colleagues falling into what I felt was the trap of getting fooled by this technology.
And then that turned into frustration and fear. I actually got angry, at one colleague who I won't name, and I've since had to apologize, because then I moved into the phase of amazement: you start to encounter things that you can't explain that this thing seems to be doing, and that turns into joy. (04:04): I remember the exhilaration of thinking, wow, I did not think I would live long enough to see a technology like this. And then intensity: there was a period of about three days when I didn't sleep, I was just experimenting. Then you run into some limits and some areas of puzzlement, and that's a phase of chagrin. And then real dangerous missteps and mistakes that this system can make, that you realize might end up really hurting people. And then, you know, ChatGPT gets released, and to our surprise it catches fire with people. And we learn directly through communications that some clinicians are using it in clinical settings, and that heightens the concern. I can't say I'm in the ninth stage of enlightenment yet, but you do become very committed to wanting to help the medical community get up to speed and to be in a position to take ownership of the question of whether, when, and how a technology like this should be used. And that was really the motivating force behind the book. It was really that journey. And that journey also has given me patience with everyone else in the world, because I realize everyone else in the world has to go through those same nine stages. Eric Topol (05:35): Well, those stages that you went through are actually a great way to articulate this pluripotent technology. I mean, I think you touched on that: ChatGPT was released November 30th and within 90 days had a billion distinct users, which is beyond anything in history. And then of course, this transcended that quite a bit, as you showed in the book coming out, you know, in just a very short time in March, right? And I think a lot of people want access to GPT-4 because they know that there is this jump in its capabilities. But the book starts off, after Sam Altman's foreword, which was also nice because he said, you know, this is just early; as you pointed out, there's a lot more to come in the large language model space. (06:30): But the grabber to me was this futuristic scenario of a second-year medical resident who's using an app on the phone to get to the latest GPT to help manage her patient, and then all the other things that it's doing to check on her patients and do all the tasks that clinicians don't really want to do, that they need help with. And that just grabs you as to the futuristic potential, which may not be so far away. And I think then you get into the nuts and bolts, but one of the things that I think is a misnomer that you really nailed is how you say it isn't just that it generates, but it really is great at editing and analyzing. And here it's called generative AI. Can you expound on that? And its unbelievable conversationalist capability. Peter Lee (07:23): Yeah. You know, the term generative AI, I tried for a while to push back on this, but I think it's just caught on and I've given up on that. And I get it. You know, I think especially with ChatGPT, it's of course reasonable for the public to be, you know, infatuated with a thing that can write love letters, write poetry, and that generative capability. And of course, you know, school children writing their essays and so on this way.
But as you say, one thing we have discovered through a lot of experimentation is it's actually somewhat of a marginal generator of text. That is, it is not as good a poet as good human poets. You know, people have programmed GPT-4 to try to write whole novels, and it can do that, (08:24): but they aren't great. And it's a challenge, you know; within Microsoft, our Nuance division has been integrating GPT-4 to help write clinical encounter notes, and you can tell it's hitting at the very limits of the capabilities and of the intelligence of GPT-4 to be able to do that well. But one area where it really excels is in evaluating or judging or reviewing things, and we've seen that over and over again. In chapter three, you know, I have this example of its analysis of some contemporary poetry, which is just stunning in its kind of insights and its use of metaphor and allegory. But then in other situations, in interactions with the New England Journal of Medicine, experimentations with the use of GPT-4 as an adjunct to the review process for papers, it is just incredibly insightful in spotting inconsistencies, missing citations to precursor studies, and understanding a lack of inclusivity and diversity, you know, in approach or in terminology. (09:49): And these sorts of review things end up being especially intriguing for me when we think about the whole problem of medical errors and the possibility of using GPT-4 to look over the work of doctors, of nurses, of insurance adjudicators and others, just as a second set of eyes to check for errors, to check for kind of missing possibilities. If there's a differential diagnosis, is there a possibility that's been missed? If there's a calculation for an IV medication administration, well, is the calculation done correctly or not? And it's in those types of applications of GPT-4 as a reviewer, as a second set of eyes, that I think I've been especially impressed. And we try to highlight that in the book. Eric Topol (10:43): Yeah. That's one of the very illuminating things about going well beyond the assumed utilities. In a little bit, we'll talk about the liabilities, but certainly these are functions, part of that pluripotent spectrum, that I think a lot of people are not aware of. One particularly of interest in the medical space is something I had not anticipated: as you know, when I wrote the Deep Medicine chapter “Deep Empathy,” I said, well, we've got to rely totally on humans for that. But here you had examples that were quite stunning of coaching physicians by going through their communication, their note, and saying, you know, you could have been more sensitive with this, you could have done this, you could be more empathic. And as you know, since the book was published, there was an interesting study that compared a couple hundred questions directed to physicians and then to ChatGPT, which of course we wouldn't say is state of the art at this point, right? But what was seen was that the chatbot exhibited more empathy and more sensitive, higher quality responses. So do you think, ultimately, that this will be a way we can actually use technology to foster better communication between clinicians and patients? Peter Lee (12:10): Well, I'll try to answer that, but then I want to turn the question to you, because I'm just dying to understand how others, especially leading thinkers like you, think about this.
Because as a human being and as a patient, there's something about this that doesn't quite sit right. You know, I want the empathy to come from my doctor, my human doctor; that's in my heart, the way that I feel. And yet there's just no getting around the fact that GPT-4, and even weaker versions like GPT-3.5 and ChatGPT, can be remarkably empathetic. And as you say, there was that study that came out of UC San Diego Medicine and Johns Hopkins Medicine that, you know, was just another fairly significant piece of evidence to that point. Here's another example. You know, my colleague Greg Moore was assisting a patient who had late-stage pancreatic cancer. (13:10): And there was a real struggle, for both the specialists and for Greg, to know what to say to this desperate patient, how to support this patient. And the thing that was remarkable: Greg decided to use GPT-4 to get advice, and they had a conversation, and there was very detailed advice to Greg on what to say and how to support this patient. And at the end, when Greg said thank you, GPT-4 said, you're welcome, Greg, but what about you? You know, do you have all the support that you need? This must be very difficult for you. So the empathy just goes remarkably deep. And, you know, if you just look at how busy good doctors and especially nurses are, you can start to realize that people don't necessarily have the time to think about that. (14:02): And also, what GPT-4 is suggesting ends up being a prompt to the human doctor or the human nurse to actually take the time to reflect on what the patient might need to hear, right? What might be going through their minds. And so there is some empathy aid going on here. At the same time, I think as a society, we have to understand how comfortable we are with this concept of empathetic care being assisted by a machine. And this is something that I'm very keen and curious about, just in the medical community. And that's why I wanted to turn the question back around to you: how do you see this? Eric Topol (14:46): Yeah, I didn't foresee this, and I also recognize that we're talking about a machine vector of it. I mean, it's a pseudo-empathy of sorts. But the fact that it can process where communication can be improved, and that it can help foster empathy, these are features that I think are extraordinary. I wouldn't have predicted that. And I've seen now, you know, many good examples in the book and even beyond. So it's a welcome thing, and it adds another capability. It isn't that physicians and nurses are lacking empathy; their biggest issue, I think, is lacking time. Yes. And the fact that someday there's a rescue in the works, hopefully, that a lot of that time spent on tasks that are, you know, the data clerk functions and other burdens, will be alleviated; the keyboard liberation that has been a fantasy of mine for some years maybe ultimately will be achieved. (15:52): And the other thing I think that's really special in the book that I wanted to comment on: there is a chapter by, I think, Carey Goldberg, and that was about the patient side, right? All the talk is about, you know, doctors and clinicians, but it's the patients who could derive the most. And out of those first billion people that used ChatGPT, many were of course having health and medical question conversations. But these are patients; we're all patients.
And the idea that you could have a personal health advisor, a concept which was developed in that chapter, and the whole idea that, as opposed to a search today, you could get citations, and it would be at the literacy level of the person making the prompts. Could you comment about that? Because that seems to be very much underemphasized, this democratization of a high-level capability of getting, you know, very useful information and conversation. Peter Lee (16:56): Yeah. And I think also this is where some of the most difficult societal and regulatory questions might come, because while the medical community knows how to abide by regulations, and there is a regulatory framework, the same is much less true for a doctor in your pocket, which is what GPT-4 and, you know, other large language models that are emerging can become. And you know, I think for me personally, I have come to depend on GPT-4. I use it through the Bing search engine. Sometimes it's simple things that previously were mysterious. Like, I received an explanation of benefits notice from my insurance company, and this notice has some dollar figures in it, it has some CPT codes, and I have no idea what they mean. And sometimes it's things that my son or my wife got treated for. (17:55): It's just mysterious. It's great to have an AI that can decode these things and can answer questions. Similarly, when I go for a medical checkup and I get my blood test results, just decoding those CBC lab test numbers is, again, something that is just an incredible convenience. But then even more, you know, my father recently passed away. He was 90 years old, but he was very ill for the last year or so of his life, seeing various specialists. My two sisters and I all lived far away from him, and so we were struggling to take care of him and to understand his medical care. It's a situation that I found all too common in our world right now, and it actually creates stress and frays relationships amongst siblings and so on. (18:56): And so just having an AI that can take all of the data from the three different specialists and, you know, have it all summed up, and be able to answer questions, be able to summarize and communicate efficiently from one specialist to the next, to really provide kind of some sound advice, ends up being a godsend. Not so much for my father's health, because he was on a trajectory that was really not going to be changed, but just for the peace of mind and the relationships between me and my two sisters and my mother-in-law. And so it's that kind of empowerment. You know, in corporate speak at Microsoft, we would say that's empowerment of a consumer, but it is truly empowerment. I mean, it's for real. And, you know, that kind of use of these technologies, I think, is spreading very, very rapidly and is incredibly empowering. (19:57): Now, the big question is, can the medical community really harness that kind of empowered patient? I think there's a desire to do that; that's always been one of the big dreams, I think, in medicine today. And then the other question is, these assistants are fallible. They make mistakes. And so, you know, what is the regulatory or legal or, you know, ethical disposition of that? So these are still big questions I think we have to answer. But the, you know, overall big picture is that there's an incredible potential to empower patients with a new tool and also to kind of democratize access to really expert medical information.
And I just think you're absolutely right. It doesn't get enough attention; even in our book, we only devoted one chapter to this, right? Eric Topol (21:00): Right. But at least it was in there; that's good. At least you had it, because I think it's so critical to figure that out. And as you say, the ability to discriminate bad information, confabulation, hallucination, among people without medical training is much more challenging. Yes. But I also liked in the book how you could go back to another conversation to audit the first one, or a third one, so that if you ever are suspicious that you might not be getting the best information, you could do something like double data entry or triple data entry, you know. I thought that was really interesting. Now, Microsoft made a humongous investment in OpenAI. Yesterday Sam Altman was getting grilled, well, not really grilled, it was in a much more friendly sense, I'm sure, about what should we do. We have this two-edged sword the likes of which we've never seen. (21:59): Of course, you get into in the book: does it really matter if it's AGI or some advanced intelligence, if it's working well? It's kind of like the explainability, black-box story. But of course, it can get off the tracks. We know that. And there isn't that much difference perhaps between ChatGPT and GPT-4 established so far. So in that discussion, he said, well, we've got to have regulatory oversight and licensing, and it's very complex. I mean, what are your thoughts as to how to deal with the potential limitations that are still there, that may be difficult to eradicate, that are the worries? Peter Lee (22:43): Right. You know, at least when it comes to medicine and healthcare, I personally can't imagine that this should not be regulated. And it just seems also more approachable to think about regulation, because the whole practice of medicine has grown up in this regulated space. If there's any part of life and of our society that knows how to deal with regulation, and can actually make regulations work, it is medicine. Now, having said that, I do understand, coming from Microsoft, and even more so for Sam Altman coming from OpenAI, it can sometimes be interpreted as being self-serving: you're wanting to set up regulatory barriers against others. I would say in Sam Altman's defense that back in 2019, just prior to the release of GPT-2, Sam Altman made public calls for thinking about regulation, for the need for external audit, and, you know, for the world to prepare for the possibility of AI technologies that would be approaching AGI. (24:05): And in fact, just a month before the release of GPT-4, he made a very public call, at even greater length, asking for the world to do the same things. And so I think one thing that's misunderstood about Sam is that he's been saying the same thing for years. It isn't new. And so I think that should give pause to people who are suspicious of Sam's motives in calling for regulation, because he basically has not changed his tune, at least going back to 2019.
But if we just put that aside, you know, what I hope for most of all is that the medical community, and I really look at leading thinkers like you, particularly in our best medical research institutions, would quickly move to take assertive ownership of the fundamental questions of whether, when, and how a technology like this should be used; would engage in the research to create the foundations for, you know, sensible regulations, with an understanding that this isn't about GPT-4; this is about the next three or four or five even more powerful models. (25:31): And so, you know, ideally, I think it's going to take some real research, some real inventiveness. What we explain in chapter nine of the book is that I don't believe we have a workable regulatory framework right now, and that we need to develop it. But the foundations for that, I think, have to be a product of research, and ideally research from our best thinkers in the medical research field. I think the race that we have in front of us is that regulators will rightfully feel very bad if large numbers of people start to get injured, or worse, because of the lack of regulation. And so, you know, you can't blame them for wanting to intervene if that starts to happen. So we do have kind of an urgency here, whereas normally our medical research on, say, methods for clinical validation of large language models might take, you know, several years to really come to fruition. So there's a problem there. But I think the medical field can very quickly come up with codes of conduct, guidelines, and expectations, and the education, so that people can start to understand the technology as well as possible. Eric Topol (26:58): Yeah. And I think the tricky part here is that, as you know, there are a lot of doomsayers and existential threats that have been laid out by people who I respect, and I know you do as well, like Geoffrey Hinton, who is concerned. But, you know, let's say you have a multimodal AI like GPT-4, and you want to put in your skin rash or skin lesion to it. I mean, how can you regulate everything? And, you know, if you just go to Bing and you go to creative mode, you're going to get all kinds of responses. So this is a new animal, this is a new alien; the question is, as you say, we don't have a framework, and we should move to get one. To me, the biggest question is one that you really got to in the book, and I know you continue to: within two days of your book's publishing, the famous preprint came out, the Sparks preprint from your team at Microsoft Research, which is incredible. (27:54): A 169-page preprint, downloaded I don't know how many millions of times already, but that is a rich preprint; we'll put in the link, of course. But there, the question is, what are we seeing here? Is this really just a stochastic parrot, a JPEG with, you know, lossy stuff and juxtaposition of word linguistics, or is this a form of intelligence that we haven't seen from machines ever before? And you get at that in so many ways, and you point out: does it matter? I wonder if you could just expound on this, because to me, this really is the fundamental question. Peter Lee (28:42): Yeah. I think I get into that in the book in chapter three, and I think chapter three is my expression of frustration on this, because it's just a machine, right?
And in that sense, yes, it is just a stochastic parrot. You know, it's a big probabilistic machine that's making guesses on the next word that it should spit out, or that you will spit out, and it's making a projection for a whole conversation. And, you know, the first example I use in chapter three is the analysis of this poem. And the poem talks about being splashed with cold water and feeling fever. And the machine hasn't felt any of those things. And so when it's opining about those lines in the poem, it can't possibly be authentic. And so, you know, we can't say it understands these things. (29:39): It hasn't experienced these things. But the frustration I have, as a scientist, and here's where I have to be very disciplined to be a scientist, is the inability to prove that. Now, there has been some very, very good research by researchers who I really respect and admire. I mean, there was Josh Tenenbaum's whole team and his colleagues at MIT, and at Harvard, the University of Washington, and the Allen Institute, and many, many others who have done some really remarkable research, research that's directly relevant to this question of: does the large language model, quote unquote, understand what it's hearing and what it's saying? Oftentimes they provide tests that are grounded in the foundational theories about why these things can't possibly be understanding what they're saying, and therefore these tests are designed to expose these shortcomings in large language models. But what's been frustrating, but also kind of amazing, is that GPT-4 tends to pass most, if not all, of these tests! (31:01): And so, if we're really honest as scientists, even if we know this thing, you know, is not sentient, it leaves us in this place where we're without definitive proof of that. And the arguments from some of the naysayers, who I also deeply respect, and I've really read so much of their work, don't strike me as convincing proof either. You know, because if you say, well, here's a problem that I can use to cause GPT-4 to get tripped up, I have no shortage of problems; I think I could get you tripped up, Eric. And yet that does not prove that you are not intelligent. And so I think we're left with this kind of set of two mysteries. One is, we see GPT-4 doing things that we can't explain given our current understanding of how a neural transformer operates. (32:09): And then secondly, we're lacking a test that's derived from theory and reason that consistently shows a limitation of GPT-4's understanding abilities. And so in my heart, of course, I understand these things as machines, and I actively resist anthropomorphizing these machines. But, maybe I'm fooling myself, as a disciplined scientist I'm trying to stay grounded in proof and evidence, and right at the moment, I don't believe the world has that. We'll get there. We're understanding more and more every day, but at the moment we don't have it. Eric Topol (32:55): I think hopefully everyone who's listening is getting some experience now with these large language models and realizing how much fun it is and how we're in a new era in our lives. This is a turning point. Peter Lee (33:13): Yeah. That's stage four, of amazement and joy. Eric Topol (33:16): Yeah. No, there's no question.
And you know, I think about you, Peter, because, you know, at one point you were in a high-level academic post at Carnegie Mellon, one of our leading computer science institutions in the country, in the world, and now you're at this enviable spot of having helped Microsoft to get engaged with a risk, I mean a big, big bet, and one that's fascinating, and that is obviously just an iteration for many things to come. So I wonder if you could just give us your sense about where you think we'll be headed over the next few years, because of the velocity at which this is moving. Not only is this new technology so different than anything previously, but it took, you know, just a few months to get to where things are now, and we know this road is still a long ways in front of us. What's your sense of, you know, are we going to get hallucinations under control? Are we going to start to see this pluripotency roll out, particularly in the health and medicine arena? Peter Lee (34:35): Yeah. You know, I think first off, I can't say enough good things about the team at OpenAI. I think their dedication and their focus, and, I think it'll come out eventually, the care that they've taken in understanding the potential risks and really trying to create a model for how to cope with those things. I think as those stories come out, it'll be quite impressive. At the same time, it's also incredibly disruptive. Even for us as researchers, it just disrupts everything, right? You know, I was having this interaction after I read Siddhartha Mukherjee's new book, The Song of the Cell, because in that book on cellular biology, one of the prime characters historically is Rudolf Virchow, who confirmed cell mitosis. And, you know, the thing that was disruptive about Virchow is that, well, first off, the whole preceding theory of the cell was debunked. (35:44): That didn't invalidate the scientists who had been working on it, but it certainly debunked many of their scientific legacies. And the other thing is, after Virchow, to call yourself a biology researcher, you had to have a microscope and you had to know how to use it. And in a way, there's a similar scientific disruption here, where there are now new tools and new computing infrastructure that you need if you want to call yourself a computer science researcher. And that's really incredibly disruptive. So I see a kind of bifurcation that I think is likely to happen. I think the team at OpenAI, with Microsoft's support and collaboration, will continue to push the boundaries and the frontiers with the idea of seeing how close to AGI can truly be achieved, largely through scale. And, you know, there will be tremendous focus of attention on improving its abilities in mathematics and in planning, and being able to use tools, and so on. And in that, there's a strong suspicion and belief that as greater and greater levels of general cognitive intelligence are achieved, issues around things like hallucination will become much more manageable, or at least manageable to the same extent that they're manageable in human beings. (37:25): But then I think there's going to be an explosion of activity in much smaller, more specialized models as well.
I think there's going to be a gigantic explosion in, say, open-source smaller models, and those models probably will not be as steerable and alignable, so they might have more uncontrollable hallucination and might go off the rails more easily. But for the right applications, integrated into the right settings, that might not matter. And so exactly how these models will get used, and also what dangers they might pose, what negative consequences they might bring, is hard to predict. But I do think we're going to see those two different flavors of these large AI systems coming really very, very quickly, in much less than a year. Eric Topol (38:23): Well, that's an interesting perspective, an important one. In the book you wrote this sentence that I thought was particularly notable: “the neural network here is so large that only a handful of organizations have enough computing power to train it.” We're talking about 20 or 30,000 GPUs, something like that. We're lucky to have two here, or four. This is something that, I think again, if you were sitting at Carnegie Mellon right now versus sitting at Microsoft or some of the tech titan companies that have these capabilities, can you comment about this? Because this sets up a very, you know, distinct situation we've not seen before. Peter Lee (39:08): Right. First off, you know, I can't really comment on the size of the compute infrastructure for training these things, but it is, as we wrote in the book, at a size that very, very few organizations can afford at this point. This has got to change at some point in the future. And even on the inference side, forgetting about training, you know, GPT-4 is much more power-hungry than the human brain. The human brain is an existence proof that there must be much more efficient architectures for accomplishing the same tasks. So I think there's really a lot yet to discover and a lot of headroom for improvement. But, you know, what I think is ultimately the kind of challenge that I see here is that a technology like this could become as essential an infrastructure of life as the mobile phone in your pocket. Peter Lee (40:18): And so then the question is, how quickly can the cost of this technology, if it should also become as necessary to modern life as the technology in your pocket, get to a point where that can reasonably be accomplished, right? If we don't accomplish that, then we risk creating new digital divides that would be extremely destructive to society. And what we want to do here is to really empower everybody, if it does turn out that this technology becomes as empowering as we think it could be. Eric Topol (41:04): Right. I think your point about efficiency, the drain on electricity, and no less water for cooling. I mean, these are big-ticket things, and, you know, hopefully emulating the human brain in its less power-hungry state will become part of the future as well. Peter Lee (41:24): Well, and hopefully these technologies will solve problems like, you know, clean energy, right? Fusion containment, better, lower-energy production of fertilizers, better nanoparticles for more efficient lubricants, new catalysts for carbon capture. If you think about it in terms of making a bet to kind of invent our way out of climate disaster, this is one of the tools that you would consider betting on. Eric Topol (42:01): Oh, absolutely.
You know, I'm going to be talking soon with Al Gore about that, and I know he's quite enthusiastic about the potential. This is engrossing, having this conversation, and I would like to talk to you for many hours, but I know you have to go. But I just want to say, as I wrote in my review of the book, talking with you is very different than talking with, you know, somebody with bravado. You have great humility and you're so balanced, so that when I hear something from you or read something that you've written, it's a very different perspective, because I don't know anybody who's more balanced, who is more trying to say it like it is. Not everybody listening knows you, though a lot of people do; I just wanted to add that. And I just want to say thank you for taking the effort, not just that you obviously wanted to experiment with GPT-4, but that you also, I think, put this together in a great package so others can learn from it and, of course, expand from that as we move ahead in this new era. (43:06): So, Peter, thank you. It's really a privilege to have this conversation. Peter Lee (43:11): Oh, thank you, Eric. You're really too kind. But it means a lot to me to hear that from you. So thank you. Thanks for listening and/or reading Ground Truths. If you found it as interesting a conversation as I did, please share it. Much appreciation to paid subscribers; you've already helped fund many high school and college students in our summer intern program at Scripps Research, and all proceeds from Ground Truths go to Scripps Research. Get full access to Ground Truths at erictopol.substack.com/subscribe
Dr. Rupa Marya illuminates the hidden connections between our biological systems and the profound injustices of our political and economic systems. What is deep medicine? How can re-establishing our relationships with the Earth and one another help us to heal? The first part of the episode is taken from a live SAND Community Conversation hosted by SAND Co-founders Zaya and Maurizio Benazzo. The book Inflamed: Deep Medicine and the Anatomy of Injustice by Rupa Marya and Raj Patel is available now. In the second part of this episode, Rupa is part of a panel hosted by Dr. Gabor Maté as part of The Wisdom of Trauma film launch 'Talks on Trauma' series. This panel discussion is called: “How Trauma Literacy Can Transform Medicine” with MDs: Pamela Wible, Will Van Derveer, Jeffrey Rediger, Dr. Gabor Maté, and Rupa Marya. You can listen to this entire panel and 32 other talks as part of The Wisdom of Trauma All Access Pass. Dr. Rupa Marya is a physician, activist, writer, mother, and a composer. She is a Professor of Medicine at the University of California, San Francisco, where she practices and teaches internal medicine. Her work sits at the nexus of climate, health and racial justice. Dr. Marya founded and directs the Deep Medicine Circle, a women of color-led organization committed to healing the wounds of colonialism through food, medicine, story, restoration and learning. She is also a co-founder of the Do No Harm Coalition, a collective of health workers committed to addressing disease through structural change. Dr. Marya was recognized in 2021 with the Women Leaders in Medicine Award by the American Medical Student Association. She was a reviewer of the American Medical Association's Organizational Strategic Plan to Embed Racial Justice and Advance Health Equity. Because of her work in health equity, Dr. Marya was appointed by Governor Newsom to the Healthy California for All Commission, to advance a model for universal healthcare in California. She has toured twenty-nine countries with her band, Rupa and the April Fishes, whose music was described by the legend Gil Scott-Heron as “Liberation Music.” Together with Raj Patel, she co-authored the international bestselling book Inflamed: Deep Medicine and the Anatomy of Injustice. Topics: 01:00:00 – Introduction 01:03:16 – Part 1, SAND Community Conversation 01:04:28 – Rupa's Personal Story and Childhood 01:07:58 – Patterns in Traditional vs. Western Medicine and the Writing of ‘Inflamed' 01:11:10 – Influence of Collective and Individual Trauma on Health 01:12:49 – Colonial Power Structures in Medicine 01:15:39 – Climate Collapse and Global Health 01:17:27 – Indigenous Wisdom of the Interconnected Web of Life 01:21:11 – How Do We Heal in a Balanced Way? 01:31:33 – Part 2, How Trauma Literacy Can Transform Medicine with Gabor Maté 01:35:59 – Pamela Wible Introduction 01:38:37 – Jeffrey Rediger Introduction 01:41:55 – Will Van Derveer Introduction 01:46:35 – Rupa Marya Introduction 01:51:15 – Jeffrey Rediger Introduction 01:54:17 – Overcoming Incurable Diseases 02:03:45 – The Science of How Society Gets Into Our Cells 02:36:39 – Conclusions
Progress in AI is accelerating, and the potential in healthcare and precision medicine is enormous. In 2019, we had the pleasure of speaking with Dr Eric Topol, author of ‘The Patient Will See You Now' and ‘Deep Medicine'. Eric has had an incredible career, largely focused on researching cardiovascular disease and heart attacks, which he worked on at the Cleveland Clinic and the Scripps Institute. Now, we're reposting the interview as the conversation is more relevant than ever. Join Patrick and Eric as they discuss wireless medicine and the role of artificial intelligence and machine learning in medicine and healthcare.
We talk to Raj Patel and Rupa Marya about their book "Inflamed: Deep Medicine and the Anatomy of Injustice."
We shared this episode with doctor and musician Rupa Marya. While society requires us to pick just one path for our professional life, we need to give ourselves permission to choose many paths and find a way to make them our own. Music and Medicine are not separate; arguably, their separation might be one of the reasons we live in a sick society and on a sick planet. Deep Medicine is an acknowledgment that health is really a phenomenon that emerges out of systems harmonizing well together, so it specifically requires an analysis of power and an understanding of how structures are set in place that predispose certain groups to poor health. The “social determinants of health” do help in showing these relationships, but they lack a deeper level of analysis, exposition and even activism. We also spoke about Death and Grieving as portals for regeneration; about the Exposome and the way that collective stories are part of it; and about the richness of ancestral knowledge and how to make space for it to co-evolve with our modern western cosmologies. Rupa's projects can be found on her website and in her book Inflamed. Dr. Rupa Marya is a physician, activist, writer, mother and composer. She is an Associate Professor of Medicine at the University of California and a co-founder of the Do No Harm Coalition. Her work sits at the nexus of climate, health and racial justice. She is the co-author of the book Inflamed: Deep Medicine and the Anatomy of Injustice.
From the use of the data captured by wearable devices to the relationship between doctors and patients in an AI world, in our first episode host Bruno Giussani explores visions of future health. Jane Metcalfe, founder of Neo.Life (and, three decades ago, co-founder of Wired magazine), elaborates on the coming neo-biological revolution and the human immunome; Soumya Swaminathan, chief scientist of the World Health Organisation and head of its science division, reflects on which innovations will have the biggest impacts on global health; and Eric Topol, founder and director of the Scripps Research Translational Institute and author of “Deep Medicine”, explains how artificial intelligence can make healthcare human again. Guests: Jane Metcalfe, Soumya Swaminathan, Eric Topol Host: Bruno Giussani Production CERN, Geneva: Claudia Marcelloni, Lila Mabiala, Sofia Hurst Whistledown Productions, London: Will Yates and Sandra Kanthal Copyright: CERN, 2022
Rupa Marya is a physician, activist, artist and writer. She is an Associate Professor of Medicine at the University of California, San Francisco, founder and director of the Deep Medicine Circle, and co-author of Inflamed: Deep Medicine and The Anatomy of Injustice. She sits down with Dylan Heuer to discuss the connections she sees between colonization and contemporary afflictions like the disproportionate harm caused by Covid-19. She draws on Indigenous knowledge to advocate for a more holistic approach to wellbeing that includes treating farmers as stewards of our health and involving doctors in social justice organizing. HRN is back "On Tour" thanks, in part, to the generous support of the Julia Child Foundation. HRN On Tour is powered by Simplecast.
We talk to Raj Patel and Rupa Marya about their new book "Inflamed: Deep Medicine and the Anatomy of Injustice."
Is colonial capitalism making us sick? From the vast outdoors to the depths of our guts, centuries of colonialism have reordered life on the planet. In its wake have come the dubious efficiencies of big agribusiness and the uneven advances of modern medicine. And our bodies have responded to this colonial condition, in many cases, […]
According to renowned political economist Raj Patel and physician and activist Rupa Marya, our bodies, our societies, and our planet are inflamed. In their recent book, Inflamed, Raj and Rupa reveal the links between health and structural injustices—and offer a new deep medicine that can heal our bodies and our world. In this episode, Raj and Rupa are joined in a rich, unique conversation with CIIS professor Charlotte María Sáenz as they illuminate the hidden relationships between our biological systems and the profound injustices of our political and economic systems. This episode was recorded during a live online event on March 8th, 2022. A transcript is available at ciispod.com. We hope that each episode of our podcast provides opportunities for growth, and that our listeners will use them as a starting point for further introspection. Many of the topics discussed on our podcast have the potential to bring up feelings and emotional responses. If you or someone you know is in need of mental health care and support, here are some resources to find immediate help and future healing: -Visit 988lifeline.org or text, call, or chat with The National Suicide Prevention Lifeline by dialing 988 from anywhere in the U.S. to be connected immediately with a trained counselor. Please note that 988 staff are required to take all action necessary to secure the safety of a caller and initiate emergency response with or without the caller's consent if they are unwilling or unable to take action on their own behalf. -Visit thrivelifeline.org or text “THRIVE” to begin a conversation with a THRIVE Lifeline crisis responder 24/7/365, from anywhere: +1.313.662.8209. This confidential text line is available for individuals 18+ and is staffed by people in STEMM with marginalized identities. -Visit translifeline.org or call (877) 565-8860 in the U.S. or (877) 330-6366 in Canada to learn more and contact Trans Lifeline, who provides trans peer support divested from police. -Visit ciis.edu/counseling-and-acupuncture-clinics to learn more and schedule counseling sessions at one of our centers. -Find information about additional global helplines at https://www.befrienders.org/.
Dr. Rupa Marya and Dr. Raj Patel join the Agents of Change in Environmental Justice podcast to talk about how practitioners of modern medicine and public health are trained to be technicians rather than healers.
With Ravana defeated, Arya goes to visit the Medicine Woman one last time. He realizes it is Ma as her True Self. As they talk, many things become clear. The reason for his journey to the backwaters wasn't to find Deep Medicine but to see his Ma, not as a helpless woman on life support, but here guiding him. The Deep Medicine he sought was in the Ramayana the whole time. Just as Arya was key to the epic's survival, the epic was also key to his survival. Its stories were real and powerful. With the power of the Ramayana in his hands, he could finally say goodbye to Ma. IN THIS EPISODE: [00:01] Previously on Shadow Realm [00:30] Arya goes to visit the Medicine Woman [02:17] Arya sees his Ma as her True Self [03:06] Arya and his Ma talk about the power of the Ramayana [04:25] Arya receives a blessing from his Ma and says goodbye [07:02] How to support and connect with The Arya Chronicles [07:17] Next episode teaser SYNOPSIS: Arya travels to the backwaters once more to visit the Medicine Woman. A misty halo forms around her head, and it dawns on Arya that she was the Moonlight who lit his path through the forest the night before. Arya realizes that the Medicine Woman is his Ma, in her true self, not strapped to life support, but here guiding him. The medicine she had told him to search for was never in the Wild Woods; it was with Arya the whole time, in the Ramayana. Ma gives him a rock that will allow him to travel back home to San Francisco. On the next episode: Arya prepares to go back home to San Francisco. Links Mentioned: thearyachronicles.com CREDITS: Shadow Realm was written and executive produced by Reenita Hora. Based on her middle-grade novel When Arya Fell Through the Fault, this story constitutes ‘Part 1 of The Arya Chronicles.' An original soundtrack was composed for this story by Kanniks Kannikeswaran (http://www.kanniks.com/). Audio adaptation by Shane Sakhrani. Audio Direction and Production by Daniel Gonzalez and Gabe Mara. The narrative for the trailer was voiced by Carnie Wilson (https://en.wikipedia.org/wiki/Carnie_Wilson) The podcast features Arya Vir Hora as ‘Arya' (https://www.imdb.com/name/nm7845578/) (https://www.backstage.com/u/aryavirhora/) Vishesh Chachra as Rishiji, Pa, Nicolas, Ranga, and Surayu (https://www.imdb.com/name/nm5189387/) Reenita Hora as Ma, Medicine Woman - (reenita.com) Danish Farooqui as Chimpu, Ravana, and Elias (https://www.imdb.com/name/nm7211550/) Laura Smith as Balchorni, Athena, the Doctor, and Jackson's mom (https://www.imdb.com/name/nm2269470/) Kal Monsoor as Buddha, Head of School, and Hanumana (https://www.imdb.com/name/nm0543964/) Avinash Muddappa as Pandu, Gopu, and Raja (https://www.imdb.com/name/nm8765273/) Asha Noel as Female Vanara, Nurse, and Adriana (https://www.imdb.com/name/nm6340920/) Recorded at Studio City Sounds and The Room at Melrose. Chapter 13 uses “Lake Waves Hjalmaren” by Owl Chapter 17 uses “New Holland Honey Eaters 32x Slower” by digifishmusic Chapter 18 uses “New Holland Honey Eaters 32x Slower” by digifishmusic Episodes are available on Spotify, Apple Podcasts or wherever you get your podcasts. Music is available on Spotify. If you are already a fan of the show, tap the share button in your podcast player and post this trailer on Facebook or Twitter. Or text it directly to someone you know who'd love to journey through the Arya Chronicles. For more information visit thearyachronicles.com.
Neuroradiologist and AI champion Dr. Suzanne Bash and host Dr. Eric Gantwerker discuss the present and future applications of artificial intelligence (AI) in medical imaging. --- EARN CME Reflect on how this Podcast applies to your day-to-day and earn AMA PRA Category 1 CMEs: https://earnc.me/Cblsja --- SHOW NOTES In this episode, neuroradiologist and AI champion Dr. Suzanne Bash and host Dr. Eric Gantwerker discuss the present and future applications of artificial intelligence (AI) in medical imaging. Dr. Bash defines AI as a way to utilize computers to enhance human thinking. The ultimate goal is to use this technology to achieve better outcomes for clinical efficiency and quality of patient care. While AI technology has played a role in radiology for the past 15 years, its use has exploded in recent years. Dr. Bash describes her current role at RadNet, a large US-based outpatient imaging enterprise. She is interested in conducting clinical validation trials and evaluating product fit with her company's imaging facilities. Additionally, she serves as a clinical advisor to multiple AI companies. We cover a variety of AI applications in medical imaging, including triage, stroke detection, and cancer screenings. Dr. Bash gives us examples of companies and products that are at the forefront of each mission. She encourages all AI companies to stay in touch with clinicians to determine the clinical applicability of their products. One major factor to consider is how a product can be integrated with a radiologist's workflow. Successful products will save the radiologist time, while adding value to their clinical decision-making. We also cover the challenges that the AI industry faces, such as FDA regulation and the question of legal liability when technology makes mistakes. Finally, Dr. Bash gives advice about entering the AI space, as well as the case for accepting and adapting to AI technology. She tells clinicians to find opportunities to learn from colleagues and experts and envision the types of products that will be useful in their particular clinical setting. --- RESOURCES RadNet: https://www.radnet.com/ Viz.ai: https://www.viz.ai/ Deep Medicine: https://www.amazon.com/Deep-Medicine-Artificial-Intelligence-Healthcare/dp/1541644638 Radiological Society of North America (RSNA): https://www.rsna.org/
In this special episode, Dr HPM discusses Eric Topol's book "Deep Medicine: How AI Can Make Healthcare Human Again". Dr HPM feels the healthcare system needs a serious reboot, and AI is just the thing to press the restart button. Deep Medicine ultimately shows us how we can leverage AI for better care at lower costs, with more empathy, to the benefit of patients and physicians alike.
Upon reaching the edge of the forest, Arya and his friends find themselves surrounded by a series of waterfalls. The sun glistens off the water and they see a woman standing near the edge of the stream. It's the Medicine Woman. Arya is drawn magnetically towards her as if they have met before, though he can't seem to place where or how. Arya is convinced she can read his mind because she knows so much about his life. Instead of giving Arya the Deep Medicine they need, the mysterious woman gives him another riddle. Racing against time, Arya and his friends leave in search of the medicine that will help their friend. IN THIS EPISODE: [00:06] Previously on Shadow Realm [01:22] Arya and his companions arrive at the edge of the forest and find the Medicine Woman [05:02] Arya and the Medicine Woman talk as if they have met before [11:18] The Medicine Woman tells Arya his next step [11:29] How to support and connect with The Arya Chronicles [12:51] Next episode teaser SYNOPSIS: Arya and his companions make it to the edge of the dark forest, where the waterfalls are. Arya meets the Medicine Woman and immediately feels drawn to her; he's convinced they have met before. As they talk, it seems the Medicine Woman can read Arya's thoughts and knows many things about him and life in San Francisco. Before they depart, the Medicine Woman tells Arya his next step in the form of a riddle: find a plant with no medicinal value. In the next episode: Arya continues his journey to help Pandu but runs into some trouble. Links Mentioned: thearyachronicles.com CREDITS: Shadow Realm was written and executive produced by Reenita Hora. Based on her middle-grade novel When Arya Fell Through the Fault, this story constitutes ‘Part 1 of The Arya Chronicles.' An original soundtrack was composed for this story by Kanniks Kannikeswaran (http://www.kanniks.com/). Audio adaptation by Shane Sakhrani. Audio Direction and Production by Daniel Gonzalez and Gabe Mara. The narrative for the trailer was voiced by Carnie Wilson (https://en.wikipedia.org/wiki/Carnie_Wilson) The podcast features Arya Vir Hora as ‘Arya' (https://www.imdb.com/name/nm7845578/) (https://www.backstage.com/u/aryavirhora/) Vishesh Chachra as Rishiji, Pa, Nicolas, Ranga, and Surayu (https://www.imdb.com/name/nm5189387/) Reenita Hora as Ma, Medicine Woman - (reenita.com) Danish Farooqui as Chimpu, Ravana, and Elias (https://www.imdb.com/name/nm7211550/) Laura Smith as Balchorni, Athena, the Doctor, and Jackson's mom (https://www.imdb.com/name/nm2269470/) Kal Monsoor as Buddha, Head of School, and Hanumana (https://www.imdb.com/name/nm0543964/) Avinash Muddappa as Pandu, Gopu, and Raja (https://www.imdb.com/name/nm8765273/) Asha Noel as Female Vanara, Nurse, and Adriana (https://www.imdb.com/name/nm6340920/) Recorded at Studio City Sounds and The Room at Melrose. Chapter 13 uses “Lake Waves Hjalmaren” by Owl Chapter 17 uses “New Holland Honey Eaters 32x Slower” by digifishmusic Chapter 18 uses “New Holland Honey Eaters 32x Slower” by digifishmusic Episodes are available on Spotify, Apple Podcasts or wherever you get your podcasts. Music is available on Spotify. If you are already a fan of the show, tap the share button in your podcast player and post this trailer on Facebook or Twitter. Or text it directly to someone you know who'd love to journey through the Arya Chronicles. For more information visit thearyachronicles.com.
The aftermath of Arya's recklessness leaves the vanaras homeless and Pandu badly burned. His wounds need Deep Medicine that is made by a mysterious Medicine Woman who lives further south, on the outskirts of the Dark Side of the forest. Her exact location is unknown, and the journey there is dangerous. Desperate to help, Arya looks to the Ramayana to try to find the way. Using his imagination, he is able to interpret the story of Hanumana's journey and find a path. Arya and his companions set out on a dangerous journey to find the Medicine Woman and bring back the Deep Medicine for Pandu. IN THIS EPISODE: [00:04] Previously on Shadow Realm [00:20] The vanaras are upset with Arya for his reckless use of the asthra [01:06] Pandu is badly burned and the vanaras lack the medicine to heal him [02:01] Arya learns of the medicine woman of the backwaters but no one knows how to find her [03:30] Arya looks to the Ramayana to help find the way to the medicine woman [04:24] Arya uses his imagination and is able to find the path to the medicine woman [05:43] How to support and connect with The Arya Chronicles [06:16] Next episode teaser SYNOPSIS: Some of the vanaras are upset with Arya for having the asthra and using it recklessly, but others trust that Rishiji has a plan for him. Pandu is burned very badly, and the vanaras lack the medicine that he needs. Arya takes it upon himself to find the Deep Medicine made by the medicine woman of the backwaters. No one knows how to find the Medicine Woman, so Arya looks to the Ramayana for information. Using his imagination, Arya interprets the story of Hanumana's journey and finds the path to the Dark Side. In the next episode: Arya and his companions finally find the mysterious Medicine Woman. Links Mentioned: thearyachronicles.com CREDITS: Shadow Realm was written and executive produced by Reenita Hora. Based on her middle-grade novel When Arya Fell Through the Fault, this story constitutes ‘Part 1 of The Arya Chronicles.' An original soundtrack was composed for this story by Kanniks Kannikeswaran (http://www.kanniks.com/). Audio adaptation by Shane Sakhrani. Audio Direction and Production by Daniel Gonzalez and Gabe Mara. The narrative for the trailer was voiced by Carnie Wilson (https://en.wikipedia.org/wiki/Carnie_Wilson) The podcast features Arya Vir Hora as ‘Arya' (https://www.imdb.com/name/nm7845578/) (https://www.backstage.com/u/aryavirhora/) Vishesh Chachra as Rishiji, Pa, Nicolas, Ranga, and Surayu (https://www.imdb.com/name/nm5189387/) Reenita Hora as Ma, Medicine Woman - (reenita.com) Danish Farooqui as Chimpu, Ravana, and Elias (https://www.imdb.com/name/nm7211550/) Laura Smith as Balchorni, Athena, the Doctor, and Jackson's mom (https://www.imdb.com/name/nm2269470/) Kal Monsoor as Buddha, Head of School, and Hanumana (https://www.imdb.com/name/nm0543964/) Avinash Muddappa as Pandu, Gopu, and Raja (https://www.imdb.com/name/nm8765273/) Asha Noel as Female Vanara, Nurse, and Adriana (https://www.imdb.com/name/nm6340920/) Recorded at Studio City Sounds and The Room at Melrose. Chapter 13 uses “Lake Waves Hjalmaren” by Owl Chapter 17 uses “New Holland Honey Eaters 32x Slower” by digifishmusic Chapter 18 uses “New Holland Honey Eaters 32x Slower” by digifishmusic Episodes are available on Spotify, Apple Podcasts or wherever you get your podcasts. Music is available on Spotify. If you are already a fan of the show, tap the share button in your podcast player and post this trailer on Facebook or Twitter.
Dr. Rupa Marya and Raj Patel are the authors of a brilliant new book entitled Inflamed: Deep Medicine and The Anatomy of Injustice (https://us.macmillan.com/books/9780374602529/inflamed). Dr. Rupa Marya is a specialist in internal medicine. Her research looks at the ways that social structures predispose certain groups to health or illness. And while Rupa is central to a number of revolutionary health initiatives, a few I want to make sure I mention are her work on the Justice Study, a national research effort to examine the links between police violence and health outcomes in Black, brown, and Indigenous communities, and her work on the board of Seeding Sovereignty, an international group that promotes Indigenous autonomy in response to climate change. Raj Patel is an award-winning author and filmmaker, and a Research Professor in the Lyndon B. Johnson School of Public Affairs at the University of Texas. He has worked for the World Bank and the WTO, and he has also participated in global protests against both of these institutions. He has served as a member of the International Panel of Experts on Sustainable Food Systems and published on an extraordinary array of topics across a variety of fields. He has written for The Guardian, the Financial Times, the New York Times, and the Times of India, among many others. His first book, Stuffed and Starved: The Hidden Battle for the World Food System, made a big impact on me when I was a doctoral researcher. His second, The Value of Nothing, was a New York Times and international best-seller. I speak with them about our current moment, as another year begins, as the Omicron variant of COVID-19 rips through beleaguered cities, as climate fires in Colorado destroy almost a thousand homes (despite there still being snow on the ground), and as we somehow still see new year's resolutions being discussed, as they are every year without fail, even amid the pandemic. New year's, though, as Antonio Gramsci wrote, is less about renewal and more about "turn[ing] life and [the] human spirit into a commercial concern," a sort of gut-check moment imagined to matter as a means of cultivating well-being. But it is a well-being in which we end up thinking, as Gramsci put it, "that between one year and the next there is a break, that a new history is beginning." The notion of a new year's resolution seems nonsensical if we take seriously Marya and Patel's sense that health, in its truest sense, is an "emergent phenomenon of systems interacting well with other systems." Inflamed is a book that can help us locate the roots of disease outside the body, in an economic system that generates obscene levels of toxicity and risk. The body, they point out, is really just doing what it is so incredibly efficient at: achieving equilibrium with its environment. The problem is that the environment has been so thoroughly damaged that the work of equilibrium has become corrosive to our bodies. Marya and Patel describe Inflamed as a "call to advance health" through "vivid and radical experimentation." Their intervention privileges anti-capitalist, anti-colonialist, and anti-white-supremacist perspectives. It acknowledges how important self-care can be in a profoundly exhausting system, but reinforces the idea that self-care remains inadequate when the problems we face are so clearly collective.
For this reason, their notion of deep medicine is all about decentring the individual, learning ways of being a “plural being,” reengaging with what Rupa describes as “old new ways of being, knowing and learning” that encourage life-preserving networks of care. What would it mean, here, to reimagine water and land protection as acts of “care,” as acts of “love toward future generations” that also, crucially, upend the logic of private property?
Chronic inflammatory diseases are on the rise, especially in so-called industrialized countries that have been structured by the hands of colonialism. Could this collective inflammation we are experiencing be a sign from our bodies that we are indeed mired in systemically unhealthy living conditions? What we might once have understood as an individual ailment must now be understood as a side effect of daily exposure to air pollution, economic precarity, contaminated water, police brutality, mounting debt, and a social structure in which it is increasingly difficult to stay afloat. In this week's episode, Dr. Rupa Marya and Raj Patel discuss the biological impacts of oppressive social structures. We are left with the resounding reminder that inflammation is an indicator that we must change our collective ways in order to heal, and in today's world that requires us to dismantle oppressive systems and expand our understanding of health beyond inadequate colonial definitions. Dr. Rupa Marya is a physician, an activist, a mother, and a composer. She is an associate professor of medicine at the University of California, San Francisco, where she practices and teaches internal medicine. She is a cofounder of the Do No Harm Coalition, a collective of health workers committed to addressing disease through structural change. Raj Patel is a research professor at the University of Texas at Austin's Lyndon B. Johnson School of Public Affairs, a professor in the university's Department of Nutrition, and a research associate at Rhodes University, South Africa. He is the author of Stuffed and Starved and The Value of Nothing. He serves on the International Panel of Experts on Sustainable Food Systems and has advised governments worldwide on the causes of and solutions to crises of sustainability. Music by Roma Ransom and Lindsey Mills. Visit our website at forthewild.world for the full episode description, references, and action points.
Activist, journalist, and academic Raj Patel, co-author of the new book “Inflamed: Deep Medicine and the Anatomy of Injustice,” discusses why corporations encourage people to make changes within themselves rather than within society, the consequences of treating nature as a cheap and infinite resource, and how external anxieties, from payday loans to the stress of living in an exploitative culture, can prime the body for illness.
SMOA Survey: bit.ly/SMOAsurvey Raj Patel and Rupa Marya join this episode to draw the links between physical inflammation, injustice, decolonizing medicine, and the relationship between human and non-human flourishing. They discuss environmental racism, political economy and capitalism, the way that inflammation modulates social and biological health, reductive Enlightenment science, the need for decolonized care, and what deep healing looks like. Their new book is Inflamed: Deep Medicine and the Anatomy of Injustice (2021). Raj Patel is an author, filmmaker, activist, and academic. He is a Research Professor in the Lyndon B. Johnson School of Public Affairs at the University of Texas at Austin. He has degrees from the University of Oxford, the London School of Economics, and Cornell University; he has worked for the World Bank and the WTO, and protested against them around the world. He is the author of Stuffed and Starved: The Hidden Battle for the World Food System and The Value of Nothing, as well as co-author of A History of the World in Seven Cheap Things. He co-directed the documentary The Ants & The Grasshopper. Rupa Marya is a physician, activist, artist, and writer who is an Associate Professor of Medicine at the University of California, San Francisco, the founder of the Do No Harm Coalition, and the founder and executive director of the Deep Medicine Circle, a worker-directed nonprofit committed to healing the wounds of colonialism through food, medicine, story, learning, and restoration. In addition to her work in medicine and writing, Rupa is also the composer and frontwoman for Rupa and the April Fishes. Animation Video (3:18) for Inflamed: bit.ly/3B4Zp6y Video (28:28): Health and Justice: The Path of Liberation through Medicine (Rupa Marya): bit.ly/3a0xXLe Reviews of Inflamed: Deep Medicine and the Anatomy of Injustice (New York: Farrar, Straus and Giroux, 2021): Prasad A, "Inflamed by Rupa Marya and Raj Patel review – Modern Medicine's Racial Divide," The Guardian (2021), bit.ly/3nQWUkp; Jones S, "The Public Body: How Capitalism Made The World Sick," The Nation (2021), bit.ly/3lLHlYu (Disclaimer: at the request of the podcast, two free pre-print copies of the book were supplied by FSG in preparation for this episode)
Why do Black people have a higher death rate than white people from COVID-19? Why does the working class have higher rates of respiratory disease? If someone is saddled with debt, what does that do to their body? Inflamed illuminates the hidden relationships between our biological systems and the injustices of our political, social, and economic systems. Dr. Marya and Patel took us on a tour through the human body: our digestive, endocrine, circulatory, respiratory, reproductive, immune, and nervous systems. From there, they discussed the ways in which those systems break down due to the society we live in. Systemic racism affects the body, they argue. Doctors themselves, by the way, are not immune. For example, Black newborn babies die at more than twice the rate of white newborns. Research suggests this mortality rate is halved when Black infants are cared for by Black physicians. There is a cure for all of this, they suggested: the deep medicine of decolonization. Decolonizing heals what has been divided and reestablishes relationships, to the Earth and to each other. We can heal not only our bodies, they offer, but the world. Dr. Rupa Marya is an associate professor of medicine at the University of California, San Francisco, where she practices and teaches internal medicine. She is cofounder of the Do No Harm Coalition, a collective of health workers committed to addressing disease through structural change. Raj Patel is a research professor at the University of Texas at Austin's Lyndon B. Johnson School of Public Affairs, a professor in the university's Department of Nutrition, and a research associate at Rhodes University, South Africa. Brady Piñero Walkinshaw is the CEO of Grist.org, the leading national environmental media nonprofit dedicated to climate, justice, and solutions. Buy the Book: Inflamed: Deep Medicine and the Anatomy of Injustice (Hardcover) from Elliott Bay Books. Presented by Town Hall Seattle and GRIST.
The Covid pandemic has starkly demonstrated the reality that individuals experiencing poverty and social inequality get sick and die at higher rates than the general population. The same is true of other illnesses. Inflammation is the body's response to infectious agents and environmental toxins, but also to chronic stress and to the suffering inflicted by things like poverty and structural racism. It is not hyperbolic to say at this juncture that we are an 'inflamed' society and planet, and that radical change is needed. "Most patients you sit with long enough will tell you why they are sick," says Marya. However, for doctors to truly identify and treat the underlying causes of ill health, the authors argue that we must start by understanding how systemic racism, inequality, and environmental degradation all contribute to a type of persistent, harmful inflammation, leading to an illness not just of the body but also of our political, economic, and health care systems. As doctors and advocates, these two disruptors have been in the trenches, the streets, and the villages, and have worked in some of the most prestigious academic and medical institutions in the world. Patel is a journalist, author, father, and academic with a PhD, often referred to as "the rock star of social justice writing." Dr. Marya, when not working as an internal medicine specialist at UCSF, is an activist as well as a mother, composer, singer, and guitarist, fronting the global alternative group Rupa and the April Fishes and infusing her music with the same passion and urgency. It is this combination of activism, academia, medical experience, creativity, and tireless spirit that has propelled our guests to demand radical change in our worldview and approach to illness and medicine. They are daring us not only to listen to their analysis but to become part of the change. In this provocative and groundbreaking work, the pair endeavors to shift the traditional paradigm. Marya and Patel explain the unique tasks performed by each operating system of our amazing human bodies, head to toe and everything in between, tying each to its approximate counterpart in our healthcare system. Inflamed is not a work of naivete but one that delivers a message of precarious hope, offering a clear diagnosis and treatment plan but a truly uncertain prognosis. Join Paul for a lively discussion of the book, their life's work, and the revolutionary path they propose to humanize medical care for all.
Guest Name: Dr. Shadi Battah
Contact the guest: Sbattah@icloud.com
Summary: Ever wanted to know what angel investing is all about? Listen in as I talk with Dr. Shadi Battah, intensive care physician, about the state of medicine today, how investing can make a difference, and the basics of how to be a savvy angel investor.
Dr. Battah's reading list suggestions:
Angel investing:
1. Angel Investing by David Rose
2. Build Your Fortune in the Fifth Era by Matthew Le Merle and Alison Davis
3. Angel by Jason Calacanis
Healthcare innovation:
1. The Creative Destruction of Medicine by Eric Topol
2. Deep Medicine by Eric Topol
3. The Innovator's Prescription by Clayton Christensen
--- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
Curious about the prospect of Artificial Intelligence (AI) in medicine? Have you ever felt that the doctor-patient relationship, the heart of medicine, is broken, and that doctors are too distracted and overwhelmed to truly connect with their patients? In this conversation, Roy and Bino explore Deep Medicine and how it can make healthcare human again. Our aim is to look at the pros and cons of using such advanced technologies in healthcare. And the big question: do the benefits outweigh the costs?
Episode hosts: Roy Vrindavanam & Bino Manjasseril
For further reference: Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
DISCLAIMER: THIS PODCAST DOES NOT PROVIDE MEDICAL ADVICE. The information shared on this podcast is for informational purposes only. No material is intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health care provider with any questions you may have regarding a medical condition or treatment and before undertaking a new health care regimen, and never disregard professional medical advice or delay in seeking it because of something you have heard on this podcast.
Like this content? Subscribe to the podcast on your favorite podcasting platform (Google, Spotify, Apple) or YouTube, email us at podcast@day2project.com, or follow us on Twitter @Day2Project
We had the pleasure of speaking with Dr. Eric Topol, author of 'The Patient Will See You Now' and 'Deep Medicine'. Eric has had an incredible career focused largely on research into cardiovascular disease and heart attacks, which he pursued at the Cleveland Clinic and the Scripps Research Institute. In this episode, we discuss wireless medicine and the role of artificial intelligence and machine learning in medicine and healthcare.
Dr. Eric Topol isn't playing around. The author of *Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again* wants physicians to become activists—and to use technology to transform medicine. “That's where we need to see the breakout. Doctors leading this charge, getting organized, and saying: ‘We're not going to take it anymore and we're demanding time with our patients!'” Artificial intelligence and machine learning, he says, can help us turn things around. However, adds Dr. Topol, this is far from a consensus opinion. “I think the idea that technology could enhance humanity in medicine is alien in this country.” In a spirited discussion with our own Jonathon Swersey, the good doctor touches on “the gift of time,” the role of patients and caregivers in the AI revolution, and how data should figure in healthcare's future (“Right now, we have only fragments of people's data about their health. Whereas we should have every part of their data from when they were in the womb up to the present moment”). We even learn, from this dialogue, which book we should read after finishing *Deep Medicine*. Tune in and download the latest chapter in the story of digital health.
Will artificial intelligence help humanize healthcare and get medicine back on track? Signs point to yes—with a few caveats. Rock Health Managing Director and CEO Bill Evans spoke with leading cardiologist and digital medicine researcher Dr. Eric Topol about his new book, Deep Medicine, which provides a wide-ranging overview of the current state of AI in healthcare. Together with Dr. Topol, we explore the fundamental shift from “shallow medicine” to “deep medicine,” the data and privacy issues at hand, and why there has never been a more urgent window of opportunity to fix healthcare with the transformative potential of AI.