Peter Lee and his coauthors, Carey Goldberg and Dr. Zak Kohane, reflect on how generative AI is unfolding in real-world healthcare, drawing on earlier guest conversations to examine what's working, what's not, and what questions still remain.
In this episode of Connemara Pony Tales, we chat to Bridget and Peter Lee of Clover Hill Connemara Ponies, the breeders behind some of the top ponies in recent years. From their base in Ballyconneely, Co. Galway, the Lees have built an internationally respected breeding programme since founding the Clover Hill prefix in 2007. We explore their journey from their beginnings to producing champions like Clover Hill Magic, the 2024 Supreme Champion at the British Connemara Breeders' Society show. Bridget and Peter share insights into the importance of dam lines, the breeding of their standout ponies—including Clover Hill Princess Jade and Clover Hill Troy's Beauty—and their hopes for the future of the Connemara breed. Whether you're a breeder, rider, or fan of the breed, this is an episode you won't want to miss. https://www.facebook.com/ConnemaraPonyWorld
In this episode of ACM ByteCast, our special guest host Scott Hanselman (of The Hanselminutes Podcast) welcomes ACM Fellow Peter Lee, President of Microsoft Research. As leader of Microsoft Research, Peter incubates new research-powered products and lines of business in areas such as AI, computing foundations, health, and life sciences. Before Microsoft, he established a new technology office at DARPA that created operational capabilities in ML, data science, and computational social science, and before that he was head of the CS department at CMU. Peter served on President Obama's Commission on Enhancing National Cybersecurity and has testified before both the US House Science and Technology Committee and the US Senate Commerce Committee. He coauthored the bestselling book The AI Revolution in Medicine: GPT-4 and Beyond. In 2024, he was named by Time magazine as one of the 100 most influential people in health and life sciences. In the interview, Peter reflects on his 40+ years in computer science, from working on PDP-11s and Commodore Amigas to modern AI advancements. He highlights how modern technologies, built on decades of research, have become indispensable. He also talks about his healthcare journey, including work that earned him election to the National Academy of Medicine, and the potential (and limitations) of AI in medicine. Peter and Scott touch on the impact of LLMs, the lack of ethics education in traditional CS curricula, and the challenges posed by growing AI complexity. Peter also highlights some important Microsoft Research work in AI for Science and Quantum Computing.
Two years ago, OpenAI's GPT-4 kick-started a new era in AI. In the months leading up to its public release, Peter Lee, president of Microsoft Research, cowrote a book full of optimism for the potential of advanced AI models to transform the world of healthcare. What has happened since? In this special podcast series—The AI Revolution in Medicine, Revisited—Lee revisits the book, exploring how patients, providers, and other medical professionals are experiencing and using generative AI today while examining what he and his coauthors got right—and what they didn't foresee.

In this episode, Dr. Christopher Longhurst and Dr. Sara Murray, leading experts in healthcare AI implementation, join Lee to discuss the current state and future of AI in clinical settings. Longhurst, chief clinical and innovation officer at UC San Diego Health and executive director of the Jacobs Center for Health Innovation, details his healthcare system's collaboration with Epic and Microsoft to integrate GPT into their electronic health record system, offering clinicians support in responding to patient messages. Dr. Murray, chief health AI officer at UC San Francisco Health, discusses AI's integration into clinical workflows, the promise and risks of AI-driven decision-making, and how generative AI is reshaping patient care and physician workload.

Learn more:
Large Language Models for More Efficient Reporting of Hospital Quality Measures
Generative artificial intelligence responses to patient messages in the electronic health record: early lessons learned
The Chief Health AI Officer — An Emerging Role for an Emerging Technology
AI-Generated Draft Replies Integrated Into Health Records and Physicians' Electronic Communication
Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum
The AI Revolution in Medicine: GPT-4 and Beyond
Matt and Femi talk to Peter Lee about how he has managed to found and maintain a school which not only achieves outstanding results but also has not had any supply cover bought in for over eight years! Among many other things, they discuss workload, behaviour, timetabling, and lesson structures.
S7 E10 - Peter Lee by MusicFirst
Red. Deep as blood. Crimson. Like royal velvet. Amber. Like lava bursting from the bowels of the earth. For Lee, these colors are his life. Colors born from a handful of matches, and as they dance entwined, they spread horrific death across a small English town. For Lee, one flame is never enough.
Episode 205: Judgement and Justice (Ezekiel 4-5) After an extensive prologue and call, we now get to hear God's plan for Jerusalem, and it is very dark indeed. Peter Lee walks us through the ancient siege and the history of Jerusalem's fall; Gray reflects on the nature of God's justice; Paul and Tommy talk to us about preaching and bearing witness to the judgement of God. Also one of the faculty makes a startling admission related to Lord of the Rings.
This week, we imagine the possibility of Uber buying Expedia Group, based on a Financial Times report that the ride-hailing company has considered a bid for the online travel giant. It would be notable in part because Uber CEO Dara Khosrowshahi was Expedia Group's CEO from 2005 to 2017 and remains on the company's board. In the second segment, we enjoy highlights from the "60 Minutes" profile of former Microsoft CEO Steve Ballmer, and a separate appearance by University of Washington historian and author Margaret O'Mara on an episode of the podcast "60 Minutes: A Second Look" about the rise of Silicon Valley in the early 1980s. And finally, we talk about our new GeekWire project, Microsoft @ 50, and get a window into what's next in AI from Peter Lee, the Microsoft Research president. He shares the top three technical challenges and opportunities on his mind when it comes to the future of artificial intelligence. This is bonus content from an interview for the first story in the series, published this week, tracing the evolution of AI inside Microsoft. With GeekWire co-founders Todd Bishop and John Cook.
In a world of high-def images, the skincare industry is hotter than ever — and Asian skincare brands are one of today's biggest beauty trends. In this episode of Marketing Sucks, I sit down with husband-and-wife team Peter Lee and Andrea Mourad, co-founders of the amaaazing Korean skincare brands Saranghae Skincare and Orora Skin Science, to dive into the nuts and bolts of building a successful beauty brand, from launching new lines to upleveling your e-commerce game. During this juicy conversation, Peter and Andrea spill the story behind their incredible brands and the science behind their breakthrough bioactive formulations. Tune in to learn how they're facing the biggest obstacles faced by e-commerce brands today, and how they balance life and business as partners.

Marketing sucks. Let's learn together. Hit play to discover:
A simple hack for getting customer feedback that'll help you identify product opportunities
How to win awards and earn recognition for your products
The mindset shift that can help you find more joy in your life and work

After you've listened to the episode, subscribe to Andrea and Peter's podcast Love, Saranghae! Please rate, review, and subscribe!

Guest bio: Andrea Mourad and Peter Lee are the co-founders of two award-winning Korean skincare companies: Saranghae Skincare and Orora Skin Science. Partners in life and business together, they are on a mission to share their journey through the exciting world of entrepreneurship, relationships, and the secrets behind achieving that coveted healthy glow.

Timestamps:
00:12 Meet Peter Lee & Andrea Mourad
01:15 Story behind the brands
04:18 Brand evolution
07:11 Identifying gaps
11:35 Biggest e-comm challenges
15:46 Support the good brands!
17:50 HumaColl21
21:32 Winning product awards
24:39 Believe in your products
26:41 Balancing life & business
29:26 Working abroad & mindset shifts
34:24 Favourite products
37:21 Upcoming launches

lovesaranghae.com [20% off the entire website with promo code GRINDSOCIAL20]
Instagram: @lovesaranghaeskin
Joined by Col Peter Lee, an F-35A Lightning II pilot and former F-15C Eagle pilot, we talked about the integration of the F-35 with our allies, the vast differences in mission sets, an officer's role as a political strategist in D.C., and more. Be sure to subscribe so you don't miss the next episode! *Views expressed are not endorsed by the U.S. Department of Defense or its components.*
Drs. Peter Lee and Tommy Keene are finally getting to co-teach a class. In preparation for Covenant Theology, they sit down to talk about the nature of covenant in Scripture and why it's important.
Southwest Research Institute provides testing and problem solving for applications that literally reach from deep sea to deep space (and everything in between). Amongst all of that, the Tribology department at SwRI has been helping professional racing teams solve problems for years. During this in-depth episode, Dr. Peter Lee from SwRI shares his experiences and reveals some cutting-edge new coating technology. #engine #pistonrings #ford #dyno #dynotested #nascar
Episode 133

I spoke with Peter Lee about:
* His early work on compiler generation, metacircularity, and type theory
* Paradoxical problems
* GPT-4's impact, Microsoft's "Sparks of AGI" paper, and responses and criticism

Enjoy—and let me know what you think!

Peter is President of Microsoft Research. He leads Microsoft Research and incubates new research-powered products and lines of business in areas such as artificial intelligence, computing foundations, health, and life sciences. Before joining Microsoft in 2010, he was at DARPA, where he established a new technology office that created operational capabilities in machine learning, data science, and computational social science. Prior to that, he was a professor and the head of the computer science department at Carnegie Mellon University. Peter is a member of the National Academy of Medicine and serves on the boards of the Allen Institute for Artificial Intelligence, the Brotman Baty Institute for Precision Medicine, and the Kaiser Permanente Bernard J. Tyson School of Medicine. He served on President Obama's Commission on Enhancing National Cybersecurity. He has testified before both the US House Science and Technology Committee and the US Senate Commerce Committee. With Carey Goldberg and Dr. Isaac Kohane, he is the coauthor of the best-selling book, "The AI Revolution in Medicine: GPT-4 and Beyond." In 2024, Peter Lee was named by Time magazine as one of the 100 most influential people in health and life sciences.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (00:50) Basic vs. applied research
* (05:20) Theory and practice in computing
* (10:28) Traditional denotational semantics and semantics engineering in modern-day systems
* (16:47) Beauty and practicality
* (20:40) Metacircularity in the polymorphic lambda calculus: research directions
* (24:31) Understanding the nature of difficulties with metacircularity
* (26:30) Difficulties with reflection, classic paradoxes
* (31:02) Sparks of AGI
* (31:41) Reproducibility
* (38:04) Confirming and disconfirming theories, foundational work
* (42:00) Back and forth between commitments and experimentation
* (51:01) Dealing with responsibility
* (56:30) Peter's picture of AGI
* (1:01:38) Outro

Links:
* Peter's Twitter, LinkedIn, and Microsoft Research pages
* Papers and references
* The automatic generation of realistic compilers from high-level semantic descriptions
* Metacircularity in the polymorphic lambda calculus
* A Fresh Look at Combinator Graph Reduction
* Sparks of AGI
* Re-envisioning DARPA
* Fundamental Research in Engineering

Get full access to The Gradient at thegradientpub.substack.com/subscribe
While at the North Star Speed Summit, Peter Lee, Fuji Fang, and Aign "Money" Eel join us to talk about some fast Hondas and what it takes to get to the level they are at.
Welcome back to another exciting episode of the Talking Pools Podcast! We've got a packed show today with our incredible co-hosts sharing their latest adventures and insights from the world of pool service.

First up, we have Peter, who recently had an unexpected turn of events and broke his leg. But Peter being Peter, he didn't let a little thing like that slow him down. Instead, he's been using his recovery time to dive into some fascinating educational classes. He'll share how he's turning this setback into a major comeback with all the new knowledge he's soaking up.

Then, we have Lee, our intrepid explorer, who's been roaming the outback to provide much-needed store coverage for weary shop owners. These hard-working folks need a break from their pool stores, and Lee's been stepping in to keep things running smoothly. He's got some great stories from the road and tips for anyone needing a helping hand.

And finally, Shane is jumping into the mix with a cheeky suggestion. He's been listening to Lee's tales and thinks Lee should extend his travels a bit further – all the way to New Zealand to cover his store! Shane will tell us why he thinks Lee's outback adventures make him the perfect candidate for a Kiwi pool shop takeover.

So, grab your favorite beverage, sit back, and get ready for another episode filled with humor, wisdom, and the ever-entertaining world of pool service. Let's dive in!

BufferZone has been created by a frustrated pool maintenance company.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Thank you so much for listening! You can find us on social media:
Facebook
Instagram
TikTok
Email us: talkingpools@gmail.com
“We haven't invested this much money into an infrastructure like this really until you go back to the pyramids”—Kate CrawfordTranscript with links to audio and external links. Ground Truths podcasts are on Apple and Spotify. The video interviews are on YouTube Eric Topol (00:06):Well, hello, this is Eric Topol with Ground Truths, and I'm really delighted today to welcome Kate Crawford, who we're very lucky to have as an Australian here in the United States. And she's multidimensional, as I've learned, not just a scholar of AI, all the dimensions of AI, but also an artist, a musician. We're going to get into all this today, so welcome Kate.Kate Crawford (00:31):Thank you so much, Eric. It's a pleasure to be here.Eric Topol (00:34):Well, I knew of your work coming out of the University of Southern California (USC) as a professor there and at Microsoft Research, and I'm only now learning about all these other things that you've been up to including being recognized in TIME 2023 as one of 100 most influential people in AI and it's really fascinating to see all the things that you've been doing. But I guess I'd start off with one of your recent publications in Nature. It was a world view, and it was about generative AI is guzzling water and energy. And in that you wrote about how these large AI systems, which are getting larger seemingly every day are needing as much energy as entire nations and the water consumption is rampant. So maybe we can just start off with that. You wrote a really compelling piece expressing concerns, and obviously this is not just the beginning of all the different aspects you've been tackling with AI.Exponential Growth, Exponential Concerns Kate Crawford (01:39):Well, we're in a really interesting moment. What I've done as a researcher in this space for a very long time now is really introduce a material analysis of artificial intelligence. So we are often told that AI is a very immaterial technology. It's algorithms in the cloud, it's objective mathematics, but in actual fact, it comes with an enormous material infrastructure. And this is something that I took five years to research for my last book, Atlas of AI. It meant going to the mines where lithium and cobalt are being extracted. It meant going into the Amazon fulfillment warehouses to see how humans collaborate with robotic and AI systems. And it also meant looking at the large-scale labs where training data is being gathered and then labeled by crowd workers. And for me, this really changed my thinking. It meant that going from being a professor for 15 years focusing on AI from a very traditional perspective where we write papers, we're sitting in our offices behind desks, that I really had to go and do these journeys, these field trips, to understand that full extractive infrastructure that is needed to run AI at a planetary scale.(02:58):So I've been keeping a very close eye on what would change with generative AI and what we've seen particularly in the last two years has been an extraordinary expansion of the three core elements that I really write about in Atlas, so the extraction of data of non-renewable resources, and of course hidden labor. So what we've seen, particularly on the resources side, is a gigantic spike both in terms of energy and water and that's often the story that we don't hear. We're not aware that when we're told about the fact that there gigantic hundred billion computers that are now being developed for the next stage of generative AI that has an enormous energy and water footprint. 
So I've been researching that along with many others who are now increasingly concerned about how we might think about AI more holistically.Eric Topol (03:52):Well, let's go back to your book, which is an extraordinary book, the AI Atlas and how you dissected not just the well power of politics and planetary costs, but that has won awards and it was a few years back, and I wonder so much has changed since then. I mean ChatGPT in late 2022 caught everybody off guard who wasn't into this knowing that this has been incubating for a number of years, and as you said, these base models are just extraordinary in every parameter you can think about, particularly the computing resource and consumption. So your concerns were of course registered then, have they gone to exponential growth now?Kate Crawford (04:45):I love the way you put that. I think you're right. I think my concerns have grown exponentially with the models. But I was like everybody else, even though I've been doing this for a long time and I had something of a heads up in terms of where we were moving with transformer models, I was also quite taken aback at the extraordinary uptake of ChatGPT back in November 2022 in fact, gosh, it still feels like yesterday it's been such an extraordinary timescale. But looking at that shift to a hundred million users in two months and then the sort of rapid competition that was emerging from the major tech companies that I think really took me by surprise, the degree to which everybody was jumping on the bandwagon, applying some form of large language model to everything and anything suddenly the hammer was being applied to every single nail.(05:42):And in all of that sound and fury and excitement, I think there will be some really useful applications of these tools. But I also think there's a risk that we apply it in spaces where it's really not well suited that we are not looking at the societal and political risks that come along with these approaches, particularly next token prediction as a way of generating knowledge. And then finally this bigger set of questions around what is it really costing the planet to build these infrastructures that are really gargantuan? I mean, as a species, we haven't invested this much money into an infrastructure like this really until you go back to the pyramids, you really got to go very far back to say that type of just gargantuan spending in terms of capital, in terms of labor, in terms of all of the things are required to really build these kinds of systems. So for me, that's the moment that we're in right now and perhaps here together in 2024, we can take a breath from that extraordinary 18 month period and hopefully be a little more reflective on what we're building and why and where will it be best used.Propagation of BiasesEric Topol (06:57):Yeah. Well, there's so many aspects of this that I'd like to get into with you. I mean, one of course, you're as a keen observer and activist in this whole space, you've made I think a very clear point about how our culture is mirrored in our AI that is our biases, and people are of course very quick to blame AI per se, but it seems like it's a bigger problem than just that. Maybe you could comment about, obviously biases are a profound concern about propagation of them, and where do you see where the problem is and how it can be attacked?Kate Crawford (07:43):Well, it is an enormous problem, and it has been for many years. I was first really interested in this question in the era that was known as the big data era. 
So we can think about the mid-2000s, and I really started studying large scale uses of data in scientific applications, but also in what you call social scientific settings using things like social media to detect and predict opinion, movement, the way that people were assessing key issues. And time and time again, I saw the same problem, which is that we have this tendency to assume that with scale comes greater accuracy without looking at the skews from the data sources. Where is that data coming from? What are the potential skews there? Is there a population that's overrepresented compared to others? And so, I began very early on looking at those questions. And then when we had very large-scale data sets start to emerge, like ImageNet, which was really perhaps the most influential dataset behind computer vision that was released in 2009, it was used widely, it was freely available.(09:00):That version was available for over a decade and no one had really looked inside it. And so, working with Trevor Paglen and others, we analyzed how people were being represented in this data set. And it was really quite extraordinary because initially people are labeled with terms that might seem relatively unsurprising, like this is a picture of a nurse, or this is a picture of a doctor, or this is a picture of a CEO. But then you look to see who is the archetypical CEO, and it's all pictures of white men, or if it's a basketball player, it's all pictures of black men. And then the labeling became more and more extreme, and there are terms like, this is an alcoholic, this is a corrupt politician, this is a kleptomaniac, this is a bad person. And then a whole series of labels that are simply not repeatable on your podcast.(09:54):So in finding this, we were absolutely horrified. And again, to know that so many AI models had trained on this as a way of doing visual recognition was so concerning because of course, very few people had even traced who was using this model. So trying to do the reverse engineering of where these really problematic assumptions were being built in hardcoded into how AI models see and interpret the world, that was a giant unknown and remains to this day quite problematic. We did a recent study that just came out a couple of months ago looking at one of the biggest data sets behind generative AI systems that are doing text to image generation. It's called LAION-5B, which stands for 5 billion. It has 5 billion images and text captions drawn from the internet. And you might think, as you said, this will just mirror societal biases, but it's actually far more weird than you might imagine.(10:55):It's not a representative sample even of the internet because particularly for these data sets that are now trying to use the ALT tags that are used around images, who uses ALT tags the most on the internet? Well, it's e-commerce sites and it's often stock image sites. So what you'll see and what we discovered in our study was that the vast majority of images and labels are coming from sites like Shopify and Pinterest, these kind of shopping aspirational collection sites. And that is a very specific way of seeing the world, so it's by no means even a perfect mirror. It's a skewed mirror in multiple ways. 
And that's something that we need to think of particularly when we turn to more targeted models that might be working in say healthcare or in education or even in criminal justice, where we see all sorts of problems emerge.Exploiting Humans for RLHFEric Topol (11:51):Well, that's really interesting. I wonder to extend that a bit about the human labor side of this. Base models are tweaked, fine-tuned, and one of the ways to do that, of course is getting people to weigh in. And this has been written about quite a bit about how the people that are doing this can be exploited, getting wages that are ridiculously weak. And I wonder if you could comment about that because in the ethics of AI, this seems to be one of the many things that a lot of people don't realize about reinforcement learning.Kate Crawford (12:39):Oh, I completely agree. It's quite an extraordinary story. And of course now we have a new category of crowd labor that's called reinforcement learning with human feedback or RLHF. And what was discovered by multiple investigations was that these laborers are in many cases paid less than $2 an hour in very exploitative conditions, looking at results that in many cases are really quite horrifying. They could be accounts of murder, suicide, trauma, this can be visual material, it can be text-based material. And again, the workers in these working for these companies, and again, it's often contract labor, it's not directly within a tech company, it's contracted out. It's very hidden, it's very hard to research and find. But these laborers have been experiencing trauma and are really now in many cases bringing lawsuits, but also trying to unionize and say, these are not acceptable conditions for people to be working under.(13:44):So in the case of OpenAI, it was found that it was Kenyan workers who were doing this work for just poverty wages, but it's really across the board. It's so common now that humans are doing the hard work behind the scenes to make these systems appear autonomous. And that's the real trap that we're being told that this is the artificial intelligence. But in actual fact, what Jeff Bezos calls Mechanical Turk is that it's artificial, artificial intelligence otherwise known as human beings. So that is a very significant layer in terms of how these systems work that is often unacknowledged. And clearly these workers in many cases are muzzled from speaking, they're not allowed to talk about what they do, they can't even tell their families. They're certainly prevented from collective action, which is why we've seen this push towards unionization. And finally, of course, they're not sharing in any of the profits that are being generated by these extraordinary new systems that are making a very small number of people, very wealthy indeed.Eric Topol (14:51):And do you know if that's improving or is it still just as bad as it has been reported? It's really deeply concerning to see human exploitation, and we all know well about sweatshops and all that, but here's another version, and it's really quite distressing.Kate Crawford (15:09):It really is. And in fact, there have been several people now working to create really almost like fair work guidelines. So Oxford has the sort of fair work initiative looking specifically at crowd work. They also have a rating system where they rate all of the major technology companies for how well they're treating their crowd laborers. 
And I have to say the numbers aren't looking good in the last 12 months, so I would love to see much more improvement there. We are also starting to see legislation be tabled specifically on this topic. In fact, Germany was one of the most recent to start to explore how they would create a strong legislative backing to make sure that there's fair labor conditions. Also, Chile was actually one of the first to legislate in this space, but you can imagine it's very difficult to do because it's a system that is operating under the radar through sort of multiple contracted chains. And even some of the people within tech companies will tell me it's really hard to know if they're working with a company that's doing this in the right way and paying people well. But frankly, I'd like to see far greater scrutiny otherwise, as you say, we're building on this system, which looks like AI sweatshops.Eric Topol (16:24):Yeah, no, I think people just have this illusion that these machines are doing everything by themselves, and that couldn't be further from the truth, especially when you're trying to take it to the next level. And there's only so much human content you can scrape from the internet, and obviously it needs additional input to take it to that more refined performance. Now, besides your writing and being much of a conscience for AI, you're also a builder. I mean, I first got to know some of your efforts through when you started the AI Now Institute. Maybe you can tell us a bit about that. Now you're onto the Knowing Machines Project and I don't know how many other projects you're working on, so maybe you can tell us about what it's like not just to be a keen observer, but also one to actually get initiatives going.Kate Crawford (17:22):Well, I think it's incredibly important that we start to build interdisciplinary coalitions of researchers, but sometimes even beyond the academic field, which is where I really initially trained in this space, and really thinking about how do we involve journalists, how do we involve filmmakers, how do we involve people who will look at these issues in really different ways and tell these stories more widely? Because clearly this really powerful shift that we're making as a society towards using AI in all sorts of domains is also a public issue. It's a democratic issue and it's an issue where we should all be able to really see into how these systems are working and have a say in how they'll be impacting our lives. 
So one of the things that I've done is really create research groups that are interdisciplinary, starting at Microsoft Research as one of the co-founders of FATE, a group that stands for fairness, accountability, transparency and ethics, and then the AI Now Institute, which was originally at NYU, and now with Knowing Machines, which is an international group, which I've been really delighted to build, rather than just purely focusing on those in the US because of course these systems are inherently transnational, they will be affecting global populations.(18:42):So we really need to think about how do you bring people from very different perspectives with different training to ask this question around how are these systems being built, who is benefiting and who might be harmed, and how can we address those issues now in order to actually prevent some of those harms and prevent the greatest risks that I see that are possible with this enormous turn to artificial intelligence everywhere?Eric Topol (19:07):Yeah, and it's interesting how you over the years are a key advisor, whether it's the White House, the UN or the European Parliament. And I'm curious about your experience because I didn't know much about the Paris ENS. Can you tell us about you were Visiting Chair, this is AI and Justice at the École Normale Supérieure (ENS), I don't know if I pronounce that right. My French is horrible, but this sounds like something really interesting.Kate Crawford (19:42):Well, it was really fascinating because this was the first time that ENS, which is really one of the top research institutions in Europe, had turned to this focus of how do we contend with artificial intelligence, not just as a technical question, but as a sort of a profound question of justice of society of ethics. And so, I was invited to be the first visiting chair, but tragically this corresponded with the start of the pandemic in 2020. And so, it ended up being a two-year virtual professorship, which is really a tragedy when you're thinking about spending time in Paris to be spending it on Zoom. It's not quite the same thing, but I had the great fortune of using that time to assemble a group of scholars around the world who were looking at these questions from very different disciplines. Some were historians of science, others were sociologists, some were philosophers, some were machine learners.(20:39):And really essentially assembled this group to think through some of the leading challenges in terms the potential social impacts and current social impacts of these systems. And so, we just recently published that through the academies of Science and Engineering, and it's been almost like a template for thinking about here are core domains that need more research. And interestingly, we're at that moment, I think now where we can say we have to look in a much more granular fashion beyond the hype cycles, beyond the sense of potential, the enormous potential upside that we're always hearing about to look at, okay, how do these systems actually work now? What kinds of questions can we bring into the research space so that we're really connecting the ideas that come traditionally from the social sciences and the humanistic disciplines into the world of machine learning and AI design. 
That's where I see the enormous upside that we can no longer stay in these very rigorously patrolled silos and to really use that interdisciplinary awareness to build systems differently and hopefully more sustainably as well.Is Working At Microsoft A Conflict?Eric Topol (21:55):Yeah, no, that's what I especially like about your work is that you're not a doomsday person or force. You're always just trying to make it better, but now that's what gets me to this really interesting question because you are a senior principal researcher at Microsoft and Microsoft might not like some of these things that you're advocating, how does that potential conflict work out?Kate Crawford (22:23):It's interesting. I mean, people often ask me, am I a technology optimist or a technology pessimist? And I always say I'm a technology realist, and we're looking at these systems being used. I think we are not benefited by discourses of AI doomerism nor by AI boosterism. We have to assess the real politic and the political economies into which these systems flow. So obviously part of the way that I've got to know what I know about how systems are designed and how they work at scale is through being at Microsoft Research where I'm working alongside extraordinary colleagues and all of whom come from, in many cases, professorial backgrounds who are deep experts in their fields. And we have this opportunity to work together and to look at these questions very early on in the kinds of production cycles and enormous shifts in the way that we use technology.(23:20):But it is interesting of course that at the moment Microsoft is absolutely at the leading edge of this change, and I've always thought that it's incredibly important for researchers and academics who are in industrial spaces to be able to speak freely, to be able to share what they see and to use that as a way that the industry can, well hopefully keep itself honest, but also share between what it knows and what everybody else knows because there's a giant risk in having those spaces be heavily demarcated and having researchers really be muzzled. I think that's where we see real problems emerge. Of course, one of the great concerns a couple of years ago was when Timnit Gebru and others were fired from Google for speaking openly about the concerns they had about the first-generation large language models. And my hope is that there's been a lesson through that really unfortunate set of decisions made at Google that we need people speaking from the inside about these questions in order to actually make these systems better, as you say, over the medium and long term.Eric Topol (24:26):Yeah, no, that brings me to thought of Peter Lee, who I'm sure because he wrote a book about GPT-4 and healthcare and was very candid about its potential, real benefits and the liabilities, and he's a very humble kind of guy. He's not one that has any bravado that I know of, so it speaks well to at least another colleague of yours there at Microsoft and their ability to see all the different sides here, not just what we'll talk about in a minute the arms race both across companies and countries. 
But before I get to that, there's this other part of you and I wonder if there's really two or three of you that is as a composer of music and art, I looked at your Anatomy of an AI System, I guess, which is on exhibit at the Museum of Modern Art (MoMA) in New York, and that in itself is amazing, but how do you get into all these other parts, are these hobbies or is this part of a main part of your creative work or where does it fit in?Kate Crawford (25:40):Eric, didn't I mention the cloning program that I participated in early and that there are many Kate's and it's fantastic we all work together. Yeah, that explains it. Look, it's interesting. Way back as a teenager, I was fascinated with technology. Of course, it was the early stages of the web at that moment, and I could see clearly that this was, the internet was going to completely change everything from my generation in terms of what we would do in terms of the way that we would experience the world. And as I was also at that time an electronic musician in bands, I was like, this was a really fantastic combination of bringing together creative practice with a set of much larger concerns and interests around at a systems level, how technology and society are co-constituted, how they evolve together and shape each other. And that's really been the map of how I've always worked across my life.(26:48):And it's interesting, I've always collaborated with artists and Vladan Joler who I worked with on anatomy of an AI system. We actually met at a conference on voice enabled AI systems, and it was really looking at the ethics of could it be possible to build an open source, publicly accessible version of say Alexa rather than purely a private model owned by a corporation, and could that be done in a more public open source way? And we asked a different question, we looked at each other and we're like, oh, I haven't met you yet, but I can see that there are some problems here. One of them is it's not just about the data and it's not just about the technical pipelines, it's about where the components come from. It's about the mining structures that needed to make all of these systems. It's about the entire end of life what happens when we throw these devices out from generally between three to four years of use and how they go into these giant e-waste tips.(27:51):And we basically started looking at this as an enormous sort of life and death of a single AI system, which for us started out by drawing these things on large pieces of butcher's paper, which just expanded and expanded until we had this enormous systems level analysis of what it takes just to ask Alexa what the weather is today. And in doing that, it taught me a couple of things. One that people really want to understand all of the things that go into making an AI system work. This piece has had a very long life. It's been in over a hundred museums around the world. It's traveled further than I have, but it's also very much about that broader political economy that AI systems aren't neutral, they don't just exist to serve us. 
They are often sort of fed into corporate structures that are using them to generate profits, and that means that they're used in very particular ways and that there are these externalities in terms of how they produced that linger in our environments that have really quite detrimental impacts on systems of labor and how people are recompensed and a whole range of relationships to how data is seen and used as though it's a natural resource that doesn't actually come from people's lives, that doesn't come with risks attached to it.(29:13):So that project was really quite profound for me. So we've continued to do these kinds of, I would call them research art projects, and we just released a new one called Calculating Empires, which looks at a 500 year history of technology and power looking specifically at how empires over time have used new technologies to centralize their power and expand and grow, which of course is part of what we're seeing at the moment in the empires of AI.Eric Topol (29:43):And what about the music side?Kate Crawford (29:45):Well, I have to say I've been a little bit slack on the music side. Things have been busy in AI Eric, I have to say it's kept me away from the music studio, but I always intend to get back there. Fortunately, I have a kid who's very musical and he's always luring me away from my desk and my research saying, let's write some music. And so, he'll keep me honest.Geopolitics and the Arms RacesEric Topol (30:06):Well, I think it's striking just because you have this blend of the humanities and you're so deep into trying to understand and improve our approaches in technology. And it seems like a very unusual, I don't know, too many techies that have these different dimensions, so that's impressive. Now let's get back to the arms race. You just were talking about tracing history over hundreds of years and empires, but right now we have a little problem. We have the big tech titans that are going after each other on a daily basis, and of course you know the group very well. And then you have China and the US that are vying to be the dominant force and problems with China accessing NVIDIA chips and Taiwan sitting there in a potentially very dangerous position, not just for Taiwan, but also for the US. And I wonder if you could just give us your sense about the tensions here. They're US based as well of course, because that's some of the major forces in companies, but then they're also globally. So we have a lot of stuff in the background that people don't like to think about, but it's actually happening right now.Kate Crawford (31:35):I think it's one of the most important things that we can focus on, in fact. I mean and again, this is why I think a materialist analysis of artificial intelligence is so important because not only does it force you to look at the raw components, where does the energy come from? Where does the water come from? But it means you're looking at where the chipsets come from. And you can see that in many cases there are these infrastructural choke points where we are highly dependent on specific components that sit within geopolitical flashpoints. And Taiwan is really the exemplar of this sort of choke point at the moment. And again, several companies are trying to address this by spinning up new factories to build these components, but this takes a lot of time and an enormous amount of resources yet again. 
So what we're seeing is I think a very difficult moment in the geopolitics of artificial intelligence.(32:31):What we've had certainly for the last decade has been almost a geopolitical duopoly. We've had the US and China not only having enormous power and influence in this space, but also goading each other into producing the most extreme forms of both data extractive and surveillance technologies. And unfortunately, this is just as true in the United States that I commonly hear this in rooms in DC where you'll hear advisors say, well, having any type of guardrails or ethical considerations for our AI systems is a problem if it means that China's going to do it anyway. And that creates this race to the bottom dynamic of do as much of whatever you can do regardless of the ethical and in some cases legal problems that will create. And I think that's been the dynamic that we've seen for some time. And of course the last 18 months to two years, we've seen that really extraordinary AI war happening internally in the United States where again, this race dynamic I think does create unfortunately this tendency to just go as fast as possible without thinking about potential downsides.(33:53):And I think we're seeing the legacy of that right now. And of course, a lot of the conversations from people designing these systems are now starting to say, look, being first is great, but we don't want to be in a situation as we saw recently with Google's Gemini where you have to pull an entire model off the shelves and you have to say, this is not ready. We actually have to remove it and start again. So this is the result I think of that high pressure, high speed dynamic that we've been seeing both inside the US but between the US and China. And of course, what that does to the rest of the world is create this kind of client states where we've got the EU trying to say, alright, well we'll export a regulatory model if we're not going to be treated as an equivalent player here. And then of course, so many other countries who are just seen as spaces to extract low paid labor or the mineralogical layer. So that is the big problem that I see is that that dynamic has only intensified in recent years.A.I. and MedicineEric Topol (34:54):Yeah, I know it's really another level of concern and it seems like it could be pretty volatile if for example, if the US China relations takes another dive and the tensions there go to levels that haven't been seen so far. I guess the other thing, there's so much that is I think controversial, unsettled in this space and so much excitement. I mean, just yesterday for example, was the first AI randomized trial to show that you could save lives. When I wrote that up, it was about the four other studies that showed how it wasn't working. Different studies of course, but there's so much excitement at the same time, there's deep concerns. You've been a master at articulating these deep concerns. What have we missed in our discussion today, I mean we've covered a lot of ground, but what do you see are other things that should be mentioned?Kate Crawford (36:04):Well, one of the things that I've loved in terms of following your work, Eric, is that you very carefully walk that line between allowing the excitement when we see really wonderful studies come out that say, look, there's great potential here, but also articulating concerns where you see them. 
So I think I'd love to hear, I mean take this opportunity to ask you a question and say what's exciting you about the way that this particularly new generation AI is being used in the medical context and what are the biggest concerns you have there?Eric Topol (36:35):Yeah, and it's interesting because the biggest advance so far in research and medicine was the study yesterday using deep learning without any transformer large language model effort. And that's where that multiplicative of opportunity or potential is still very iffy, it's wobbly. I mean, it needs much more refinement than where we are right now. It's exciting because it is multimodal and it brings in the ability to bring all the layers of a human being to understand our uniqueness and then do much better in terms of, I got a piece coming out soon in Science about medical forecasting and how we could really get to prevention of conditions that people are at high risk. I mean like for example today the US preventive task force said that all women age 40 should have mammograms, 40.Kate Crawford (37:30):I saw that.Eric Topol (37:30):Yeah, and this is just crazy Looney Tunes because here we have the potential to know pretty precisely who are those 12%, only 12% of women who would ever get breast cancer in their lifetime, and why should we put the other 88% through all this no less the fact that there are some women even younger than age 40 that have significantly high risk that are not picked up. But I do think eventually when we get these large language models to actualize their potential, we'll do really great forecasting and we'll be able to not just prevent or forestall cancer, Alzheimer's and so many things. It's quite exciting, but it's the earliest, we're not even at first base yet, but I think I can see our way to get there eventually. And it's interesting because the discussion I had previously with Geoffrey Hinton, and I wonder if you think this as well, that he sees the health medical space as the only really safe space. He thinks most everything else has got more concerns about the downsides is the sweet spot as he called it. But I know that's not particularly an area that you are into, but I wonder if you share that the excitement about your health could be improved in the future with AI.Kate Crawford (38:52):Well, I think it's a space of enormous potential, but again, enormous risk for the same reasons that we discussed earlier, which is we have to look at the training data and where it's coming from. Do we have truly representative sources of data? And this of course has been a consistent problem certainly for the last hundred years and longer. When we look at who are the medical patients whose data is being collected, are we seeing skews? And that has created all sorts of problems, particularly in the last 50 years in terms of misdiagnosing women, people of color, missing and not taking seriously the health complaints of people who are already seen as marginalized populations, thus then further skewing the data that is then used to train AI models. So this is something that we have to take very seriously, and I had the great fortune of being invited by Francis Collins to work with the NIH on their AI advisory board.(39:50):They produced a board to look just at these questions around how can this moment in AI be harnessed in such a way that we can think about the data layer, think about the quality of data and how we train models. 
And it was a really fascinating sort of year long discussion because in the room we had people who were just technologists who just wanted as much data as possible and just give us all that data and then we'll do something, but we'll figure it out later. Then there were people who had been part of the Human Genome Project and had worked with Francis on questions around the legal and ethical and social questions, which he had really centered in that project very early on. And they said, no, we have to learn these lessons. We have to learn that data comes from somewhere. It's not divorced of context, and we have to think about who's being represented there and also who's not being represented there because that will then be intensified in any model that we train on that data.Humans and Automation Bias(40:48):And then also thinking about what would happen in terms of if those models are only held by a few companies who can profit from them and not more publicly and widely shared. These were the sorts of conversations that I think at the absolute forefront in terms of how we're going to navigate this moment. But if we get that right, if we center those questions, then I think we have far greater potential here than we might imagine. But I'm also really cognizant of the fact that even if you have a perfect AI model, you are always going to have imperfect people applying it. And I'm sure you saw that same study that came out in JAMA back in December last year, which was looking at how AI bias, even slightly biased models can worsen human medical diagnosis. I don't know if you saw this study, but I thought it was really extraordinary.(41:38):It was sort of 450 doctors and physician's assistants and they were really being shown a handful of cases of patients with acute respiratory failure and they really needed come up with some sort of diagnosis and they were getting suggestions from an AI model. One model was trained very carefully with highly accurate data, and the other was a fairly shoddy, shall we say, AI model with quite biased data. And what was interesting is that the clinicians when they were working with very well-trained AI model, we're actually producing a better diagnosis across the board in terms of the cases they were looking at. I think their accuracy went up by almost 4.5 percentage points, but when they were working with the less accurate model, their capacity actually dropped well below their usual diagnostic baseline, something like almost 12 percentage points below their usual diagnostic quality. And so, this really makes me think of the kind of core problem that's been really studied for 40 years by social scientists, which is called automation bias, which is when even an expert, a technical system which is giving a recommendation, our tendency is to believe it and to discard our own knowledge, our own predictions, our own sense.(42:58):And it's been tested with fighter pilots, it's been tested with doctors, it's been tested with judges, and it's the same phenomenon across the board. So one of the things that we're going to need to do collectively, but particularly in the space of medicine and healthcare, is retaining that skepticism, retaining that ability to ask questions of where did this recommendation come from with this AI system and should I trust it? What was it trained on? Where did the data come from? What might those gaps be? 
Because we're going to need that skepticism if we're going to get through particularly this, as you say, this sort of early stage one period where in many cases these models just haven't had a lot of testing yet and people are going to tend to believe them out of the box.The Large Language Model Copyright IssueEric Topol (43:45):No, it's so true. And one of the key points is that almost every study that's been published in large language models in medicine are contrived. They're using patient actors or they're using case studies, but they're not in the real world. And that's where you have to really learn, as you know, that's a much more complex and messy world than the in silico world of course. Now, before wrapping up, one of the things that's controversial we didn't yet hit is the fact that in order for these base models to get trained, they basically ingest all human content. So they've ingested everything you've ever written, your books, your articles, my books, my articles, and you have the likes of the New York Times suing OpenAI, and soon it's going to run out of human content and just use synthetic content, I guess. But what's your sense about this? Do you feel that that's trespassing or is this another example of exploiting content and people, or is this really what has to be done in order to really make all this work?Kate Crawford (44:59):Well, isn't it a fascinating moment to see this mass grabbing of data, everything that is possibly extractable. I actually just recently published an article in Grey Room with the legal scholar, Jason Schultz, looking at how this is producing a crisis in copyright law because in many ways, copyright law just cannot contend with generative AI in particular because all of the ways in which copyright law and intellectual property more broadly has been understood, has been premised around human ideas of providing an incentive and thus a limited time monopoly based on really inspiring people to create more things. Well, this doesn't apply to algorithms, they don't respond to incentives in this way. The fact that, again, it's a longstanding tradition in copyright that we do not give copyright to non-human authors. So you might remember that there was a very famous monkey selfie case where a monkey had actually stepped on a camera and it had triggered a photograph of the monkey, and could this actually be a copyright image that could be given to the monkey?(46:12):Absolutely not, is what the court's decided. And the same has now happened, of course, for all generative AI systems. So right now, everything that you produce be that in GPT or in Midjourney or in Stable Diffusion, you name it, that does not have copyright protections. So we're in the biggest experiment of production after copyright in world history, and I don't think it's going to last very long. To be clear, I think we're going to start to see some real shifts, I think really in the next 6 to 12 months. But it has been this moment of seeing this gigantic gap in what our legal structures can do that they just haven't been able to contend with this moment. The same thing is true, I think, of ingestion, of this capturing of human content without consent. 
Clearly, many artists, many writers, many publishing houses like the New York Times are very concerned about this, but the difficulty that they're presented with is this idea of fair use: that you can collect large amounts of data if you are doing something with it that is sufficiently transformative.

(47:17): I'm really interested in the question of whether or not this does constitute sufficiently transformative use. Certainly if you looked at the way that large language models worked a year ago, you could really prompt them into sharing their training data, spitting out entire New York Times articles or entire book chapters. That is no longer the case; all of the major companies building these systems have really safeguarded against that now. But nonetheless, you have this question of whether we should be moving towards a system that is based on licensing, where we're really asking people if we can use their data and paying them a license fee. You can see how that could absolutely work and would address a lot of these concerns, but ultimately it will come back to this question of fair use. And I think with the current legal structures that we have and the current case law, that is unlikely to be seen as something that's actionable.

(48:10): But I expect what we'll see is what really happened in the early 20th century around the player piano. I'm sure you remember this extraordinary technology. The player piano was one of the first systems that automated the playing of music: you'd have a piano with a perforated paper roll that, almost like code, had a song or a piece of music imprinted on it, and it could be played in the public square or in a bar or in a saloon without having to pay a single artist. And artists were terrified; they were furious. There were public hearings, there were congressional hearings, and even a Supreme Court case that decided that this was not a copyright infringement, that this was a sufficiently transformative use of a piece of music that it could stand. And in the end, it was actually Congress that acted.

(49:01): And from that we got the 1909 Copyright Act, and from that we got this idea of royalties. And that has become the basis of the music industry itself for a very long time. And now we're facing another moment where I think we have a legislative challenge: how would you actually create a different paradigm for AI that would recognize a new licensing system, one that would reward artists, writers, musicians, all of the people whose work has been ingested into training data for AI, so that they are recognized and in some way recompensed for this massive at-scale extraction?

Eric Topol (49:48): Wow, this has been an exhilarating conversation, Kate. I've learned so much from you over the years, but especially even just in our chance to talk today. You articulate these problems so well, and I know you're working on solutions to almost everything, and you're so young, you could probably make a difference in the decades ahead. This is great, so I want to thank you not just for the chance to visit today, but for all the work that you and your colleagues have been doing to make AI better, to make it fulfill the great promise that it has. It is so extraordinary, and hopefully it'll deliver on some of the areas where we have big unmet needs, so thanks to you. This has really been fun.

Kate Crawford (50:35): This has been wonderful. And likewise, Eric, your work has just been a fantastic influence, and I've been delighted to get to know you over the years. Let's see what happens.
It's going to be a wild ride from now to who knows when.

Eric Topol (50:48): No question, but you'll keep us straight, I know that. Thank you so much.

Kate Crawford (50:52): Thanks so much, Eric.

*******************************

Your support in subscribing to Ground Truths, and sharing it with your network of friends and colleagues, is much appreciated.

The Ground Truths newsletters and podcasts are all free, open-access, without ads.

Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that; they greatly helped fund our summer internship programs for 2023 and 2024.

Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research.

Note: you can select preferences to receive emails about newsletters, podcasts, or all of the above; I don't want to bother you with an email for content that you're not interested in.

Comments for this post are welcome from all subscribers. Get full access to Ground Truths at erictopol.substack.com/subscribe
Join comedians Rachel Fairburn and Kiri Pritchard-McLean as they explore a shared passion: serial killers. Each episode the pair talk all things murder and macabre and have a right laugh doing it. Episode 107 is about one of Britain's worst yet least-known serial killers: Bruce George Peter Lee. Born Peter Dinsdale in Manchester in 1960, Lee was a serial arsonist responsible for approximately 26 deaths. There are also diversions on personal trainers, hot tubs, and Rachel's undying love for the lasagne-loving, Monday-hating, ginger feline Garfield. This episode is not sponsored by HelloFresh or Gousto...yet.
Peter Lee, President of Microsoft Research, explains how AI is transforming every aspect of the medical industry, from research to diagnosis to how doctors communicate information. He also helps listeners grasp how what he's seeing in medicine is happening across all industries: ultimately creating systems that are less siloed and less confusing, more thorough, more efficient, more secure—and even more empathetic.
WorkLab
Subscribe to the WorkLab newsletter
Discover and follow other Microsoft podcasts at news.microsoft.com/podcasts.
Welcome back to StartUp Health NOW! We think it's fair to say that when people think of StartUp Health, they think about entrepreneurs and founders. Over the last 12 or 13 years we've supported more than 500 health tech startups and nearly 1000 founders, many of whom have been featured on this show. Perhaps less well known is what happens behind the scenes at StartUp Health. In this episode we pull back that curtain a little bit, particularly as it pertains to our Health Moonshot Impact Board. We've got this advisory team of about 17 amazing individuals across multiple disciplines. These are people like Dr. Toby Cosgrove, former head of the Cleveland Clinic; Chuck Henderson, the CEO of the American Diabetes Association; and Sue Siegel, former head of GE Ventures – just to name three. You can see the whole Health Moonshot Impact Board here on our website. Recently, we brought together our Health Moonshot Impact Board in real life at the Lake Nona Impact Forum in Florida. Not only did the team get to learn from luminaries like Jeff Bezos, David Feinberg, and Peter Lee, but they got to go deeper on ideas with one another. In the spirit of encouraging a more radically collaborative impact board, we decided to flip the script a bit and have members of our board interview one another for this podcast. The hope was that this would lead to some unexpected lines of questioning and some uniquely candid moments. The first conversation in this series is between Esther Dyson, legendary angel investor and founder of Wellville, and Roger Jansen, PhD, the Chief Innovation & Digital Health Officer at Michigan State University Health Care. The conversation was just as wide-ranging and unstructured as we hoped it would be, and it touched on some incredibly powerful topics. We hope you enjoy. Innovating in Alzheimer's disease? Learn how you can join our new Alzheimer's Moonshot. Passionate about Type 1 diabetes? Learn how you can get one of the last spots in our T1D Moonshot. Want more content like this? Sign up for StartUp Health Insider™ to get funding insights, news, and special updates delivered to your inbox. Innovators: Health Transformer University fuels your health moonshot Funders: Become a Health Moonshot Champion
Rev. Peter Lee, Guest Preacher
CritRPG - A Podcast about LitRPG, Progression Fantasy, and their authors
Hey hey everyone! This week we had the incredible Peter Lee on the show for an interview about his book series "Dear Spellbook!" On the show, we talk about faking memoirs, handling feedback, choosing a publisher, and the most dreaded topic for an RR author: Rewriting!! Dun Dun Dun!!! Oh, and also angry Greek gods. If you want to vote on the next round of questions, please consider joining the critRPG Patreon.
Scott talks with Peter Lee about the deployment of U.S. special forces to the island of Kinmen. Although it's controlled by Taiwan, the island is right on China's coast. Lee recounts the history of the military presence on the island and explains Taiwan's reasons for stationing troops there. He and Scott also talk about the likelihood of a Chinese invasion of all of Taiwan in the near future. Discussed on the show: Peter Lee on Twitter @chinahand “US Military Advisors Deployed to Taiwan-Controlled Islands on China's Coast” (Antiwar.com) Peter Lee writes on East and South Asian affairs and their intersection with US global policy. Follow his work on Patreon and on Twitter @chinahand. This episode of the Scott Horton Show is sponsored by: Moon Does Artisan Coffee; Roberts and Roberts Brokerage Incorporated; Tom Woods' Liberty Classroom; Libertas Bella; ExpandDesigns.com/Scott. Get Scott's interviews before anyone else! Subscribe to the Substack. Shop Libertarian Institute merch or donate to the show through Patreon, PayPal or Bitcoin: 1DZBZNJrxUhQhEzgDh7k8JXHXRjY Learn more about your ad choices. Visit megaphone.fm/adchoices
Listen to our latest podcast with Peter Lee, Corporate Vice President of Research and Incubations at Microsoft, and Brandon Pletsch, Practice Leader of Scientific Visualization at Real Chemistry, to learn about the applications and implications of generative AI, specifically GPT-4, in healthcare and other fields. They explore public perceptions of AI, its potential benefits and risks, the importance of ethical considerations and regulatory guidelines, and the concept of equitable access to AI in healthcare. Check out the episode to gain insight into how AI can revolutionize fields outside of human language and thought.
What if every doctor and every patient had an artificial intelligence copilot to help with medical decisions? Peter Lee joins the show to offer his expertise as co-author of The AI Revolution in Medicine: GPT-4 and Beyond and Corporate Vice President of Research and Incubations at Microsoft. Peter is working at the cutting edge of how to safely incorporate AI into medicine, training healthcare providers on the strengths and limitations of technology like GPT-4 in a medical setting. Reid, Aria, and Peter discuss how to regulate AI while encouraging innovation. Plus, Peter provides a fascinating live demonstration of GPT-4's ability to listen to a patient's symptoms and provide a second opinion, illustrating AI's potential to bridge healthcare gaps and revolutionize care. Read the transcript of this episode here. Read Peter's book here. For more info on the podcast and transcripts of all of the episodes, visit www.possible.fm/podcast. Topics: 03:47 - Hellos and intros 04:16 - Mapping Peter's career 07:00 - How patients can use AI in healthcare 08:54 - Peter's personal experience using AI in healthcare 11:35 - Demo of GPT-4 as a physician's copilot 15:30 - How best to educate about GPT-4's capabilities 19:33 - Demo of GPT-4's “critical thinking” skills 26:57 - How to safely incentivize innovation 30:37 - The importance of building effective technology around AI 31:47 - Exploring “hallucinations” in AI, and other limitations 35:28 - How to best use partnerships to foster innovation 38:21 - Example of new technology using AI in healthcare 40:37 - Rapidfire questions The award-winning Possible podcast is back with a new season that sketches out the brightest version of the future—and what it will take to get there. Most of all, it asks: what if, in the future, everything breaks humanity's way? Tune in for grounded and speculative takes on how technology—and, in particular, AI—is inspiring change and transforming the future. This season, hosts Reid Hoffman and Aria Finger are speaking with a new set of ambitious builders and deep thinkers about everything from art to geopolitics, and from healthcare to education. These conversations also showcase another kind of guest: AI. Whether it's Inflection's Pi, OpenAI's ChatGPT, or other AI tools, each episode will use AI to enhance and advance our discussion. Possible is produced by Wonder Media Network and hosted by Reid Hoffman and Aria Finger. Our showrunner is Shaun Young. Possible is produced by Edie Allard, Sara Schleede, and Paloma Moreno Jimenez. Jenny Kaplan is our executive producer and editor. Special thanks to Katie Sanders, Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, Katie Zoller, Katy Halliday, Rhoades Clark, and Little Monster Media Company.
The path of a MedTech company runs through its own digital transformation. Fabrício Campolina, president of Johnson & Johnson MedTech Brasil, is an enthusiast of digital transformation, startups, and technological advances in healthcare. He knows that digital has to be in the products, in the company, and in the ecosystem. With that, he builds a web of digital resources that benefit patients and support continuity of care at home, amplifying the success of healthcare professionals' work. Episode links:
Fabricio Campolina on LinkedIn
The Medtech Talks podcast, from J&J MedTech
The book "Being You: A New Science of Consciousness," by Anil Seth
The book "A Revolução da Inteligência Artificial na Medicina: GPT-4 e Além" (the Brazilian edition of The AI Revolution in Medicine), by Peter Lee, Carey Goldberg, and Isaac Kohane
The reporting series "O filme das nossas vidas," available on GloboPlay
The film "Tempo de Despertar" (Awakenings), directed by Penny Marshall, based on the book by Oliver Sacks
_____
CONTACT US
Email: news@theshift.info
_____
GET THE SHIFT NEWSLETTERS
www.theshift.info
Noise History class is underway with a full-to-the-brim episode focusing on the Skin Crime s/t LP on Hospital Productions from 2003. We had a conversation looking back on this time with Pat, and appropriately (as you'll hear), Mark jumps on out of nowhere near the end. We talked with Dominick leading up to the episode and insert his recollections and thoughts on releasing the very first Hospital LP throughout. Part "in conversation," part "audio documentary," all SKIN CRIME. For the ExtraNOISEXTRA segment on the patreon, we hear a lot more background from Dominick including Peter Lee's involvement, the idea behind the center labels, and a story of laying out the LP with his teacher. This segment immediately follows the regular episode on the patreon feed.
As the podcast turns 3 years old the boys come together to ask if the original mission statement (or anything whatsoever!) has been achieved. We listen to Justin's opening words from the very first podcast and realise that we have achieved zero personal growth in the intervening period, even if Justin no longer sounds like Mickey Mouse!! We discuss the ever-decreasing size of modern shin pads and the kick-ability of Jack Grealish, very small hard hats, Dawson and the Dullards, mayonnaise on Chinese food, and why we don't think Blue Cheese Brett should use power tools. Pete spends £20 million on an injured striker whilst Carl suggests Andy Carroll!!! Carl aspires to be thrown out of Euro Disney – prompting Pete to tell his wife's favourite joke. Peanut butter cost and texture are critiqued, orange juice with or without bits nearly starts a row, Pete bows to peer pressure, Carl confesses to unfairly incriminating his dog before Jason tries to make us all dip toast in tea – Carl confesses to strawberry milkshake fries and Pete remembers he has invented something similarly disgusting. Justin confesses to multiple visits to a “Garlic Farm” on the Isle of Wight – Pete and Carl interrogate this as Justin has been known to fabricate these things (see opera singers for details). We go through people's various meetings with Watford players: Carl uses the word photosynthesis unnecessarily to Gifton Noel-Williams, and Pete remembers the day Steve Harrison, former left back, coach and manager, accused him of vandalism. Carl then remembers that he too was accused of crime by a footballer with a dodgy Russian accent! Tales involving Micah Hyde, Nicky Wright, Lloydinho, Marlon King and others included. Jason is back again racing to a urinal with not quite a Watford legend, before tequila-drinking tales with a genuine Watford legend! We get on to instruments not mastered, via Luther, all to challenge Carl to play Z-Cars on the recorder, before revelations of cheese sandwiches and Elton John!! A great question from Peter Lee sees the guys debate where genuine criticism ends and hysteria and nonsense starts – the guys answer in a way that can only be described as hysterical nonsense. Thank you so much to everyone who has contributed in any way through the last 3 years – we are so grateful. COYH! This Podcast has been created and uploaded by Do Not Scratch Your Eyes. The views in this Podcast are not necessarily the views of talkSPORT. Huge thanks to all our Patreons: Chris Giannone, RichWFC2, Steve Holliman, Ian Ball, Paul Fiander Turner, Sean Gourley, Lee Anselmo, John Parslow, Mark von Herkomer, Neil Silverstein, Steve Brown, Dave Lavender, Kasey Brown, Nipper Harrison, Boyd Mayover, Colin Payne, Paul Riley, Gary Wood, Karl Campion, Kevin Kremen, The Big Le – Bofski, Greg Theaker, Malcolm Williams, Bryan Edwards, Peter Ryan, Jack Foster, Jason Rose, Michael Abrahams, Ian Bacon, Ken Green, Nick Nieuwland, Colin Smith, Ant!!!!! & PDF. Hosted on Acast. See acast.com/privacy for more information.
Game designer Peter Lee joins the Lorecast to discuss board game development, how to use playtest feedback, and the history of his career. We also take a look at the new D&D Beyond Maps and dissect some D&D sales data! Back Aberration here: https://ghostfiregaming.com/GGYT_GHAB_2023_9_104 Find Teos' blog: https://alphastream.org/index.php/2023/09/25/how-dd-sells-and-what-it-means-for-the-hobby/ Email your questions to podcast@ghostfiregaming.com Ben: @TheBenByrne Dael: @DailyDael Shawn: @shawnmerwin Peter: @minipete Editor: @ZsDante Topics: 00:00 - Intro 01:24 - Favorite board games 05:59 - Peter Lee's game design career 15:28 - Aberration from concept to Gamefound 20:43 - The Great Beast of Grim Hollow 29:55 - Aberration expansions 32:54 - D&D Beyond: Maps 41:51 - Planescape 43:27 - Dissecting D&D sales data 50:56 - How to use playtest feedback
In this episode of That'll Preach we take an intriguing journey into the heart of biblical scholarship to scrutinize the reliability of the Old Testament. Join us as we engage in a captivating conversation with distinguished Old Testament scholar, Professor Peter Lee, from RTS (Reformed Theological Seminary). We delve deep into the questions that have sparked both curiosity and debate for centuries: "Can We Trust the Old Testament?" Professor Lee brings his extensive expertise to the forefront, shedding light on the historical context, authorship, and the concrete reliability of the Old Testament scriptures. Discover the answers to pressing questions, including: What is the role of oral tradition in the transmission of the Old Testament? Why are the Dead Sea Scrolls so significant? How do we understand authorship in the Old Testament? How do we gain confidence in the historicity of the Old Testament? Were details embellished? Whether you're a devout believer seeking to deepen your faith or a curious skeptic eager to explore the historical credibility of these sacred writings, this episode offers a balanced and well-informed examination that underscores the unwavering reliability of the Old Testament.
How can the first and second generations have unity in the body of Christ? How should we address the challenges faced by immigrant churches and help them have a healthy partnership with the next generations? In this important interview, SOLA editorial board member Daniel K. Eng interviews Dr. Peter Cha, a faculty member at Trinity Evangelical Divinity School and director of the Thriving Immigrant Congregations Initiative (TICI). The TICI aims to help immigrant congregations reshape their congregational culture and ministries so that they can help first-generation and second-generation leaders build strong partnerships. During their conversation, Daniel K. Eng and Dr. Peter Cha touch upon several crucial topics: 1. The genesis and purpose of the TICI 2. The changing dynamics of second-generation Christians returning to their ethnic congregations 3. A case study church, Open Door Presbyterian Church in Northern Virginia, and its 25-year journey towards intergenerational reconciliation 4. The diverse representation within TICI cohorts, encompassing churches of various ethnic backgrounds and traditions 5. The application process for churches interested in joining the TICI program, with a deadline in September 6. Dr. Peter Cha's hopes and prayers for TICI, which revolve around strengthening generational reconciliation in immigrant churches, ultimately enabling them to be vibrant witnesses both locally and globally. This interview provides valuable insights into the Thriving Immigrant Congregations Initiative and the critical work being done to bridge generational gaps within immigrant congregations, ultimately contributing to the growth and vitality of these churches. (Note: If you are interested in the TICI program and its application, please visit their website) Watch the video on YouTube: https://youtu.be/BeQZiNrFk-o The links/organizations that he referred to in the interview: Thriving Immigrant Congregations Initiative: https://www.hiebertcenter.org/tici Open Door Presbyterian Church: https://opendoorpc.org/ Peter Lee: https://www.tiu.edu/divinity/faculty/peter-t-lee/ His info: Peter Cha: pcha@tiu.edu Dr. Cha's faculty page: https://www.tiu.edu/divinity/faculty/peter-t-cha/
Make a one-time or recurring donation on our Donor Box profile here. Join us in the mission of introducing Reformed Theology across the world! Interested in further study of the Bible? Join us at Logos Bible Software. Are you interested in a rigorous and Reformed seminary education? Call Westminster Seminary California at 888-480-8474 or visit www.wscal.edu! Please help support the show on our Patreon Page! SEASON 6 EPISODE 13 Join Nick and Peter of the Guilt Grace Gratitude Podcast as they continue Season 6, Introduction to Reformed Theology. This week they discuss Sanctification with Dr. Peter Lee! Peter Lee (PhD., The Catholic University of America) is a church planter, pastor, and expert in Ancient Near Eastern languages. Peter teaches Hebrew I, Hebrew II, Hebrew Exegesis, Genesis to Joshua and Judges to Esther at Reformed Theological Seminary. He graduated from Westminster Theological Seminary, California with an MDiv where he studied under Dr. Mark Futato, Dr. John Frame and Dr. Meredith Kline. He received the PhD in Semitic and Egyptian Languages and Literature from The Catholic University of America. Known for his easy-going, pastoral approach to teaching, Peter makes learning Hebrew and the Old Testament a joy. He resides in Columbia, Maryland with his wife and six children. This season is dedicated to Westminster Seminary California. Interested in applying? Go here. Have Feedback or Questions? Email us at: guiltgracepod@gmail.com Find us on Instagram: @guiltgracepod Follow us on Twitter: @guiltgracepod Find us on YouTube: Guilt Grace Gratitude Podcast Please rate and subscribe to the podcast on whatever platform you use! Looking for a Reformed Church? North American Presbyterian & Reformed Churches --- Support this podcast: https://podcasters.spotify.com/pod/show/gggpodcast/support
Harry's guest this week is Dr. Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School and co-author of the new book The AI Revolution in Medicine: GPT-4 and Beyond. Large language models such as GPT-4 are obviously starting to change industries like search, advertising, and customer service—but Dr. Kohane says they're also quickly becoming indispensable reference tools and office helpmates for doctors. It's easy to see why, since GPT-4 and its ilk can offer high-quality medical insights and can also quickly auto-generate text such as prior authorizations, lowering doctors' daily paperwork burden. But it's all a little scary, since there are no real guidelines yet for how large language models should be deployed in medical settings, how to guard against the new kinds of errors that AI can introduce, or how to use the technology without compromising patient privacy. How to manage those challenges, and how to use the latest generation of AI tools to make healthcare delivery more efficient without endangering patients along the way, are among the topics covered in Dr. Kohane's book, which was co-written with Microsoft vice president Peter Lee and journalist Carey Goldberg. For a full transcript of this episode, please visit our episode page at http://www.glorikian.com/podcast Please rate and review The Harry Glorikian Show on Apple Podcasts! Here's how to do that from an iPhone, iPad, or iPod touch: 1. Open the Podcasts app on your iPhone, iPad, or Mac. 2. Navigate to The Harry Glorikian Show podcast. You can find it by searching for it or selecting it from your library. Just note that you'll have to go to the series page which shows all the episodes, not just the page for a single episode. 3. Scroll down to find the subhead titled "Ratings & Reviews." 4. Under one of the highlighted reviews, select "Write a Review." 5. Next, select a star rating at the top — you have the option of choosing between one and five stars. 6. Using the text box at the top, write a title for your review. Then, in the lower text box, write your review. Your review can be up to 300 words long. 7. Once you've finished, select "Send" or "Save" in the top-right corner. 8. If you've never left a podcast review before, enter a nickname. Your nickname will be displayed next to any reviews you leave from here on out. 9. After selecting a nickname, tap OK. Your review may not be immediately visible. That's it! Thanks so much.
The Food and Drug Administration has approved a new kind of immunization to protect babies from RSV, or respiratory syncytial virus. Dr. Celine Gounder explains who should use the drug, nirsevimab, and other ways to protect against infection.CBS News' chief medical correspondent Dr. Jon LaPook sat down with Peter Lee, Microsoft's vice president of research and innovation, to discuss artificial intelligence and medicine.Actors are taking to the picket lines with the SAG-AFTRA union, striking for better pay and working conditions. Actor Sean Gunn, known for his role in the hit series "Gilmore Girls," exclusively joins CBS Mornings to talk about the strike and his fight for better residual pay.Pulitzer Prize-winning author Colson Whitehead joins "CBS Mornings" to discuss his latest novel "Crook Manifesto."See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Welcome to the latest episode of Cult of the Old! Each episode your hosts Iain McAllister, Matt Thrower, and Nate Owens are going to turn back the clock to look at games that are at least 10 years old. They'll give their own critical impressions, tell you about the history of the game and its impact, and what relevance these games still have in the modern hobby landscape. Dungeons and Dragons has always been the dominant game in the tabletop RPG space. From its early days to the all-encompassing 5th edition, there have always been attempts to leverage that name in other ways. T-shirts, mugs, TV shows, and of course board games. Lords of Waterdeep from designers Peter Lee and Rodney Thompson is a game that took many by surprise when it came out. Putting the worker placement mechanism into the world of DnD and casting you as one of the Lords of the city, it remains on shelves to this day, and we delve into the reasons why. Show Links Below are links to the games and anything else that the team mention during the course of the cast. We will be linking to BGG for games, designers, artists, and publishers as that is a good place to start for raw information about the game in question. Games Mentioned Agricola Caylus Kingsburg Pillars of the Earth Alien Frontiers Dungeons and Dragons 4th Edition Old School Essentials Castle Ravenloft Wrath of Ashardalon Conquest of Nerath Nexus Ops Argent Consortium Lords of Vegas Ra Risk Legacy Assault of the Giants Tyrants of the Underdark Dune: Imperium Everdell Anachrony Other Mentions - Folded Space Inserts Team Links Websites you can find the cast on There Will Be Games The Cult of the Old Iain The Giant Brain Twitter Matt Twitter Nate Twitter Support us on Ko-Fi https://ko-fi.com/cultoftheold Join us on our Discord https://discord.gg/Ye6Bem5Q6x Email us at cultoftheolduk@gmail.com
Powerful new large-scale AI models like GPT-4 are showing dramatic improvements in reasoning, problem-solving, and language capabilities. This marks a phase change for artificial intelligence—and a signal of accelerating progress to come.In this Microsoft Research Podcast series, AI scientist and engineer Ashley Llorens hosts conversations with his collaborators and colleagues about what these new models—and the models that will come next—mean for our approach to creating, understanding, and deploying AI, its applications in areas such as health care and education, and its potential to benefit humanity.This episode features Senior Principal Researcher Emre Kiciman and Principal Researcher Amit Sharma, whose paper “Causal Reasoning and Large Language Models: Opening a New Frontier for Causality” examines the causal capabilities of large language models (LLMs) and their implications. Kiciman and Sharma break down the study of cause and effect; recount their respective ongoing journeys with GPT-3.5 and GPT-4—from their preconceptions to where they are now—and share their views of a future in which LLMs help bring together different modes of reasoning in the practice of causal inference and make causal methods easier to adopt.Learn moreCausal Reasoning and Large Language Models: Opening a New Frontier for Causality | Publication, April 2023The AI Revolution in Medicine: GPT-4 and Beyond | Book by Peter Lee, April 2023 AI and Microsoft Research | Learn more about the breadth of AI research at Microsoft
GUEST OVERVIEW: Peter Lee has been engaged with China as a businessman, analyst, and writer since 1979. He has been a featured correspondent for Asia Times and Newsbud and has written for CounterPunch magazine and the South China Morning Post. His motto is “the devil is in the details” and for the last 20 years he has been digging deep beneath the spin and narrative curation to reveal the good, bad, and ugly of America's agenda for Asia. From 2003 until 2019 Peter wrote the widely cited and circulated blog China Matters. Now he is the creator of Peter Lee's China Threat Report on patreon, providing real time fact checking and analysis on the New Cold War with China; and he still finds time to kill it on twitter under the handle @chinahand.
Link to the book: The AI Revolution in Medicine
Link to my review of the book
Link to the Sparks of Artificial General Intelligence preprint we discussed
Link to Peter's paper on GPT-4 in NEJM
Transcript (with a few highlights in bold of many parts that could be bolded!)

Eric Topol (00:00): Hello, I'm Eric Topol, and I'm really delighted to have with me Peter Lee, who's the director of Microsoft Research and who is the author, along with a couple of colleagues, of an incredible book called The AI Revolution in Medicine: GPT-4 and Beyond. Welcome, Peter.

Peter Lee (00:20): Hello, Eric. And thanks so much for having me on. This is a real honor to be here.

Eric Topol (00:24): Well, I think you are in the enviable position of having spent now more than seven months looking at GPT-4's capabilities, particularly in the health and medicine space. And it was great that you recorded that in a book for everyone else to learn from, because you had such a nice head start. I guess what I wanted to start with is... I mean, it's a phenomenal book. [holding the book up] I can't resist this prop.

Peter Lee (00:52):

Eric Topol (00:53): When I got it, I stayed up most of the night because I couldn't put it down. It is so engrossing. But when you first got your hands on this and started testing it, what were your initial thoughts?

Peter Lee (01:09): Yeah. Let me first start by saying thank you for the nice words about the book, but really, so much of the credit goes to the co-authors, Carey Goldberg and Zak Kohane. Carey in particular took my overly academic writing (I suspect you have the same kind of writing style) as well as Zak's pretty academic writing and helped turn it into something that would be approachable to non-computer scientists and, as she put it, as much as possible a page-turner. So I'm glad that her work helped make the book an easy read.

Eric Topol (01:54): I want to just say you're very humble, because the first three chapters, which you wrote yourself, were clearly the best ones for me. I don't mean to interrupt, but it is an exceptional book, really.

Peter Lee (02:06): Oh, thank you very much. It means a lot hearing that from you. You know, my own view is that the best writing and the best analyses and the best ideas for applications, or not, of this type of technology in medicine are yet to come. But you're right that I did benefit from this seven-month head start. And so, you know, I think the timing is very good, but I'm hoping that much better books and much better writings and ideas will come. You know, when you start with something like this, I suspect, Eric, you had the same thing: you start off with a lot of skepticism. In fact, I now make light of this; I talk about the nine stages of grief that you have to go through.

(02:55): I was extremely skeptical. Of course, I was very aware of GPT-2, GPT-3, and GPT-3.5. I understand what goes into those models really deeply, and so some of the claims, when I was exposed to the early development of GPT-4, just seemed outlandish and impossible. So I was skeptical, somewhat quietly skeptical. We've all been around the block before and, you know, we've heard lots of AI claims. And I was in that state for maybe more than two weeks. And then, in those two weeks, I started to become annoyed, because I saw some of my colleagues falling into what I felt was the trap of getting fooled by this technology.
And then that turned into frustration and fear; I actually got angry. And there's one colleague, who I won't name, to whom I've since had to apologize, because then I moved into the phase of amazement, because you start to encounter things that you can't explain that this thing seems to be doing. That turns into joy.

(04:04): I remember the exhilaration of thinking, wow, I did not think I would live long enough to see a technology like this. And then intensity: there was a period of about three days when I didn't sleep, I was just experimenting. Then you run into some limits and some areas of puzzlement, and that's a phase of chagrin. And then there are real, dangerous missteps and mistakes that this system can make, which you realize might end up really hurting people. And then, you know, ChatGPT gets released and, to our surprise, it catches fire with people. And we learn directly through communications that some clinicians are using it in clinical settings, and that heightens the concern. I can't say I'm in the ninth stage, enlightenment, yet, but you do become very committed to wanting to help the medical community get up to speed and to be in a position to take ownership of the question of whether, when, and how a technology like this should be used. And that was really the motivating force behind the book; it was really that journey. And that journey has also given me patience with everyone else in the world, because I realize everyone else has to go through those same nine stages.

Eric Topol (05:35): Well, those stages that you went through are actually a great way to articulate this pluripotent technology. You touched on that: ChatGPT was released November 30th and within 90 days had 100 million distinct users, which is beyond anything in history. And then of course this transcended that quite a bit, as you showed in the book, coming out, you know, just a very short time later in March. And I think a lot of people want access to GPT-4 because they know that there is this jump in its capabilities. But the book starts off, after Sam Altman's foreword, which was also nice because he said, as you pointed out, there's a lot more to come in the large language model space.

(06:30): But the grabber for me was this futuristic second-year medical resident who's using an app on the phone to get the latest GPT to help manage her patient, and then all the other things that it's doing to check on her patients and handle the tasks that clinicians don't really want to do, that they need help with. And that just grabs you as to the futuristic potential, which may not be so far away. And I think then you get into the nuts and bolts. But one of the things that I think is a misnomer, which you really nailed, is how you say it isn't just that it generates, but it really is great at editing and analyzing. And here it's called generative AI. Can you expound on that? And on its unbelievable conversationalist capability?

Peter Lee (07:23): Yeah. You know, the term generative AI, I tried for a while to push back on this, but I think it's just caught on and I've given up on that. And I get it. I think, especially with ChatGPT, it's of course reasonable for the public to be, you know, infatuated with a thing that can write love letters and write poetry, that generative capability. And of course, you know, schoolchildren writing their essays and so on this way.
But as you say, one thing we have discovered through a lot of experimentation is that it's actually somewhat of a marginal generator of text. It is not as good a poet as good human poets. People have programmed GPT-4 to try to write whole novels, and it can do that,

(08:24): but they aren't great. And it's a challenge. You know, within Microsoft, our Nuance division has been integrating GPT-4 to help write clinical encounter notes, and you can tell it's hitting at the very limits of the capabilities and intelligence of GPT-4 to be able to do that well. But one area where it really excels is in evaluating or judging or reviewing things. And we've seen that over and over again. In chapter three, I have this example of its analysis of some contemporary poetry, which is just stunning in its insights and its handling of metaphor and allegory. And then in other situations, in experiments with the New England Journal of Medicine on the use of GPT-4 as an adjunct to the review process for papers, it is just incredibly insightful in spotting inconsistencies, missing citations to precursor studies, and lack of inclusivity and diversity, you know, in approach or in terminology.

(09:49): And these sorts of review tasks end up being especially intriguing for me when we think about the whole problem of medical errors and the possibility of using GPT-4 to look over the work of doctors, of nurses, of insurance adjudicators and others, just as a second set of eyes: to check for errors, to check for missing possibilities. If there's a differential diagnosis, is there a possibility that's been missed? If there's a calculation for an IV medication administration, was the calculation done correctly or not? And it's in those types of applications of GPT-4, as a reviewer, as a second set of eyes, that I think I've been especially impressed. And we try to highlight that in the book.

Eric Topol (10:43): Yeah. That's one of the very illuminating things about going well beyond the assumed utilities. In a little bit we'll talk about the liabilities, but certainly these are functions, part of that pluripotent spectrum, that I think a lot of people are not aware of. One of particular interest in the medical space is something I had not anticipated: when I wrote the Deep Medicine chapter "Deep Empathy," I said, well, we've got to rely totally on humans for that. But here you had examples that were quite stunning of coaching physicians by going through their communication, their notes, and saying, you know, you could have been more sensitive with this, you could have done that, you could be more empathic. And as you know, since the book was published, there was an interesting study that compared a couple hundred questions directed to physicians and then to ChatGPT, which of course we wouldn't say is state of the art at this point, right? But what was seen was that the chatbot exhibited more empathy and more sensitive, higher-quality responses. So do you think, ultimately, that this will be a way we can actually use technology to foster better communication between clinicians and patients?

Peter Lee (12:10): Well, I'll try to answer that, but then I want to turn the question to you, because I'm just dying to understand how others, especially leading thinkers like you, think about this.
Because as a human being and as a patient, there's something about this that doesn't quite sit right. You know, I want the empathy to come from my doctor, my human doctor. That's in my heart, the way that I feel. And yet there's just no getting around the fact that GPT-4, and even weaker versions like GPT-3.5 and ChatGPT, can be remarkably empathetic. And as you say, there was that study out of UC San Diego Medicine and Johns Hopkins Medicine that, you know, was just another fairly significant piece of evidence on that point.

Here's another example. My colleague Greg Moore was assisting a patient who had late-stage pancreatic cancer.

(13:10): And there was a real struggle, for both the specialists and for Greg, to know what to say to this desperate patient, how to support this patient. And the thing that was remarkable is that Greg decided to use GPT-4 to get advice, and they had a conversation, and there was very detailed advice to Greg on what to say and how to support this patient. And at the end, when Greg said thank you, GPT-4 said, you're welcome, Greg, but what about you? Do you have all the support that you need? This must be very difficult for you. So the empathy goes remarkably deep. And, you know, if you just look at how busy good doctors and especially nurses are, you start to realize that people don't necessarily have the time to think about that.

(14:02): And also, what GPT-4 is suggesting ends up being a prompt to the human doctor or the human nurse to actually take the time to reflect on what the patient might need to hear, right, what might be going through their mind. And so there is some empathy aid going on here. At the same time, I think as a society we have to understand how comfortable we are with this concept of empathetic care being assisted by a machine. And this is something that I'm very keen and curious about, just in the medical community. And that's why I wanted to turn the question back around to you. How do you see this?

Eric Topol (14:46): Yeah, I didn't foresee this, and I also recognize that we're talking about a machine version of it. I mean, it's a pseudo-empathy of sorts. But the fact that it can assess where communication can be improved and can help foster it, those are features that I think are extraordinary. I wouldn't have predicted that. And I've seen now, you know, many good examples in the book and even beyond. So it's a welcome thing, and it adds another capability. It isn't that physicians and nurses are lacking empathy; their biggest issue, I think, is lacking time. And hopefully someday there's a rescue in the works, that a lot of the tasks that are, you know, data clerk functions and other burdens will be alleviated; the keyboard liberation that has been a fantasy of mine for some years maybe ultimately will be achieved.

(15:52): The other thing I think is really special in the book, which I wanted to comment on: there is a chapter by, I think, Carey Goldberg, and that was about the patient side, right? All the talk is about, you know, doctors and clinicians, but it's the patients who could derive the most benefit. And among those first hundred million people that used ChatGPT, many of the conversations were of course about health and medical questions. But these are patients; we're all patients.
And the idea that you could have a personal health advisor, a concept which was developed in that chapter, and the whole idea that, as opposed to a search today, you could get citations, and it would be at the literacy level of the person making the prompts. Could you comment about that? Because this democratization of a high-level capability for getting, you know, very useful information and conversation seems to be very much underemphasized.

Peter Lee (16:56): Yeah. And I think also this is where some of the most difficult societal and regulatory questions might come, because while the medical community knows how to abide by regulations, and there is a regulatory framework, the same is much less true for a doctor in your pocket, which is what GPT-4 and, you know, other large language models that are emerging can become. And, you know, I think for me personally, I have come to depend on GPT-4. I use it through the Bing search engine. Sometimes it's simple things that previously were mysterious. Like, I received an explanation of benefits notice from my insurance company, and this notice has some dollar figures in it, it has some CPT codes, and I have no idea what they mean. And sometimes it's things that my son or my wife got treated for.

(17:55): It's just mysterious. It's great to have an AI that can decode these things and can answer questions. Similarly, when I go for a medical checkup and I get my blood test results, just decoding those CBC lab test numbers is, again, an incredible convenience. But then even more: you know, my father recently passed away. He was 90 years old, but he was very ill for the last year or so of his life, seeing various specialists. My two sisters and I all lived far away from him, and so we were struggling to take care of him and to understand his medical care. And it's a situation that I found all too common in our world right now. It actually creates stress and frays relationships amongst siblings and so on.

(18:56): And so just having an AI that can take all of the data from the three different specialists and, you know, have it all summed up, be able to answer questions, be able to summarize and communicate efficiently from one specialist to the next, to really provide some sound advice, ends up being a godsend. Not so much for my father's health, because he was on a trajectory that was really not going to be changed, but just for the peace of mind and the relationships between me and my two sisters and my mother-in-law. And so it's that kind of empowerment. You know, in corporate speak at Microsoft we would say that's empowerment of a consumer, but it is truly empowerment. I mean, it's for real. And that kind of use of these technologies, I think, is spreading very, very rapidly and is incredibly empowering.

(19:57): Now, the big question is, can the medical community really harness that kind of empowered patient? I think there's a desire to do that. That's always been one of the big dreams, I think, in medicine today. And then the other question is, these assistants are fallible. They make mistakes. And so, you know, what is the regulatory or legal or ethical disposition of that? These are still big questions I think we have to answer. But the overall big picture is that there's an incredible potential to empower patients with a new tool and also to kind of democratize access to really expert medical information.
And I just think you're absolutely right. It doesn't get enough attention. Even in our book, we only devoted one chapter to this, right?

Eric Topol (21:00): Right. But at least it was in there; that's good. Because I think it's so critical to figure that out. And as you say, the ability to discriminate bad information, confabulation, hallucination, among people without medical training is much more challenging. But I also liked in the book how you could go back in another conversation to audit the first one, or a third one, so that if you are ever suspicious that you might not be getting the best information, you could do, like, double or triple data entry. I thought that was really interesting. Now, Microsoft made a humongous investment in OpenAI. Yesterday Sam Altman was getting grilled, well, not really grilled, in a much more friendly sense, I'm sure, about what we should do. We have this two-edged sword, the likes of which we've never seen.

(21:59): Of course, you get into, in the book, does it really matter if it's AGI or some advanced intelligence? If it's working well, it's kind of like the explainability and black box story. But of course it can get off the tracks. We know that. And there isn't that much difference, perhaps, between ChatGPT and GPT-4 established so far. So in that discussion, he said, well, we've got to have regulatory oversight and licensing. And it's very complex. I mean, what are your thoughts as to how to deal with the potential limitations that are still there, that may be difficult to eradicate, that are the worries?

Peter Lee (22:43): Right. You know, at least when it comes to medicine and healthcare, I personally can't imagine that this should not be regulated. And it just seems more approachable to think about regulation, because the whole practice of medicine has grown up in this regulated space. If there's any part of life and of our society that knows how to deal with regulation and can actually make regulations work, it is medicine. Now, having said that, I do understand that coming from Microsoft, and even more so for Sam Altman coming from OpenAI, this can sometimes be interpreted as self-serving: you're wanting to set up regulatory barriers against others. I would say in Sam Altman's defense that back in 2019, just prior to the release of GPT-2, Sam Altman made public calls for thinking about regulation, for the need for external audit, and, you know, for the world to prepare for the possibility of AI technologies that would be approaching AGI.

(24:05): And in fact, just a month before the release of GPT-4, he made a very public call, at even greater length, asking the world to do the same things. And so I think one thing that's misunderstood about Sam is that he's been saying the same thing for years. It isn't new. And I think that should give pause to people who are suspicious of Sam's motives in calling for regulation, because he basically has not changed his tune, at least going back to 2019.
But if we just put that aside, you know, what I hope for most of all is that the medical community, and I really look at leading thinkers like you, particularly in our best medical research institutions, would quickly move to take assertive ownership of the fundamental questions of whether, when, and how a technology like this should be used; would engage in the research to create the foundations for sensible regulations, with an understanding that this isn't about GPT-4, this is about the next three or four or five even more powerful models.

(25:31): And so, ideally, I think it's going to take some real research, some real inventiveness. What we explain in chapter nine of the book is that I don't believe we have a workable regulatory framework right now, and that we need to develop one. But the foundations for that, I think, have to be a product of research, and ideally research from our best thinkers in the medical research field. The race that we have in front of us is that regulators will rightfully feel very bad if large numbers of people start to get injured, or worse, because of a lack of regulation. And, you know, you can't blame them for wanting to intervene if that starts to happen. So we do have a kind of urgency here, whereas normally our medical research on, say, methods for clinical validation of large language models might take, you know, several years to really come to fruition. So there's a problem there. But I think the medical field can very quickly come up with codes of conduct, guidelines, and expectations, and the education, so that people can start to understand the technology as well as possible.

Eric Topol (26:58): Yeah. And I think the tricky part here is that, as you know, there are a lot of doomsayers and existential threats that have been laid out by people who I respect, and I know you do as well, like Geoffrey Hinton, who is concerned. But, you know, let's say you have a multimodal AI like GPT-4 and you want to put your skin rash or skin lesion into it. I mean, how can you regulate everything? And if you just go to Bing and you go to creative mode, you're going to get all kinds of responses. So this is a new animal, this is a new alien; the question is, as you say, we don't have a framework, and we should move to get one. To me, the biggest question is one you really got to in the book, and I know you continue to. Within two days of your book's publication, the famous preprint came out, the Sparks preprint from your team at Microsoft Research, which is incredible.

(27:54): A 169-page preprint, downloaded I don't know how many millions of times already, but that is a rich preprint; we'll put in the link, of course. But there, the question is, what are we seeing here? Is this really just a stochastic parrot, a JPEG with, you know, lossy compression and juxtaposition of word linguistics, or is this a form of intelligence that we haven't seen from machines ever before? And you get at that in so many ways, and you point out, does it matter? I wonder if you could just expound on this, because to me this really is the fundamental question.

Peter Lee (28:42): Yeah, I get into that in the book in chapter three, and I think chapter three is my expression of frustration on this, because it's just a machine, right?
And in that sense, yes, it is just a stochastic parrot. You know, it's a big probabilistic machine that's making guesses on the next word that it should spit out, or that you will spit out, and it's making a projection for a whole conversation. The first example I use in chapter three is its analysis of a poem. The poem talks about being splashed with cold water and feeling fever, and the machine hasn't felt any of those things. And so when it's opining about those lines in the poem, it can't possibly be authentic. So, you know, we can't say it understands these things.

(29:39): It hasn't experienced these things. But the frustration I have, and here's where I have to be very disciplined to be a scientist, is the inability to prove that. Now, there has been some very, very good research by researchers who I really respect and admire. I mean, there was Josh Tenenbaum's whole team and his colleagues at MIT, at Harvard, the University of Washington, and the Allen Institute, and many, many others who have done some really remarkable research, research that's directly relevant to this question of does the large language model, quote unquote, understand what it's hearing and what it's saying. Oftentimes they provide tests that are grounded in the foundational theories about why these things can't possibly be understanding what they're saying, and therefore the tests are designed to expose these shortcomings in large language models. But what's been frustrating, and also kind of amazing, is that GPT-4 tends to pass most, if not all, of these tests!

(31:01): And so, if we're really honest as scientists, even if we know this thing is not sentient, it leaves us in this place where we're without definitive proof of that. And the arguments from some of the naysayers, who I also deeply respect, and I've read so much of their work, don't strike me as convincing proof either, because if you say, well, here's a problem that I can use to cause GPT-4 to get tripped up, I have no shortage of problems I could use to get you tripped up, Eric. And yet that does not prove that you are not intelligent. So I think we're left with two mysteries. One is that we see GPT-4 doing things that we can't explain given our current understanding of how a neural transformer operates.

(32:09): And then secondly, we're lacking a test, derived from theory and reason, that consistently shows a limitation of GPT-4's understanding abilities. And so in my heart, of course, I understand these things as machines, and I actively resist anthropomorphizing them. But, and maybe I'm fooling myself, as a disciplined scientist I'm trying to stay grounded in proof and evidence, and right at the moment, I don't believe the world has that. We'll get there. We're understanding more and more every day, but at the moment we don't have it.

Eric Topol (32:55): I think hopefully everyone who's listening is getting some experience now with these large language models and realizing how much fun it is and how we're in a new era in our lives. This is a turning point.

Peter Lee (33:13): Yeah. That's stage four of amazement and joy.

Eric Topol (33:16): Yeah. No, there's no question.
And you know, I think about you, Peter, because, you know, at one point you were in a high-level academic post at Carnegie Mellon, one of our leading computer science institutions in the country, in the world, and now you're at this enviable spot of having helped Microsoft get engaged with a risk, I mean a big, big bet, and one that's fascinating, and that is obviously just an iteration, with many things to come. So I wonder if you could just give us your sense about where you think we'll be headed over the next few years, because of the velocity at which this is moving. Not only is this new technology so different from anything that came before, but it took, you know, only a few months to get to where things are now, and we know that this road still stretches a long way in front of us. What's your sense of, you know, are we going to get hallucinations under control? Are we going to start to see this pluripotency roll out, particularly in the health and medicine arena?

Peter Lee (34:35): Yeah. You know, I think, first off, I can't say enough good things about the team at OpenAI. You know, I think their dedication and their focus, and, I think it'll come out eventually, also the care that they've taken in understanding the potential risks and really trying to create a model for how to cope with those things. As those stories come out, I think it'll be quite impressive. At the same time, it's also incredibly disruptive. Even for us as researchers, it just disrupts everything, right? You know, I was having an interaction after I read Sid Mukherjee's new book, The Song of the Cell, because in that book on cellular biology, one of the prime characters historically is Rudolf Virchow, who confirmed cell mitosis. And, you know, the thing that was disruptive about Virchow is that, well, first off, the whole prior theory of how cells form was debunked.

(35:44): That didn't invalidate the scientists who had been working under that theory, but it certainly debunked many of their scientific legacies. And the other thing is, after Virchow, to call yourself a biology researcher, you had to have a microscope, and you had to know how to use it. And in a way, there's a similar scientific disruption here, where there are now new tools and new computing infrastructure that you need if you want to call yourself a computer science researcher. And that's really incredibly disruptive. So I see kind of a bifurcation that I think is likely to happen. I think the team at OpenAI, with Microsoft's support and collaboration, will continue to push the boundaries and the frontiers with the idea of seeing how close to AGI can truly be achieved, largely through scale. And, you know, there will be tremendous focus of attention on improving its abilities in mathematics and in planning and being able to use tools, and so on. And in that, there's a strong suspicion and belief that as greater and greater levels of general cognitive intelligence are achieved, issues around things like hallucination will become much more manageable, or at least manageable to the same extent that they're manageable in human beings.

(37:25): But then I think there's going to be an explosion of activity in much smaller, more specialized models as well.
I think there's going to be a gigantic explosion in, say, open-source smaller models, and those models probably will not be as steerable and alignable, so they might have more uncontrollable hallucination and might go off the rails more easily. But for the right applications, integrated into the right settings, that might not matter. And so exactly how these models will get used, and also what dangers they might pose and what negative consequences they might bring, is hard to predict. But I do think we're going to see those two different flavors of these large AI systems coming really very, very quickly, in much less than a year.

Eric Topol (38:23): Well, that's an interesting perspective, and an important one. In the book you wrote a sentence that I thought was particularly notable: “the neural network here is so large that only a handful of organizations have enough computing power to train it.” We're talking about 20 or 30,000 GPUs, something like that. We're lucky to have two or four here. This is something that I think, again, if you were sitting at Carnegie Mellon right now versus sitting at Microsoft or one of the tech titan companies that have these capabilities... Can you comment about this? Because this creates a very, you know, distinct situation we've not seen before.

Peter Lee (39:08): Right. First off, you know, I can't really comment on the size of the compute infrastructure for training these things, but it is, as we wrote in the book, at a size that very, very few organizations can manage at this point. This has got to change at some point in the future. And even on the inference side, forgetting about training, you know, GPT-4 is much more power hungry than the human brain. The human brain is an existence proof that there must be much more efficient architectures for accomplishing the same tasks. So I think there's really a lot yet to discover and a lot of headroom for improvement. But, you know, what I think is ultimately the kind of challenge I see here is that a technology like this could become as essential to the infrastructure of life as the mobile phone in your pocket.

Peter Lee (40:18): And so then the question is, how quickly can the cost of this technology, if it should become as necessary to modern life as the technology in your pocket, get to a point where that can reasonably be accomplished, right? If we don't accomplish that, then we risk creating new digital divides that would be extremely destructive to society. What we want to do here is to really empower everybody, if it does turn out that this technology becomes as empowering as we think it could be.

Eric Topol (41:04): Right. I think your point about efficiency, the drain on electricity, and no less water for cooling, I mean, these are big-ticket things, and, you know, hopefully simulating the human brain, in its less power-hungry state, will become part of the future as well.

Peter Lee (41:24): Well, and hopefully these technologies will help solve problems like, you know, clean energy, right? Fusion containment, better, lower-energy production of fertilizers, better nanoparticles for more efficient lubricants, new catalysts for carbon capture. If you think about it in terms of making a bet to kind of invent our way out of climate disaster, this is one of the tools you would consider betting on.

Eric Topol (42:01): Oh, absolutely.
You know, I'm going to be talking soon with Al Gore about that, and I know he's quite enthusiastic about the potential. This is engrossing, having this conversation, and I would like to talk to you for many hours, but I know you have to go. I just want to say, as I wrote in my review of the book, talking with you is very different than talking with, you know, somebody with bravado. You have great humility, and you're so balanced that when I hear something from you, or read something you've written, it's a very different perspective, because I don't know anybody who's more balanced, who is more trying to say it like it is. And so, you know, not everybody knows you, though a lot of people who might be listening do. I just want to add that, and say thank you for taking the effort, not just that you obviously wanted to experiment with GPT-4, but that you also, I think, put this together in a great package so others can learn from it and, of course, expand on it as we move ahead in this new era.

(43:06): So, Peter, thank you. It's really a privilege to have this conversation.

Peter Lee (43:11): Oh, thank you, Eric. You're really too kind. But it means a lot to me to hear that from you. So thank you.

Thanks for listening and/or reading Ground Truths. If you found it as interesting a conversation as I did, please share it. Much appreciation to paid subscribers; you've already helped fund many high school and college students in our summer intern program at Scripps Research, and all proceeds from Ground Truths go to Scripps Research. Get full access to Ground Truths at erictopol.substack.com/subscribe
Powerful new large-scale AI models like GPT-4 are showing dramatic improvements in reasoning, problem-solving, and language capabilities. This marks a phase change for artificial intelligence—and a signal of accelerating progress to come.

In this new Microsoft Research Podcast series, AI scientist and engineer Ashley Llorens hosts conversations with his collaborators and colleagues about what these new models—and the models that will come next—mean for our approach to creating, understanding, and deploying AI, its applications in areas such as health care and education, and its potential to benefit humanity.

The second episode features Peter Lee, head of Microsoft Research. Lee was among a group within Microsoft given early access to GPT-4 for evaluation and experimentation. Here, he applies his philosophy of tackling research from what will inevitably be true at a future point in time to this current moment. He also explores the differences that may make integrating today's AI advancements into health care more attainable, a topic he expands on in the soon-to-be-released book The AI Revolution in Medicine: GPT-4 and Beyond and the New England Journal of Medicine article "Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine."

https://www.microsoft.com/research
As destination organizations increasingly take on a leadership role in sustainability and regeneration, it's essential to learn from the successes of our peers in the meetings, incentives, conferences, and exhibitions (MICE) industry. By leveraging their expertise and applying it to our destinations, we can drive progress toward a more sustainable future for our communities and the planet. Simpleview's David Peacock talks with Peter Lee, Ph.D., and Matthias Schultze. Lee is the managing director of the Goyang Convention and Visitors Bureau. Schultze is the managing director of the German Convention Bureau.
Dr. Peter Lee has shaped computer science from academia, government, and industry. He has chaired a major computer science department, built a new technology office at DARPA, and now serves as Corporate Vice President at Microsoft, where he leads Microsoft Research and its nine worldwide laboratories. In this episode, Peter reveals Microsoft's interest in health care and the origins of the OpenAI and Microsoft partnership, and he speculates on how large language models like ChatGPT will transform medicine.
Today (December 15th, 2022) marks the 100th birthday of Dr. Meredith Kline, professor emeritus at Westminster Seminary California and likely the most influential Reformed OT theologian you haven't read. Join us as we talk to Dr. John Muether and Dr. Peter Lee as they reflect on Kline's teaching and influence on Reformed theology.
In this Library Release of Fight Back!, we talk with Peter Lee, Executive Director for California's health insurance marketplace, Covered California. Here in California, the open enrollment period ends at the end of the year. Join us as we talk about the key things you need to know about enrolling in healthcare. In our conversation, Peter provides valuable insight into the many facets of the Covered California program, including what options consumers have for obtaining affordable health insurance even if they lose their employment, how middle and low-income families can access subsidies for premiums and co-pays, how the open and special enrollment windows operate, the value of shopping around for the best possible plan, the status of the COVID-19 vaccination program, and much more.
Episode 136: Today's episode is a conversation between Tommy Keene and Peter Lee regarding wisdom. Topics include wisdom (duh!), the fear of the Lord and faith, wisdom and law, wisdom and elves, and more. Enjoy! Ask a question: rts.edu/washington/podcast About RTS Washington: rts.edu/washington Upcoming classes: https://rts.edu/campuses/washington-dc/students/upcoming-classes/
Peter Lee shares his journey between startups and consulting before landing in venture capital. He also discusses the true value proposition of robotics companies and the reality of investing in hard technology.

Guest - Peter Lee
Website - https://www.embark.vc/

Follow Us!
LinkedIn: www.linkedin.com/company/coefficient-labs
Instagram: www.instagram.com/demodaypodcast/
Facebook: www.facebook.com/coefficientlabs/
Twitter: https://twitter.com/coefficientlabs
Contact Information: social@coefficientlabs.com
Peter Lee, Executive Director of Covered California, talks about the many facets of the Covered California program, including who qualifies, how the program works, what options consumers have if they lose their job, and how to navigate the open enrollment process.
***Content Warning*** This episode contains descriptions of a sensitive nature, including the death of children, and adult language at times. ***

Peter Sutcliffe, better known as the "Yorkshire Ripper." Many of us recognise this name: the serial killer who terrorised the North of England in the 1970s and early 1980s was all over the news as the public lived in fear while the police tried to put an end to his horrific killing spree that left over 20 people dead. But there was someone else who was rarely mentioned in the British press, even though he allegedly killed just as many, or even more, than the "Yorkshire Ripper." Unlike his peer in depravity, a man named Bruce George Peter Lee has always evaded widespread notoriety, even though his actions place him among the country's most prolific serial killers.

Learn more:
Hull: Hell and Fire
Bruce George Peter LEE
Serial arsonist 'Bruce Lee' loses appeal over 1970s fatal fire spree in Hull
Bruce George Peter Lee
Hull's worst serial killer 'Bruce Lee' had low IQ and was 'highly unreliable' in police interviews
Bruce Lee: The notorious Hull arsonist whose youngest victim was just a baby
One of Yorkshire's worst serial killers who murdered 15 people launches bid to be released from prison
Man behind deadly arson attacks that killed 12 fails in bid to clear his name
UK's worst serial killer you've never heard of – how Bruce Lee's crimes flew under radar
Amid COVID-19, the potential overturn of Roe v. Wade, and a war in Europe, the Affordable Care Act has been flying under the radar in 2022. But this will be a pivotal year for the federal health law. Unless Congress acts, millions of Americans could see their costs for coverage rise dramatically as expanded subsidies expire. At the same time, the end of the public health emergency could boost the uninsured rate as states disenroll people from Medicaid. Peter Lee, who recently stepped down as the first executive director of the largest state-run ACA insurance marketplace, Covered California, has thought long and hard about how the ACA came to be, how it's been implemented, and what should happen to it now. He joins host and KHN chief Washington correspondent Julie Rovner for a wide-ranging discussion on the state of the ACA.