Podcasts about large language models

  • 1,095 PODCASTS
  • 2,330 EPISODES
  • 42m AVG DURATION
  • 2 DAILY NEW EPISODES
  • Jul 11, 2025 LATEST

POPULARITY (chart: 2017–2024)


Best podcasts about large language models

Show all podcasts related to large language models

Latest podcast episodes about large language models

The Future of Everything presented by Stanford Engineering

Law professor Daniel Ho says that the law is ripe for AI innovation, but a lot is at stake: naive application of AI can lead to rampant hallucinations in over 80 percent of legal queries, so much research remains to be done in the field. Ho tells how California counties recently used AI to find and redact racist covenants from property deeds—a task predicted to take years, reduced to days. AI can be quite good at removing "regulatory sludge," Ho tells host Russ Altman in teasing the expanding promise of AI in the law in this episode of Stanford Engineering's The Future of Everything podcast.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links: Stanford Profile: Daniel Ho

Connect With Us: Episode Transcripts >>> The Future of Everything Website | Connect with Russ >>> Threads / Bluesky / Mastodon | Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction: Russ Altman introduces Dan Ho, a professor of law and computer science at Stanford University.
(00:03:36) Journey into Law and AI: Dan shares his early interest in institutions and social reform.
(00:04:52) Misconceptions About Law: Common misunderstandings about the focus of legal work.
(00:06:44) Using LLMs for Legal Advice: The current capabilities and limits of LLMs in legal settings.
(00:09:09) Identifying Legislation with AI: Building a model to identify and redact racial covenants in deeds.
(00:13:09) OCR and Multimodal Models: Improving outdated OCR systems using multimodal AI.
(00:14:08) STARA: AI for Statute Search: A tool to scan laws for outdated or excessive requirements.
(00:16:18) AI and Redundant Reports: Using STARA to find obsolete legislatively mandated reports.
(00:20:10) Verifying AI Accuracy: Comparing STARA results with federal data to ensure reliability.
(00:22:10) Outdated or Wasteful Regulations: Examples of bureaucratic redundancies that hinder legal process.
(00:23:38) Consolidating Reports with AI: How different bureaucrats deal with outdated legislative reports.
(00:26:14) Open vs. Closed AI Models: The risks, benefits, and transparency in legal AI tools.
(00:32:14) Replacing Lawyers with Legal Chatbots: Why general-purpose legal chatbots aren't ready to replace lawyers.
(00:34:58) Conclusion

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Distilling Transformers and Diffusion Models for Robust Edge Use Cases with Fatih Porikli - #738

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Jul 9, 2025 60:29


Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year's CVPR conference. We start with "DiMA: Distilling Multi-modal Large Language Models for Autonomous Driving," an end-to-end autonomous driving system that distills large language models for structured scene understanding and safe motion planning in critical "long-tail" scenarios. We explore how DiMA utilizes LLMs' world knowledge and efficient transformer-based models to significantly reduce collision rates and trajectory errors. We then discuss "SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation," a diffusion-distilled approach that combines generative models with metric depth estimation to produce sharp, accurate monocular depth maps. Fatih also shares a look at Qualcomm's on-device demos, including text-to-3D mesh generation, real-time image-to-video and video-to-video generation, and a multi-modal visual question-answering assistant. The complete show notes for this episode can be found at https://twimlai.com/go/738.
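Distillation, as used in papers like DiMA, trains a small student model to match a larger teacher's output distribution. A toy version of the usual temperature-softened KL objective is sketched below; this is illustrative only, not DiMA's or SharpDepth's actual loss, and all values are made up.

```python
# Toy distillation objective: a student is trained to reduce the KL
# divergence between its temperature-softened output distribution and
# the teacher's. Pure-Python illustration with invented logits.
import math

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p: list[float], q: list[float]) -> float:
    # KL(p || q): how far the student distribution q is from the teacher p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_logits = [2.0, 1.0, 0.1]   # frozen large model (hypothetical values)
student_logits = [1.8, 1.1, 0.2]   # small model being trained
temperature = 2.0                  # softening exposes the teacher's "dark knowledge"

loss = kl_div(softmax(teacher_logits, temperature),
              softmax(student_logits, temperature))
print(round(loss, 4))
```

In practice this loss is minimized by gradient descent over the student's parameters; here we only evaluate it once to show the quantity being optimized.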

Let's Talk AI
#214 - Gemini CLI, io drama, AlphaGenome, copyright rulings

Let's Talk AI

Play Episode Listen Later Jul 4, 2025 93:32 Transcription Available


Our 214th episode with a summary and discussion of last week's big AI news! Recorded on 06/27/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/.

In this episode: Meta hires key engineers from OpenAI, and Thinking Machines Lab secures a $2 billion seed round at a $10 billion valuation. DeepMind introduces AlphaGenome, significantly advancing genomic research with a model comparable to AlphaFold but focused on gene function. Taiwan imposes technology export controls on Huawei and SMIC, while Getty drops key copyright claims against Stability AI in a groundbreaking legal case. A new research paper examines "cognitive debt" in AI-assisted tasks, using EEG to assess cognitive load and recall in essay writing with LLMs.

Timestamps + links:
(00:00:10) Intro / Banter
(00:01:22) News Preview
(00:02:15) Response to listener comments
Tools & Apps:
(00:06:18) Google is bringing Gemini CLI to developers' terminals
(00:12:09) Anthropic now lets you make apps right from its Claude AI chatbot
Applications & Business:
(00:15:54) Sam Altman takes his 'io' trademark battle public
(00:21:35) Huawei Matebook contains Kirin X90, using SMIC 7nm (N+2) technology
(00:26:05) AMD deploys its first Ultra Ethernet ready network card — Pensando Pollara provides up to 400 Gbps performance
(00:31:21) Amazon joins the big nuclear party, buying 1.92 GW for AWS
(00:33:20) Nvidia goes nuclear — company joins Bill Gates in backing TerraPower, a company building nuclear reactors for powering data centers
(00:36:18) Mira Murati's Thinking Machines Lab closes on $2B at $10B valuation
(00:41:02) Meta hires key OpenAI researcher to work on AI reasoning models
Research & Advancements:
(00:49:46) Google's new AI will help researchers understand how our genes work
(00:55:13) Direct Reasoning Optimization: LLMs Can Reward And Refine Their Own Reasoning for Open-Ended Tasks
(01:01:54) Farseer: A Refined Scaling Law in Large Language Models
(01:06:28) LLM-First Search: Self-Guided Exploration of the Solution Space
Policy & Safety:
(01:11:20) Unsupervised Elicitation of Language Models
(01:16:04) Taiwan Imposes Technology Export Controls on Huawei, SMIC
(01:18:22) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
Synthetic Media & Art:
(01:23:41) Judge Rejects Authors' Claim That Meta AI Training Violated Copyrights
(01:29:46) Getty drops key copyright claims against Stability AI, but UK lawsuit continues

The Dissenter
#1119 Anna Ivanova: Language and Large Language Models

The Dissenter

Play Episode Listen Later Jul 4, 2025 43:27


******Support the channel******
Patreon: https://www.patreon.com/thedissenter
PayPal: paypal.me/thedissenter
PayPal Subscription 1 Dollar: https://tinyurl.com/yb3acuuy
PayPal Subscription 3 Dollars: https://tinyurl.com/ybn6bg9l
PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz
PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m
PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao

******Follow me on******
Website: https://www.thedissenter.net/
The Dissenter Goodreads list: https://shorturl.at/7BMoB
Facebook: https://www.facebook.com/thedissenteryt/
Twitter: https://x.com/TheDissenterYT

This show is sponsored by Enlites, Learning & Development done differently. Check the website here: http://enlites.com/

Dr. Anna Ivanova is Assistant Professor in the School of Psychology at Georgia Tech. She is interested in studying the relationship between language and other aspects of human cognition. In her work, she uses tools from cognitive neuroscience (such as fMRI) and artificial intelligence (such as large language models). In this episode, we talk about language from the perspective of cognitive neuroscience. We discuss how language relates to the rest of human cognition, the brain decoding paradigm, and whether the brain represents words. We talk about large language models (LLMs), and we discuss whether they can understand language. We talk about how we can use AI to study human language, and whether there are parallels between programming languages and natural languages.
Finally, we discuss mapping models in cognitive neuroscience.

The Data Exchange with Ben Lorica
Unlocking Unstructured Data with LLMs

The Data Exchange with Ben Lorica

Play Episode Listen Later Jul 3, 2025 27:46


Shreya Shankar is a PhD student at UC Berkeley in the EECS department. This episode explores how large language models (LLMs) are transforming the processing of unstructured enterprise data such as text documents and PDFs. It introduces DocETL, a framework that uses a MapReduce approach with LLMs for semantic extraction, thematic analysis, and summarization at scale. Subscribe to the Gradient Flow Newsletter.
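As a rough illustration of the MapReduce pattern described here, the sketch below maps a semantic-extraction step over each document independently, then reduces the per-document results into one deduplicated summary. The `call_llm` stub, the prompt, and the "long word" heuristic are hypothetical stand-ins for illustration, not DocETL's actual API.

```python
# MapReduce-style LLM pipeline over unstructured documents, in the spirit
# of the approach described in the episode. All names are hypothetical.

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; here, a trivial "long word" extractor.
    words = {w.strip(".,").lower() for w in prompt.split()}
    return ",".join(sorted(w for w in words if len(w) > 8))

def map_phase(docs: list[str]) -> list[str]:
    # Map: run the extraction prompt over each document independently.
    return [call_llm(f"Extract key themes: {d}") for d in docs]

def reduce_phase(extractions: list[str]) -> list[str]:
    # Reduce: fold per-document themes into one deduplicated summary.
    themes: set[str] = set()
    for e in extractions:
        themes.update(t for t in e.split(",") if t)
    return sorted(themes)

docs = [
    "Contracts mention indemnification clauses.",
    "Several agreements discuss indemnification and termination.",
]
summary = reduce_phase(map_phase(docs))
print(summary)
```

In a real pipeline the map and reduce steps would each be LLM prompts (extract, then synthesize), and the map phase could be parallelized across documents.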

Neurology® Podcast
July 2025 Recall: Practice of Neurology

Neurology® Podcast

Play Episode Listen Later Jul 2, 2025 70:54


The July 2025 recall features four episodes on systems and innovation in delivering neurologic care. The episode begins with Dr. Scott Friedenberg discussing challenges faced by neurologists in balancing financial productivity with optimal patient care. It continues with Dr. Marisa Patryce McGinley discussing the utilization of telemedicine in neurology, particularly disparities in access among different demographic groups. The conversation then transitions to Dr. Lidia Moura on the implications of large language models for neurologic care. The episode concludes with Dr. Ashish D. Patel discussing headache referrals and the implementation of a design thinking approach to improve access to headache care.

Podcast links:
Empowering Health Care Providers
Disparities in Utilization of Outpatient Telemedicine for Neurologic Care
Large Language Models for Quality and Efficiency of Neurologic Care
Using Design Thinking to Understand the Reason for Headache Referrals

Article links:
Empowering Health Care Providers: A Collaborative Approach to Enhance Financial Performance and Productivity in Clinical Practice
Disparities in Utilization of Outpatient Telemedicine for Neurologic Care
Implications of Large Language Models for Quality and Efficiency of Neurologic Care: Emerging Issues in Neurology
Using Design Thinking to Understand the Reason for Headache Referrals and Reduce Referral Rates

Disclosures can be found at Neurology.org.

Content Creatives Podcast
Basic AI Terms For Content Creators

Content Creatives Podcast

Play Episode Listen Later Jul 1, 2025 13:44


New to AI? Feeling overwhelmed by all the buzzwords? You're not alone—and this episode is here to help.

Today on the Creative Edition Podcast, we're breaking down six essential AI terms every content creator should know to confidently navigate the AI-powered world of content creation. Whether you're already using AI to brainstorm captions, outline podcast episodes, or streamline video edits—or you're just starting to explore AI tools—this episode is packed with practical insights to help you become an AI-native creator.

You'll learn:
What Prompt Engineering is and how to craft better prompts that lead to higher-quality results
How to spot and avoid AI hallucinations (and why fact-checking still matters)
The power behind Large Language Models (LLMs) like ChatGPT and Claude
What Fine-Tuning is and how to train AI tools to match your unique voice
And what it means to be an AI-Native Creator—plus how early adoption can give your brand a serious edge

Whether you're scaling your content or simply want to stay relevant, this episode will give you the vocabulary and confidence to integrate AI into your creative workflow—no tech background required.

Follow us on Instagram: @creativeeditionpodcast
Follow Emma on Instagram: @emmasedition | Pinterest: @emmasedition
And sign up for our email newsletter.

The CyberWire
U.S. braces for Iranian cyber intrusions.

The CyberWire

Play Episode Listen Later Jun 30, 2025 40:16


CISA warns organizations of potential cyber threats from Iranian state-sponsored actors. Scattered Spider targets aviation and transportation. Workforce cuts at the State Department raise concerns about weakened cyber diplomacy. Canada bans Chinese security camera vendor Hikvision over national security concerns. Cisco Talos reports a rise in cybercriminals abusing large language models. MacOS malware Poseidon Stealer rebrands. Researchers discover multiple vulnerabilities in Bluetooth chips used in headphones and earbuds. The FDA issues new guidance on medical device cybersecurity. Our guest is Debbie Gordon, Co-Founder of Cloud Range, looking "Beyond the Stack - Why Cyber Readiness Starts with People." An IT worker's revenge plan backfires.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest: On today's Industry Voices segment, Debbie Gordon, Co-Founder of Cloud Range, shares insights on looking "Beyond the Stack - Why Cyber Readiness Starts with People." Learn more about what Debbie discusses in Cloud Range's blog: Bolstering Your Human Security Posture. You can hear Debbie's full conversation here.

Selected Reading:
CISA and Partners Urge Critical Infrastructure to Stay Vigilant in the Current Geopolitical Environment (CISA)
Joint Statement from CISA, FBI, DC3 and NSA on Potential Targeted Cyber Activity Against U.S. Critical Infrastructure by Iran (CISA, FBI, DOD Cyber Crime Center, NSA)
Prolific cybercriminal group now targeting aviation, transportation companies (Axios)
U.S. Cyber Diplomacy at Risk Amid State Department Shakeup (GovInfo Security)
Canada Bans Chinese CCTV Vendor Hikvision Over National Security Concerns (Infosecurity Magazine)
Malicious AI Models Are Behind a New Wave of Cybercrime, Cisco Talos (Hackread)
MacOS malware Poseidon Stealer rebranded as Odyssey Stealer (SC Media)
Airoha Chip Vulnerabilities Expose Headphones to Takeover (SecurityWeek)
FDA Expands Premarket Medical Device Cyber Guidance (GovInfo Security)
'Disgruntled' British IT worker jailed for hacking employer after being suspended (The Record)

Audience Survey: Complete our annual audience survey before August 31.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Anti-Dystopians
An Intellectual History of LLMs

The Anti-Dystopians

Play Episode Listen Later Jun 30, 2025 81:29


In this episode, Alina Utrata interviews Amira Moeding, a PhD candidate in History at the University of Cambridge, where they have held fellowships with Cambridge Digital Humanities and the Cluster of Excellence "Matters of Activity" at Humboldt-Universität zu Berlin. They talked all about Amira's research on the intellectual history of large language models and other types of AI. They began by asking: why is it so shocking to begin with a history and philosophy of linguistics when talking about LLMs? Why did IBM want these natural language processors to be so energy intensive (hint: to make money)? What is machine empiricism, how does it relate to the invention of Big Data, and why does it limit the way we see and understand the world around us? Amira has worked on critical theory, philosophy of science, feminist philosophy, post-colonial theory, and the history of law in settler colonial contexts before turning to data and Big Data; their paper "Machine Empiricism," written together with Professor Tobias Matzner, is forthcoming. Until June they were employed as a Research Assistant on this project at the Computer Science Department (Computerlab) at the University of Cambridge.

For a complete reading list from the episode, check out the Anti-Dystopians Substack at bit.ly/3kuGM5X.
You can follow Alina Utrata on Bluesky at @alinau27.bsky.social.
All episodes of the Anti-Dystopians are hosted and produced by Alina Utrata and are freely available to all listeners. To support the production of the show, subscribe to the newsletter at bit.ly/3kuGM5X.
"Nowhere Land" by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/4148-nowhere-land
License: http://creativecommons.org/licenses/by/4.0/
Hosted on Acast. See acast.com/privacy for more information.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 556: Choosing the Right AI:  Agents, LLMs, or Algorithms?

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jun 27, 2025 32:27


Everyone wants the latest and greatest AI buzzword. But at what cost? And what the heck is the difference between algos, LLMs, and agents anyway? Tune in to find out.

Newsletter: Sign up for our free daily newsletter
More on this episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email the show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics covered in this episode:
Choosing AI: Algorithms vs. Agents
Understanding AI Models and Agents
Using Conditional Statements in AI
Importance of Data in AI Training
Risk Factors in Agentic AI Projects
Innovation through AI Experimentation
Evaluating AI for Business Solutions

Timestamps:
00:00 AWS AI Leader Departs Amid Talent War
03:43 Meta Wins Copyright Lawsuit
07:47 Choosing AI: Short or Long Term?
12:58 Agentic AI: Dynamic Decision Models
16:12 "Demanding Data-Driven Precision in Business"
20:08 "Agentic AI: Adoption and Risks"
22:05 Startup Challenges Amidst Tech Giants
24:36 Balancing Innovation and Routine
27:25 AGI: Future of Work and Survival

Keywords: AI algorithms, Large Language Models, LLMs, Agents, Agentic AI, Multi agentic AI, Amazon Web Services, AWS, Vazhi Philemon, Gen AI efforts, Amazon Bedrock, talent wars in tech, OpenAI, Google, Meta, Copyright lawsuit, AI training, Sarah Silverman, Llama, Fair use in AI, Anthropic, AI deep research model, API, Webhooks, MCP, Code interpreter, Keymaker, Data labeling, Training datasets, Computer vision models, Block out time to experiment, Decision-making, If else conditional statements, Data-driven approach, AGI, Teleporting, Innovation in AI, Experiment with AI, Business leaders, Performance improvements, Sustainable business models, Corporate blade.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Try Gemini 2.5 Flash! Sign up at AIStudio.google.com to get started.
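The algorithm-vs-agent distinction the episode draws can be sketched in a few lines: a classic algorithm follows explicit if/else conditions, while an agent-style system lets a model choose which tool to invoke at runtime. All routing rules and tool names below are hypothetical illustrations, not examples from the show.

```python
# Toy contrast between a deterministic algorithm and an agent-style router.

def rule_based_router(ticket: str) -> str:
    # Algorithm: explicit if/else conditions; behavior is fully predictable.
    if "refund" in ticket:
        return "billing"
    elif "password" in ticket:
        return "it_support"
    return "general"

def agent_router(ticket: str, tools: dict) -> str:
    # Agent-style: a model decides which tool to invoke at runtime.
    # A trivial stand-in plays the role of the LLM's decision here.
    choice = "billing" if "refund" in ticket else "general"
    return tools[choice](ticket)

tools = {
    "billing": lambda t: "routed:billing",
    "general": lambda t: "routed:general",
}
print(rule_based_router("please refund my order"))    # billing
print(agent_router("please refund my order", tools))  # routed:billing
```

The trade-off the episode discusses follows directly: the rule-based version is cheap and auditable but brittle; the agent version adapts dynamically but adds cost and risk.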

AI + a16z
AI's Unsung Hero: Data Labeling and Expert Evals

AI + a16z

Play Episode Listen Later Jun 27, 2025 46:48


Labelbox CEO Manu Sharma joins a16z Infra partner Matt Bornstein to explore the evolution of data labeling and evaluation in AI — from early supervised learning to today's sophisticated reinforcement learning loops.

Manu recounts Labelbox's origins in computer vision, and then how the shift to foundation models and generative AI changed the game. The value moved from pre-training to post-training and, today, models are trained not just to answer questions, but to assess the quality of their own responses. Labelbox has responded by building a global network of "aligners" — top professionals from fields like coding, healthcare, and customer service, who label and evaluate data used to fine-tune AI systems.

The conversation also touches on Meta's acquisition of Scale AI, underscoring how critical data and talent have become in the AGI race. Here's a sample of Manu explaining how Labelbox was able to transition from one era of AI to another:

"It took us some time to really understand that the world is shifting from building AI models to renting AI intelligence. A vast number of enterprises around the world are no longer building their own models; they're actually renting base intelligence and adding on top of it to make that work for their company. And that was a very big shift. But then the even bigger opportunity was the hyperscalers and the AI labs that are spending billions of dollars of capital developing these models and data sets. We really ought to go and figure out and innovate for them. For us, it was a big shift from the DNA perspective because Labelbox was built with a hardcore software-tools mindset. Our go-to-market, engineering, and product and design teams operated like software companies. But I think the hardest part for many of us, at that time, was to just make the decision that we were going to go try it and do it. And nothing is better than that: 'Let's just go build an MVP and see what happens.'"

Follow everyone on X:
Manu Sharma
Matt Bornstein

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

SlatorPod
#255 The Rise of Voice Productivity with Krisp CEO Davit Baghdasaryan

SlatorPod

Play Episode Listen Later Jun 27, 2025 36:36


Davit Baghdasaryan, Co-Founder & CEO of Krisp, joins SlatorPod to talk about the platform's journey from a noise cancellation tool to a comprehensive voice productivity solution.

Originally built to eliminate background noise during calls, Krisp has expanded into real-time accent conversion, speech translation, agent assistance, and note-taking — technologies being rapidly adopted in enterprise environments. The company's accent conversion is now deployed by tens of thousands of call center agents, significantly improving metrics like customer satisfaction and call handling time.

Davit details Krisp's live AI speech translation feature, which uses a multi-step pipeline of speech-to-text, text translation, and text-to-speech. The majority of use cases are in high-resource language pairs such as English, Spanish, French, and German, although Davit recognizes the importance of expanding to lower-resource languages.

He shares how Krisp is also seeing increased demand in human-to-AI voice communication. AI voice agents are highly sensitive to background noise, which disrupts turn-taking and response accuracy. Krisp's noise isolation tech plays a foundational role in enabling smoother voice AI interactions, with billions of minutes processed monthly by major AI labs and startups.

The CEO discusses LLMs' impact, noting how they enable advanced features like note-taking, call summaries, and real-time agent guidance. He sees Krisp as a platform combining proprietary technologies with third-party AI to serve both B2B and B2C markets.

Davit advises startups to explore the vast and still underdeveloped voice AI landscape, noting the current era is ripe for innovation due to AI advancements. He highlights Krisp's roadmap priorities: expanding accent packs, refining voice translation, and building more AI Agent Assist tools focused on voice workflows.
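The multi-step pipeline Davit describes is a cascade: each stage's output feeds the next. The sketch below shows the shape of such a cascade; all three stage functions are hypothetical stubs for illustration, not Krisp's actual APIs.

```python
# Cascaded speech translation: speech-to-text -> text translation -> text-to-speech.
# Every function here is a stub standing in for a real model or service.

def speech_to_text(audio: bytes) -> str:
    return "hello world"  # stub: pretend we transcribed the audio

def translate_text(text: str, target: str) -> str:
    toy_table = {"hello world": {"es": "hola mundo"}}  # stub translation memory
    return toy_table.get(text, {}).get(target, text)   # fall back to source text

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")  # stub: pretend this is synthesized audio

def translate_speech(audio: bytes, target: str) -> bytes:
    # The cascade: each stage's output is the next stage's input.
    return text_to_speech(translate_text(speech_to_text(audio), target))

print(translate_speech(b"<pcm audio>", "es"))  # b'hola mundo'
```

One consequence of the cascade design is that errors compound: a transcription mistake in the first stage propagates through translation and synthesis, which is part of why noise isolation upstream matters so much.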

Many Minds
Science, AI, and illusions of understanding

Many Minds

Play Episode Listen Later Jun 26, 2025 59:47


AI will fundamentally transform science. It will supercharge the research process, making it faster, more efficient, and broader in scope. It will make scientists themselves vastly more productive, more objective, maybe more creative. It will make many human participants—and probably some human scientists—obsolete… Or at least these are some of the claims we are hearing these days. There is no question that various AI tools could radically reshape how science is done, and how much science is done. What we stand to gain in all this is pretty clear. What we stand to lose is less obvious, but no less important. My guest today is Dr. Molly Crockett. Molly is a Professor in the Department of Psychology and the University Center for Human Values at Princeton University. In a recent widely discussed article, Molly and the anthropologist Dr. Lisa Messeri presented a framework for thinking about the different roles being imagined for AI in science. And they argue that, when we adopt AI in these ways, we become vulnerable to certain illusions. Here, Molly and I talk about four visions of AI in science that are currently circulating: AI as an Oracle, as a Surrogate, as a Quant, and as an Arbiter. We talk about the very real problems in the scientific process that AI promises to help us solve. We consider the ethics and challenges of using Large Language Models as experimental subjects. We talk about three illusions of understanding that crop up when we uncritically adopt AI into the research pipeline: the illusion that we understand more than we actually do; the illusion that we're covering a larger swath of a research space than we actually are; and the illusion that AI makes our work more objective. We also talk about how ideas from Science and Technology Studies (STS) can help us make sense of this AI-driven transformation that, like it or not, is already upon us.
Along the way Molly and I touch on: AI therapists and AI tutors, anthropomorphism, the culture and ideology of Silicon Valley, Amazon's Mechanical Turk, fMRI, objectivity, quantification, Molly's mid-career crisis, monocultures, and the squishy parts of human experience. Without further ado, on to my conversation with Dr. Molly Crockett. Enjoy! A transcript of this episode will be posted soon.

Notes and links
5:00 – For more on LLMs—and the question of whether we understand how they work—see our earlier episode with Murray Shanahan.
9:00 – For the paper by Dr. Crockett and colleagues about the social/behavioral sciences and the COVID-19 pandemic, see here.
11:30 – For Dr. Crockett and colleagues' work on outrage on social media, see this recent paper.
18:00 – For a recent exchange on the prospects of using LLMs in scientific peer review, see here.
20:30 – Donna Haraway's essay, 'Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective', is here. See also Dr. Haraway's book, Primate Visions.
22:00 – For the recent essay by Henry Farrell and others on AI as a cultural technology, see here.
23:00 – For a recent report on chatbots driving people to mental health crises, see here.
25:30 – For the already-classic "stochastic parrots" article, see here.
33:00 – For the study by Ryan Carlson and Dr. Crockett on using crowd-workers to study altruism, see here.
34:00 – For more on the "illusion of explanatory depth," see our episode with Tania Lombrozo.
53:00 – For more about Ohio State's plans to incorporate AI in the classroom, see here. For a recent essay by Dr. Crockett on the idea of "techno-optimism," see here.

Recommendations
More Everything Forever, by Adam Becker
Transformative Experience, by L. A. Paul
Epistemic Injustice, by Miranda Fricker

Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University.
The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.  For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).

Mind & Matter
Computational Neuroscience, Machine Learning vs. Biological Learning, Large Language Models | Marius Pachitariu | 235

Mind & Matter

Play Episode Listen Later Jun 26, 2025 110:04


How brains compute and learn, blending neuroscience with AI insights.

Episode Summary: Dr. Marius Pachitariu discusses how the brain computes information across scales, from single neurons to complex networks, using mice to study visual learning. He explains the differences between supervised and unsupervised learning, the brain's high-dimensional processing, and how it compares to artificial neural networks like large language models. The conversation also covers experimental techniques, such as calcium imaging, and the role of reward prediction errors in learning.

About the guest: Marius Pachitariu, PhD, is a group leader at the Janelia Research Campus, leading a lab focused on neuroscience with a blend of experimental and computational approaches.

Discussion points:
The brain operates at multiple scales, with single neurons acting as computational units and networks creating complex, high-dimensional computations.
Pachitariu's lab uses advanced tools like calcium imaging to record from tens of thousands of neurons simultaneously in mice.
Unsupervised learning allows mice to form visual memories of environments without rewards, speeding up task learning later.
Brain activity during sleep or anesthesia is highly correlated, unlike the high-dimensional, less predictable patterns during wakefulness.
The brain expands sensory input dimensionality (e.g., from retina to visual cortex) to simplify complex computations, a principle also seen in artificial neural networks.
Reward prediction errors, driven by dopamine, signal when expectations are violated, aiding learning by updating internal models.
Large language models rely on self-supervised learning, predicting next words, but lack the forward-modeling reasoning humans excel at.

Related episode: M&M 44: Consciousness, Perception, Hallucinations, Selfhood, Neuroscience, Psychedelics & "Being You" | Anil Seth

*Not medical advice.

Support the show: all episodes, show notes, transcripts, and more at the M&M Substack.
Affiliates: KetoCitra—Ketone body BHB + potassium, calcium & magnesium, formulated with kidney health in mind. Use code MIND20 for 20% off any subscription (cancel anytime) Lumen device to optimize your metabolism for weight loss or athletic performance. Code MIND for 10% off Readwise: Organize and share what you read. 60 days FREE through link SiPhox Health—Affordable at-home blood testing. Key health markers, visualized & explained. Code TRIKOMES for a 20% discount. MASA Chips—delicious tortilla chips made from organic corn & grass-fed beef tallow. No seed oils or artificial ingredients. Code MIND for 20% off For all the ways you can support my efforts

This Week in Health IT
Keynote: From the Operating Room to the Boardroom with Michael Han

This Week in Health IT

Play Episode Listen Later Jun 26, 2025 29:05 Transcription Available


June 26, 2025: Michael Han, MD, Enterprise CMIO and VP of MultiCare Health System, discusses his transition from the operating room to the boardroom. He argues that the path forward isn't through automating clinical decisions, but through revolutionizing call centers, scheduling, prior authorizations, and referrals. Michael reveals how ambient clinical documentation must evolve beyond simple note-taking into a treasure trove of unstructured data that can drive actions across the entire care continuum—from pre-visit chart preparation to post-visit care coordination. The conversation explores how leaders establish credibility as they transition from the operating room to the boardroom.

Key Points:
02:43 Impact of Ambient Clinical Documentation
05:51 AI and Large Language Models in Healthcare
10:27 The Role of CMIO in Digital Transformation
15:05 Leadership and Credibility in Healthcare
24:51 Speed Round: Personal Insights

X: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand - Foundation for Childhood Cancer

Spiderum Official
An LLM handbook for anyone who doesn't want to be left behind on AI | Minh Triết

Spiderum Official

Play Episode Listen Later Jun 26, 2025 32:06


This video is adapted from the original article on Spiderum, a knowledge-sharing social network platform.

Developer Tea
Using LLMs To Expand Your Working Vocabulary

Developer Tea

Play Episode Listen Later Jun 25, 2025 13:26


This episode explores the fundamental mindset of building your vocabulary, extending beyond literal words to conceptual understanding and mental models, and how Large Language Models (LLMs) can be a powerful tool for expanding and refining this crucial skill for career growth, clarity, and navigating disruptions.

- Uncover why building your vocabulary is a fundamental skill that can help you navigate career transitions, disruptions (such as those caused by AI), and changes in roles.
- Understand that "vocabulary" goes beyond literal words to include mental models, understanding your own self, specific diagrams (like causal loop diagrams or C4 diagrams), and programming paradigms or design patterns. This conceptual vocabulary provides access to nuanced and powerful ways of thinking.
- Learn how LLMs can be incredibly useful for refining and expanding your conceptual vocabulary, allowing you to explore new subjects, understand systems, and identify leverage points. They can help you understand the connotations, origins, and applications of concepts, as well as how they piece together with adjacent ideas.
- Discover why starting with fundamental primitives like inputs, outputs, flows, and system types can help you develop vocabulary, and how LLMs can suggest widely used tools or visualisations based on these primitives (e.g., a scatter plot for XY data).
- Explore why focusing on understanding the "why" and "when" of using a concept or tool is a much higher leverage skill than merely knowing "how" to use it, enabling you to piece together different vocabulary pieces for deeper insights.

The Great Simplification with Nate Hagens
Algorithmic Cancer: Why AI Development Is Not What You Think with Connor Leahy

The Great Simplification with Nate Hagens

Play Episode Listen Later Jun 25, 2025 97:47


Recently, the risks of Artificial Intelligence and the need for 'alignment' have been flooding our cultural discourse – with Artificial Super Intelligence acting as both the most promising goal and most pressing threat. But amid the moral debate, there's been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work? In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon of what he calls 'algorithmic cancer': AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies.

What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being? (Conversation recorded on May 21st, 2025)

About Connor Leahy: Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.
Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.

Show Notes and More
Watch this video episode on YouTube
Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.
---
Support The Institute for the Study of Energy and Our Future
Join our Substack newsletter
Join our Discord channel and connect with other listeners

Stationery Adjacent
Episode 195 - Social Media F&cked the World

Stationery Adjacent

Play Episode Listen Later Jun 25, 2025 89:56


Justin finally discovers a use for Large Language Models.

Show notes at: https://stationeryadjacent.com/episodes/195

JACC Speciality Journals
LLMonFHIR: A Physician-Validated, Large Language Model–Based Mobile Application for Querying Patient Electronic Health Data | JACC: Advances

JACC Speciality Journals

Play Episode Listen Later Jun 25, 2025 2:29


Darshan H. Brahmbhatt, Podcast Editor of JACC: Advances, discusses a recently published original research paper on LLMonFHIR: A Physician-Validated, Large Language Model–Based Mobile Application for Querying Patient Electronic Health Data.

Book Overflow
Developing a Mental Model for AI - Thinking Like a Large Language Model by Mukund Sundararajan

Book Overflow

Play Episode Listen Later Jun 23, 2025 66:35


In this episode of Book Overflow, Carter and Nathan discuss Thinking Like a Large Language Model by Mukund Sundararajan. Join them as they discuss the different mental models for working with AI, the art of prompt engineering, and some exciting developments in Carter's career!

-- Books Mentioned in this Episode --
Note: As an Amazon Associate, we earn from qualifying purchases.
Thinking Like a Large Language Model by Mukund Sundararajan: https://amzn.to/466v89G (paid link)

Spotify: https://open.spotify.com/show/5kj6DLCEWR5nHShlSYJI5L
Apple Podcasts: https://podcasts.apple.com/us/podcast/book-overflow/id1745257325
X: https://x.com/bookoverflowpod
Carter on X: https://x.com/cartermorgan
Nathan's Functionally Imperative: www.functionallyimperative.com

Book Overflow is a podcast for software engineers, by software engineers dedicated to improving our craft by reading the best technical books in the world. Join Carter Morgan and Nathan Toups as they read and discuss a new technical book each week!

The full book schedule and links to every major podcast player can be found at https://www.bookoverflow.io

Eye On A.I.
#264 Amr Awadallah: Vectara's Mission To Make AI Hallucination-Free & Enterprise Ready

Eye On A.I.

Play Episode Listen Later Jun 22, 2025 38:52


AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

What's stopping large language models from being truly enterprise-ready?

In this episode, Vectara CEO and co-founder Amr Awadallah breaks down how his team is solving one of AI's biggest problems: hallucinations. From his early work at Yahoo and Cloudera to building Vectara, Amr shares his mission to make AI accurate, secure, and explainable. He dives deep into why RAG (Retrieval-Augmented Generation) is essential, how Vectara detects hallucinations in real time, and why trust and transparency are non-negotiable for AI in business.

Whether you're a developer, founder, or enterprise leader, this conversation sheds light on the future of safe, reliable, and production-ready AI. Don't miss this if you want to understand how AI will really be used at scale.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

a16z
The State of AI & Education

a16z

Play Episode Listen Later Jun 20, 2025 29:20


How is AI actually being used in classrooms today? Are teachers adopting it, or resisting it? And could software eventually replace traditional instruction entirely?

In this episode of This Week in Consumer AI, a16z partners Justine Moore, Olivia Moore, and Zach Cohen explore one of the most rapidly evolving — and widely debated — frontiers in consumer technology: education.

They unpack how generative AI is already reshaping educational workflows, enabling teachers to scale feedback, personalize curriculum, and reclaim time from administrative tasks. They also examine emerging consumer behavior — from students using AI for homework to parents exploring AI-led learning paths for their children.

Resources:
Find Olivia on X: https://x.com/omooretweets
Find Justine on X: https://x.com/venturetwins
Find Zach on X: https://x.com/zachcohen25

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

AI + a16z
AI, Data Engineering, and the Modern Data Stack

AI + a16z

Play Episode Listen Later Jun 20, 2025 35:07


In this episode of AI + a16z, dbt Labs founder and CEO Tristan Handy sits down with a16z's Jennifer Li and Matt Bornstein to explore the next chapter of data engineering — from the rise (and plateau) of the modern data stack to the growing role of AI in analytics and data engineering. As they sum up the impact of AI on data workflows: "The interesting question here is human-in-the-loop versus human-not-in-the-loop. AI isn't about replacing analysts — it's about enabling self-service across the company. But without a human to verify the result, that's a very scary thing."

Among other specific topics, they also discuss how automation and tooling like SQL compilers are reshaping how engineers work with data; dbt's new Fusion Engine and what it means for developer workflows; and what to make of the spate of recent data-industry acquisitions and ambitious product launches.

Follow everyone on X:
Tristan Handy
Jennifer Li
Matt Bornstein

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

The Daily Zeitgeist
MAGA vs MAGA, US = Cause Of/Solution To All Earth's Problems? 06.19.25

The Daily Zeitgeist

Play Episode Listen Later Jun 19, 2025 73:17 Transcription Available


In episode 1883, Jack and Miles are joined by writer, comedian, and co-host of Yo, Is This Racist?, Andrew Ti, to discuss… America's Cold War Strategy Is Coming Home To Roost Huh? Our Information Environment Is So F**ked, Couple Wild Stories About People Not Knowing How To Act Around AI and more!

Tucker Vs. Ted Smackdown
They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.
Father of man killed in Port St. Lucie officer-involved shooting: 'My son deserved better'
LISTEN: Husk by Men I Trust

See omnystudio.com/listener for privacy information.

PodRocket - A web development podcast from LogRocket
Gemini 2.5 with Logan Kilpatrick

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Jun 19, 2025 30:25


Logan Kilpatrick from Google DeepMind talks about the latest developments in the Gemini 2.5 model family, including Gemini 2.5 Pro, Flash, and the newly introduced Flash-Lite. Logan also offers insight into AI development workflows, model performance, and the future of proactive AI assistants.

Links:
Website: https://logank.ai
LinkedIn: https://www.linkedin.com/in/logankilpatrick
X: https://x.com/officiallogank
YouTube: https://www.youtube.com/@LoganKilpatrickYT
Google AI Studio: https://aistudio.google.com

Resources:
Gemini 2.5 Pro Preview: even better coding performance (https://developers.googleblog.com/en/gemini-2-5-pro-io-improved-coding-performance)
Building with AI: highlights for developers at Google I/O (https://blog.google/technology/developers/google-ai-developer-updates-io-2025)

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)

Special Guest: Logan Kilpatrick.

Seller Sessions
The Blueprint Show - Unlocking the Future of E-commerce with AI

Seller Sessions

Play Episode Listen Later Jun 19, 2025 52:35


The Blueprint Show - Unlocking the Future of E-commerce with AI

Summary

In this episode of Seller Sessions, Danny McMillan and Andrew Joseph Bell explore the intersection of AI and e-commerce, with a focus on Amazon's technological advancements. They examine Amazon science papers versus patents, discuss challenges with large language models, and highlight the importance of semantic intent in product recommendations. The conversation explores the evolution from keyword optimization to understanding customer purchase intentions, showcasing how AI tools like Rufus are transforming the shopping experience. The hosts provide practical strategies for sellers to optimize listings and harness AI for improved product visibility and sales.

Key Takeaways:
- Amazon science papers predict future e-commerce trends.
- AI integration is accelerating in Amazon's ecosystem.
- Understanding semantic intent is crucial for product recommendations.
- The shift from keywords to purchase intentions is significant.
- Rufus enhances the shopping experience with AI planning capabilities.
- Sellers should focus on customer motivations in their listings.
- Creating compelling product content is essential for visibility.
- Custom GPTs can optimize product listings effectively.
- Inference pathways help align products with customer goals.
- Asking the right questions is key to leveraging AI effectively.

Sound Bites:
- "Understanding semantic intent is crucial."
- "You can bend AI to your will."
- "Asking the right questions opens doors."

Chapters:
00:00 Introduction to Seller Sessions and New Season
00:33 Exploring Amazon Science Papers vs. Patents
01:27 Understanding Rufus and AI in E-commerce
02:52 Challenges in Large Language Models and Product Recommendations
07:09 Research Contributions and Implications for Sellers
10:31 Strategies for Leveraging AI in Product Listings
12:42 The Future of Shopping with AI and Amazon's Innovations
16:14 Practical Examples: Using AI for Product Optimization
22:29 Building Tools for Enhanced E-commerce Experiences
25:38 Product Naming and Features Exploration
27:44 Understanding Inference Pathways in Product Descriptions
30:36 Building Tools for AI Prompting and Automation
38:58 Bending AI to Your Will: Creativity and Imagination
48:10 Practical Applications of AI in Business Automation

Smart Software with SmartLogic
Nx and Machine Learning in Elixir with Sean Moriarity

Smart Software with SmartLogic

Play Episode Listen Later Jun 19, 2025 44:21


Today on Elixir Wizards, hosts Sundi Myint and Charles Suggs catch up with Sean Moriarity, co-creator of the Nx project and author of Machine Learning in Elixir. Sean reflects on his transition from the military to a civilian job building large language models (LLMs) for software. He explains how the Elixir ML landscape has evolved since the rise of ChatGPT, shifting from building native model implementations toward orchestrating best-in-class tools. We discuss the pragmatics of adding ML to Elixir apps: when to start with out-of-the-box LLMs vs. rolling your own, how to hook into Python-based libraries, and how to tap Elixir's distributed computing for scalable workloads. Sean closes with advice for developers embarking on Elixir ML projects, from picking motivating use cases to experimenting with domain-specific languages for AI-driven workflows.

Key topics discussed in this episode:
- The evolution of the Nx (Numerical Elixir) project and what's new with ML in Elixir
- Treating Elixir as an orchestration layer for external ML tools
- When to rely on off-the-shelf LLMs vs. custom models
- Strategies for integrating Elixir with Python-based ML libraries
- Leveraging Elixir's distributed computing strengths for ML tasks
- Starting ML projects with existing data considerations
- Synthetic data generation using large language models
- Exploring DSLs to streamline AI-powered business logic
- Balancing custom frameworks and service-based approaches in production
- Pragmatic advice for getting started with ML in Elixir

Links mentioned:
https://hexdocs.pm/nx/intro-to-nx.html
https://pragprog.com/titles/smelixir/machine-learning-in-elixir/
https://magic.dev/
https://smartlogic.io/podcast/elixir-wizards/s10-e10-sean-moriarity-machine-learning-elixir/
Pragmatic Bookshelf: https://pragprog.com/
ONNX Runtime Bindings for Elixir: https://github.com/elixir-nx/ortex
https://github.com/elixir-nx/bumblebee
Silero Voice Activity Detector: https://github.com/snakers4/silero-vad
Paulo Valente Graph Splitting Article: https://dockyard.com/blog/2024/11/06/2024/nx-sharding-update-part-1
Thomas Millar's Twitter: https://x.com/thmsmlr
https://github.com/thmsmlr/instructorex
https://phoenix.new/
https://tidewave.ai/
https://en.wikipedia.org/wiki/BERT_(language_model)
Talk: PyTorch: Fast Differentiable Dynamic Graphs in Python (https://www.youtube.com/watch?v=am895oU6mmY) by Soumith Chintala
https://hexdocs.pm/axon/Axon.html
https://hexdocs.pm/exla/EXLA.html
VLM (Vision Language Models Explained): https://huggingface.co/blog/vlms
https://github.com/ggml-org/llama.cpp
Vector Search in Elixir: https://github.com/elixir-nx/hnswlib
https://www.amplified.ai/
Llama 4
Mistral Open-Source LLMs: https://mistral.ai/
https://github.com/openai/whisper
Elixir Wizards Season 5: Adopting Elixir: https://smartlogic.io/podcast/elixir-wizards/season-five
https://docs.ray.io/en/latest/ray-overview/index.html
https://hexdocs.pm/flame/FLAME.html
https://firecracker-microvm.github.io/
https://fly.io/
https://kubernetes.io/
WireGuard VPNs: https://www.wireguard.com/
https://hexdocs.pm/phoenix_pubsub/Phoenix.PubSub.html
https://www.manning.com/books/deep-learning-with-python
Code BEAM 2025 Keynote: Designing LLM Native Systems - Sean Moriarity
Ash Framework: https://ash-hq.org/
Sean's Twitter: https://x.com/seanmoriarity
Sean's Personal Blog: https://seanmoriarity.com/
Erlang Ecosystems Foundation Slack: https://erlef.org/slack-invite/erlef
Elixir Forum: https://elixirforum.com/
Sean's LinkedIn: https://www.linkedin.com/in/sean-m-ba231a149/

Special Guest: Sean Moriarity.

Empowered Patient Podcast
How Large Language Models Are Transforming Chart Review and Improving Patient Care with David Sontag Layer Health TRANSCRIPT

Empowered Patient Podcast

Play Episode Listen Later Jun 19, 2025


David Sontag, CEO and Co-Founder of Layer Health, describes the environment of chart reviews in healthcare and how AI and large language models can be used to analyze a patient's medical record to extract key clinical details. Applying natural language processing to medical records has been challenging due to the complexity of the language and the longitudinal nature of the data. This large language approach from Layer can enhance clinical decision-making, quality measurement, regulatory compliance, and patient outcomes.

David explains, "Every patient will have experienced a chart review at some point. Whether it's when they've come home from a medical visit and go to their electronic medical record to look at the notes written by their providers. Or it's been experienced in the patient room, in the doctor's room, watching a clinician review the past medical records to try to get a better context of what's going on with that patient, so that's from the patient's perspective."

"The same thing happens everywhere else in healthcare. So, chart review is the process of analyzing a patient's medical record to extract key clinical details. You can imagine going through clinical notes, lab results, imaging reports, and medical history, trying to create that complete and accurate picture of the patient's health. And it's used everywhere for measuring quality, for improving the financial performance of health system providers, and for regulatory compliance."

"Some aspects of natural language understanding from patients' medical records have been attempted for well over a decade, and these approaches have been typically very surface-level. So look at a single note, try to answer a relatively simplistic question from that note, but the grand challenge has always been one of how do we mimic the type of reasoning that a physician would do where they would be looking at a patient's longitudinal medical record across many notes trying to piece together data from not just from the unstructured but also the structured data."

#LayerHealth #ClinicalAI #AIinHealthcare #MedicalChartReview #PrecisionMedicine #ClinicalResearch #LLMsinHealthcare
layerhealth.com
Listen to the podcast here

Empowered Patient Podcast
How Large Language Models Are Transforming Chart Review and Improving Patient Care with David Sontag Layer Health

Empowered Patient Podcast

Play Episode Listen Later Jun 19, 2025 23:17


David Sontag, CEO and Co-Founder of Layer Health, describes the environment of chart reviews in healthcare and how AI and large language models can be used to analyze a patient's medical record to extract key clinical details. Applying natural language processing to medical records has been challenging due to the complexity of the language and the longitudinal nature of the data. This large language approach from Layer can enhance clinical decision-making, quality measurement, regulatory compliance, and patient outcomes.

David explains, "Every patient will have experienced a chart review at some point. Whether it's when they've come home from a medical visit and go to their electronic medical record to look at the notes written by their providers. Or it's been experienced in the patient room, in the doctor's room, watching a clinician review the past medical records to try to get a better context of what's going on with that patient, so that's from the patient's perspective."

"The same thing happens everywhere else in healthcare. So, chart review is the process of analyzing a patient's medical record to extract key clinical details. You can imagine going through clinical notes, lab results, imaging reports, and medical history, trying to create that complete and accurate picture of the patient's health. And it's used everywhere for measuring quality, for improving the financial performance of health system providers, and for regulatory compliance."

"Some aspects of natural language understanding from patients' medical records have been attempted for well over a decade, and these approaches have been typically very surface-level. So look at a single note, try to answer a relatively simplistic question from that note, but the grand challenge has always been one of how do we mimic the type of reasoning that a physician would do where they would be looking at a patient's longitudinal medical record across many notes trying to piece together data from not just from the unstructured but also the structured data."

#LayerHealth #ClinicalAI #AIinHealthcare #MedicalChartReview #PrecisionMedicine #ClinicalResearch #LLMsinHealthcare
layerhealth.com
Download the transcript here

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 549: Custom GPTs, Gems and Projects. What they are and why you need to use them

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jun 18, 2025 48:59


Before you hit that new chat button in ChatGPT, Claude or Gemini... you're already doing it wrong. I've run 200+ live GenAI training sessions and have taught more than 11,000 business pros, and this is one of the biggest mistakes. Just blindly hitting that new chat button can end up killing any perceived productivity you think you're getting while using LLMs. Instead, you need to know the 101s of Gemini Gems, GPTs and Projects. This is one AI at Work Wednesdays you can't miss.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- Harnessing Custom GPTs for Efficiency
- Google Gems vs. Custom GPTs Review
- ChatGPT Projects: Features & Updates
- Claude Projects Integration & Benefits
- Effective AI Chatbot Usage Techniques
- Leveraging AI for Business Growth
- Deep Research in ChatGPT Projects
- Google Apps Integration in Gems

Timestamps:
00:00 AI Chatbot Efficiency Tips
04:12 "Putting AI to Work Wednesdays"
08:39 "Optimizing ChatGPT Usage"
11:28 Similar Functions, Different Categories
15:41 Beyond Basic Folder Structures
16:25 ChatGPT Project Update
22:01 Email Archive and Albacross Software
24:34 Optimize AI with Contextual Data
27:49 "Improving Process Through Meta Analysis"
30:53 Data File Access Issue
33:27 File Handling Bug in New GPT
36:12 Continuous Improvement Encouragement
41:16 AI Selection Tool Website
43:34 Google Ecosystem AI Assistant
45:46 "Optimize AI Usage for Projects"

Keywords: Custom GPTs, Google's Gems, Claude's Projects, OpenAI ChatGPT, AI chatbots, Large Language Models, AI systems, Google Workspace, productivity tools, GPT-3.5, GPT-4, AI updates, API actions, reasoning models, ChatGPT projects, AI assistant, file uploads, project management, AI integrations, Google Calendar, Gmail, Google Drive, context window, AI usage, AI-powered insights, Gemini 2.5 Pro, Claude Opus, Claude Sonnet, AI consultation, ChatGPT Canvas, Claude artifacts, generative AI, AI strategy partner, AI brainstorming partner.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Try Google Veo 3 today! Sign up at gemini.google to get started.

Software Engineering Radio - The Podcast for Professional Software Developers
SE Radio 673: Abhinav Kimothi on Retrieval-Augmented Generation

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Jun 18, 2025 55:55


In this episode of Software Engineering Radio, Abhinav Kimothi sits down with host Priyanka Raghavan to explore retrieval-augmented generation (RAG), drawing insights from Abhinav's book, A Simple Guide to Retrieval-Augmented Generation. The conversation begins with an introduction to key concepts, including large language models (LLMs), context windows, RAG, hallucinations, and real-world use cases. They then delve into the essential components and design considerations for building a RAG-enabled system, covering topics such as retrievers, prompt augmentation, indexing pipelines, retrieval strategies, and the generation process. The discussion also touches on critical aspects like data chunking and the distinctions between open-source and pre-trained models. The episode concludes with a forward-looking perspective on the future of RAG and its evolving role in the industry. Brought to you by IEEE Computer Society and IEEE Software magazine.
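The RAG pipeline the episode walks through (chunk and index documents, retrieve relevant passages, augment the prompt, then generate) can be sketched in a few lines. This is a toy illustration only, not the book's or the guest's implementation: the keyword-overlap retriever and the chunk sizes are stand-ins for the embedding models, vector stores, and LLM calls a real system would use.

```python
# Toy sketch of the RAG flow: chunk -> index -> retrieve -> augment prompt.
# A real system would embed chunks, store them in a vector index, and send
# the augmented prompt to an LLM; here the retriever is simple word overlap.

def chunk(text, size=50):
    """Split a document into fixed-size word chunks (the 'data chunking' step)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query, chunks, k=2):
    """Toy retriever: rank chunks by how many query words they share."""
    q = set(query.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return ranked[:k]

def augment_prompt(query, passages):
    """Prompt augmentation: ground the question in the retrieved context."""
    context = "\n".join("- " + p for p in passages)
    return "Answer using only this context:\n" + context + "\n\nQuestion: " + query

docs = [
    "The context window limits how much text an LLM can attend to at once.",
    "Retrieval-augmented generation fetches relevant passages to reduce hallucinations.",
]
chunks = [c for d in docs for c in chunk(d)]
prompt = augment_prompt("How does RAG reduce hallucinations?",
                        retrieve("reduce hallucinations", chunks))
print(prompt)
```

The interesting design decisions the episode covers live inside these stubs: how big the chunks are, which retrieval strategy ranks them, and how the retrieved context is framed in the prompt before generation.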

MacVoices Audio
MacVoices #25171: Road to Macstock - Mike Schmitz

MacVoices Audio

Play Episode Listen Later Jun 18, 2025 25:09


Mike Schmitz returns to the Road to Macstock Conference and Expo to discuss his session, "Think Different: Using AI as Your Creative Copilot." He explains how he uses AI not to automate the end product, but to enhance the early stages of the creative process—particularly brainstorming and idea generation. By reframing "hallucinations" as sparks of inspiration, Mike shares how large language models help overcome creative bottlenecks, reduce friction, and support consistency without sacrificing originality. He also highlights specific tools that allow independent creators to scale their reach more effectively and thoughtfully.

http://traffic.libsyn.com/maclevelten/MV25171.mp3

This edition of MacVoices is brought to you by our Patreon supporters. Get access to the MacVoices Slack and MacVoices After Dark by joining in at Patreon.com/macvoices.

Show Notes:

Chapters:
00:07 Road to Macstock with Mike Schmitz
01:48 AI as Your Creative Copilot
03:48 Exploring AI's Creative Potential
09:52 Leveraging AI for Content Creation
13:59 Tools Revolutionizing Creativity
19:52 Removing Friction in Creation
21:04 Using AI to Ship More
21:54 Macstock Conference Details
22:44 Where to Find Mike Schmitz

Links:
Macstock Conference and Expo
Save $50 with Mike's discount code: practicalpkm
Save $50 with Chuck's discount code: macvoices50

Guests:
Mike Schmitz is a nerd and an independent creator who talks about the intersection of faith, productivity, and tech. He's a YouTuber, screencaster (ScreenCastsOnline), writer (The Sweet Setup), and co-hosts the Focused, Bookworm, and Intentional Family podcasts. His newest effort is PracticalPKM, where he teaches his personal approach to getting more done. Follow him on Twitter as _MikeSchmitz.

Support:
Become a MacVoices Patron on Patreon: http://patreon.com/macvoices
Enjoy this episode? Make a one-time donation with PayPal

Connect:
Web: http://macvoices.com
Twitter: http://www.twitter.com/chuckjoiner
http://www.twitter.com/macvoices
Mastodon: https://mastodon.cloud/@chuckjoiner
Facebook: http://www.facebook.com/chuck.joiner
MacVoices Page on Facebook: http://www.facebook.com/macvoices/
MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice
LinkedIn: https://www.linkedin.com/in/chuckjoiner/
Instagram: https://www.instagram.com/chuckjoiner/

Subscribe:
Audio in iTunes
Video in iTunes
Subscribe manually via iTunes or any podcatcher:
Audio: http://www.macvoices.com/rss/macvoicesrss
Video: http://www.macvoices.com/rss/macvoicesvideorss

Pediatrics On Call
Atopic Dermatitis, Large Language Model Accuracy in Pediatric and Adult Medicine – Ep. 251

Pediatrics On Call

Play Episode Listen Later Jun 17, 2025 28:06


In this episode Jennifer Schoch, MD, FAAD, FAAP, discusses updated guidelines for the diagnosis and treatment of atopic dermatitis or eczema. Hosts David Hill, MD, FAAP, and Joanna Parga-Belinkie, MD, FAAP, also speak with Esli Osmanlliu, MD, and medical student Nik Jaiswal about the accuracy of large language models in pediatric and adult medicine. For resources go to aap.org/podcast.

The Treasury Update Podcast
Applying AI in Treasury: Use Cases, Cautions, and What's Next

The Treasury Update Podcast

Play Episode Listen Later Jun 16, 2025 43:11


In this episode, Craig Jeffery talks with Dave Robertson about how AI is being applied in treasury today. They cover key use cases, from forecasting and credit analysis to policy automation, and explore challenges like hallucinations and data privacy. What should treasurers be doing now to prepare for what's coming? Listen in to learn more. CashPath Advisors

(in-person, virtual & hybrid) Events: demystified
182: The Truth About AI Agents & Large Language Models: Insights ft. Vlad Iliescu

(in-person, virtual & hybrid) Events: demystified

Play Episode Listen Later Jun 13, 2025 65:50


In this episode of Events Demystified Podcast, host Anca Platon Trifan provides a deep dive into the world of AI agents, large language models, and practical applications for businesses. Featuring guest Vlad Iliescu, CTO of zenai.os and a Microsoft Most Valuable Professional in AI, the discussion covers how to build AI systems that support team goals, the step-by-step implementation of AI agents, the importance of strategy and process in AI adoption, and the real ROI of AI in businesses. The episode also includes a live demonstration and practical insights on safe AI development, making it a must-watch for leaders looking to integrate AI responsibly into their operations.

Smart Software with SmartLogic
LangChain: LLM Integration for Elixir Apps with Mark Ericksen

Smart Software with SmartLogic

Play Episode Listen Later Jun 12, 2025 38:18


Mark Ericksen, creator of the Elixir LangChain framework, joins the Elixir Wizards to talk about LLM integration in Elixir apps. He explains how LangChain abstracts away the quirks of different AI providers (OpenAI, Anthropic's Claude, Google's Gemini) so you can work with any LLM through one consistent API. We dig into core features like conversation chaining, tool execution, automatic retries, and production-grade fallback strategies. Mark shares his experiences maintaining LangChain in a fast-moving AI world: how it shields developers from API drift, manages token budgets, and handles rate limits and outages. He also reveals testing tactics for non-deterministic AI outputs, configuration tips for custom authentication, and the highlights of the new v0.4 release, including “content parts” support for thinking-style models. Key topics discussed in this episode: • Abstracting LLM APIs behind a unified Elixir interface • Building and managing conversation chains across multiple models • Exposing application functionality to LLMs through tool integrations • Automatic retries and fallback chains for production resilience • Supporting a variety of LLM providers • Tracking and optimizing token usage for cost control • Configuring API keys, authentication, and provider-specific settings • Handling rate limits and service outages with graceful degradation • Processing multimodal inputs (text, images) in LangChain workflows • Extracting structured data from unstructured LLM responses • Leveraging “content parts” in v0.4 for advanced thinking-model support • Debugging LLM interactions using verbose logging and telemetry • Kickstarting experiments in LiveBook notebooks and demos • Comparing Elixir LangChain to the original Python implementation • Crafting human-in-the-loop workflows for interactive AI features • Integrating LangChain with the Ash framework for chat-driven interfaces • Contributing to open-source LLM adapters and staying ahead of API changes • Building fallback chains
(e.g., OpenAI → Azure) for seamless continuity • Embedding business logic decisions directly into AI-powered tools • Summarization techniques for token efficiency in ongoing conversations • Batch processing tactics to leverage lower-cost API rate tiers • Real-world lessons on maintaining uptime amid LLM service disruptions Links mentioned: https://rubyonrails.org/ https://fly.io/ https://zionnationalpark.com/ https://podcast.thinkingelixir.com/ https://github.com/brainlid/langchain https://openai.com/ https://claude.ai/ https://gemini.google.com/ https://www.anthropic.com/ Vertex AI Studio https://cloud.google.com/generative-ai-studio https://www.perplexity.ai/ https://azure.microsoft.com/ https://hexdocs.pm/ecto/Ecto.html https://oban.pro/ Chris McCord's ElixirConf EU 2025 Talk https://www.youtube.com/watch?v=ojL_VHc4gLk Getting started: https://hexdocs.pm/langchain/gettingstarted.html https://ash-hq.org/ https://hex.pm/packages/langchain https://hexdocs.pm/igniter/readme.html https://www.youtube.com/watch?v=WM9iQlQSFg @brainlid on Twitter and BlueSky Special Guest: Mark Ericksen.
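The retry-and-fallback pattern discussed in the episode is not Elixir-specific. As a rough, language-neutral sketch (in Python, with hypothetical provider callables standing in for real OpenAI or Azure clients):

```python
import time

class ProviderError(Exception):
    """Stands in for a provider-specific failure (rate limit, outage)."""

def with_retries(call, attempts=3, delay=0.0):
    """Retry a flaky call, re-raising the error after the final attempt."""
    for i in range(attempts):
        try:
            return call()
        except ProviderError:
            if i == attempts - 1:
                raise
            time.sleep(delay)  # a real client would back off between attempts

def fallback_chain(providers, prompt):
    """Try providers in order (e.g. primary then backup), returning the first success."""
    for name, call in providers:
        try:
            return name, with_retries(lambda: call(prompt))
        except ProviderError:
            continue
    raise ProviderError("all providers failed")

# Hypothetical clients: one permanently rate-limited, one healthy.
def flaky(prompt):
    raise ProviderError("rate limited")

def stable(prompt):
    return f"echo: {prompt}"

provider, answer = fallback_chain([("primary", flaky), ("backup", stable)], "hello")
print(provider, answer)
```

A production chain would also distinguish retryable errors (rate limits, timeouts) from permanent ones; the sketch collapses all failures into a single exception type for brevity.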

Eye On A.I.
#261 Jonathan Frankle: How Databricks is Disrupting AI Model Training

Eye On A.I.

Play Episode Listen Later Jun 12, 2025 52:47


This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI…and saved.   Try OCI for free at http://oracle.com/eyeonai   What if you could fine-tune an AI model without any labeled data—and still outperform traditional training methods?   In this episode of Eye on AI, we sit down with Jonathan Frankle, Chief Scientist at Databricks and co-founder of MosaicML, to explore TAO (Test-time Adaptive Optimization)—Databricks' breakthrough tuning method that's transforming how enterprises build and scale large language models (LLMs).   Jonathan explains how TAO uses reinforcement learning and synthetic data to train models without the need for expensive, time-consuming annotation. We dive into how TAO compares to supervised fine-tuning, why Databricks built their own reward model (DBRM), and how this system allows for continual improvement, lower inference costs, and faster enterprise AI deployment.   Whether you're an AI researcher, enterprise leader, or someone curious about the future of model customization, this episode will change how you think about training and deploying AI.   Explore the latest breakthroughs in data and AI from Databricks: https://www.databricks.com/events/dataaisummit-2025-announcements Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI  

Thoughts on the Market
How China Is Rewriting the AI Code

Thoughts on the Market

Play Episode Listen Later Jun 10, 2025 3:38


Our Head of Asia Technology Research Shawn Kim discusses China's distinctly different approach to AI development and its investment implications. Read more insights from Morgan Stanley. ----- Transcript ----- Welcome to Thoughts on the Market. I'm Shawn Kim, Head of Morgan Stanley's Asia Technology Team. Today: a behind-the-scenes look at how China is reshaping the global AI landscape. It's Tuesday, June 10 at 2pm in Hong Kong. China has been quietly and methodically executing on its top-down strategy to establish its domestic AI capabilities ever since 2017. And while U.S. semiconductor restrictions have presented a near-term challenge, they have also forced China to achieve significant advancements in AI with less hardware. So rather than building the most powerful AI capabilities, China's primary focus has been on bringing AI to market with maximum efficiency. And you can see this with the recent launch of DeepSeek R1, and there are literally hundreds of AI start-ups using open-source Large Language Models to carve out niches and moats in this AI landscape. The key question is: What is the path forward? Can China sustain this momentum and translate its research prowess into global AI leadership? The answer hinges on four things: energy, data, talent, and computing. China's centralized government – with more than a billion mobile internet users – possesses enormous amounts of data. China also has access to abundant energy: it built 10 nuclear power plants just last year, and 10 more are coming this year. U.S. chips are far better for the moment, but China is also advancing quickly, and getting a lot done without the best chips. Finally, China has plenty of talent – according to the World Economic Forum, 47 percent of the world's top AI researchers are now in China.
Plus, there is already a comprehensive AI governance framework in place, with more than 250 regulatory standards ensuring that AI development remains secure, ethical, and strategically controlled. So, all in all, China is well on its way to realizing its ambitious goal of becoming a world leader in AI by 2030. And by that point, AI will be deeply embedded across all sectors of China's economy, supported by a regulatory environment. We believe the AI revolution will boost China's long-term potential GDP growth by addressing key structural headwinds to the economy, such as aging demographics and slowing productivity growth. We estimate that GenAI can create almost 7 trillion RMB in labor and productivity value. This equals almost 5 percent of China's GDP growth last year. And the investment implications of China's approach to AI cannot be overstated. It's clear that China has already established a solid AI foundation. And now meaningful opportunities are emerging not just for the big players, but also for smaller, mass-market businesses as well. And with value shifting from AI hardware to the AI application layer, we see China continuing its success in bringing out AI applications to market and transforming industries in very practical terms. As history shows, whoever adopts and diffuses a new technology the fastest wins – and is difficult to displace. Thanks for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.

Techmeme Ride Home
Mon. 06/09 – WWDC 2025

Techmeme Ride Home

Play Episode Listen Later Jun 9, 2025 21:37


All the headlines from WWDC. Microsoft unveils the first iteration of that handheld gaming strategy. Meta is considering its largest external AI investment yet. And did Apple researchers reveal that Large Language Models have a structural ceiling, and are we basically there? Sponsors: Acorns.com/ride Links: Hands-On With the Xbox Ally X, the New Gaming Handheld from Asus and Microsoft (IGN) Meta in Talks for Scale AI Investment That Could Top $10 Billion (Bloomberg) A knockout blow for LLMs? (Gary Marcus On AI) See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 542: Apple's controversial AI study, Google's new model and more AI News That Matters

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jun 9, 2025 46:47


↳ Why is Anthropic in hot water with Reddit? ↳ Will OpenAI become the de facto business AI tool? ↳ Did Apple make a mistake in its buzzworthy AI study? ↳ And why did Google release a new model when it was already on top? So many AI questions. We've got the AI answers. Don't waste hours each day trying to keep up with AI developments. We do that for you on Mondays with our weekly AI News That Matters segment. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Have a question? Join the convo here. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: OpenAI's Advanced Voice Mode Update; Reddit's Lawsuit Against Anthropic; OpenAI's New Cloud Connectors; Google's Gemini 2.5 Pro Release; DeepSeek Accused of Data Sourcing; Anthropic Cuts Windsurf Claude Access; Apple's AI Reasoning Models Study; Meta's Investment in Scale AI. Timestamps: 00:00 Weekly AI News Summary 04:27 "Advanced Voice Mode Limitations" 09:07 Reddit's Role in AI Tensions 10:23 Reddit's Impact on Content Strategy 16:10 "RAG's Evolution: Accessible Data Insights" 19:16 AI Model Update and Improvements 22:59 DeepSeek Accused of Data Misuse 24:18 DeepSeek Accused of Distilling AI Data 28:20 Anthropic Limits Windsurf Cloud Access 32:37 "Study Questions AI Reasoning Models" 36:06 Apple's Dubious AI Research Tactics 39:36 Meta-Scale AI Partnership Potential 40:46 AI Updates: Apple's Gap Year 43:52 AI Updates: Voice, Lawsuits, Models Keywords: Apple AI study, AI reasoning models, Google Gemini, OpenAI, ChatGPT, Anthropic, Reddit lawsuit, Large Language Model, AI voice mode, Advanced voice mode, Real-time language translation, Cloud connectors, Dynamic data integration, Meeting recorder, Coding benchmarks, DeepSeek, R1 model, Distillation method, AI ethics, Windsurf, Claude 3.x, Model access, Privacy and data rights, AI research, Meta investment, Scale AI, WWDC, Apple's AI announcements, Gap year, On-device AI models, Siri 2.0, AI market strategy, ChatGPT teams, SharePoint, OneDrive, HubSpot, Scheduled actions, Sparkify, VO3, Google AI Pro plan, Creative AI, Innovation in AI, Data infrastructure. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Try Google Veo 3 today! Sign up at gemini.google to get started.

The John Batchelor Show
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 1/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author)

The John Batchelor Show

Play Episode Listen Later Jun 8, 2025 9:26


LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 1/4: Ingenious: A Biography of Benjamin Franklin, Scientist by  Richard Munson  (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1752

The John Batchelor Show
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 2/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author)

The John Batchelor Show

Play Episode Listen Later Jun 8, 2025 8:24


LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 2/4: Ingenious: A Biography of Benjamin Franklin, Scientist by  Richard Munson  (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1850

The John Batchelor Show
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 3/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author)

The John Batchelor Show

Play Episode Listen Later Jun 8, 2025 12:20


LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 3/4: Ingenious: A Biography of Benjamin Franklin, Scientist by  Richard Munson  (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1906

The John Batchelor Show
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 4/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author)

The John Batchelor Show

Play Episode Listen Later Jun 8, 2025 7:20


LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 4/4: Ingenious: A Biography of Benjamin Franklin, Scientist by  Richard Munson  (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1867 IN PARIS

The John Batchelor Show
Preview: Colleague Rachel Lomaasky, Chief Data Scientist at Flux, comments on the spread of AI large language models beyond English and their geopolitical reception in other sovereign states. More later.

The John Batchelor Show

Play Episode Listen Later Jun 4, 2025 1:43


Preview: Colleague Rachel Lomaasky, Chief Data Scientist at Flux, comments on the spread of AI large language models beyond English and their geopolitical reception in other sovereign states. More later. DECEMBER 1961

a16z
Fei-Fei Li: World Models and the Multiverse

a16z

Play Episode Listen Later Jun 4, 2025 22:56


What if the next leap in artificial intelligence isn't about better language—but better understanding of space? In this episode, a16z General Partner Erik Torenberg moderates a conversation with Fei-Fei Li, cofounder and CEO of World Labs, and a16z General Partner Martin Casado, an early investor in the company. Together, they dive into the concept of world models—AI systems that can understand and reason about the 3D, physical world, not just generate text. Often called the “godmother of AI,” Fei-Fei explains why spatial intelligence is a fundamental and still-missing piece of today's AI—and why she's building an entire company to solve it. Martin shares how he and Fei-Fei aligned on this vision long before it became fashionable, and why it could reshape the future of robotics, creativity, and computational interfaces. From the limits of LLMs to the promise of embodied intelligence, this conversation blends personal stories with deep technical insights—exploring what it really means to build AI that understands the real (and virtual) world. Resources: Find Fei-Fei on X: https://x.com/drfeifei Find Martin on X: https://x.com/martin_casado Learn more about World Labs: https://www.worldlabs.ai/ Stay Updated: Let us know what you think: https://ratethispodcast.com/a16z Find a16z on Twitter: https://twitter.com/a16z Find a16z on LinkedIn: https://www.linkedin.com/company/a16z Subscribe on your favorite podcast app: https://a16z.simplecast.com/ Follow our host: https://x.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Why Is This Happening? with Chris Hayes
“Empire of AI” with Karen Hao

Why Is This Happening? with Chris Hayes

Play Episode Listen Later Jun 3, 2025 50:27


There's a good chance that before November of 2022, you hadn't heard of tech nonprofit OpenAI or cofounder Sam Altman. But over the last few years, they've become household names with the explosive growth of the generative AI tool called ChatGPT. What's been going on behind the scenes at one of the most influential companies in history and what effect has this had on so many facets of our lives? Karen Hao is an award-winning journalist and the author of “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI” and has covered the impacts of artificial intelligence on society. She joins WITHpod to discuss the trajectory AI has been on, economic effects, whether or not she thinks the AI bubble will pop and more.

Behind The Knife: The Surgery Podcast
Artificial Intelligence for the Clinician Ep. 3: Natural Language Processing and Large Language Models

Behind The Knife: The Surgery Podcast

Play Episode Listen Later Jun 2, 2025 45:28


Welcome back to our series on AI for the clinician! Large language models, like ChatGPT, have been taking the world by storm, and healthcare is no exception to that rule – your institution may already be using them! In this episode we'll tackle the fundamentals of how they work and their applications and limitations to keep you up to date on this fast-moving, exciting technology. Hosts: Ayman Ali, MD Ayman Ali is a Behind the Knife fellow and general surgery PGY-3 at Duke Hospital in his academic development time where he focuses on data science, artificial intelligence, and surgery. Ruchi Thanawala, MD: @Ruchi_TJ Ruchi Thanawala is an Assistant Professor of Informatics and Thoracic Surgery at Oregon Health and Science University (OHSU) and founder of Firefly, an AI-driven platform that is built for competency-based medical education. In addition, she directs the Surgical Data and Decision Sciences Lab for the Department of Surgery at OHSU.  Phillip Jenkins, MD: @PhilJenkinsMD Phil Jenkins is a general surgery PGY-3 at Oregon Health and Science University and a National Library of Medicine Post-Doctoral fellow pursuing a master's in clinical informatics. Steven Bedrick, PhD: @stevenbedrick Steven Bedrick is a machine learning researcher and an Associate Professor in Oregon Health and Science University's Department of Medical Informatics and Clinical Epidemiology. Please visit https://behindtheknife.org to access other high-yield surgical education podcasts, videos and more.  If you liked this episode, check out our recent episodes here: https://app.behindtheknife.org/listen

The Lawfare Podcast
Lawfare Daily: Josh Batson on Understanding How and Why AI Works

The Lawfare Podcast

Play Episode Listen Later May 30, 2025 41:15


Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—“Mapping the Mind of a Large Language Model” and “Tracing the thoughts of a large language model”—that uncovered some important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.