Podcasts about R1

  • 876 podcasts
  • 2,385 episodes
  • 29m avg duration
  • 5 weekly new episodes
  • Latest: Feb 6, 2026

POPULARITY

(Popularity chart, 2019–2026)


Latest podcast episodes about R1

Becker’s Healthcare Podcast
Peter D. Banko, President & CEO of Baystate Health

Feb 6, 2026 · 13:19


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Peter D. Banko, President & CEO of Baystate Health. Here, he discusses how Baystate Health is strengthening its commitment to people-centric care while navigating the realities of cost management and the evolving landscape of AI governance. He shares insights into building resilient, future-ready health systems that balance innovation with operational discipline. In collaboration with R1.

Becker’s Healthcare Podcast
John Mallia, Interim Chief Financial Officer at Methodist Le Bonheur Healthcare

Feb 6, 2026 · 13:11


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features John Mallia, Interim Chief Financial Officer at Methodist Le Bonheur Healthcare. Here, he discusses how Methodist Le Bonheur Healthcare is navigating revenue cycle and payer dynamics while adapting financial strategies to meet evolving challenges. He shares insights into managing patient expectations and addressing concerns around data usage to create a more transparent and effective healthcare experience. In collaboration with R1.

Becker’s Healthcare Podcast
Abha Agrawal, President and CEO, NorthStar Hospitals

Feb 4, 2026 · 13:05


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Abha Agrawal discusses how NorthStar Hospitals is reshaping rural healthcare by advancing the technology agenda and elevating the patient experience. She shares insights into building sustainable, tech-forward systems that improve access and outcomes for communities often left behind. In collaboration with R1.

Becker’s Healthcare Podcast
Dave Newman, MD, Chief Medical Officer of Virtual Care, Sanford Health

Feb 4, 2026 · 6:24


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Dave Newman discusses how Sanford Health is expanding access by innovating virtual care—meeting patients where they are, including by phone. He shares insight into preventive strategies for chronic kidney disease and emphasizes how collaboration across teams and technologies serves as a powerful catalyst for progress in modern care delivery. In collaboration with R1.

Becker’s Healthcare Podcast
Michael Mutterer, RN, LCPC, NCC, CADC, LNHA, President & CEO, Silver Cross Hospital

Jan 30, 2026 · 15:55


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Michael Mutterer discusses leading growth in an increasingly competitive healthcare landscape and the strategies Silver Cross Hospital is using to elevate patient satisfaction. He also highlights how technology is helping strengthen payment and revenue cycle performance, creating a more sustainable and efficient operational model. In collaboration with R1.

Becker’s Healthcare Podcast
Michael Charlton, MHL, President & CEO, AtlantiCare Health System

Jan 30, 2026 · 16:56


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Michael Charlton discusses the importance of serving underserved communities while supporting caregiver satisfaction. He shares how AtlantiCare is investing in technology across both the payer and provider landscape, and how these advancements are shaping the future workforce. He also explores the evolving role of AI and its potential impact on staffing and care delivery. In collaboration with R1.

Becker’s Healthcare Podcast
Dr. Mike Guertin, MD, MBA, CPE, FASA, Professor of Anesthesiology & Chief Perioperative Medical Director, The Ohio State University Wexner Medical Center

Jan 29, 2026 · 18:10


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Dr. Mike Guertin discusses how AI is creating new opportunities to advance patient care, particularly in perioperative settings. He highlights how emerging technologies are improving surgical efficiency by streamlining information gathering, reporting, and clinical decision support. In collaboration with R1.

Becker’s Healthcare Podcast
Shondra Williams, President & Chief Executive Officer at InclusivCare

Jan 29, 2026 · 14:34


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Dr. Shondra Williams, President & Chief Executive Officer at InclusivCare, as she shares how her organization is navigating Medicaid uncertainty, financial pressure, and patient access challenges. Dr. Williams also discusses leadership, culture, and the role of technology and AI in strengthening community health centers heading into 2026. In collaboration with R1.

¡Buenos días, Javi y Mar!
07:00H | Jan 27, 2026 | ¡Buenos días, Javi y Mar!

Jan 27, 2026 · 60:00


Rodalies reopens the R1 and R2 lines and part of the R4. The investigation points to welding as the cause of the derailment at Adamuz. The Government agrees to regularize half a million undocumented immigrants, with requirements of five months' residence and documentation; PP and Vox warn of a pull effect. Europe investigates X over its AI Grok, which generates explicit photos of minors. France bans social media for children under 15 and plans to ban phones in secondary schools. On '¡Buenos días, Javi y Mar!', the hosts talk about botched home repairs and the preference for good body-fat distribution over muscle in men. CADENA 100 offers the 'Encuesta Absurda' and the best music.

La Linterna
19:00H | Jan 23, 2026 | La Linterna

Jan 23, 2026 · 60:00


The investigation into the Adamuz train accident confirms that the cause was a broken rail, undetectable by alarms because of its slight fracture, prompting politicians to demand accountability. Adif is inspecting similar stretches and reducing speed on the Madrid-Valladolid line. Storm Ingrid batters the northeast of the peninsula with snow, hitting Galicia with ten-meter waves, traffic restrictions, and a thousand trucks held in staging areas. In Catalonia, Rodalies trains resume service after an accident, but a new landslide cuts the R1 line. The Prosecutor's Office shelves complaints against Julio Iglesias for lack of jurisdiction. Spain and Europe face a demographic crisis driven by instability among the young. Employers propose that companies stop paying social security contributions during sick leave to curb absenteeism, which has risen 50% and costs 33 billion euros. Madrid suffers low temperatures and possible snowfall; the EU is asked not to activate the Mercosur agreement in order to protect farmers.

La Linterna
20:00H | Jan 23, 2026 | La Linterna

Jan 23, 2026 · 29:00


The Adamuz rail accident is attributed to a fracture in the track, according to the CIAF. Adif inspects rails for defects, and Minister Puente admits that sensors do not detect anomalies. Pedro Sánchez asks to appear before Congress over the railway situation. The work of professionals and local residents during the emergency is highlighted. The Rodalies R1 line suffers an outage. The A Coruña provincial court upholds the conviction of the train driver in the Angrois accident. The cold and snow storm affects 80 roads and a thousand trucks, with Galicia on red alert for waves and snow. Internationally, Ukraine, Russia, and the United States hold their first trilateral meeting in the United Arab Emirates, focused on the Donbass. The Prosecutor's Office shelves the complaint against Julio Iglesias. The trial of Pedro Sánchez's brother, David Sánchez, for official misconduct and influence peddling, will take place in May and June. Xavier García Albiol is under investigation over the eviction of migrants in Badalona. Feijóo is to appear on the 2nd of ...

Becker’s Healthcare Podcast
Mark Behl, President & CEO of NorthBay Health

Jan 22, 2026 · 16:30


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Mark Behl, President & CEO of NorthBay Health, discusses investing in new technology, collaborating with payers to enhance patient experience, creating tech solutions to improve efficiency, and advancing value-based care initiatives. In collaboration with R1.

Becker’s Healthcare Podcast
Matthew Love, President and CEO of Nicklaus Children's Health System

Jan 21, 2026 · 14:52


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Matthew Love, President and CEO of Nicklaus Children's Health System, discusses initiatives to develop a strong pediatric cancer program in Florida, navigating cost and reimbursement pressures, and handling AI governance, including insights from the “Ask Nick” program. In collaboration with R1.

Becker’s Healthcare Podcast
Garrick Stoldt, VP Finance and Chief Financial Officer at Saint Peter's Healthcare System

Jan 20, 2026 · 17:28


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Garrick Stoldt, VP Finance and Chief Financial Officer at Saint Peter's Healthcare System, discusses the effects of the Big Beautiful Bill, the rapid expansion of automation across healthcare finance, and why maintaining a strong human role remains essential as organizations modernize. In collaboration with R1.

The Mini-Break
2026 Australian Open: Day 3 Recap

Jan 20, 2026 · 68:35


Cracked Racquets Editor-in-Chief Alex Gruskin recaps Day 3 of the 2026 Australian Open. He reflects upon a relatively simple opening round for many of the tournament's top title contenders. He also breaks down a marvelous R1 of results for men with college tennis ties, looks at those players continuing to trend upwards, previews Day 4's action, plus SO much more!! Don't forget to give a 5 star review on your favorite podcast app! In addition, add your twitter/instagram handle to the review for a chance to win some FREE CR gear!!

Episode Bookmarks
• Biggest Storylines - 6:00
• Title Contenders cruise through R1 - 6:25
• Men w/College Tennis Ties - 22:20
• Continuing to trend up - 37:53
• Upsets - 46:24
• Going the Distance - 48:25
• Other Women's Results - 51:43
• Other Men's Results - 53:05
• American Update - 54:35
• Players w/College Ties Update - 58:20
• Forecast heading into R2/Day 4 Preview - 59:25

_____

Laurel Springs
Ranked among the best online private schools in the United States, Laurel Springs stands out when it comes to support, personalization, community, and college prep. They give their K-12 students the resources, guidance, and learning opportunities they need at each grade level to reach their full potential.

Find Cracked Racquets
• Website: https://www.crackedracquets.com
• Instagram: https://instagram.com/crackedracquets
• Twitter: https://twitter.com/crackedracquets
• Facebook: https://Facebook.com/crackedracquets
• YouTube: https://www.youtube.com/c/crackedracquets

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Ran Out Of Talent
Episode 192

Jan 20, 2026 · 91:55


On this one, Zac and Joe talk about what's new at Donathen RC, the new B7.1, the new R1 buggy, and the discussion about having refs at large, expensive races.

Becker’s Healthcare Podcast
Brian Peters, CEO of the Michigan Health & Hospital Association

Jan 19, 2026 · 15:50


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Brian Peters, CEO of the Michigan Health & Hospital Association, discusses the push for fair and sustainable funding, evolving cybersecurity efforts across hospitals, and how leaders are navigating today's financial pressures with resilience and strategy. In collaboration with R1.

Gun Lawyer
Episode 273 - Warning: Critical Gun Law Alert

Jan 18, 2026 · 40:35


Episode 273 - Warning: Critical Gun Law Alert

Gun Lawyer — Episode 273 Transcript

SUMMARY KEYWORDS: New Jersey gun laws, accidental discharge, criminalization, reckless discharge, felony consequences, gun ownership rights, self-defense, insurance coverage, Second Amendment, gun safety, gun dealers, international disarmament, gun control, gun owner education, legal challenges.

SPEAKERS: Speaker 2, Evan Nappen, Teddy Nappen

Evan Nappen 00:17 I'm Evan Nappen.

Teddy Nappen 00:19 And I'm Teddy Nappen.

Evan Nappen 00:20 And welcome to Gun Lawyer. Well, folks, the New Jersey legislature has done it again. They have passed some atrocious gun laws, and I need to make all of you aware of one, particularly, that is very much a threat. It is something that's going to affect many, many gun owners, and it is not being talked about in the general media, of course, because of how they write these laws in such a sneaky, underhanded way. But this law is going to impact all of us, frankly. And the potential is there, under this law, to not only take away gun owners' rights to have guns, but to turn us all into felons at any time, simply based on an accident. That's right, an accident.

Evan Nappen 01:31 Because what the New Jersey legislature's both houses have passed, and I expect, very shortly, the governor will sign, is Assembly Bill 4976. (https://pub.njleg.gov/Bills/2024/A5000/4976_R2.PDF) And what this bill does is it criminalizes Accidental Discharges (ADs). Now, an accidental discharge is when your gun goes off, accidentally, either by what some folks call an uncommanded discharge or an accidental discharge. But it is something that can happen, and although we have to always be very careful, circumstances can be such that a mistake can be made. I mean, we're all human, and mistakes can happen. And unfortunately, you know, I see it in the practice, and I get accidental discharge cases all the time where individuals make a mistake and a gun goes off unintended. It happens. Now sometimes it happens because of the actual mechanical flaws to a firearm and that can be because of a gun's design. It can even be due to circumstances where a firearm can go off from the slightest touch.

Evan Nappen 03:08 Now you may not be aware of this, but years and years ago, I know of a case where an individual had a shotgun that this fellow had kept loaded. One of those single shot, top-breaker type shotguns. You know, like the old kind of like the toppers, the H and R Toppers, and what have you, similar to that. It may even have been one. I don't know. But it's one of those old single shot shotguns. And for probably 50 years, that gun had remained loaded with a shell in it. At one point, there were folks that were causing all kinds of problems in this guy's yard, and he wanted to scare them off. He didn't want to shoot them, and he put the gun out of, pointed the gun out the window, and boom. It went off, and he never pulled the trigger. He absolutely never pulled the trigger. There was no hit to the primer of the shell when it went off. And what has happened was, in this particular design of the gun, the firing pin had been pushing against the primer because the hammer was down and it didn't have a firing pin block. And for like 50 years, this gun sat there, sensitizing the primer so that the slightest touch, you know, just the right jolt, without having to actually pull the trigger, made it go off. So, a gun can actually even do that under those extraordinary circumstances.

Evan Nappen 04:57 But normally, an accidental discharge or uncommanded discharge that we encounter is because somebody believed, honestly believed, their gun was unloaded. And it ends up, of course, that it wasn't. Now this can happen because somebody thought they checked it and maybe even did check. But then, with a magazine in and the slide going forward, a round loads, but they didn't realize that it loaded, because they checked that it was unloaded. And sure enough, there's a round there. I mean, I've seen every combination of error that could happen and a discharge can occur. And, of course, we know the rules, always point in a safe direction, etc. Make sure your gun is unloaded. Double, triple check to make sure that the chamber is empty. That there's no magazine, and there's no live ammo. I mean, all those things that we do. But accidents can happen, just like in a motor vehicle. We drive as safe as we possibly can, but people still have accidents. And what New Jersey has done in this bill is essentially criminalize an accident so that individuals will be looking at what is, in all likelihood, felony level charges. And they structured this bill in just a sneaky, evil, devious way. It's going to have great impact, and it's going to create, I think, unintended consequences.

Evan Nappen 06:40 Now, as gun owners, we have to understand how we have to behave if any of us ever are so unfortunate as to have an uncommanded or accidental discharge. So, the law talks about "recklessly" having a discharge. "Reckless" in criminal law means, you know, with a conscious disregard of a known risk, okay? Criminal laws can have recklessly or reckless as a standard, as opposed to something being intentional, right? So, if you intentionally meant to pull the trigger, that's intentional. Reckless could still be you didn't intend to do it. But if there was that conscious disregard of that known risk and it ended up discharged, then you could argue that it's reckless. So, reckless is kind of a standard where it's not that traditional mens rea, your mental and your mental state of having that intention to have the gun fire. Reckless has been in our criminal law for a long time, and reckless conduct is something that's out there, like reckless driving. I'm sure that you have heard of that.

Evan Nappen 08:05 But what they're doing here is even more devious by using the word "reckless". So, what now is being prohibited? And I'm going to read this to you so you can see how they've done this. It says, a person commits a disorderly person's offense. Now that sounds like, okay. A disorderly persons offense in New Jersey is equivalent to a misdemeanor. It's not a felony. So, you're saying, well, first of all, this is not creating a felony. It's creating a disorderly persons offense, right? It sounds like it's, you know, apparently, trying to be reasonable. But trust me, folks, it isn't. I'm going to show you why. "A person commits a disorderly persons offense by recklessly discharging a firearm." Okay, so at this point they're saying, well, it's just a low level offense, and it's for recklessly discharge. You know, we've conscientiously disregarded a known risk. Okay, so it started out sounding, you know, not great, but okay. It's not. It shouldn't affect a lot of folks, and luckily, if it does, it's still a misdemeanor. And, of course, it requires that recklessness. So, that sounds all good.

Evan Nappen 09:22 Let me start again and read you, but wait until you hear the rest of it. A person commits a disorderly persons offense by recklessly discharging a firearm "using live ammunition rounds". Well, okay, that's good to know. A blank gun isn't a reckless discharge, but you know you're firing a blank. No live ammo. Okay. So, if I'm firing dead ammo or ammo that's not live, then that's not a reckless discharge. Well, good. How do I get a discharge with ammo that's not live? I don't know how that's even going to happen. But okay, they throw that in, probably more as subterfuge and, you know, smoke and mirrors. But again, here we go. "A person commits a disorderly persons offense by recklessly discharging a firearm using live ammunition rounds unlawfully . . ." Okay, unlawfully. So, you're unlawful. ". . . or without a lawful purpose." Whoops, wait a minute. "Without a lawful purpose." You commit a disorderly persons offense by recklessly discharging a firearm without lawful purpose.

Evan Nappen 10:35 Except that a second conviction for such an offense constitutes a crime of the fourth degree. Well, a crime is a felony, and that's a fourth degree. It's a year and a half in jail. And a third or subsequent conviction is a third degree and that's five years in State Prison. Okay. So, you may even read this part and say, well, it's still arguably, weirdly reckless, maybe. But it's for discharging a firearm without lawful purpose, but at least it's a disorderly persons offense. And I, boy, if we do it once, I sure wouldn't think I'd do it again. So, why is this such a problem, you know.

Evan Nappen 11:09 But oh, well, wait, wait, wait. We're not done yet. Because then it says, if a person commits a violation under this section, you're charged with a crime one degree higher than what ordinarily would be charged for such an offense when the violation occurs within 100 yards, 100 yards, folks. Not 100 feet. A football field's worth of distance of an occupied structure. Oh, what's an occupied structure? Any building, room, ship, vessel, car, vehicle, or airplane, or a place adopted for overnight accommodations of persons or for carrying on business therein. Wait a minute, wait a minute. Wait a minute! An occupied structure includes a car or vehicle, and it doesn't even mean it has to be occupied. It means even a vehicle or a building or a room, and it has to be within 100 yards, a football field, of a car. If there's a car driving by within 100 yards where the accidental discharge takes place. If you're in your own home? I mean, this is basically every accidental discharge. You will probably be within 100 yards of a car or a building or a room, or hotel or whatever, or an airplane. Man, even if the airplane is flying over the sky, I don't know. I mean, this is nuts.

Evan Nappen 12:55 So, if the violation occurs within 100 yards of a "structure", guess what? It's no longer that disorderly persons offense. It's bumped instantly to the fourth degree, felony level offense. Up to a year and a half in State Prison, and now you're going to be a convicted felon. That's if your gun discharged for not having, without a lawful purpose. Oh, you mean like an accident? Yeah! Like an accident. An accident because you didn't have a lawful purpose. Did you lawfully have a purposeful accident? No, that's silliness in a nutshell. So, what it means now is essentially any accidental discharge is a felony in New Jersey, and you can face State Prison time of at least a year and a half, unless it's going to be enhanced even more based on these other factors. And as a felon, you lose your gun rights for the entire United States.

Evan Nappen 14:12 And even if it's kept at the misdemeanor, a so-called disorderly persons level, they're still going to go after your gun license and your gun rights. They'll claim, under Chapter 58-3 of the licensing law, that you're somehow a danger to public health, safety, welfare. You think if you're going to have a criminal charge, a criminal offense charged here of accidental discharge, where they're classifying it as reckless because it went for a "an unlawful purpose". Like I said, I don't know how you have a lawful purpose accident. And it was somehow within 100 yards of any car or room, which made it originally a felony even, right? Felony level in New Jersey. You're getting your license and your guns confiscated and taken and face prosecution over this insane law.

Evan Nappen 15:17 Now, this is the consequences of this bill, right? But that's just the consequences in the law itself, like the penalty you may face and licensing problems. But what it also means is that upon any accidental discharge, folks, any, you immediately, now, immediately, have a Fifth Amendment right against self-incrimination, and you're going to have to stand by that. Because I know in many of the cases we've seen, someone had an accidental discharge, and it may have gone through their wall. It may have gone to a neighbor's house. It may have not whatever. But if you react, if you call the police, if you try to find out what happened, any type, you're getting criminally charged. You have a right to say nothing. You have an absolute right, a Fifth Amendment right to remain silent, because you will end up incriminating yourself. This is going to mean that any New Jersey gun owner who has an AD or an uncommanded accidental discharge needs to immediately take the Fifth and seek counsel, the Sixth Amendment. Just call your attorney and don't say anything to anyone. Do not make any statements to law enforcement or anybody. And, you know, this is a shame. Because what if that round actually caused injury to somebody?

Teddy Nappen 16:59 Actually take it a step further.

Evan Nappen 17:01 Think about it. You're gonna incriminate yourself. You gotta absolutely. Go ahead, Teddy.

Teddy Nappen 17:07 Take it a step further. Imagine instead of "gun", this was "car". I asked. I was in. I got into a car accident. So, therefore, all car accidents are felonies, where there is nowhere. Were you back? Were you 100 feet from your driveway? Was there a car driving by? Did you back into that car? Felony! You are now a felon because of that. And don't tell me it's the firearm versus the car! Because the car is a two ton steel death machine that kills more people than firearms do. So, it's that level of argument, the utter draconian insanity that they have created here. Where from an accident, an actual accident, God forbid.

Evan Nappen 17:54 An accident. That is right.

Teddy Nappen 17:56 You are guilty until proven innocent.

Evan Nappen 17:59 And wait. Let me say this. This has been put out there as a possible problem for self-defenders. And that's actually not completely accurate, because there's an exemption here that says it's an affirmative defense, if you fired your gun in self-defense. Okay. Affirmative defense means the burden is on you to prove that you acted in self-defense. Then they'll say, okay, that wasn't a reckless discharge. But even the fact that the legislature has to put in there that if you act in self-defense, it's an affirmative defense. Well, wait a minute. Why is it an affirmative defense? Because it wouldn't have been reckless if it was intentional. Why do we even need that? So, in other words, the legislature itself knows that they've manipulated this law to simply be discharge for unlawful purpose, period. If you didn't have a lawful purpose when your gun went off, it's felony level if it's within 100 yards of a car, or a room, or a building. Insanity.

Evan Nappen 19:05 And as you say, Teddy, it would be like making every car accident, any fender bender that you have, you become charged with a felony. New Jersey has done that to gun owners now. Any accident, any accidental discharge, you're going to face these criminal charges. This is going to, you're going to end up in the system. If you have an AD, you're getting charged. And now we're going to have to fight this out on an offense that is essentially strict liability. That is the way they've set it up. Couching it and hiding it under so-called reckless, recklessly. But when they actually write it, they put the recklessly with the little bonus of having "without a lawful purpose". This is nuts. Nuts. Nuts.

Evan Nappen 19:58 I'm telling you right now the cases we get, it's going to be crazy, crazy and a problem. So, folks, be extra careful. This is bad news. It has passed both houses, and the Governor, I'm sure, will sign it very shortly, if he hasn't signed it already. And now gun owners are at extreme risk under this law.

Teddy Nappen 20:24 I just thought of another one, too. What if you're a first time shooter and you go to a range course, you're in a range, a gun range learning, and the gun accidentally goes off because you're brand new to firearms? You're now a convicted felon. No discretion.

Evan Nappen 20:44 Oh, well. It was near a room. That's right, no discretion, and anybody that has that AD. So, again, it's designed to disenfranchise gun owners of Second Amendment rights. And by the way, you may not be able to then get even insurance coverage. Because if it's criminalized over what you did, it's not anymore. Now, you're talking about behavior where they can claim it's a criminal act. It's a criminal act, okay? And again, you may depending on your policy, depending on what actually the injuries and damage, you may not even have coverage. The insurance companies will use it to deny you coverage. I'm sure of that. That's their job, as it normally is, anyway. To try to figure out how to deny coverage. Well, they've just given them that ability on the civil side to further make it harder for you. It's jeopardizing gun rights, and it's looking at creating incarceration at felony level for gun owners. It's outrageous, and it really is something that I'm sure we're going to see major, major impact. And then that's not the only fun. Go ahead, Teddy.

Teddy Nappen 22:04 I was just curious on the constitutionality of it? Because they've made, there must be some avenue. Because it's very, like they're giving no discretion? And just saying.

Evan Nappen 22:16 Nope.

Teddy Nappen 22:16 It's automatic. There's no constitutional challenge.

Evan Nappen 22:20 Well, I guess there could always be a constitutional challenge. But what's going to happen is it's going to have to be the fight. The real fight is going to be over, maybe an argument of, was there a conscious, conscientious disregard, or conscious disregard of a known risk. But the other side will argue that as soon as you have a gun with ammo, you have a known risk. I mean, a firearm, and that's their entire anti-Second Amendment strategy.

Teddy Nappen 22:43 When you deal with guns, you do so at your peril.

Evan Nappen 22:53 Right! And that's New Jersey case law, right there. So, they're saying, hey, you do it at your peril. You took a known risk because you possessed a gun, even. You can well see a New Jersey jury buying that argument. This is nuts, and gun owners, beware, beware, beware, beware. And like I said, this isn't the only shenanigan that occurred in Trenton. They also signed S1425. (https://pub.njleg.gov/Bills/2024/S1500/1425_R1.PDF) Now, this is actually law. This law, real quick, specifically applies just to dealers. Just to New Jersey dealers. How nice. They have their own very special law now. This law says, "A licensed dealer who sells or transfers a firearm to a person when the dealer knows or reasonably should know that person intends to sell, transfer, assign, or otherwise dispose of that firearm to a person who is disqualified from possessing a firearm under State or federal law is guilty of a crime of the second degree." That means up to 10 years in State Prison. They have a minimum mandatory period of three and a half years, and they made it a second degree. This is insane.

Evan Nappen 24:03 If you're a dealer in New Jersey, they can claim that you reasonably should have known that a gun you transferred to somebody was going to be transferred to somebody who was disqualified from possessing. Let me give you an example. You sell a Red Rider BB gun. That's a firearm under New Jersey law. And if you reasonably should have known that that person was going to let their kid have that BB gun, you're looking at a second degree charge here, Dealers. Yeah for that BB gun. Because as long as the state can show you reasonably should know that, that the person intended to transfer it to someone who was disqualified, who would be arguably that minor, unless it's under a strict exemption. I mean, this is the kind of pathways being cut here. How do you know or reasonably should know? What is that reasonably should have known nonsense?

Evan Nappen 25:03 I mean, that's again, 12 people on a jury are the ones who's going to decide whether reasonably you should know. All the law says, ". . . 'reasonably should know' means that a person reasonably should know a fact when, under the circumstances, a person of reasonable prudence and competence would ascertain or know that fact." Oh, that's a that's so crystal clear. Huh? Real, crystal clear. Now what it means is 12 people who aren't smart enough to avoid jury duty are going to decide whether the dealer should have known on that gun sale. And if they decide otherwise, the dealer is looking at a minimum mandatory sentence on a second degree crime, which carries up to 10 years in State Prison. Okay? That's what they're doing. Focused on New Jersey dealers. Do you think they want to put every dealer in New Jersey out of business? I do. And that's the other bonus law that's actually signed into law. It's ripe for abuse, folks. Beware. It is just atrocious what's going on in New Jersey.

Evan Nappen 26:07 Let me tell you about our fight. You know, we are in this fight. We constantly, we've tried to fight these things. New Jersey is an extremely tough environment. We're going to see court challenges, even more court challenges, and it's our state Association that's going to be heading the fight. I'm sure we're going to see a constitutional challenge to this so-called Accidental Discharge bill and the same over what they're looking to do to dealers. And it's ANJRPC, the Association of New Jersey Rifle & Pistol Clubs at the forefront, fighting for our rights. They're the umbrella organization of gun clubs in New Jersey, and you can join as an individual member. You really need to. You'll be sent email alerts, and you'll be told what's going on. And you know, we're able to get changes made with pressure, but most importantly, our salvation seems to be in the judicial fight in the courts. The Association is there as we speak. This is an extremely tough environment in New Jersey, the toughest in America, where the oppression of Second Amendment rights is second to none. New Jersey wins the prize for Second Amendment oppression, and it's the Association there at the forefront. You need to be a member. Go to anjrpc.org and join today. Be part of the solution. It's really important that you do that.

Evan Nappen 27:43 I'd also like to talk about our good friends at WeShoot. WeShoot is an indoor range in Lakewood, easily accessible, off the Parkway. It's where Teddy and I both shoot, and we both qualified. It's where we got our CCARE and where we get our training. We love WeShoot. That's the place to shoot. It's a place you can shoot. They have a wonderful facility, a great pro shop, and great instructors. You've got to check out their website, magnificent photography there. And they run all kinds of great deals and specials, and they have all the top state of the art equipment. Check out weshootusa.com. weshootusa.com. You'll be glad you did. It is a great resource for us to have a range right there in Central New Jersey that is as professional and modern as WeShoot. Go to weshootusa.com and check them out. You will be thrilled, just like Teddy and I. Well, that's where we shoot. It's what we love. You'll love it too.

Evan Nappen 29:00 Let me also mention my book, New Jersey Gun Law. It's the Bible of New Jersey gun law. I'm working on the update from what I just told you today. So, the free update will be coming out, including the 2026 Comprehensive Update. We're going to look at and add in all the new laws that'll be coming out shortly. So, if you have the book, make sure you scan the QR code on the cover. Join my free private subscriber base, and you'll get notice of the updates that are forthcoming. You can buy the book at EvanNappen.com. That's right, www.EvanNappen.com. Go to EvanNappen.com and get the big orange book today. You'll be glad you did. It's over 500 pages, 120 topics, all Question and Answer, designed to make it as user friendly as possible. I try to make it so you can navigate these treacherous waters of Second Amendment oppression in New Jersey. So, go to EvanNappen.com and get your book. Teddy, what do you have for us today in Press Checks?

Teddy Nappen 30:15 Well, as you know, Press Checks are always free. While you're talking about the utter insanity that is New Jersey, there's one positive bit of news. It's kind of been, you know, from the entire news cycle of everything they try to cover. There's one thing that kind of slipped under the cracks that some people did pick up on. And it caught my eye. I was like, wait a second, I remember this. So, President Trump has withdrawn from the UN Register of Conventional Arms. (https://gunrights.org/united-states-withdraws-from-united-nations-register-of-conventional-arms/) That treaty. Now, I remember growing up as a kid, Dad, you told me, always keep an eye out if there are blue helmets walking down the street.

Evan Nappen 31:01 Yeah, that's right, that blue helmet day came, if that ever was to come. Yep.

Teddy Nappen 31:08 And oh, I remember you telling me about that treaty. And you know that stupid, you know, the UN has always been an anti-gun organization, with that stupid, bent revolver they have.

Evan Nappen 31:20 Yeah, the revolver with a barrel and a pretzel knot. (https://dam.media.un.org/archive/Gift-of-Luxembourg-to-the-United-Nations-2AM9LOQORWK.html) I mean, look at folks. It's a revolver, by the way. It's not an AK, you know. It's not an AR. It's not in an "assault firearm". No, no. It's a freaking revolver with a barrel in a pretzel knot there. Gee, who are the primary possessors of revolvers? I wonder. Is that paramilitary organizations? No. Terrorist, radicalized wackos? No. A revolver. Let me see. Oh, you mean, like average citizens? Wow, hmm. Interesting.

Teddy Nappen 32:02 But what I remember that being back, you know, where this was a big fear. Where it was the giant arms treaty, where they were trying, I think it wasn't ratified by Obama, but that was that insane policy to try, even. The UN even actually has an Office of Disarmament. (https://disarmament.unoda.org/en/our-work/conventional-arms/legal-instruments/arms-trade-treaty) That's actually their whole like deal. What they try to push for. Now, they cloak it in like militarily. If you actually go to the website, this was from the gunrights.org. (https://gunrights.org/united-states-withdraws-from-united-nations-register-of-conventional-arms/) The National Association of Gun Rights put out the article, and they provide the link where you can go on to the UN website. You can see their register of their whole charter on the UN, and it goes into they brag about it. We've recorded and captured 90% of the global arms trade. By the way, this was supposed to be about, you know, tanks, armored carriers. You know, stuff used in actual, like, large scale warfare. But then I love how they do this. In 2016 they adopted the international small arms and light weapons in parallel with the other seven categories, so we can keep track of all small arms. Hmm, 2016. What were they doing to try, what was the big anti-gun push to try to disarm us around that time? Thinking that they're going to try to go around collecting our arms in the United States. Like it's so disgusting. I love how they just cloak it. You actually can go on to their reports. I got bored. So, I clicked the arms report of 2023 and I was like, okay, armored carriers, all that. Small arms. I wanted to look and see who were like the top buyers. So, revolvers and self-loading pistols – Iraq. Apparently.

Evan Nappen 33:57 Really?

Teddy Nappen 33:58 Yeah, like 2,150 pistols from us to Iraq.

Evan Nappen 34:03 Oh, from the U.S.?

Teddy Nappen 34:05 Yeah, from the U.S. It keeps track of each country.

Evan Nappen 34:07 Well, we're making them.

Teddy Nappen 34:09 Yeah.

Evan Nappen 34:09 Of course. We're a major industrial manufacturer. What we should be doing is making guns.

Teddy Nappen 34:14 Yeah. And then rifles and carbines. They separate that from "assault firearms". Rifles and carbines. 20,000 to Israel. So, there you go for that end.

Evan Nappen 34:27 Yeah, Israel makes a lot of their own weapons, too, and they make really good ones.

Teddy Nappen 34:32 Yeah, I know they have the Hebrew hammer.

Evan Nappen 34:35 Oh, yeah!

Teddy Nappen 34:35 The Tavor X95. (https://iwi.us/firearms/tavor-x95/) But with the sub-machine guns, Saudi Arabia, 550.

Evan Nappen 34:41 This doesn't even matter. This is so absurd, and it's just trying to globalize Second Amendment oppression. You know, our country's blessed with Second Amendment. And of course, New Jersey does everything it can to undermine it, but the majority of America doesn't do that. But internationally, we, you know, they hate us. They hate our Constitution, and they want to see us disarmed. We are standing as a threat to their globalist intentions, right?

Teddy Nappen 35:21 I mean, that was the famous line that Donald Trump said to the world. The world does not belong to globalists. And that's a fact. And here, in their charter, they even say, such measures, as they're describing the whole disarmament office, such measures can also encourage restraint in the transfer and production of armament and decelerate military build up. In words of, okay, we need to lower the amount of guns in the world and try to disarm the people. That's the cover they run, but they dress it up. I will give the Left credit. Their ability to wordsmith their way into something else is crazy.

Evan Nappen 36:06 Well, listen, man. It's not every political group that can convince people, you know, that a man can be a woman. So, why can't they convince the world about this with guns? Right?

Teddy Nappen 36:17 Well, it's the political group that has the. When they did the whole study on mental health of different groups, the vast majority of people that vote Democrat have mental illness. So, let that sink in. That was an actual study, and that was put out by, like, CNN! So.

Evan Nappen 36:18 Really?

Teddy Nappen 36:19 Yeah, they had to be like. No, I love it. If you are ever bored? Anyone who's very bored, go on to CNN and catch Harry Enten, the statistics guy. He's the golden retriever of CNN. He just talks about numbers, and he gets so excited. He's like, oh my God, have you seen these numbers? I can't believe it. He's always, like, shocked every time. He sees like, you know, everyone keeps saying Trump's numbers are going bad, but you go over to here. Six months ago, 84, and now, it's 85. Oh, wow, amazing. Like, it's just, it's that energy. It's crazy.

Evan Nappen 37:13 Well, how old is he? Maybe he's just trying to get excitement to statistics?

Teddy Nappen 37:18 I know, but it's just like, what are the numbers? Pretty good. He's like, gadzooks. He's like, clapping. I know. It's just like, what the heck is it? Like if anyone is bored? Just look up Harry Enten on CNN. He's, it's so fucking weird.

Evan Nappen 37:37 Okay, I love it. All right, Teddy. Well, that is interesting to know, but I'm not surprised, not surprised at all. This is the moment, the moment when we discuss the GOFU, that is the Gun Owner Fuck Up. It is one of the most important aspects of what we do, because every day we deal with Gun Owner Fuck Ups. And when we can let the listeners know, you get to learn expensive lessons for free. And this week's GOFU is real simple. It's Accidental Discharge. Let me just make it real clear. Now, more than ever, more than ever, you've got to be extremely overly conscientious. You better triple check chambers. You've got to make sure. You cannot afford in any way to have any kind of Accidental Discharge in New Jersey, because you risk it all. You risk it all. You risk becoming a felon. You risk going to prison. You risk losing your gun rights for the entire United States. You risk not being covered, arguably, by insurance. It is an insane risk that New Jersey is imposing, and I've seen 80 cases throughout my entire practice. Unfortunately, they happen, and, you know, in hindsight, they're all avoidable. But folks don't be a GOFU. Please, please, please. Follow all the rules of safety, and make sure you treat every gun as loaded. Every gun, you treat as loaded. Do not for a second, not do that. It's just that critical. They're criminalizing those who make a simple mistake, and there is no tolerance.

Evan Nappen 40:00 This is Evan Nappen and Teddy Nappen reminding you that gun laws don't protect honest citizens from criminals. They protect criminals from honest citizens.

Speaker 2 40:13 Gun Lawyer is a CounterThink Media production. The music used in this broadcast was managed by Cosmo Music, New York, New York. Reach us by emailing Evan@gun.lawyer. The information and opinions in this broadcast do not constitute legal advice. Consult a licensed attorney in your state.

About The Host
Evan Nappen, Esq. Known as "America's Gun Lawyer," Evan Nappen is above all a tireless defender of justice. Author of eight bestselling books and countless articles on firearms, knives, and weapons history and the law, a certified Firearms Instructor, and avid weapons collector and historian with a vast collection that spans almost five decades — it's no wonder he's become the trusted, go-to expert for local, industry and national media outlets. Regularly called on by radio, television and online news media for his commentary and expertise on breaking news, Evan has appeared on countless shows including Fox News – Judge Jeanine, CNN – Lou Dobbs, Court TV, Real Talk on WOR, It's Your Call with Lyn Doyle, Tom Gresham's Gun Talk, and Cam & Company/NRA News. As a creative arts consultant, he also lends his weapons law and historical expertise to an elite, discerning cadre of movie and television producers and directors, and novelists. He also provides expert testimony and consultations for defense attorneys across America.

Email Evan your comments and questions: talkback@gun.lawyer

The Mini-Break
2026 Australian Open: Men's Singles Draw Preview

Jan 17, 2026 · 60:09


Cracked Racquets Editor-in-Chief Alex Gruskin previews the 2026 Australian Open Men's Singles Draw. He runs through each quarter of the draw and discusses the best R1 and potential matchups. He also offers his thoughts on the winners and losers of the draw configuration, shares his predictions for how he sees things unfolding, plus SO much more!! Don't forget to give a 5 star review on your favorite podcast app! In addition, add your twitter/instagram handle to the review for a chance to win some FREE CR gear!!

Episode Bookmarks
• Alcaraz (1) Quarter - 4:45
• Best R1 Matchups - 8:21
• Best Prospective Matchups - 11:17
• Sleepers in the section - 12:30
• Projecting the best storylines to emerge - 14:25
• Who likes this draw most - 17:01
• Who likes this draw least - 17:35
• Prediction - 19:20
• Zverev (3) Quarter - 20:17
• Best R1 Matchups - 21:55
• Best Prospective Matchups - 24:33
• Sleepers in the section - 26:03
• Projecting the best storylines to emerge - 28:20
• Who likes this draw most - 29:11
• Who likes this draw least - 30:18
• Prediction - 31:48
• Djokovic (4) Quarter - 32:56
• Best R1 Matchups - 34:22
• Best Prospective Matchups - 37:15
• Sleepers in the section - 39:20
• Projecting the best storylines to emerge - 41:00
• Who likes this draw most - 44:00
• Who likes this draw least - 45:25
• Prediction - 46:38
• Sinner (2) Quarter - 48:09
• Best R1 Matchups - 49:28
• Best Prospective Matchups - 51:02
• Sleepers in the section - 52:30
• Projecting the best storylines to emerge - 53:30
• Who likes this draw most - 54:15
• Who likes this draw least - 56:28
• Prediction - 56:51
• Final predictions - 57:22

_____

Laurel Springs
Ranked among the best online private schools in the United States, Laurel Springs stands out when it comes to support, personalization, community, and college prep. They give their K-12 students the resources, guidance, and learning opportunities they need at each grade level to reach their full potential.

Find Cracked Racquets
• Website: https://www.crackedracquets.com
• Instagram: https://instagram.com/crackedracquets
• Twitter: https://twitter.com/crackedracquets
• Facebook: https://Facebook.com/crackedracquets
• YouTube: https://www.youtube.com/c/crackedracquets

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Becker’s Healthcare Podcast
Eric J. Price, Chief Financial Officer at Schoolcraft Memorial Hospital

Jan 16, 2026 · 14:39


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Eric J. Price, Chief Financial Officer at Schoolcraft Memorial Hospital, discusses navigating economic, political, and financial uncertainties, incorporating AI to improve efficiency while remaining cautious, the impact of AI on labor, and strategies for reducing inefficiencies in the healthcare system. In collaboration with R1.

The Mini-Break
2026 Australian Open: Women's Singles Draw Preview

Jan 16, 2026 · 58:02


Cracked Racquets Editor-in-Chief Alex Gruskin previews the 2026 Australian Open Women's Singles Draw. He runs through each quarter of the draw and discusses the best R1 and potential matchups. He also offers his thoughts on the winners and losers of the draw configuration, shares his predictions for how he sees things unfolding, plus SO much more!! Don't forget to give a 5 star review on your favorite podcast app! In addition, add your twitter/instagram handle to the review for a chance to win some FREE CR gear!!

Episode Bookmarks
• Sabalenka (1) Quarter - 5:01
• Best R1 Matchups - 8:00
• Best Prospective Matchups - 10:01
• Sleepers in the section - 11:29
• Projecting the best storylines to emerge - 14:25
• Who likes this draw most - 15:38
• Who likes this draw least - 17:06
• Prediction - 17:47
• Gauff (3) Quarter - 18:25
• Best R1 Matchups - 20:06
• Best Prospective Matchups - 22:44
• Sleepers in the section - 25:00
• Projecting the best storylines to emerge - 28:04
• Who likes this draw most - 30:02
• Who likes this draw least - 31:32
• Prediction - 32:45
• Anisimova (4) Quarter - 34:10
• Best R1 Matchups - 36:28
• Best Prospective Matchups - 37:52
• Sleepers in the section - 39:04
• Projecting the best storylines to emerge - 41:05
• Who likes this draw most - 42:19
• Who likes this draw least - 43:34
• Prediction - 45:38
• Swiatek (2) Quarter - 46:15
• Best R1 Matchups - 48:00
• Best Prospective Matchups - 49:12
• Sleepers in the section - 50:20
• Projecting the best storylines to emerge - 51:47
• Who likes this draw most - 52:39
• Who likes this draw least - 54:00
• Prediction - 54:40
• Final predictions - 55:20

_____

Laurel Springs
Ranked among the best online private schools in the United States, Laurel Springs stands out when it comes to support, personalization, community, and college prep. They give their K-12 students the resources, guidance, and learning opportunities they need at each grade level to reach their full potential.

Find Cracked Racquets
• Website: https://www.crackedracquets.com
• Instagram: https://instagram.com/crackedracquets
• Twitter: https://twitter.com/crackedracquets
• Facebook: https://Facebook.com/crackedracquets
• YouTube: https://www.youtube.com/c/crackedracquets

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Black Spin Global: The Podcast
Australian Open 2026 Preview: draws, blockbuster first rounds and predictions

Jan 16, 2026 · 60:50


The first Grand Slam of 2026 is here, and so are we with our preview pod! Defending Australian Open champion Madison Keys, Coco Gauff and debutant Victoria Mboko are in contention in the women's draw, while Félix Auger-Aliassime, Ben Shelton and the retiring Gaël Monfils will be looking to disrupt the order on the men's side. P.S.: We mistakenly forgot to mention Gabriel Diallo, who's facing Zverev in R1... Don't forget to rate, review and share on Apple Podcasts, Spotify and Audioboom. For daily tennis updates: Instagram: https://www.instagram.com/blackspinglobal Twitter: https://twitter.com/BlackSpinGlobal TikTok: https://www.tiktok.com/@blackspinglobal GET OUR MERCH HERE: https://blackspinglobal.com/collections

Becker’s Healthcare Podcast
Kevin M. Spiegel, President & CEO of HSA Florida Medical Center

Jan 15, 2026 · 14:59


In this episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, Kevin M. Spiegel, President & CEO of HSA Florida Medical Center, discusses how healthcare organizations can leverage technological advancements to improve scheduling, streamline operations, and enhance patient experience. He dives into the governance of AI within hospital systems, sharing insights on balancing innovation with ethical and operational considerations. In collaboration with R1.

Becker’s Healthcare Podcast
Michelle Joy, President and CEO of Carson Tahoe Health

Jan 14, 2026 · 17:10


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Michelle Joy, President and CEO of Carson Tahoe Health. She shares strategies for expanding access to care in rural Nevada, enhancing workforce engagement, and implementing AI technology to improve clinical efficiency and revenue cycle operations while maintaining strong community trust. In collaboration with R1.

Becker’s Healthcare Podcast
Lynn Fulton, CEO of Maui Health System

Jan 14, 2026 · 13:24


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Lynn Fulton, CEO of Maui Health System. She discusses the unique challenges of providing care across Maui and Lanai, including workforce recruitment, technology integration, and payer partnerships, while highlighting strategies to enhance operational efficiency and community-centered care. In collaboration with R1.

Becker’s Healthcare Podcast
Jochen Reiser, MD, PhD, President and CEO, UTMB

Jan 13, 2026 · 16:35


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Jochen Reiser, MD, PhD, President and CEO, UTMB. He discusses how UTMB is embedding innovation and AI across research, education, and clinical care, creating new partnerships, advancing governance, and shaping the future of academic medicine. In collaboration with R1.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Artificial Analysis: Independent LLM Evals as a Service — with George Cameron and Micah Hill-Smith

Jan 8, 2026 · 78:24


Happy New Year! You may have noticed that in 2025 we had moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!

We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They were then one of the few companies in Nat Friedman and Daniel Gross's AI Grant to raise a full seed round from the pair, and they have now become the independent gold standard for AI benchmarking—trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.

We have chatted with both Clémentine Fourrier of Hugging Face's Open LLM Leaderboard and Anastasios Angelopoulos of LMArena (freshly valued at $1.7B) about their approaches to LLM evals and trendspotting, but Artificial Analysis has staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.

George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is "open" really?

We discuss:

* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet
* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints
* How they make money: an enterprise benchmarking insights subscription (standardized reports on model deployment: serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs (an illustrative scoring sketch follows the opening of the transcript below)
* Omniscience Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding "I don't know"), and Claude models lead with the lowest hallucination rates despite not always being the smartest
* GDPVal AA: their version of OpenAI's GDPval (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo leads, followed by Nous Hermes and NVIDIA Nemotron)
* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omniscience Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)
* V4 of the Intelligence Index coming soon: adding GDPVal AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (HumanEval-style coding is now trivial for small models)

Links to Artificial Analysis

* Website: https://artificialanalysis.ai
* George Cameron on X: https://x.com/georgecameron
* Micah Hill-Smith on X: https://x.com/micahhsmith

Full Episode on YouTube

Timestamps

* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
* 01:19 Business Model: Independence and Revenue Streams
* 04:33 Origin Story: From Legal AI to Benchmarking Need
* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology
* 13:52 Mystery Shopper Policy and Maintaining Independence
* 16:22 AI Grant and Moving to San Francisco
* 19:21 Intelligence Index Evolution: From V1 to V3
* 23:01 GDPVal AA: Agentic Benchmark for Real Work Tasks
* 28:01 New Benchmarks: Omniscience Index for Hallucination Detection
* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning
* 50:19 Stirrup Agent Harness: Open Source Agentic Framework
* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses
* 58:25 The Smiling Curve: Cost Falling While Spend Rising
* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits
* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges
* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas
* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions
* 1:16:50 Closing: The Insatiable Demand for Intelligence

Transcript

Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time Artificial Analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.

swyx [00:00:17]: Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me.
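For readers who want the mechanics behind the Intelligence Index bullet in the notes above (ten eval datasets synthesized into one score, with 95% confidence intervals from repeated runs), here is a minimal sketch of that kind of aggregation. The eval names, scores, and equal weighting are invented for illustration and are not Artificial Analysis's actual methodology or code:

```python
import math

# Hypothetical per-eval accuracies (0-1) from repeated runs; the eval
# names and all numbers here are invented for illustration.
runs = {
    "mmlu_pro": [0.81, 0.79, 0.80, 0.82],
    "gpqa":     [0.62, 0.65, 0.63, 0.64],
    "agentic":  [0.55, 0.58, 0.54, 0.57],
}

def mean_and_ci95(xs):
    """Mean and normal-approximation 95% CI half-width over repeats."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    return mean, 1.96 * math.sqrt(var / n)

per_eval = {name: mean_and_ci95(xs) for name, xs in runs.items()}

# Equal weights for simplicity; a real index would pick weights deliberately.
k = len(per_eval)
index = 100 * sum(m for m, _ in per_eval.values()) / k
# Propagate per-eval half-widths as if they were independent.
index_half = 100 * math.sqrt(sum(h * h for _, h in per_eval.values())) / k

for name, (m, h) in per_eval.items():
    print(f"{name}: {100 * m:.1f} +/- {100 * h:.1f}")
print(f"index: {index:.1f} +/- {index_half:.1f} (95% CI)")
```

Adding repeats shrinks each eval's half-width, which is how a composite index can be pinned down to roughly plus or minus one point, as discussed later in the conversation.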
swyx: Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said: this gem of a models-and-hosts comparison site was just launched. And then I put in a few screenshots, and I said: it's an independent third party, it clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing Fireworks: how do you have a model benchmarking thing without Fireworks? But you had Together, you had Perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... it's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...

George [00:01:09]: Yeah, but you can't pay us for better results.

swyx [00:01:12]: Yes, exactly.

George [00:01:13]: Very important.

Micah [00:01:14]: Start off with a spicy take.

swyx [00:01:18]: Okay, how do I pay you?

Micah [00:01:20]: Let's get right into that.

swyx [00:01:21]: How do you make money?

Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial Analysis is going to be two years old in January 2026, which is pretty soon now. We of course run the website for free and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, and technologies across the AI stack. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups. We want to be who enterprises look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start, because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But it turns out a bunch of our stuff can be pretty useful to companies building AI stuff.

swyx [00:02:38]: And is it like: I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?

George [00:02:53]: So we have a benchmarking and insights subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. For instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself. That's an example of the kind of decision that big enterprises face, and it's hard to reason through; this AI stuff is really new to everybody. And so with our reports and insights subscription we try to help companies navigate that. We also do custom private benchmarking, and that's very different from the public benchmarking that we publicize; there's no commercial model around the public work. For private benchmarking, we'll at times create benchmarks, or run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking. Yeah. So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.

swyx [00:04:09]: Let's talk about the tech stack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney? Yeah. Well, Sydney, Australia for me.

Micah [00:04:19]: George was in SF, but he's Australian; he had moved here already. Yeah.

swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting Artificial Analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll get to the private benchmarks. Yeah.

George [00:04:33]: Why don't we even go back a little bit, to why we thought that it was needed? Yeah.

Micah [00:04:40]: The story kind of begins in 2022, 2023. Both George and I had been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. It actually worked pretty well for its era, I would say. I was finding that the more you get into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. I had this multistage algorithm thing, and I was trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build it out, right? You're trying to think about accuracy, a bunch of other metrics, and performance and cost. And mostly, no one was doing anything to independently evaluate all the models, and certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things, measured independently across all the models and providers. Honestly, it was probably meant to be a side project when we first started doing it.

swyx [00:05:49]: Like you didn't get together and say: hey, we're going to stop working on all this stuff, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.

Micah [00:05:58]: That's actually true. I don't even think we'd paused anything. George still had his job; I didn't quit working on my legal AI thing. It was genuinely a side project.

George [00:06:05]: We built it because we needed it as people building in the space, and we thought: oh, other people might find it useful too. So we'll buy a domain, link it to the Vercel deployment that we had, and tweet about it. But very quickly it started getting attention. Thank you, Swyx, for doing an initial retweet and spotlighting this project that we released. It was useful to others, but it became more useful very quickly as the pace of model releases accelerated. We had Mixtral 8x7B... That's a fun one. Yeah. An open source model that really changed the landscape, and opened up people's eyes to other serverless inference providers and to thinking about speed and thinking about cost. And so that was key. It became more useful quite quickly. Yeah.

swyx [00:07:02]: What I love about talking to people like you, who sit across the ecosystem, is: well, I have theories about what people want, but you have data, and that's obviously more relevant. But I want to stay on the origin story a little bit more.
When you started out, I would say the status quo at the time was: every paper would come out, and they would report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork... I think everyone has some version of an Excel sheet or a Google Sheet where you just copy and paste the numbers from every paper and post them up there. And then sometimes they don't line up, because they're independently run. Your own numbers are going to look better, and your reproductions of other people's numbers are going to look worse, because you don't hold their models correctly, or whatever the excuse is. I think Stanford HELM, Percy Liang's project, would also have some of these numbers. And I don't know if there's any other source that you can cite. If I were to start Artificial Analysis at the same time you guys started, I would have used EleutherAI's eval harness. Yup.

Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals... if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. And back when we started the website, one of the reasons why we realized that we had to run the evals ourselves, and couldn't just take results from the labs, was that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get... You can put the answer into the model. Yeah. That, in the extreme. And you get crazy cases, like back when Google announced Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4, and constructed (I think never published) chain-of-thought examples, 32 of them, in every topic in MMLU, to run it and get the score. There are so many things that you... They never shipped Ultra, right? That's the one that never made it out. Not widely. Yeah. I mean, I'm sure it existed, but yeah. So we were pretty sure that we needed to run them ourselves, and just run them in the same way across all the models. And we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.

swyx [00:09:24]: Okay. A couple of technical questions. I mean, obviously I also thought about this, and I didn't do it because of cost. Did you not worry about costs? Were you funded already? Clearly not, but you know.

Micah [00:09:36]: No. Well, we definitely weren't at the start. We were paying for it personally at the start. That's a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. We certainly incurred some costs, but we were probably in the order of hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So, nothing. It was kind of fine. These days that's gone up an enormous amount, for a bunch of reasons that we can talk about. But it wasn't that bad, because you've also got to remember that the number of models we were dealing with was hardly any, and the complexity of the stuff that we wanted to do to evaluate them was a lot less. We were just asking some Q&A-type questions. And one specific thing is that for a lot of evals initially, we were just sampling an answer. You know: what's the answer for this? We'd go to the answer directly, without letting the models think. We weren't even doing chain-of-thought stuff initially. And that was the most useful way to get some results initially. Yeah.

swyx [00:10:33]: And so for people who haven't done this work: literally parsing the responses is a whole thing, right? Because the models can answer any way they see fit, and sometimes they actually do have the right answer, but they just returned it in the wrong format, and they will get a zero for that unless you work it into your parser. And that involves more work. And there's an open question whether you should give points for not following your instructions on the format.

Micah [00:11:00]: It depends what you're looking at, right? If you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach, to make sure that you get the answer out no matter how it's answered. But these days it's mostly less of a problem. If you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do a simple regex.

swyx [00:11:28]: Yeah, yeah. And then there are other questions around, I guess... sometimes if you have a multiple choice question, there's a bias towards the first answer, so you have to randomize the options. All these nuances... once you dig into benchmarks, you're like: I don't know how anyone believes the numbers on all these things. It's such dark magic.

Micah [00:11:47]: You've also got the different degrees of variance in different benchmarks, right? If you run a four-option multi-choice eval on a modern reasoning model, at the temperatures suggested by the labs for their own models, the variance that you can see is pretty enormous if you only do a single run, especially if it has a small number of questions. So one of the things that we do is run an enormous number of repeats of all of our evals when we're developing new ones, and when we're doing upgrades to our Intelligence Index to bring in new things, so that we can dial in the right number of repeats to get to the 95% confidence intervals that we're comfortable with. So when we pull it all together, we can be confident in the Intelligence Index to at least as tight as plus or minus one, at 95% confidence. Yeah.

swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.

George [00:12:37]: So that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that assumes one repeat, because we want it to reflect the weighting of the index. But our cost is actually a lot higher than what we report there, because of the repeats.

swyx [00:13:03]: Yeah. And probably this is true, but just checking: you don't have any special deals with the labs? They don't discount it? You just pay out of pocket, or out of your sort of customer funds. Oh, there is a mix.
So the issue is that sometimes they may give you a special endpoint, which is... Ah, 100%.

Micah [00:13:21]: Yeah, exactly. We laser focus, in everything we do, on having the best independent metrics, and on making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true. Take the one you bring up right here: if we're working with a lab and they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as what they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy, and we're totally transparent with all the labs we work with about this: we will register accounts not on our own domain and run both intelligence evals and performance benchmarks... Yeah, that's the job. ...without them being able to identify it. And no one's ever had a problem with that. Because a thing that turns out to actually be quite a good factor in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.

swyx [00:14:23]: That's true. I never thought about that. I was in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans?

Micah [00:14:36]: Oh, potential shenanigans. I mean, okay, the biggest one that I'll bring up is more of a conceptual one, actually, than direct shenanigans. It's that the things that get measured become the things that get targeted by the labs in what they're trying to build, right? Exactly. So that doesn't mean anything we should really call shenanigans; I'm not talking about training on the test set. But if you know that you're going to be graded on a particular thing, and you're a researcher, there are a whole bunch of things you can do to try to get better at that thing. Preferably those things will be helpful for a wide range of how actual users want to use what you're building, but they will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning to, say, how we might use modern coding agents, but it's clearly not one-for-one. So the thing that we have to be aware of is that once an eval becomes the thing that everyone's looking at, scores can get better on it without that reflecting the overall generalized intelligence of these models getting better. That has been true for the last couple of years, and it'll be true for the next couple of years. There's no silver bullet to defeat that, other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.

swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. You used to just run other people's evals, but now you're coming up with your own, and I think, obviously, that is a necessary path once you're at the frontier: you've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant, which you guys decided to join, moving here. What was it like? I think you were in, like, batch two? Batch four. Batch four.
Okay.

Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great, and it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies, and they were extremely aligned with the mission of what we were trying to do. We're not quite typical of a lot of the other AI startups that they've invested in, and they were very much here for the mission of what we want to do.

swyx [00:16:53]: Did they give any advice that really affected you in some way, or were any of the events very impactful?

Micah [00:17:03]: That's an interesting question. I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.

swyx [00:17:09]: Which is also, like, a crazy list. Yeah.

George [00:17:11]: Oh, totally. There was something about speaking to Nat and Daniel about the challenges of working through a startup: working through the questions that don't have clear answers, how to work through those methodically, and just working through the hard decisions. They've been great mentors to us as we've built Artificial Analysis. Another benefit for us is that other companies in the batch, and other companies in AI Grant, are pushing the capabilities of what AI can do at this time. So being in contact with them, and making sure that Artificial Analysis is useful to them, has been fantastic for supporting us in working out how we should build out Artificial Analysis to continue being useful to those building on AI.

swyx [00:17:59]: I'm of mixed opinion on that one, because to some extent your target audience is not people in AI Grant, who are obviously at the frontier. Yeah. Do you disagree?

Micah [00:18:09]: To some extent. But a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do, across the entire stack, for building great applications. That actually makes some of them pretty archetypical power users of Artificial Analysis, and some of the people with the strongest opinions about what we're doing well, what we're not doing well, and what they want to see next from us. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently between models for different parts of your application, to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they're not commercial customers of ours (we don't charge for all our data on the website), but they are absolutely some of our power users.

swyx [00:19:07]: So let's talk about the evals as well. You started out from the general MMLU and GPQA stuff. What's next? How do you build up to the overall index? What was in V1, and how did you evolve it?

Micah [00:19:22]: Okay. So first, just as background: we're talking about the Artificial Analysis Intelligence Index, which is our synthesis metric that we pull together, currently from 10 different eval datasets, to give what we're pretty confident is the best single number to look at for how smart the models are. Obviously, it doesn't tell the whole story. That's why we publish the whole website of charts: to dive into every part of it and look at the trade-offs. But it's the best single number. Right now, it's got a bunch of Q&A-type datasets that have been very important to the industry, like the couple that you just mentioned. It's also got a couple of agentic datasets, our own long-context reasoning dataset, and some other use-case-focused stuff. As time goes on, the things that we're most interested in, the capabilities that are becoming more important for AI and that developers care about, are going to be, first, agentic capabilities. Surprise, surprise: we're all loving our coding agents, and how the models perform like that, and do similar things for different types of work, is really important to us. Linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got the things that the models still struggle with, like working really well over long contexts, which are not going away as specific capabilities and use cases that we need to keep evaluating.

swyx [00:20:46]: But I guess one thing I was driving at was the V1 versus the V2, and how that changed over time.

Micah [00:20:53]: Like how we've changed the index to get where we are.

swyx [00:20:55]: And I think that reflects the change in the industry. Right. So that's a nice way to tell that story.

Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, how much progress has been made in the last two years. We obviously play the game constantly of today's version versus last week's version and the week before, and all the small changes in the horse race between the current frontier, and who has the best smaller-than-10B model right now, this week. That's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out: a couple of years ago, literally most of what we were doing to evaluate the models would now be 100% solved by even pretty small models. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence; we can talk about that more in a bit. So V1, V2, V3: we made things harder, we covered a wider range of use cases, and we tried to get closer to things developers care about, as opposed to just the Q&A-type stuff that MMLU and GPQA represented. Yeah.

swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark, looking around and asking questions about it. Yeah.

Micah [00:22:21]: Let's do it. Okay. This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.

George [00:22:26]: And a little bit about the direction that we want to take it, where we want to push benchmarks. Currently, the Intelligence Index and its evals focus a lot on raw intelligence, but we want to diversify how we think about intelligence. New evals that we've built and partnered on focus on topics like hallucination, and we've got a lot of topics that I think are not covered by the current eval set that should be.
And so we want to bring that forth. But before we get into that...

swyx [00:23:01]: And so for listeners, just as a timestamp: right now, number one is Gemini 3 Pro High, followed by Claude Opus at 70, then GPT-5.1 High (you don't have 5.2 yet), and Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. I mean, I love it. No, no. 100%. Look back this time next year and go: how cute. Yep.

George [00:23:25]: Totally. A quick view of that is... okay, there's a lot. I love it. I love this chart. Yeah.

Micah [00:23:30]: This is such a favorite, right? In almost every talk George or I give at conferences, we put this one up first, to situate where we are in this moment in history. This, I think, is the visual version of what I was saying before about zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing, and the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year. And, I mean, you would remember that time period well: there were very open questions about whether or not AI was going to be competitive, full stop; whether or not OpenAI would just run away with it; whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.

George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There are so many dots on it, but I think it reflects a little bit what we felt, how crazy it's been.

swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah.

George [00:25:01]: It's the models that we're highlighting by default in our charts, in our Intelligence Index. Okay.

swyx [00:25:07]: You just have a manually curated list of stuff.

George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose which models are highlighted. And so if we take off a few names, it gets a little easier to read.

swyx [00:25:25]: Yeah, a little easier to read. Totally. But I love that you can see the o1 jump. Look at that: September 2024. And the DeepSeek jump. Yeah.

George [00:25:34]: Which got close to OpenAI's leadership. They were so close. Yeah, we remember that moment. Around this time last year, actually.

Micah [00:25:44]: Yeah, I agree. Well, give or take a couple of weeks. It was Boxing Day in New Zealand when DeepSeek V3 came out. We'd been tracking DeepSeek, and a bunch of the other less-known global players, over the second half of 2024, and had run evals on the earlier ones. I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas, running the evals and getting back result by result on DeepSeek V3. This was the first of their V3 architecture, the 671B MoE.

Micah [00:26:19]: And we were very, very impressed. That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of V3, and R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with an extremely strong base model, completely open weights, which we had as the best open weights model. So, yeah, that's the thing that really got us on Boxing Day last year.

George [00:26:48]: Boxing Day is the day after Christmas, for those not familiar.

George [00:26:54]: I'm from Singapore.

swyx [00:26:55]: A lot of us remember Boxing Day for a different reason: the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.

Micah [00:27:11]: I don't know, I'm not used to it. Once upon a time, we did call it Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.

George [00:27:20]: There have been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a kind of system level. And so then we changed our throughput metric to... we now call it output speed, and then...

swyx [00:27:32]: Throughput makes sense at a system level, so we took that name. Take me through more charts. What should people know? Obviously, the way you look at the site is probably different from how a beginner might look at it.

Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into. Maybe we can skip past all the... we have lots and lots of evals and stuff. The interesting ones to talk about today, that would be great to bring up, are a few of our recent things that probably not many people will be familiar with yet. So the first one of those is our Omniscience Index. This one is a little bit different from most of the intelligence evals that we've run. We built it specifically to look at the embedded knowledge in the models, and to test hallucination by looking at, when the model doesn't know the answer (so it's not able to get it correct), what its probability is of saying "I don't know" versus giving an incorrect answer. The metric that we use for Omniscience goes from negative 100 to positive 100, because we're simply taking off a point if you give an incorrect answer to the question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say "I don't know" than to give a wrong answer to a factual knowledge question. And one of our goals is to shift the incentives that evals create for models, and for the labs creating them to get higher scores. Almost every eval across all of AI up until this point has been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything; there's no incentive to say "I don't know." So we did that for this one here.

swyx [00:29:22]: I think there's a general field of calibration as well: the confidence in your answer versus the rightness of the answer. Yeah, we completely agree.

George [00:29:31]: On that: one reason that we didn't put that into this index is that we think the way to do that is not to ask the models how confident they are.

swyx [00:29:43]: I don't know. Maybe it might be, though.
You put in, like, a JSON field, say a "confidence" field, and maybe it spits out something. Yeah. You know, we have done a few evals podcasts over the years, and when we did one with Clémentine of Hugging Face, who maintains the Open LLM Leaderboard, this was one of her top requests: some kind of hallucination slash lack-of-confidence calibration thing. And so, hey, this is one of them.

Micah [00:30:05]: And, like anything we do, it's not a perfect metric, or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. One of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated Omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. We have published a public test set, but we've only published 10% of it. The reason is that for this one specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but yeah, we've kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breaking down quite granularly by topic. We've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah.

swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. Would that be the other way around in a normal capability environment? I don't know. What do you make of that?

George [00:31:37]: One interesting aspect is that we've found there's not really a strong correlation between intelligence and hallucination. That's to say, how smart the models are in a general sense isn't correlated with their ability, when they don't know something, to say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap over Gemini 2.5 Flash and 2.5 Pro... and if I add Pro quickly here...

swyx [00:32:07]: I bet Pro's really good. Uh, actually no, I meant the GPT Pros.

George [00:32:12]: Oh yeah.

swyx [00:32:13]: Because the GPT Pros are rumored (we don't know it for a fact) to be, like, eight runs and then an LLM judge on top. Yeah.

George [00:32:20]: So we saw a big jump in... this is accuracy. So this is just the percent that they get correct, and Gemini 3 Pro knew a lot more than the other models, so there's a big jump in accuracy. But there's relatively no change between the Google Gemini models, between releases, in the hallucination rate. Exactly. And so the Claude result is likely down to just a different post-training recipe with the Claude models. Yeah.

Micah [00:32:45]: There's... yeah. You can partially blame us, and how we define intelligence: having, until now, not defined hallucination as a negative in the way that we think about intelligence.

swyx [00:32:56]: And so that's what we're changing.
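A quick illustration of the scoring scheme described above for the Omniscience Index: plus one for a correct answer, minus one for an incorrect answer, zero for declining to answer, scaled to a range of -100 to +100. The grading pipeline itself (including how abstentions are detected) is Artificial Analysis's own; this sketch only assumes the stated scoring rule:

```python
def omniscience_style_score(results):
    """Score a knowledge eval on a -100 to +100 scale.

    `results` is a list of "correct" / "incorrect" / "abstain" labels.
    Correct answers earn +1, wrong answers cost -1, and saying
    "I don't know" earns 0, so abstaining strictly beats guessing
    wrong. That is the incentive the metric is designed to create.
    """
    points = {"correct": 1, "incorrect": -1, "abstain": 0}
    return 100 * sum(points[r] for r in results) / len(results)

# A model that always guesses vs. one that abstains when unsure:
guesser = ["correct"] * 60 + ["incorrect"] * 40
hedger  = ["correct"] * 60 + ["abstain"] * 30 + ["incorrect"] * 10
print(omniscience_style_score(guesser))  # 20.0
print(omniscience_style_score(hedger))   # 50.0
```

Both toy models know the same 60% of the answers, but the one that says "I don't know" on the rest scores far higher, which is exactly the behavior the metric rewards.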
swyx: Uh, I know many smart people who are confidently incorrect.

George [00:33:02]: Look at that... that is very human. Very true. And there's a time and a place for that. I think our view is that hallucination rate makes sense in this context, where it's around knowledge, but in many cases people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate newer ideas. One eval that we added to Artificial Analysis is Critical Point, and it's really hard physics problems. Okay.

swyx [00:33:32]: And is it sort of like a HumanEval type, or something different, or like a FrontierMath type?

George [00:33:37]: It's not dissimilar to FrontierMath. These are research questions that academics in the physics world would be able to answer, but models really struggle with. The top score here is now 9%.

swyx [00:33:51]: And the people that created this, like Minway, and actually Ofir, who was kind of behind SWE-bench... what organization is this? Oh, it's Princeton.

George [00:34:01]: A range of academics from different academic institutions; really smart people. They talked about how they turn the models up in terms of temperature, as high a temperature as they can, when they're trying to explore new ideas in physics with the models as a thought partner, just because they want the models to hallucinate. Sometimes it's something new. Yeah, exactly.

swyx [00:34:21]: So, not right in every situation, but I think it makes sense to test hallucination in scenarios where it makes sense. Also, the obvious question: this is one of many. Every lab has a system card that shows some kind of hallucination number, and you've chosen not to endorse those; you've made your own. And that's a choice. In some sense, the rest of Artificial Analysis is public benchmarks that other people can independently rerun, and you provide it as a service here. You have to fight the "well, who are we to do this?" question. And your answer is that you have a lot of customers... but, like, I guess, how do you convince the individual?

Micah [00:35:08]: I mean, I think for hallucination specifically, there are a bunch of different things that you might reasonably care about, and that you'd measure quite differently. We've called this the Omniscience hallucination rate; we're not trying to declare it, like, humanity's last hallucination eval. You could have some interesting naming conventions with all this stuff. The bigger-picture answer, and something that I actually wanted to mention just as George was explaining Critical Point as well: as we go forward, we are building evals internally, partnering with academia, and partnering with AI companies to build great evals. We have pretty strong views, in various ways, for different parts of the AI stack, on where there are things that are not being measured well, or things that developers care about that should be measured more and better. And we intend to be doing that. We're not obsessed with the idea that everything we do has to be done entirely within our own team. Critical Point is a cool example, where we were a launch partner for it, working with academia. We've got some partnerships coming up with a couple of leading companies. Those ones, obviously, we have to be careful with on some of the independence stuff, but with the right disclosure, we're completely comfortable with that. A lot of the labs have released great datasets in the past that we've used to great success independently. And so, between all of those techniques, we're going to be releasing more stuff in the future. Cool.

swyx [00:36:26]: Let's cover the last couple, and then I want to talk about your trends analysis stuff. Totally.

Micah [00:36:31]: So on that, actually, I have one little factoid on Omniscience. If you go back up to accuracy on Omniscience: an interesting thing about this accuracy metric is that it tracks, more closely than anything else we measure, the total parameter count of models. That makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric; we're not looking at the index and the hallucination rate stuff, which we think is much more about how the models are trained. This is just: what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay.

swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear: not confirmed from any official source, just rumors. But rumors do fly around. I hear all sorts of numbers. I don't know what to trust.

Micah [00:37:17]: So if you draw the line on Omniscience accuracy versus total parameters, with all the open weights models, you can squint and see that the leading frontier models right now are likely quite a lot bigger than the ones we can see, and than the one trillion parameters that the open weights models cap out at. There's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, and six trillion for Grok 5, but that's not out yet. Take those together and have a look, and you might reasonably form a view that there's a pretty good chance that Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you would land if you have a look at it. Yeah.

swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it. Yeah, totally.

George [00:38:17]: They've also got different incentives in play compared to open weights models, which are thinking about supporting others in self-deployment. For the labs who are doing inference at scale, it's less about total parameters in many cases, when thinking about inference costs, and more about the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed.

Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously, if you're a developer or a company using these things, exactly as you say, it doesn't matter. You should be looking at all the different ways that we measure intelligence, at the cost to run the index, and at the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's all that matters.

swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say: oh, GPT-4 is this small circle; look, GPT-5 is this big circle. There used to be a thing for a while.
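The parameter-count correlation described above implies a back-of-envelope extrapolation of the kind Micah sketches. Everything below is synthetic: the data points are invented stand-ins, not Artificial Analysis measurements, and the fit is purely illustrative of the method:

```python
import math

# Synthetic (model, total params in billions, knowledge accuracy) points.
points = [
    ("open-32b",   32,   0.18),
    ("open-120b",  120,  0.26),
    ("open-400b",  400,  0.33),
    ("open-1000b", 1000, 0.39),
]

# Ordinary least squares of accuracy on log10(total parameters).
xs = [math.log10(p) for _, p, _ in points]
ys = [a for _, _, a in points]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

# Invert the fit: what total size would a given accuracy suggest?
acc_frontier = 0.53  # hypothetical frontier-model accuracy
implied_log = (acc_frontier - intercept) / slope
print(f"implied size ~ {10 ** implied_log:,.0f}B total parameters")
```

With these invented numbers the fit lands around the 10-trillion-parameter mark, which shows how a chart like this invites that kind of squinting extrapolation; the real exercise is only as good as the measured points behind it.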
Yeah.

Micah [00:39:07]: But that is, on its own, actually a very interesting one, right? Chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in total model size, especially with the upcoming hardware generations. Yes.

swyx [00:39:29]: So, you know, taking off my shitposting face for a minute: at the same time, I do feel, especially coming back from Europe, that people do feel Ilya is probably right that the paradigm doesn't have many more orders of magnitude to scale out, and that therefore we need to start exploring at least a different path. GDPval, I think, is only a month or so old. I was also very positive on it when it first came out. I actually talked to Tejal, who was the lead researcher on that. Oh, cool. And you have your own version.

George [00:39:59]: It's a fantastic dataset. Yeah.

swyx [00:40:01]: And maybe we'll recap for people who are still out of it: it's 44 tasks, based on some kind of GDP cutoff, meant to represent broad white-collar work that is not just coding. Yeah.

Micah [00:40:12]: Each of the tasks has a whole bunch of detailed instructions, and input files for a lot of them. The 44 divide into something like 220 to 225 subtasks, which are the level that we run through the agent harness. And yeah, they're really interesting. I will say that it doesn't necessarily capture all the stuff that people do at work; no eval is perfect, and there are always going to be more things to look at. That's largely because, in order to make the tasks well enough defined that you can run them, they need to have only a handful of input files and very specific instructions. So I think the easiest way to think about them is as quite hard take-home exam tasks that you might do in an interview process.

swyx [00:40:56]: Yeah, for listeners: it is no longer, like, a long prompt. It's: here's a zip file with a spreadsheet or a PowerPoint deck or a PDF; go nuts and answer this question.

George [00:41:06]: OpenAI released a great dataset, and they released a good paper which looks at performance across the different web chatbots on the dataset. It's a great paper; I encourage people to read it. What we've done is take that dataset and turn it into an eval that can be run on any model. So we created a reference agentic harness that can run the models on the dataset, and then we developed an evaluator approach to compare outputs. It's AI-enabled, in that it uses Gemini 3 Pro Preview to compare results, and we tested it pretty comprehensively to ensure that it's aligned with human preferences. One data point there: even with Gemini 3 Pro as the evaluator, Gemini 3 Pro interestingly doesn't actually do that well on the eval itself. So that's kind of a good example of what we've done in GDPVal AA.

swyx [00:42:01]: Yeah, the thing that you have to watch out for with an LLM judge is self-preference: models usually prefer their own output, and in this case, it was not so.
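A sketch of the criteria-anchored pairwise grading described here. `call_llm` is a hypothetical stand-in for whatever judge-model client is used (the conversation says Gemini 3 Pro Preview is the grader); the prompt wording and verdict format are invented for illustration:

```python
from typing import Callable

# The judge sees the task criteria plus two candidate outputs and must
# pick one. A real harness would also swap the A/B order to blunt
# position bias, and would feed in extracted text/visual renderings of
# the output files rather than raw strings.
JUDGE_PROMPT = """You are grading two attempts at the same work task.

Task criteria:
{criteria}

--- Output A ---
{output_a}

--- Output B ---
{output_b}

Which output better satisfies the criteria? Reply with exactly "A" or "B"."""

def judge_pair(call_llm: Callable[[str], str],
               criteria: str, output_a: str, output_b: str) -> str:
    """Return 'A' or 'B' according to the judge model's verdict."""
    prompt = JUDGE_PROMPT.format(criteria=criteria,
                                 output_a=output_a, output_b=output_b)
    verdict = call_llm(prompt).strip().upper()
    if verdict not in ("A", "B"):
        raise ValueError(f"unparseable verdict: {verdict!r}")
    return verdict

if __name__ == "__main__":
    # Demo with a trivial fake judge that always prefers output A.
    fake_judge = lambda prompt: "A"
    print(judge_pair(fake_judge, "Chart must show Q3 revenue.",
                     "a chart of Q3 revenue", "an unrelated memo"))
```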
Totally.

Micah [00:42:08]: I think the places where it makes sense to use an LLM-as-judge approach now are quite different from some of the early LLM-as-judge stuff a couple of years ago, because some of that (MT-Bench was a great project that was a good example of this a while ago) was about judging conversations, and a lot of style-type stuff. Here, the task that the grading model is doing is quite different from the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with, the code interpreter, web search, and the file system, to go through many, many turns to try to create the documents. Then, on the other side, when we're grading, we're running the outputs through a pipeline to extract visual and text versions of the files so we can provide them to Gemini, and we're providing the criteria for the task and getting it to pick which of two outputs more effectively meets the criteria. It turns out that it's just very, very good at getting that right; it matched human preference a lot of the time. I think that's because it's got the raw intelligence, combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different to how the grading model works, and the fact that we're comparing against criteria, not just zero-shot asking the model to pick which one is better.

swyx [00:43:26]: Got it. Why is this an Elo, and not a percentage like GDPval?

George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks. It has to make a video? Yeah, for some of the tasks. Some of the tasks.

swyx [00:43:43]: What task is that?

George [00:43:45]: I mean, it's in the dataset. Like be a YouTuber? It's a marketing video.

Micah [00:43:49]: Oh, wow. What? The model has to go find clips on the internet and try to put them together. The models are not that good at doing that one for now, to be clear. It's pretty hard to do that with a code editor. I mean, the computer-use stuff doesn't work quite well enough, and so on, but yeah.

George [00:44:02]: And so there's no ground truth, necessarily, to compare against, to work out a percentage correct. It's hard to come up with correct or incorrect there. So it's on a relative basis, and we use an Elo approach to compare outputs from each of the models across the tasks.

swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give them an Elo, so you have a human in there. What's helpful about GDPval, the OpenAI one, is that 50% is meant to be a normal human, and maybe a domain expert is higher than that, but 50% was the bar: if you've crossed 50, you are superhuman. Yeah.

Micah [00:44:47]: So we haven't grounded this score in that, exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models, and that's one of the reasons that presenting it as an Elo is quite helpful: it allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky comparing these exact tasks against human performance, because the way that you would go about them as a human is quite different to how the models would go about them.
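Verdicts like those from the judge sketch above can be aggregated into Elo-style ratings, which is what makes the relative scale extensible as new models are added. A toy version follows: the model names and win counts are placeholders, and repeated small-K sweeps are just one simple fitting approach, not necessarily how GDPVal AA computes its numbers:

```python
def fit_elo(matches, k=8.0, rounds=100, base=1000.0):
    """Fit Elo-style ratings from (winner, loser) pairs.

    Sweeping the match list many times with a small K-factor settles
    near a Bradley-Terry-style fit, close enough for a sketch.
    """
    ratings = {}
    for w, l in matches:
        ratings.setdefault(w, base)
        ratings.setdefault(l, base)
    for _ in range(rounds):
        for w, l in matches:
            # Expected win probability for the winner before the update.
            expected = 1 / (1 + 10 ** ((ratings[l] - ratings[w]) / 400))
            ratings[w] += k * (1 - expected)
            ratings[l] -= k * (1 - expected)
    return ratings

# Placeholder pairwise results between three hypothetical models.
matches = ([("model_a", "model_b")] * 7 + [("model_b", "model_a")] * 3 +
           [("model_a", "model_c")] * 9 + [("model_c", "model_a")] * 1 +
           [("model_b", "model_c")] * 6 + [("model_c", "model_b")] * 4)
for name, rating in sorted(fit_elo(matches).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rating:.0f}")
```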
Yeah.

swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there. Is that like one last...?

Micah [00:45:20]: Well, no, no: it is the best model released by Meta. And so it makes it into the homepage default set, still, for now.

George [00:45:31]: Another inclusion that's quite interesting: we also ran it across the latest versions of the web chatbots. And so we have...

swyx [00:45:39]: Oh, that's right.

George [00:45:40]: Oh, sorry.

swyx [00:45:41]: I, yeah, I completely missed that. Okay.

George [00:45:43]: No, not at all. So that's the one with the checkered pattern. So that is their harness, not yours, is what you're saying. Exactly. And what's really interesting is that if you compare, for instance, Claude 4.5 Opus using the Claude web chatbot, it performs worse than the same model in our agentic harness. In every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created.

swyx [00:46:13]: Oh! My backwards explanation for that would be: well, it's meant for consumer use cases, and here you're pushing it for something else.

Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. Also, they have a cost goal; we let the models work as long as they want, basically. Do you copy-paste manually into the chatbot? Yeah. Yeah. That was how we got the chatbot reference. We're not going to be keeping those updated at quite the same scale as hundreds of models.

swyx [00:46:38]: Well, I don't know, talk to Browserbase; they'll automate it for you. You know, I have thought about how we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah.

Micah [00:46:53]: And that's grown a huge amount over the last year, right? The tools that are available have actually diverged a fair bit, in my opinion, across the major chatbot apps, and the number of data sources you can connect them to has gone up a lot, meaning that your experience, and the way you're using the model, is more different than ever.

swyx [00:47:10]: What tools and what data connections come to mind? What's notable work that people have done?

Micah [00:47:15]: Oh, okay. So my favorite example: until very recently, I would argue that it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails. Maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work. For me, that's Google Drive, OneDrive, and our Supabase databases if we need to do some analysis on some data or something. Preferably, the model can be plugged into all of those things and can go do some useful work based on them. The things I find most impressive currently, that I am somewhat surprised work really well in late 2025: I can have models use the Supabase MCP to query (read-only, of course) and run a whole bunch of SQL queries to do pretty significant data analysis, and make charts and stuff, and they can read my Gmail and my Notion. And okay, you actually use that. That's good. Is that a Claude thing?
To varying degrees... but ChatGPT and Claude, right now. I would say that this stuff, in fairness, barely works right now. Like...

George [00:48:33]: Because people are actually going to try this after they hear it. If you get an email from Micah, odds are it wasn't written by a chatbot.

Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.

swyx [00:48:46]: And so you can feel it coming, right? And this time next year, we'll come back and see where it's going. Totally. Supabase: shout-out to another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra.

George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users. And we probably do some things more manually than we should in Supabase, and the support line has been super friendly. One extra point regarding GDPVal AA: on the basis of the overperformance of the models compared to the chatbots, we realized that, oh, the reference harness that we built actually works quite well on generalist agentic tasks. This proves it, in a sense. And the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code: all that we give it is context management capabilities, a web search and web browsing tool, and a code execution environment. Anything else?

Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPVal we give it a tool to view an image specifically, because the models can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context we had to give them a custom tool. But yeah, exactly.

George [00:50:21]: So it turned out that we had created a good generalist agentic harness, and we released it on GitHub yesterday. It's called Stirrup. So if people want to check it out, it's a great base for building a generalist agent for more specific tasks.

Micah [00:50:39]: I'd say the best way to use it is git clone, and then have your favorite coding agent make changes to it to do whatever you want, because it's not that many lines of code, and the coding agents can work with it super well.

swyx [00:50:51]: Well, that's nice for the community to explore and share and hack on. I think in other similar environments, the Terminal-Bench guys have done sort of the Harbor thing, and so it's a bundle of: we need our minimal harness, which for them is Terminus, and we also need the RL environments, the Docker deployment thing, to run independently. So I don't know if you've looked at Harbor at all. Is that like a standard that people want to adopt?

George [00:51:19]: Yeah, we've looked at it from an evals perspective, and we love Terminal-Bench and host benchmarks of Terminal-Bench on Artificial Analysis. We've looked at it from a coding agent perspective, but we could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough.
swyx [00:51:56]: Awesome. Let's cover the openness index, and then let's go into the report stuff. So that's the last of the proprietary numbers, I guess. I don't know how you classify all these.
Micah [00:52:07]: Call it the last of the three new things that we're talking about from the last few weeks. Because we do a mix of stuff: stuff where we're using open source, stuff where we open source what we do, and proprietary stuff that we don't always open source. The long context reasoning dataset last year, we did open source. And then of all the work on performance benchmarks across the site, some of them we're looking to open source, but some of them we're constantly iterating on. So there's a huge mix, I would say, of stuff that is open source and not, across the site.
swyx [00:52:41]: So that's AA-LCR, for people following along. But let's talk about the openness index.
Micah [00:52:42]: Let's talk about the openness index. This here is, call it, a new way to think about how open models are. We have for a long time tracked whether models are open weights and what the licenses on them are. And that's pretty useful: it tells you what you're allowed to do with the weights of a model. But there is this whole other dimension to how open models are that is pretty important and that we haven't tracked until now, and that's how much is disclosed about how the model was made. So transparency about data, pre-training data and post-training data, and whether you're allowed to use that data, and transparency about methodology and training code. Those are the components, basically. We bring them together to score an openness index for models, so that you can in one place get this full picture of how open models are.
swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this matters. I don't know what the numbers mean, apart from, is there a max number? Is this out of 20?
George [00:53:44]: It's out of 18 currently, and so we've got an openness index page, but essentially these are points. You get points for being more open across these different categories, and the maximum you can achieve is 18. So Ai2, with their extremely open Olmo 3 32B Think model, is the leader in a sense.
swyx [00:54:04]: What about Hugging Face?
George [00:54:05]: Oh, with their smaller model. It's coming soon. We need to get the intelligence benchmarks run to get it on the site.
swyx [00:54:12]: You can't have an openness index and not include Hugging Face.
George: We love Hugging Face. We'll have that up very soon.
swyx: I mean, you know, RefinedWeb and all that stuff. It's amazing. Or is it called FineWeb? FineWeb. FineWeb.
Micah [00:54:23]: Yeah, yeah, totally. One of the reasons this is cool, right, is that if you're trying to understand the holistic picture of a model and what you can do with all the stuff the company is contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we run the Intelligence Index on, on the site. And it's just an extra view to understand.
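As a back-of-envelope illustration of an additive 18-point scheme like the one George describes, here is one way such a tally could look. The component names and point maxima below are guesses made for the sketch; the actual rubric lives on the openness index page:

```python
# Illustrative sketch of an additive openness score out of 18.
# Component names and maxima are assumptions, not the published rubric.
from dataclasses import dataclass

@dataclass
class OpennessComponents:
    weights_released: int        # say 0-2 points
    license_openness: int        # say 0-4
    pretraining_data: int        # say 0-4
    posttraining_data: int       # say 0-4
    methodology_and_code: int    # say 0-4

    MAX_SCORE = 18  # plain class attribute, not a dataclass field

    def score(self) -> int:
        total = (self.weights_released + self.license_openness
                 + self.pretraining_data + self.posttraining_data
                 + self.methodology_and_code)
        if not 0 <= total <= self.MAX_SCORE:
            raise ValueError("component points must sum to 0-18")
        return total

# A fully open release (weights, permissive license, data, code) maxes out:
print(OpennessComponents(2, 4, 4, 4, 4).score())  # 18
```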
swyx [00:54:43]: Can you scroll down to the trade-offs chart? Yeah, yeah. That one. Yeah. This really matters, right? Obviously, because you can b

Becker’s Healthcare Podcast
Amy Lee, MJ, MBA, MBHA, FACMPE, President and COO of Nantucket Cottage Hospital/Mass General Brigham

Becker’s Healthcare Podcast

Play Episode Listen Later Jan 7, 2026 11:25


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Amy Lee, MJ, MBA, MBHA, FACMPE, President and COO of Nantucket Cottage Hospital/Mass General Brigham. She discusses the unique challenges of delivering care on an island, from workforce housing to telehealth partnerships, and how innovation and community support sustain high-quality care in a remote setting. In collaboration with R1.

Becker’s Healthcare Podcast
James Heilsberg, CFO, Tri State Health

Becker’s Healthcare Podcast

Play Episode Listen Later Jan 7, 2026 14:17


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features James Heilsberg, CFO, Tri State Health. He discusses how his organization is using AI, automation, and strategic growth to strengthen care delivery, support clinicians, and sustain financial health in a rural community setting. In collaboration with R1.

Becker’s Healthcare Podcast
David Dunkle, CEO, Johnson Memorial Health

Becker’s Healthcare Podcast

Play Episode Listen Later Jan 6, 2026 15:52


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features David Dunkle, CEO, Johnson Memorial Health. He discusses the financial challenges facing community hospitals, the struggle for fair reimbursement, and how strong culture and patient focus help sustain independent organizations in a difficult healthcare landscape. In collaboration with R1.

The All Things Ansys Podcast
Episode 139: 2025 Year in Review & Looking Forward to What's Next

The All Things Ansys Podcast

Play Episode Listen Later Jan 6, 2026 61:40


In this episode, your host and Co-Founder of PADT, Eric Miller, is joined by members of our simulation support team to discuss their favorite updates from Ansys 2025 R1 & R2 in preparation for the 2026 release. Our outstanding staff has always been the foundation of what sets PADT apart from our competitors, and we're excited to have so many of them on this episode. If you have any questions, comments, or would like to suggest a topic for the next episode, shoot us an email at podcast@padtinc.com; we would love to hear from you!

Becker’s Healthcare Podcast
Lisa Tank, President and Chief Hospital Executive, Hackensack University Medical Center, Hackensack Meridian Health

Becker’s Healthcare Podcast

Play Episode Listen Later Jan 5, 2026 17:18


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Lisa Tank, President and Chief Hospital Executive, Hackensack University Medical Center, Hackensack Meridian Health. She discusses how her team is using AI, governance, and culture to enhance clinical outcomes, streamline operations, and strengthen patient trust. In collaboration with R1.

Chasing Pars Golf Podcast
(Ep 198) Ffion Tynan

Chasing Pars Golf Podcast

Play Episode Listen Later Jan 1, 2026 82:33


In this episode I was pleased to be joined by Ffion Tynan, who has just turned professional after gaining Ladies European Tour Category 19 status as well as full status on the LET Access Series for 2026. Ffion is from Llanharry in Wales and picked up the game by chance at 8 years old during a holiday in Orlando, where she was spotted and encouraged to keep playing. From age 11 Ffion has represented Wales at every level, something she is extremely proud of, and she eventually went to the States to play for both the University of Arkansas and the University of Missouri. Ffion has had some great junior success, taking part in two Vagliano Trophies as well as being the English U14 Champion, IMG World Challenge Champion, Scottish U18 Champion, Welsh Ladies Strokeplay Champion, Sir Henry Cooper Invitational Champion, Welsh Girls' Champion, Welsh Amateur Champion, and this year's European Club Trophy Individual Champion in Slovakia. At LET Q School, Ffion navigated through to the final stage, where after R1 she was near the top of the leaderboard before admittedly struggling, but it was all learning experience as she gained status for next year! Thanks Ffion, a bubbly, fun person with immense talent! Download via Podbean, Apple & Spotify

Becker’s Healthcare Podcast
Patrick O'Shaughnessy, President and CEO, Catholic Health

Becker’s Healthcare Podcast

Play Episode Listen Later Dec 19, 2025 16:06


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Patrick O'Shaughnessy, President and CEO, Catholic Health. He discusses how Catholic Health is embracing AI and digital transformation to enhance care delivery, strengthen patient and provider experiences, and stay true to its mission amid financial and industry challenges. In collaboration with R1.

Becker’s Healthcare Podcast
James Dover, President and CEO, Avera Health

Becker’s Healthcare Podcast

Play Episode Listen Later Dec 18, 2025 18:09


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features James Dover, President and CEO, Avera Health. He discusses Avera's strategic plan, Luminate, the organization's approach to AI and technology adoption, and how mission-driven leadership guides innovation, growth, and care delivery across its multi-state network. In collaboration with R1.

Becker’s Healthcare Podcast
Saad Ehtisham, President & CEO, Atlantic Health

Becker’s Healthcare Podcast

Play Episode Listen Later Dec 17, 2025 16:06


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Saad Ehtisham, President & CEO, Atlantic Health. He shares his vision for sustainable growth, clinical excellence, and responsible AI adoption, emphasizing how technology and partnerships can drive efficiency, enhance care delivery, and improve patient outcomes across the system. In collaboration with R1.

Becker’s Healthcare Podcast
Nick Barcellona, Chief Financial Officer, WVU Medicine

Becker’s Healthcare Podcast

Play Episode Listen Later Dec 16, 2025 17:16


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable, features Nick Barcellona, Chief Financial Officer, WVU Medicine. He discusses the system's rapid expansion, its commitment to population health through integrated payer-provider models, and how AI is improving efficiency, reducing burnout, and enhancing the patient and provider experience. In collaboration with R1.

East Coast Breakfast with Darren Maule
KZN and Lottostar just raised R1 847 500 for The Big Favour!

East Coast Breakfast with Darren Maule

Play Episode Listen Later Dec 15, 2025 4:18


After a week and a half of non-stop wins, festive chaos, and jaw-dropping moments, Lottostar's Santa's Stash wrapped up in the BIGGEST way possible, all in support of The Big Favour. East Coast Radio Managing Director Mzuvele Mthethwa joins Darren, Sky, and Carmen to reflect on just how powerful this year's Big Favour campaigns have been… before Lottostar CMO Maria Pavli drops the number that stopped everyone in their tracks. R1 847 500 raised because of YOU. This isn't just a campaign recap, it's proof of what happens when KZN shows up, shows love, and shows heart.

RSG Geldsake met Moneyweb
Durban's Funworld is going to rise again – bigger and better

RSG Geldsake met Moneyweb

Play Episode Listen Later Dec 11, 2025 7:17


Nic Steyn, former owner of Durban Funworld, shares his views on the R1 billion theme park planned for Durban's Golden Mile. Follow RSG Geldsake on Twitter

Becker’s Healthcare Podcast
Rob Purinton, Chief AI Officer, AdventHealth

Becker’s Healthcare Podcast

Play Episode Listen Later Dec 9, 2025 22:24


This episode recorded live at the Becker's 13th Annual CEO + CFO Roundtable features Rob Purinton, Chief AI Officer, AdventHealth. He shares how AdventHealth is accelerating innovation through AI governance, digital front door enhancements, and responsible adoption of AI agents to improve efficiency, clinical support, and overall patient experience. In collaboration with R1.

Becker’s Healthcare Podcast
Robert Chestnut, SVP & CFO, LMH Health

Becker’s Healthcare Podcast

Play Episode Listen Later Dec 1, 2025 18:06


This episode, recorded live at the Becker's 13th Annual CEO + CFO Roundtable features Robert Chestnut, SVP & CFO, LMH Health. He shares how the organization is using AI and new technologies to enhance efficiency, strengthen competitiveness, and improve both provider experience and patient care. In collaboration with R1.

VR Download
Next Pico Headset Specs, Meta Horizon Hyperscape Social Update

VR Download

Play Episode Listen Later Nov 28, 2025 161:30


We discuss the best Black Friday XR deals including steep Quest 3S discounts with Horizon+ perks and PS VR2's limited-time price cut, as well as Meta's new smart glasses trade-in program that even gives credit for AirPods. We also cover Godot's upgraded XR support with a universal OpenXR APK, the latest Horizon OS v83 features like system-level positional TimeWarp and temporal dimming, and the reportedly upcoming 2026 Pico headset with 4K micro-OLED displays and an R1-style chip. Finally, we dive into the latest AI tools for 3D layout and reconstruction including Meta's WorldGen, systems that bring real-world objects and images into VR in seconds, and the new Horizon Hyperscape update that lets you invite friends as Meta avatars, along with our hands-on impressions of visiting Hyperscape worlds.

Podcast – AV Rant
AV Rant #993: Sofabaton X2 First Blush

Podcast – AV Rant

Play Episode Listen Later Nov 20, 2025 133:12


Not a review! But Tom has received Sofabaton X2 & R1 sample units! Valve announced a trio of new hardware products: Steam Machine, Steam Controller, and Steam Frame VR Headset. And Amazon will block piracy apps on Fire TV devices.

The EdUp Experience
How 1 Leader Plans to Transform 782 Institutions by Cutting Bureaucracy & Embracing Innovation - w/ Stephen Pruitt, President, Southern Association of Colleges & Schools Commission on Colleges-SACSCOC

The EdUp Experience

Play Episode Listen Later Nov 18, 2025 39:20


It's YOUR time to #EdUp. In this episode, sponsored by the 2026 InsightsEDU Conference in Fort Lauderdale, Florida, February 17-19, YOUR guest is Stephen Pruitt, President, Southern Association of Colleges and Schools Commission on Colleges (SACSCOC). YOUR cohost is Dr. G. Devin Stephenson, President, Florida Polytechnic University. YOUR host is Dr. Joe Sallustio. How does a new accreditation leader eliminate the phrase "that's the way we've always done it" & slash 52 substantive change categories by at least half while implementing a "students first always" philosophy? What happens when an accreditor moves from "gotcha" accountability to a "carrots & sticks" approach with a sandbox of innovation that lets institutions negotiate around standards in exchange for measurable outcomes? How does SACSCOC plan to embrace AI for firewalled tools, celebrate institutional successes with 60 categories of recognition, & create different pathways for R1 universities versus technical colleges? Listen in to #EdUp. Thank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp! Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio. Join YOUR EdUp community at The EdUp Experience. We make education YOUR business! P.S. Want to get early, ad-free access & exclusive leadership content to help support the show? Then subscribe today to lock in YOUR $5.99/m lifetime supporters rate! This offer ends December 31, 2025!

Becker’s Healthcare Podcast
Dan Liljenquist, JD, Chief Strategy Officer, Intermountain Health

Becker’s Healthcare Podcast

Play Episode Listen Later Nov 14, 2025 14:22


This episode recorded live at the Becker's 13th Annual CEO + CFO Roundtable features Dan Liljenquist, JD, Chief Strategy Officer, Intermountain Health. He discusses how Intermountain is advancing more than 300 AI projects to streamline operations, reduce costs, and reshape care delivery for both patients and providers.In collaboration with R1.

80,000 Hours Podcast with Rob Wiblin
Helen Toner on the geopolitics of AI in China and the Middle East

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Nov 5, 2025 140:02


With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they'll deploy AI, including in the military, without coming to blows. But according to Helen Toner, director of the Center for Security and Emerging Technology in DC, “the US and Chinese governments are barely talking at all.”

Links to learn more, video, and full transcript: https://80k.info/ht25

In her role as a founder, and now leader, of DC's top think tank focused on the geopolitical and military implications of AI, Helen has been closely tracking the US's AI diplomacy since 2019.

“Over the last couple of years there have been some direct [US–China] talks on some small number of issues, but they've also often been completely suspended.” China knows the US wants to talk more, so “that becomes a bargaining chip for China to say, ‘We don't want to talk to you. We're not going to do these military-to-military talks about extremely sensitive, important issues, because we're mad.'”

Helen isn't sure the groundwork exists for productive dialogue in any case. “At the government level, [there's] very little agreement” on what AGI is, whether it's possible soon, or whether it poses major risks. Without shared understanding of the problem, negotiating solutions is very difficult.

Another issue is that so far the Chinese Communist Party doesn't seem especially “AGI-pilled.” While a few Chinese companies like DeepSeek are betting on scaling, she sees little evidence Chinese leadership shares Silicon Valley's conviction that AGI will arrive any minute now, and export controls have made it very difficult for them to access compute to match US competitors.

When DeepSeek released R1 just three months after OpenAI's o1, observers declared the US–China gap on AI had all but disappeared. But Helen notes OpenAI has since scaled to o3 and o4, with nothing to match on the Chinese side. “We're now at something like a nine-month gap, and that might be longer.”

To find a properly AGI-pilled autocracy, we might need to look at nominal US allies. The US has approved massive data centres in the UAE and Saudi Arabia with “hundreds of thousands of next-generation Nvidia chips” — delivering colossal levels of computing power.

When OpenAI announced this deal with the UAE, they celebrated that it was “rooted in democratic values,” and would advance “democratic AI rails” and provide “a clear alternative to authoritarian versions of AI.”

But the UAE scores 18 out of 100 on Freedom House's democracy index. “This is really not a country that respects rule of law,” Helen observes. Political parties are banned, elections are fake, dissidents are persecuted.

If AI access really determines future national power, handing world-class supercomputers to Gulf autocracies seems pretty questionable. The justification is typically that “if we don't sell it, China will” — a transparently false claim, given severe Chinese production constraints. It also raises eyebrows that Gulf countries conduct joint military exercises with China and their rulers have “very tight personal and commercial relationships with Chinese political leaders and business leaders.”

In today's episode, host Rob Wiblin and Helen discuss all that and more.

This episode was recorded on September 25, 2025.

CSET is hiring a frontier AI research fellow! https://80k.info/cset-role
Check out its careers page for current roles: https://cset.georgetown.edu/careers/

Chapters:
Cold open (00:00:00)
Who's Helen Toner? (00:01:02)
Helen's role on the OpenAI board, and what happened with Sam Altman (00:01:31)
The Center for Security and Emerging Technology (CSET) (00:07:35)
CSET's role in export controls against China (00:10:43)
Does it matter if the world uses US AI models? (00:21:24)
Is China actually racing to build AGI? (00:27:10)
Could China easily steal AI model weights from US companies? (00:38:14)
The next big thing is probably robotics (00:46:42)
Why is the Trump administration sabotaging the US high-tech sector? (00:48:17)
Are data centres in the UAE “good for democracy”? (00:51:31)
Will AI inevitably concentrate power? (01:06:20)
“Adaptation buffers” vs non-proliferation (01:28:16)
Will the military use AI for decision-making? (01:36:09)
“Alignment” is (usually) a terrible term (01:42:51)
Is Congress starting to take superintelligence seriously? (01:45:19)
AI progress isn't actually slowing down (01:47:44)
What's legit vs not about OpenAI's restructure (01:55:28)
Is Helen unusually “normal”? (01:58:57)
How to keep up with rapid changes in AI and geopolitics (02:02:42)
What CSET can uniquely add to the DC policy world (02:05:51)
Talent bottlenecks in DC (02:13:26)
What evidence, if any, could settle how worried we should be about AI risk? (02:16:28)
Is CSET hiring? (02:18:22)

Video editing: Luke Monsour and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Emergency Medical Minute
Episode 981: Electrical Burns

Emergency Medical Minute

Play Episode Listen Later Nov 3, 2025 3:41


Contributor: Travis Barlock, MD

Educational Pearls:

Quick statistics on electrical burns:
Electrical burns make up roughly 2 to 9% of all burns that come into emergency departments.
The majority of patients with electrical burns are male, typically in their 20s to 30s, accounting for 80 to 90% of all electrical burn victims. The majority of burns are linked to occupational exposure.
The upper extremities are the most commonly impacted, accounting for 70 to 90% of entry points into the body during an exposure.

What are some of the key considerations in electrical burns?
Unlike chemical or fire/heat-related burns, electrical burns have the potential to cause significant internal damage that may not be physically appreciated externally. This damage can include, but is not limited to:
Cardiac dysrhythmias (PVCs, SVT, and AV block, up to more serious ventricular dysrhythmias such as ventricular fibrillation or ventricular tachycardia)
Deep tissue injury resulting in rhabdomyolysis from the initial surge of electricity
Rare cases of compartment syndrome

What are the treatment considerations for patients who suffer electrical burns?
Remember that cutaneous findings may underestimate the severity of the injury; deeper structures are more likely to be involved as the voltage of the injury increases.
Manage the patient's airway, breathing, and circulation as always, and conduct further workup into potential cardiac involvement with EKGs, as well as analysis of the extremities where entry occurred for muscle breakdown and compartment syndrome.

Clinical pearl on voltage and current:
Voltage can be thought of as equivalent to pressure in a fluid system. Higher voltages are equivalent to higher pressures, but the ultimate damage delivered to the body comes from the rate at which electrical energy surges through it (the current).
Current depends on the tissue it is travelling through, since different tissues have different electrical resistances. Tissues like the stratum corneum of the skin and bone confer the most resistance (thus lower current), whereas skeletal muscle confers lower resistance (thus higher current) due to its water and electrolyte content, which is why injuries like rhabdomyolysis are possible and become more likely with increasing voltage.

References
Khor D, AlQasas T, Galet C, et al. Electrical injuries and outcomes: a retrospective review. Burns. 2023;49(7):1739-1744. doi:10.1016/j.burns.2023.03.015
Durdu T, Ozensoy HS, Erturk N, Yılmaz YB. Impact of voltage level on hospitalization and mortality in electrical injury cases: a retrospective analysis from a Turkish emergency department. Med Sci Monit. 2025;31:e947675. doi:10.12659/MSM.947675
Karray R, Chakroun-Walha O, Mechri F, et al. Outcomes of electrical injuries in the emergency department: epidemiology, severity predictors, and chronic sequelae. Eur J Trauma Emerg Surg. 2025;51(1):85. doi:10.1007/s00068-025-02766-1
Faes TJ, van der Meij HA, de Munck JC, Heethaar RM. The electric resistivity of human tissues (100 Hz-10 MHz): a meta-analysis of review studies. Physiol Meas. 1999;20(4):R1-10. doi:10.1088/0967-3334/20/4/201

Summarized by Dan Orbidan, OMS2 | Edited by Dan Orbidan and Jorge Chalit, OMS4

Donate: https://emergencymedicalminute.org/donate
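To make the voltage-versus-current pearl concrete, Ohm's law ties the two together. A back-of-envelope comparison using commonly cited ballpark skin resistances (the specific values are assumed for the example, not taken from the episode):

```latex
% Ohm's law: the current through the body is voltage divided by resistance.
I = \frac{V}{R}
% Illustrative ballpark: 120 V across dry skin (~100 kOhm) vs. sweat-soaked
% skin (~1 kOhm). Same voltage, a hundredfold difference in current:
I_{\text{dry}} \approx \frac{120\ \text{V}}{10^{5}\ \Omega} = 1.2\ \text{mA},
\qquad
I_{\text{wet}} \approx \frac{120\ \text{V}}{10^{3}\ \Omega} = 120\ \text{mA}
```

The first figure is barely perceptible; the second is in the range associated with ventricular fibrillation, which is why current, not voltage alone, determines the damage.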