From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this really advanced way.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
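To make the Pareto-frontier framing concrete, here is a minimal sketch of picking the non-dominated points from a set of models scored on quality and serving cost. The model names and numbers are invented for illustration, not actual Gemini figures.

```python
# Minimal sketch: find the Pareto-optimal ("non-dominated") models when trading
# off quality (higher is better) against serving cost (lower is better).
# Model names and scores below are invented for illustration only.
models = [
    {"name": "pro",        "quality": 90, "cost_per_mtok": 10.00},
    {"name": "flash",      "quality": 80, "cost_per_mtok": 0.60},
    {"name": "flash-lite", "quality": 70, "cost_per_mtok": 0.15},
    {"name": "old-pro",    "quality": 78, "cost_per_mtok": 12.00},  # dominated by flash
]

def pareto_frontier(models):
    """A model is on the frontier if no other model is at least as good and cheaper,
    or better and at least as cheap."""
    frontier = []
    for m in models:
        dominated = any(
            other["quality"] >= m["quality"]
            and other["cost_per_mtok"] <= m["cost_per_mtok"]
            and (other["quality"] > m["quality"] or other["cost_per_mtok"] < m["cost_per_mtok"])
            for other in models
        )
        if not dominated:
            frontier.append(m["name"])
    return frontier

print(pareto_frontier(models))  # ['pro', 'flash', 'flash-lite']
```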
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey Hinton came up with distillation in 2014.Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed.
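A minimal sketch of the soft-label training Jeff describes, where the student is trained against the teacher's temperature-softened logits rather than only the hard labels. The numbers and hyperparameters below are placeholders, not anything from Gemini's actual pipeline.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label, temperature=2.0, alpha=0.5):
    """Blend of (1) cross-entropy against the hard label and
    (2) cross-entropy against the teacher's softened distribution."""
    student_probs = softmax(student_logits)
    soft_targets = softmax(teacher_logits, temperature)
    student_soft = softmax(student_logits, temperature)
    hard_loss = -np.log(student_probs[hard_label] + 1e-12)
    soft_loss = -np.sum(soft_targets * np.log(student_soft + 1e-12))
    # The T^2 factor keeps the soft-target term on a comparable scale across temperatures.
    return alpha * hard_loss + (1 - alpha) * (temperature ** 2) * soft_loss

# Toy 4-class example: the teacher's logits carry "dark knowledge" about
# which wrong classes are nearly right, which hard labels alone would not.
teacher_logits = np.array([4.0, 1.5, 0.2, -2.0])
student_logits = np.array([2.0, 0.5, 0.1, -1.0])
print(distillation_loss(student_logits, teacher_logits, hard_label=0))
```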
You can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so the original map was Flash, Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have, and also inference time scaling can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products, for various AI Mode and AI Overviews.Shawn Wang [00:08:05]: Oh, my God. Flash powers the AI Mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs; the interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about, like, the capability as: in certain tasks, the Pro model today has saturated some sort of task.
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired an architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something. We don't actually have, you know, much larger than 128K these days, or 256K or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour long videos in the context and then actually being able to make use of that is useful. The kinds of things to explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing where, yeah, you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to that, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission.
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. 
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn-video-into-a-SQL-like-table kind of task.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like BERT was, like, basically immediately put inside of Google search and that improved results a lot, right?
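A toy sketch of the funnel described here: a very cheap filter narrows a huge corpus to tens of thousands of candidates, a mid-sized scorer narrows that to roughly a hundred, and only then does the most capable model read the survivors. The scoring functions, names, and thresholds are invented placeholders, not Google's pipeline.

```python
# Toy multi-stage retrieval funnel: cheap filter -> mid-size reranker -> strongest model.
# All scoring functions are stand-ins; in a real system these would be an inverted
# index or embedding lookup, a small reranking model, and a frontier LLM respectively.

def cheap_lexical_score(query, doc):
    # Stage 1: extremely cheap signal, e.g. term overlap.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def midsize_rerank_score(query, doc):
    # Stage 2: pretend this is a small neural reranker.
    return cheap_lexical_score(query, doc) + 0.1 * min(len(doc), 200) / 200

def answer_with_frontier_model(query, docs):
    # Stage 3: placeholder for handing ~100 documents to the most capable model.
    return f"answer({query!r}, using {len(docs)} documents)"

def retrieval_funnel(query, corpus, k1=30_000, k2=117):
    stage1 = sorted(corpus, key=lambda d: cheap_lexical_score(query, d), reverse=True)[:k1]
    stage2 = sorted(stage1, key=lambda d: midsize_rerank_score(query, d), reverse=True)[:k2]
    return answer_with_frontier_model(query, stage2)

corpus = ["solar panel deployment report 2024", "cat videos compilation",
          "renewable energy statistics by country", "jeff dean latency numbers"]
print(retrieval_funnel("renewable energy solar deployment", corpus))
```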
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
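A rough illustration of that pre-LLM query softening: expand a short user query into a larger set of weighted terms via a synonym table, which only becomes affordable once every term lookup is an in-memory operation rather than a disk seek per shard. The table and weights are made up; a real system would learn these signals rather than hard-code them.

```python
# Sketch of pre-LLM "query softening": expand a 3-4 word query into dozens of
# weighted terms once the index is cheap enough to probe (e.g. fully in memory).
# The synonym table and weights are invented for illustration.
SYNONYMS = {
    "restaurant": ["restaurants", "cafe", "bistro", "diner", "eatery"],
    "cheap": ["inexpensive", "affordable", "budget"],
}

def expand_query(query, original_weight=1.0, synonym_weight=0.4):
    weighted_terms = {}
    for term in query.lower().split():
        weighted_terms[term] = original_weight
        for syn in SYNONYMS.get(term, []):
            # Synonyms count toward relevance, but less than the user's own words.
            weighted_terms[syn] = max(weighted_terms.get(syn, 0.0), synonym_weight)
    return weighted_terms

print(expand_query("cheap restaurant seattle"))
# e.g. {'cheap': 1.0, 'inexpensive': 0.4, ..., 'restaurant': 1.0, 'bistro': 0.4, 'seattle': 1.0}
```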
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having it updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,Shawn Wang [00:30:21]: Why the Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, how would I do that? I could pre-compute the image thumbnails. I could, like, try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, the parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times.
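A back-of-the-envelope sketch of the energy argument, using the orders of magnitude Jeff quotes (roughly a picojoule for a low-precision multiply, roughly a thousand picojoules to move a parameter from SRAM into the multiplier unit). The weight movement is paid once per step and shared across the batch, while the multiplies are not; the constants and model size below are illustrative placeholders, not measurements of any real chip.

```python
# Back-of-the-envelope energy model for serving, with order-of-magnitude placeholders.
PJ_PER_MAC = 1.0             # ~1 picojoule per low-precision multiply-accumulate
PJ_PER_WEIGHT_MOVE = 1000.0  # ~1000 pJ to move one parameter from SRAM to the ALU

def energy_per_token_pj(num_params, batch_size):
    """Energy per generated token: every weight is moved once per step,
    but that move is shared across the whole batch; the MACs are not."""
    move = num_params * PJ_PER_WEIGHT_MOVE / batch_size  # amortized over the batch
    compute = num_params * PJ_PER_MAC                    # one MAC per weight per token
    return move + compute

params = 8e9  # an 8B-parameter model, purely for illustration
for batch in (1, 8, 64, 256):
    joules = energy_per_token_pj(params, batch) * 1e-12
    print(f"batch={batch:>3}: ~{joules:.3f} J per token")
# batch=1 is dominated by data movement; by batch=256 compute dominates.
# This is the same lens under which speculative decoding helps: it raises the
# effective batch of tokens sharing each weight movement.
```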
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously, you know, Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to. Uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost, uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, and if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model onto an ASIC and that's kind of like the most extreme thing. How much of it is it worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict two to six years out what ML computations people will want to run, in a very fast changing field. And so having people with
Interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip. Yeah. Design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of carefulness. Uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you can, might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that I'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. 
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. 
Uh, effectively, that would, that would really make the models improve quite a lot. I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part. That you can score or what are like, yeah, yeah. How, how would you model that, that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, is this things, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes I can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we. the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, Ellen judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those. And seeing which ones actually make a difference is sort of what the whole research aspect of this field is, is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM, eight K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can, and now you're doing IMO and Erdos problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, in a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers as it turns out. 
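Circling back to the critic idea Jeff raised a moment earlier (another model, or the same model prompted differently, scoring what a first model retrieved), here is a minimal sketch of that pattern. The call_model function, prompts, and 0-10 scale are invented placeholders, not a real API.

```python
# Sketch of an actor/critic prompting pattern for non-verifiable tasks:
# one pass proposes candidates, a second pass (same model, judge prompt) scores them.
# `call_model` is a placeholder for whatever LLM client you use.
def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def generate_candidates(query: str, documents: list[str]) -> list[str]:
    # First pass: the "actor" proposes which documents look relevant.
    prompt = f"Query: {query}\nList the documents relevant to the query:\n" + "\n".join(documents)
    return call_model(prompt).splitlines()

def critic_score(query: str, document: str) -> float:
    # Second pass: the same model, prompted as a judge, rates one candidate.
    prompt = (f"You are a strict relevance judge.\n"
              f"Query: {query}\nDocument: {document}\n"
              f"Rate relevance from 0 to 10. Reply with only the number.")
    return float(call_model(prompt))

def top_k(query: str, documents: list[str], k: int = 50) -> list[str]:
    candidates = generate_candidates(query, documents)
    scored = sorted(candidates, key=lambda d: critic_score(query, d), reverse=True)
    return scored[:k]
```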
Um, uh, just to draw a bit on the IMO goal. Um, I'm still not over the fact that a year ago we had alpha proof and alpha geometry and all those things. And then this year we were like, screw that we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this, this question about. Like the merger of like symbolic systems and like, and, and LMS, uh, was a very much core belief. And then somewhere along the line, people would just said, Nope, we'll just all do it in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural net, like in some way of lots of different neurons. And activation patterns firing when we see certain things and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back now that, that approach for solving the problem doesn't seem like it's going to work. I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural net based models. So it never made sense to me to have like completely separate. Uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it's maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think like that IMO with, you know, translating to lean and using lean and then the next year and also a specialized geometry model. And then this year switching to a single unified model. That is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good because it shows you that the capabilities of that general model have improved dramatically and, and now you don't need the specialized model. This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for lots of different, each different problem, right? I have, I want to recognize street signs and something. So I train a street sign. Recognition recognition model, or I want to, you know, decode speech recognition. I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like one of my, uh, so I interviewed ETA who was on, who was on that team. Uh, and he was like, yeah, I, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models, the training models. Yeah. Yeah. And it's kind of interesting that like people with these, this like universal skill set of just like machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh. 
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. God knows, Gemini Pro is maybe one to ten trillion parameters, we don't know. But take the Gemma models, for example: a lot of people want open source local models like that, and those carry some knowledge that isn't necessary, right? They can't know everything. You have the luxury that the big model should be capable of everything, but when you're distilling and going down to the small models, you're actually memorizing things that are not useful. So how do we, I guess, extract that? Can we divorce knowledge from reasoning?
Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space. You might prefer something that is more generally useful in more settings than this obscure fact that it has. So I think that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are, right? It should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning and making the model really good at doing multiple stages of retrieval...
Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?
Jeff Dean [00:52:01]: We're probably not going to train Gemini on my email. We'd rather have a single model that we can then use, with retrieving from my email as a tool, and have the model reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction. That makes sense.
Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or?
Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
So we'll expose it to some robotics data, but if you're trying to build a really good robotics model, you're going to want to start with that base and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer because we didn't get to expose it to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models: it'd be nice to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, that can all be knitted together to work in concert and called upon in different circumstances. If I have a health-related thing, it should enable using this health module in conjunction with the main base model to be even better at those kinds of things.
Shawn Wang [00:54:36]: Installable knowledge.
Jeff Dean [00:54:37]: Right.
Shawn Wang [00:54:38]: Just download it as a package.
Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion tokens or a trillion tokens of health data.
Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.
Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, they're probably not out there. I think that's really the question.
Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that we don't have access to, appropriately. But there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably better than a general model trained on, say, public data.
Shawn Wang [00:55:58]: Yeah, I believe that. By the way, this is somewhat related to the language conversation. I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah.
Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world, and there's no written text.
Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context. But then I think you can put your whole data set in the context, right?
Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of text in the world, and we're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.
Shawn Wang [00:56:49]: Yeah.
Jeff Dean [00:56:49]:
We dive into the latest paper from a team of researchers at IBM: "From Benchmarks to Business Impact: Deploying IBM Generalist Agent in Enterprise Production." We're excited to host several of the paper's authors, who walk us through the research and its implications. The paper reports IBM's experience developing and piloting the Computer Using Generalist Agent (CUGA), which has been open-sourced for the community. CUGA adopts a hierarchical planner–executor architecture with strong analytical foundations, achieving state-of-the-art performance on AppWorld and WebArena. Beyond benchmarks, it was evaluated in a pilot within the Business-Process-Outsourcing talent acquisition domain, addressing enterprise requirements for scalability, auditability, safety, and governance. CUGA code: https://github.com/cuga-project/cuga-agent Paper: https://arxiv.org/abs/2510.23856Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
On Episode 796 of The Core Report, financial journalist Govindraj Ethiraj talks to Prabhu Dhamodharan, Convenor of the Indian Texpreneurs Federation as well as Priyam Gandhi-Mody, Executive Director, Future Economic Cooperation Council (FECC).SHOW NOTES(00:00) Stories of the Day(01:00) Financials are powering the benchmarks as markets stay flat(03:27) Indigo says it has complied with norms it was supposed to in December(04:21) Indian negotiators score fresh wins in evolving India-US tariff deal(05:35) A deep dive into cotton economics behind the Bangladesh reciprocal deal for garment exporters and the India connect(19:40) Not just Delhi, Mumbai has several interesting conferences lined up next week tooRegister for India Finance and Innovation Forum 2026https://tinyurl.com/IFIFCOREhttps://fec-council.org/aboutFor more of our coverage check out thecore.inSubscribe to our NewsletterFollow us on:Twitter |Instagram |Facebook |Linkedin |Youtube
The hosts unpack the latest AI breakthroughs — from Opus 4.6 and AGI debates to robotics, energy innovation, and the future of AI personhood, privacy, and the workforce. Get notified once we go live during Abundance360: https://www.abundance360.com/livestream Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360 Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund:https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy _ Connect with Peter: X Instagram Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Substack Spotify Threads Listen to MOONSHOTS: Apple YouTube – *Recorded on February 6th, 2026 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
I sit down with Morgan Linton, Cofounder/CTO of Bold Metrics, to break down the same-day release of Claude Opus 4.6 and GPT-5.3 Codex. We walk through exactly how to set up Opus 4.6 in Claude Code, explore the philosophical split between autonomous agent teams and interactive pair-programming, and then put both models to the test by having each one build a Polymarket competitor from scratch, live and unscripted. By the end, you'll know how to configure each model, when to reach for one over the other, and what happened when we let them race head-to-head. Timestamps 00:00 – Intro 03:26 – Setting Up Opus 4.6 in Claude Code 05:16 – Enabling Agent Teams 08:32 – The Philosophical Divergence between Codex and Opus 11:11 – Core Feature Comparison (Context Window, Benchmarks, Agentic Behavior) 15:27 – Live Demo Setup: Polymarket Build Prompt Design 18:26 – Race Begins 21:02 – Best Model for Vibe Coders 22:12 – Codex Finishes in Under 4 Minutes 26:38 – Opus Agents Still Running, Token Usage Climbing 31:41 – Testing and Reviewing the Codex Build 40:25 – Opus Build Completes, First Look at Results 42:47 – Opus Final Build Reveal 44:22 – Side-by-Side Comparison: Opus Takes This Round 45:40 – Final Takeaways and Recommendations Key Points Opus 4.6 and GPT-5.3 Codex dropped within 18 minutes of each other and represent two fundamentally different engineering philosophies — autonomous agents vs. interactive collaboration. To use Opus 4.6 properly, you must update Claude Code to version 2.1.32+, set the model in settings.json, and explicitly enable the experimental Agent Teams feature. Opus 4.6's standout feature is multi-agent orchestration: you can spin up parallel agents for research, architecture, UX, and testing — all working simultaneously. GPT-5.3 Codex's standout feature is mid-task steering: you can interrupt, redirect, and course-correct the model while it's actively building. In the live head-to-head, Codex finished a Polymarket competitor in under 4 minutes; Opus took significantly longer but produced a more polished UI, richer feature set, and 96 tests vs. Codex's 10. Agent teams multiply token usage substantially — a single Opus build can consume 150,000–250,000 tokens across all agents. The #1 tool to find startup ideas/trends - https://www.ideabrowser.com LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/ The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/ FIND ME ON SOCIAL X/Twitter: https://twitter.com/gregisenberg Instagram: https://instagram.com/gregisenberg/ LinkedIn: https://www.linkedin.com/in/gisenberg/ Morgan Linton X/Twitter: https://x.com/morganlinton Bold Metrics: https://boldmetrics.com Personal Website: https://linton.ai
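(For readers who want to try the setup steps described above, here is a rough sketch of editing Claude Code's settings.json from Python. The file location is the common default, but the exact model identifier string and the key used to enable the experimental Agent Teams feature are assumptions here; confirm both against the current Claude Code documentation before relying on them.)

```python
# Illustrative sketch only: point Claude Code at a newer Opus model and flip an
# experimental feature flag in settings.json. Key names and values below are
# assumptions, not documented configuration.
import json
from pathlib import Path

settings_path = Path.home() / ".claude" / "settings.json"  # common default location
settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}

settings["model"] = "claude-opus-4-6"  # hypothetical model identifier for Opus 4.6
# Hypothetical toggle for the experimental Agent Teams feature:
settings.setdefault("env", {})["CLAUDE_CODE_ENABLE_AGENT_TEAMS"] = "1"

settings_path.parent.mkdir(parents=True, exist_ok=True)
settings_path.write_text(json.dumps(settings, indent=2))
print(f"Updated {settings_path}")
```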
As he nears the end of his first 100 days at Nintex, Burt Chao is doing something many new CFOs resist: listening more than talking. Understanding the business, its people, and its real growth potential comes before dashboards or directives, he tells us.Chao describes Nintex as a company with a “long and rich history” of helping organizations automate mission-critical work, but one now entering a new season. That evolution centers on orchestration—whether AI-enabled, agent-based, or rooted in RPA—while remaining clear-eyed about identity. Nintex, he explains, will not “become an AI company.” Instead, it aims to help customers leverage AI deliberately, embedding it where it strengthens the foundation of their operations, he tells us.That emphasis on fundamentals shows up quickly in how Chao evaluates performance. In today's environment, “there's no more important number than growth,” he tells us. Margins, profitability, and even rule-of-40 metrics only make sense once leadership understands what growth is possible and how it can be accelerated. Benchmarks matter, but only as tools; every business must be understood on its own terms, he tells us.That discipline has shaped some of the most challenging moments of his career. Chao recalls “shrink to grow” decisions—walking away from investments that still produced revenue but no longer delivered the best return. Those moments are rarely spreadsheet problems alone. They are emotional, cultural, and deeply human, requiring influence rather than authority, he tells us. For Chao, that balance—grounding strategy in numbers while leading people through change—defines the modern CFO role.
Amazon, Google, and OpenAI are increasingly shifting advertising and purchase processes into AI-powered systems. At the same time, Amazon follows up with a new benchmark report and expanded Brandstore insights. Florian puts the five most important updates for advertisers into context. All episode topics at a glance: News #1: Amazon enters agentic commerce with Buy for Me (00:21) News #2: Google launches the Universal Commerce Protocol (01:52) News #3: OpenAI launches its advertising model and ChatGPT Ads (03:08) News #4: Amazon makes a new benchmark report available (04:03) News #5: Amazon expands reporting for Brandstores (04:51) Links & resources: E-Commerce Kalender 2026. Questions & suggestions: Background and further information about the podcast can be found at: https://www.adference.com/podcast-vitamin-a For questions and feedback, message me on LinkedIn: https://www.linkedin.com/in/florian-nottorf/ or leave a comment on YouTube: https://www.youtube.com/@ADFERENCE Mail: vitamin-a@adference.com
Discover how AI is reshaping accessibility in coding and app development. Joe Devon and Eamon McGurlain reveal why benchmarking AI for accessibility is crucial to ensure inclusive technology in 2026. In this in-depth conversation, Steven Scott and Shaun Preece speak with Joe Devon, co‑founder of Global Accessibility Awareness Day, and Eamon McGurlain, VP and global head of accessibility at ServiceNow, about the urgent need for AI‑driven development to prioritise accessibility. The discussion explores the creation of AMAC (AI Model Accessibility Checker), a new benchmarking tool designed to hold AI model developers accountable for generating accessible code. Joe and Eamon share candid insights into the state of AI‑generated websites, the surprising benchmark results for major models like OpenAI, Anthropic, and Google Gemini, and the ongoing challenges of embedding accessibility into fast‑moving AI innovation. Relevant Links: Global Accessibility Awareness Day: https://accessibility.day ServiceNow Accessibility: https://www.servicenow.com AMAC Benchmark Project: https://www.gadfoundation.org/amac Find Double Tap online: YouTube, Double Tap Website --- Follow on: YouTube: https://www.doubletaponair.com/youtube X (formerly Twitter): https://www.doubletaponair.com/x Instagram: https://www.doubletaponair.com/instagram TikTok: https://www.doubletaponair.com/tiktok Threads: https://www.doubletaponair.com/threads Facebook: https://www.doubletaponair.com/facebook LinkedIn: https://www.doubletaponair.com/linkedin Subscribe to the Podcast: Apple: https://www.doubletaponair.com/apple Spotify: https://www.doubletaponair.com/spotify RSS: https://www.doubletaponair.com/podcast iHeartRadio: https://www.doubletaponair.com/iheart About Double Tap: Hosted by the insightful duo, Steven Scott and Shaun Preece, Double Tap is a treasure trove of information for anyone who's blind or partially sighted and has a passion for tech. Steven and Shaun not only demystify tech, but they also regularly feature interviews and welcome guests from the community, fostering an interactive and engaging environment. Tune in every day of the week, and you'll discover how technology can seamlessly integrate into your life, enhancing daily tasks and experiences, even if your sight is limited. "Double Tap" is a registered trademark of Double Tap Productions Inc. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
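(The episode doesn't spell out AMAC's methodology, but the flavor of an automated accessibility check on AI-generated markup can be shown with a toy heuristic: scan the generated HTML and flag images without alt text and inputs without an accessible label. The sketch below uses only the Python standard library and is illustrative, not the AMAC benchmark itself; real audits rely on full WCAG rulesets and tools such as axe-core.)

```python
# Toy accessibility lint for AI-generated HTML: flags <img> tags missing alt text
# and visible <input> elements without an aria-label. Illustrative only.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.issues.append(f"<img src={attrs.get('src', '?')!r}> is missing alt text")
        if tag == "input" and attrs.get("type") not in (None, "hidden") and not attrs.get("aria-label"):
            self.issues.append(f"<input type={attrs.get('type')!r}> has no aria-label")

html = '<img src="chart.png"><input type="text"><img src="logo.png" alt="Company logo">'
checker = AltTextChecker()
checker.feed(html)
for issue in checker.issues:
    print(issue)  # flags the chart image and the text input; the labeled logo passes
```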
Welcome to Part 3 of our 2026 Med Spa Marketing and Growth Series. We are breaking down the exact Facebook and Instagram ad strategies, offers, and creative frameworks we use to consistently generate new patients for med spas across the country.If you want to download the free audit and planning resources, visit:https://go.medspamagicmarketing.com/2026-freebiesIn this episode, Ricky Shockley, owner of Med Spa Magic Marketing, is joined by Lauren, our Lead Digital Marketing Specialist, to walk through the math, strategy, and creative behind high-performing med spa Meta ads. Together, they break down how to choose the right services to advertise, structure offers that lower acquisition costs, and evaluate results with confidence instead of guesswork.In this episode, we cover:✅ The services that perform best in Meta ads and the ones that usually do not✅ The Botox, Dysport, lip filler, and facial offers that consistently drive new patients✅ Customer acquisition cost and initial visit revenue benchmarks for 2026✅ Why some popular treatments struggle in social ads even if they are profitable in-practice✅ How to structure combo offers without hurting conversion✅ The graphic and ad copy framework that lowers cost per lead✅ Why provider photos, trust signals, and bold design matter more than most people think✅ How to think about “good vs maybe vs no-go” services for Facebook and Instagram adsWe also share creative testing insights, including how small changes in provider images and design can dramatically impact lead costs, and how to balance proven static ads with experimental video and reel content.This episode builds directly on the marketing math and prioritization framework from earlier in the series and gives you the tactical playbook for Phase 1 ad spend in 2026. Make sure to subscribe so you do not miss the next episode, where we will walk through campaign setup and show how these offers and frameworks are implemented inside an actual Meta ad account.If you're ready to implement more efficient & effective marketing strategies for your practice, book your FREE strategy session & marketing plan: https://go.medspamagicmarketing.com/scheduleChapters:00:00 Introduction and Why Meta Ads Come First01:18 The Math Behind Offers, CAC, and Patient Quality 05:45 Evergreen Offers and Targeting the “Ready to Buy” Patient09:36 The “Good” Services to Advertise on Meta (Botox, Dysport, Lip, Facial Combo) 13:51 The “Maybe” Services and When They Can Work (Body Sculpting, Weight Loss, Filler) 20:42 The “No-Go” Services for Meta Ads and Why the Math Fails 25:30 The Benchmarks Med Spas Should Expect From Meta Ads 27:15 Botox and Dysport Offer Structures That Convert 33:39 Lip Filler, Sculpting, Weight Loss, and Facial Offer Benchmarks 40:00 The Static Image Ad Creative Framework That Stops the ScrollFollow us on social media: https://www.instagram.com/medspamagicmarketing/https://www.linkedin.com/company/med-spa-magic-marketing/https://www.facebook.com/MedSpaMagicMarketing/https://www.tiktok.com/@medspamagicmarketing
Roger Urwin, Global Head of Investment Content at Willis Towers Watson, reflects on how leading asset owners are rethinking strategic asset allocation amid faster regime change, rising systemic risk, and growing complexity. In a conversation hosted by Mona Naqvi, Managing Director of Research, Advocacy, and Standards at CFA Institute, he draws on decades of experience advising global funds to explain why a total portfolio mindset is gaining traction—and how it reframes goals, governance, and investment decision-making. The discussion explores what it means to invest through a truly holistic lens, why mindset and organizational design matter as much as models, and how the investment profession may need to evolve for a more uncertain world. Listen to the episode to hear Roger Urwin's perspective on the shift from strategic asset allocation to a total portfolio approach. Chapter Markers 00:00 Introduction and Welcome 01:12 Roger Urwin's Career and Industry Background 04:02 Why Total Portfolio Approach Matters Now 04:58 Origins of Strategic Asset Allocation (SAA) 09:32 Benchmarks, Universality, and Communication Challenges 12:01 How TPA Addresses Complexity 14:47 Accessibility vs Flexibility: SAA vs TPA 16:13 Governance Trade-offs and Organizational Design 17:53 Systems Thinking and Market Disruption 19:16 Ecosystem Thinking, Reflexivity, and Risk Models 21:21 Recalibrating Investment Frameworks 22:56 Is TPA More Resilient Than SAA? 24:27 People, Incentives, and Cultural Barriers 28:49 AI, Human Intelligence, and the Future Analyst 30:46 Human + Artificial Intelligence in Investing 34:48 Managing Systemic Risk and Long-Term Horizons 40:57 Value Creation in a World of Real-Time Information 43:55 Stewardship, System-Level Investing, and Externalities 45:00 Can SAA and TPA Coexist? 47:29 Industry Momentum and What Comes Next 49:36 Closing Thoughts and Series Preview
The Dentist Money™ Show | Financial Planning & Wealth Management
On this episode of the Dentist Money Show, Ryan, Matt, and Cody reflect on 2025's biggest themes in dentistry and analyze the results from a recent survey sent out to Dentist Advisors' clients. They unpack the results of the quantitative benchmark data and what it reveals about dentists' income, spending, savings, debt, net worth, and retirement readiness across different age brackets and career stages. They explore how specialization impacts earnings, how student loans and practice debt shape cash flow, and what dentists' savings and investment balances look like in practice. Tune in to hear how dentists are really doing financially and what these numbers mean for building long-term wealth and retirement confidence. Book a free consultation with a CFP® advisor who only works with dentists. Get an objective financial assessment and learn how Dentist Advisors can help you live your rich life.
Stop Guessing - Use Benchmarks to Fund Profitable Growth EP334 Profit With A Plan Podcast Released January 27, 2026 Guest: Jon Morris, CEO of Fiscal Advocate Host: Marcia Riner, CEO of Infinite Profit®, Business Growth Strategist
In this AMA-style episode, Nathan takes on listener questions about whether fine-tuning is really on the way out, what emergent misalignment and weird generalization results tell us, and how to think about continual learning. He talks candidly about how he's personally preparing for AGI—from career choices and investing to what resilience steps he has and hasn't taken. The discussion also covers timelines for job disruption, whether UBI becomes inevitable, how to talk to kids and “normal people” about AI, and which safety approaches are most neglected. Sponsors: Blitzy: Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com MongoDB: Tired of database limitations and architectures that break when you scale? MongoDB is the database built for developers, by developers—ACID compliant, enterprise-ready, and fluent in AI—so you can start building faster at https://mongodb.com/build Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai CHAPTERS: (00:00) Ernie cancer update (04:57) Is fine-tuning dead (Part 1) (12:31) Sponsors: Blitzy | MongoDB (14:57) Is fine-tuning dead (Part 2) (Part 1) (26:56) Sponsors: Serval | Tasklet (29:15) Is fine-tuning dead (Part 2) (Part 2) (29:16) Continual learning cautions (34:59) Talking to normal people (39:30) Personal risk preparation (49:59) Investing around AI safety (01:00:39) Early childhood AI literacy (01:08:55) Work disruption timelines (01:27:58) Nonprofits, need, and UBI (01:34:53) Benchmarks, AGI, and embodiment (01:47:30) AI tooling and platforms (01:57:01) Discourse norms and shaming (02:05:50) Location and safety funding (02:15:17) Turpentine deal and independence (02:24:19) Outro PRODUCED BY: https://aipodcast.ing
The 80/20 Principle of Running a Cash-Based PT Clinic In this episode of the PT Entrepreneur Podcast, Dr. Danny Matta breaks down the 80/20 principle for cash-based clinic owners and simplifies what you should track if you want to grow past yourself. Instead of obsessing over dozens of metrics, Danny argues there are three "dollar productive" KPIs that drive almost all clinic growth. He also explains why provider schedules either snowball fast or stall for a year and how to shorten that ramp from 12+ months to around six months with the right focus. In This Episode, You'll Learn: How Claire can save staff clinicians hours each week and translate that time into meaningful revenue What the 80/20 principle means inside a cash-based clinic The concept of "dollar productive activities" and why it matters The three KPIs Danny thinks drive the majority of clinic growth Why the owner should usually handle discovery calls during growth phases Benchmarks for conversion rates at different stages of scale Why recurring services are the "sneaky" variable that stabilizes schedules How to get a new provider productive faster so clinic growth compounds Claire: Turn Saved Time Into Revenue Without Burning Out Your Team Danny opens with a simple math breakdown clinic owners can understand quickly. Time is valuable, for you and for your staff clinicians. PT Biz has found that Claire, their AI scribe, saves staff clinicians about six hours per week on average. Even if you only reclaim half of that time and convert it into patient care, that is roughly three additional one-hour visits per week per clinician. Example Danny gives: 3 extra visits per week $200 average visit rate $600 more per week per clinician Roughly $30,000 per year in additional revenue per clinician The point is not to overload your team. The point is to use technology to remove the documentation burden so you can increase capacity without increasing burnout. Try Claire free for 7 days: https://meetclaire.ai The 80/20 Principle in a Cash Practice The 80/20 principle is the idea that 20% of your actions lead to 80% of your results. Danny applies this directly to clinic growth. When your clinic is small, it is easy to get busy doing "everything" and tracking a long list of numbers. The problem is most of those activities do not move the business. Instead, Danny recommends narrowing your focus to the most "dollar productive" activities. In other words, the actions and metrics that actually drive revenue and schedule utilization. The Goal: Get a Provider Productive Fast Danny frames the big objective clearly. You want to get your own schedule full enough to hire someone. Then you want any provider you hire to get productive as fast as possible. In PT Biz's world, once a provider reaches roughly 80 to 90 visits per month, it tends to snowball into 100+ pretty quickly. But getting to that point can take some clinics over a year. If you can shorten that ramp to six months, your growth compounds. In a year, you might be able to hire two people instead of one, because each provider becomes profitable faster. The Three Dollar-Productive KPIs Danny says there are three key metrics that drive the majority of growth in a cash-based clinic. Each one represents a drop-off point that can either accelerate growth or quietly crush it. 1) New Patient Volume and Discovery Call Conversion Many owners only track "how many evals we have." Danny says you need to go one step back and track conversion from lead to evaluation. 
There is often a major drop-off between someone becoming a lead and actually booking an evaluation. This is usually happening on discovery calls. Benchmarks Danny shares: During growth, aim for 8 to 10 new patients per provider per month Once stable, new patient volume can drop closer to 5 per month Discovery call to eval conversion should be 70%+ He also makes a strong recommendation: during growth phases, the owner should handle discovery calls. Why? In many clinics, admins convert around 45% to 50%. Owners often convert 80% to 90% because they carry authority and can handle objections better. Danny gives an example: 20 discovery calls at 50% conversion = 10 evals 20 discovery calls at 80% conversion = 16 evals That gap can be the difference between a provider staying empty and a provider getting busy quickly. He also points out that owners sometimes resist this because it feels like a step backward, but the time requirement is smaller than most people assume. If you have 20 calls at 20 minutes each, that is under 10 hours per month and it can dramatically impact growth. 2) Evaluation to Plan of Care Conversion The second KPI is how many evaluations convert into a plan of care. When people do not commit to a plan of care, Danny says many still come back a few times, often around three visits, until symptoms improve and then they disappear. That creates unpredictable revenue and inconsistent schedules. Plan-of-care conversion makes volume and revenue more predictable. Benchmarks Danny shares: Owner: 70% conversion from eval to plan of care Staff providers: 60% conversion is a strong benchmark at scale He emphasizes that this requires quality control and training. Staff clinicians need to be comfortable with diagnosis, prognosis, and presenting a clear plan. Otherwise close rates drift and schedules stall. 3) Recurring Services After Plan of Care Danny calls this the sneaky variable that people forget, but it can make the biggest difference in schedule stability. Hiring a clinician is usually a net negative for the business at first. You are paying salary, taxes, and benefits while they are still ramping up. What stabilizes and compounds a provider schedule is recurring volume. The goal is that roughly 40% of plan-of-care patients transition into some type of recurring service after discharge. Why this matters: Recurring visits fill a predictable chunk of the schedule New patient volume no longer has to carry the whole load Providers get to work with people they enjoy long term It is mentally easier than constant evaluations Danny also explains why this is often hard for staff clinicians. They may feel uncomfortable "selling" ongoing support because they never did it in insurance clinics They may not know what to do clinically once a plan of care ends So this requires two things: education on the clinical delivery of recurring services and training on how to present it confidently. Put It Together: How to Grow Faster Without Tracking Everything Danny's bigger point is that clinic owners often get lost in too many tasks and too many numbers. If you simplify down to these three KPIs and train your team around them, your odds of building provider schedules faster go up dramatically: Discovery call conversion (lead to eval) Eval to plan-of-care conversion Plan-of-care to recurring conversion When those are strong, growth compounds. You hire faster, providers get productive faster, and you get to choose what you want the clinic to become instead of being stuck trying to "just get busy." 
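(The three conversion rates above compose into a simple funnel, so the arithmetic is easy to sanity-check. The sketch below plugs in the benchmark figures quoted in the episode; the monthly lead count is the 20-call example Danny uses, and everything else is illustrative.)

```python
# Quick funnel math using the benchmark conversion rates discussed above.
# Lead volume is the 20-call example from the episode; all figures are illustrative.
leads = 20                  # discovery calls in a month
call_to_eval = 0.80         # owner-level benchmark (admins often land closer to 0.50)
eval_to_plan = 0.70         # owner benchmark; ~0.60 is strong for staff providers
plan_to_recurring = 0.40    # share of plan-of-care patients who continue afterward

evals = leads * call_to_eval
plans = evals * eval_to_plan
recurring = plans * plan_to_recurring

print(f"Evals: {evals:.1f}")                    # 16.0
print(f"Plans of care: {plans:.1f}")            # 11.2
print(f"Recurring patients: {recurring:.1f}")   # ~4.5

# Same lead volume at a 50% call conversion, for comparison:
print(f"Evals at 50% call conversion: {leads * 0.50:.1f}")  # 10.0
```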
Resources Mentioned Try Claire free for 7 days: https://meetclaire.ai Talk with a PT Biz advisor: https://vip.physicaltherapybiz.com/discovery-call Join the free Part Time to Full Time 5-Day Challenge: https://physicaltherapybiz.com/challenge
MongoBleed and a recent OWASP CRS bypass show how parsing problems remain a source of security flaws regardless of programming language. We talk with Kalyani Pawar about how these problems rank against the Top 25 CWEs for 2025 and what it means for relying on LLMs to generate code. Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-366
Episode #365 - What Percentage of Revenue Should Email Marketing Drive? (Benchmarks for Ecommerce Brands) If you've ever wondered, "Is my email program doing what it's supposed to be doing?" — you're not alone. Because everyone throws around percentages like they're universal truth… "Email should drive 30%." "No, it should be 40%." "Wait, mine is 12%… is that bad?"
The Senior Care Industry Netcast w/ Valerie V RN BSN & Dawn Fiala
Most home care agencies are not underperforming because they lack effort. They are underperforming because they lack benchmarks. In our January 14, 2026 GoCarePro™ Mastermind, we walked through what actually separates average agencies from exceptional ones, and it is not hustle or luck. It is measurement, systems, and consistency. This session focused on real, operational benchmarks across the entire agency: • Sales and marketing activity that drives predictable referrals • Scheduling and client services metrics that protect revenue • Caregiver recruiting and retention numbers that stabilize operations • Weekly KPIs that prevent surprises instead of reacting to them • Why dual-channel marketing matters: strong online presence plus boots on the ground. We also covered the most common traps that keep agencies stuck: • Not knowing true conversion rates • Relying too heavily on one referral source • Slow or inconsistent follow-up • Trying to do everything alone • Growing on broken systems. The core message was simple but uncomfortable: You cannot improve what you do not measure. Top agencies are not guessing. They know their weekly numbers. They review them consistently. They fix systems before they scale. This Mastermind was recorded and is available as a video for members who want clarity on where their agency stands and what to fix first. Exceptional agencies are not lucky. They are intentional. If you want growth that does not collapse under pressure, benchmarks are not optional. #HomeCareBenchmarks #HomeCareAgencyGrowth #HomeCareSales #CaregiverRetention #OperationalExcellence #GoCarePro #HomeCareLeadership Continuum Mastery Circle Intro Visit our website at https://asnhomecaremarketing.com Get Your 11 Free Home Care Marketing Guides: https://bit.ly/homecarerev
The Dentist Money™ Show | Financial Planning & Wealth Management
On this episode of the Dentist Money Show, Ryan, Matt, and Cody reflect on 2025's biggest themes in dentistry and analyze the results from a recent survey sent out to Dentist Advisors' clients. They unpack the results of the qualitative benchmark data and what it reveals about dentists' burnout, work schedules, vacations, and financial satisfaction. They discuss how dentists are defining their workload beyond clinical days, why leadership days can feel just as draining, and why burnout often persists even when dentists work fewer clinical days. Tune in to hear key takeaways about how dentists are feeling about their careers, work-life balance, vacations, savings, and more. And stay tuned for part two (coming soon!) which includes quantitative data like dentists' average savings rate, investment balance, net worth, and more! Book a free consultation with a CFP® advisor who only works with dentists. Get an objective financial assessment and learn how Dentist Advisors can help you live your rich life.
In this episode of Eye on AI, Craig Smith speaks with Jonathan Wall, founder and CEO of Runloop AI, about why AI agents require an entirely new approach to compute infrastructure. Jonathan explains why agents behave very differently from traditional servers, why giving agents their own isolated computers unlocks new capabilities, and how agent-native infrastructure is emerging as a critical layer of the AI stack. The conversation also covers scaling agents in production, building trust through benchmarking and human-in-the-loop workflows, and what agent-driven systems mean for the future of enterprise work. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) Why AI Agents Require a New Infrastructure Paradigm (01:38) Jonathan Wall's Journey: From Google Infrastructure to AI Agents (04:54) Why Agents Break Traditional Cloud and Server Models (07:36) Giving AI Agents Their Own Computers (Devboxes Explained) (12:39) How Agent Infrastructure Fits into the AI Stack (14:16) What It Takes to Run Thousands of AI Agents at Scale (17:45) Solving the Trust and Accuracy Problem with Benchmarks (22:28) Human-in-the-Loop vs Autonomous Agents in the Enterprise (27:24) A Practical Walkthrough: How an AI Agent Runs on Runloop (30:28) How Agents Change the Shape of Compute (34:02) Fine-Tuning, Reinforcement Learning, and Faster Iteration (38:08) Who This Infrastructure Is Built For: Startups to Enterprises (41:17) AI Agents as Coworkers and the Future of Work (46:37) The Road Ahead for Enterprise-Grade Agent Systems
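(Runloop's actual API isn't covered in these notes, so as a stand-in, here is a generic sketch of the underlying idea of giving each agent task its own disposable, isolated environment rather than a shared long-lived server. It shells out to Docker purely for illustration and assumes Docker plus a local python:3.12-slim image are already available; a devbox product would handle provisioning, snapshots, and networking for you.)

```python
# Generic illustration of "one isolated computer per agent task" using Docker.
# Not Runloop's API; assumes Docker is installed and python:3.12-slim is pulled locally.
import subprocess
import uuid

def run_in_sandbox(command: str, image: str = "python:3.12-slim", timeout: int = 120) -> str:
    """Run a single agent command inside a fresh, disposable container."""
    name = f"agent-sandbox-{uuid.uuid4().hex[:8]}"
    result = subprocess.run(
        ["docker", "run", "--rm", "--name", name,
         "--network", "none",   # no network: the agent can't reach anything outside
         "--memory", "512m",    # cap resources so a runaway agent stays contained
         image, "sh", "-c", command],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    # Each call gets a brand-new filesystem, so agents can't trample each other's state.
    print(run_in_sandbox("python -c 'print(2 + 2)'"))
```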
The CPG Guys are joined in this episode by Sarah Marzano, Principal Analyst for Retail and Commerce Media at EMarketer, the go-to forecasts, data, and insights provider for marketing, advertising, andcommerce professionals.Follow Sarah on LinkedIn at: https://www.linkedin.com/in/sarahzmarzano/Follow EMarketer on LinkedIn at: https://www.linkedin.com/company/emarketer-inc/Follow EMarketer online at: http://emarketer.comLearn more about Sarah's research report “Retail Media Networks: Trends, Benchmarks, and Leadership in 2025” here: https://www.emarketer.com/content/retail-media-networks-trends-benchmarks-leadership-2025Sarah answers these questions:What led you to develop this new report on retail media networks. What were you hearing in the industry that made you believe this might resonate in terms of thought leadership?Your report highlights *strong strategic conviction but uneven operational maturity* across RMNs. Where do you see the biggest disconnect between ambition and enablement today—and what's driving that gap?Any thoughts on how organizations can choose a model that will drive success?Fewer than half of surveyed RMNs have cross-functional KPIs, and fewer than one-third tie incentives to merchandising teams. What's preventing incentive alignment, and what does “good” look like?Measurement and reporting ranked as *the most pressing challenge* for RMNs—especially proving incrementality. What innovations or methodological shifts do you expect will actually move the industry forward?Survey respondents anticipate future growth from a mix of in-store, onsite, and offsite channels. What formats or surfaces do you see emerging as the *next big accelerators* of RMN revenue?Respondents believe zero-click search and agentic AI will be *the most disruptive forces* shaping retail media over the next three years. How should brands and RMNs prepare for this shift?RMNs say that next year, their top priorities will shift toward *tech modernization, data infrastructure, and off-site media acceleration.* What will separate the networks that actually deliver from those that simply aspire?How do interested professionals learn more about this research report?CPG Guys Website: http://CPGguys.comFMCG Guys Website: http://FMCGguys.comSheCOMMERCE Website: https://shecommercepodcast.com/Rhea Raj's Website: http://rhearaj.comLara Raj in Katseye: https://www.katseye.world/DISCLAIMER: The content in this podcast episode is provided for general informational purposes only. By listening to our episode, you understand that no information contained in this episode should be construed as advice from CPGGUYS, LLC or the individual author, hosts, or guests, nor is it intended to be a substitute for research on any subject matter. Reference to any specific product or entity does not constitute an endorsement or recommendation by CPGGUYS, LLC. The views expressed by guests are their own and their appearance on the program does not imply an endorsement of them or any entity they represent.CPGGUYS LLC expressly disclaims any and all liability or responsibility for any direct, indirect, incidental, special, consequential or other damages arising out of any individual's use of, reference to, or inability to use this podcast or the information we presented in this podcast.
HIRING BENCHMARKS FOR THE 16 MOST COMMON OCCUPATIONS IN THE USA We all want to be 'data-driven' but most of the time the only data we have access to are those which we have recorded of ourselves and our own activity. This is vitally important for tracking your progress over time, but not the complete picture when it comes to how you are doing against the market and against the competition. In partnership with our friends Joveo, we are pulling together behaviour data from job candidates on the 16 most common job roles in the USA - bookkeeper, truck driver, construction worker, nurse practitioner, teacher - and finding out what the state of the labour market is: what is the size of the labour pool, what is the average number of job applications per role in these sectors, what are the most important sources of hire, what is the CPA for the employer of each of these roles, what is the geographic distribution of the talent and so on. Essential viewing for any US recruiter for some of the most employed positions in the country. We're live on Friday 9th January, 2pm GMT / 9am ET. Register by clicking on the green button (save my spot) and follow the channel here (recommended). Ep353 is sponsored by Joveo. As the global leader in AI-powered, high-performance recruitment marketing, Joveo is transforming talent attraction and recruitment media buying for the world's largest employers, staffing firms, RPOs, and media agencies. The Joveo platform enables businesses to attract, source, engage, and hire the best candidates on time and within budget. Powering millions of jobs every day, Joveo's AI-led recruitment marketing platform uses advanced data science and machine learning to dynamically manage and optimize talent sourcing and applications across all online channels, while providing real-time insights at every step of the job seeker journey, from click to hire. For more information about Joveo's award-winning platform and solutions, visit www.joveo.com.
Episode Notes AI has become one of the biggest conversations in HR — but what's actually happening inside organizations today? In this session, we unpack Phenom's State of AI & Automation for HR: 2026 Benchmarks Report, sharing how organizations are using AI to automate hiring workflows, scale personalization, and augment human capability. We'll explore maturity patterns across industries, highlight the real-world impact already being achieved — even at early adoption stages — and map the journey toward intelligent agents that operate responsibly and measurably. This is a data-driven conversation about the future of work and the pivotal role HR leaders play in guiding their organizations toward scalable AI productivity.
US President Trump announced on Saturday that the US successfully carried out a large-scale strike against Venezuela, while he added that President Maduro and his wife were captured and flown out of Venezuela. US President Trump said they are ready to stage a second strike if necessary and had assumed a second wave was needed, but now probably not. US President Trump signalled the US could widen its focus in the region to Cuba, and he will be meeting with House Republicans in a closed-door meeting on Tuesday. Further, Trump said it "sounds good" to him regarding whether there will be an operation in Colombia. European bourses are broadly in the green; US equity futures are mixed, with outperformance in the NQ. ASML +3% named top pick at Bernstein. DXY firmer on haven appeal, G10s subdued across the board to various degrees; global fixed income slightly firmer with non-geopolitical updates somewhat light, ISM ahead. Choppy price action in the crude complex as geopolitics remain in focus; XAU gains on safe-haven demand; copper rises following strength in the semiconductor sector. Looking ahead, highlights include US ISM Manufacturing PMI (Dec). Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk
From creating SWE-bench in a Princeton basement to shipping CodeClash, SWE-bench Multimodal, and SWE-bench Multilingual, John Yang has spent the last year and a half watching his benchmark become the de facto standard for evaluating AI coding agents—trusted by Cognition (Devin), OpenAI, Anthropic, and every major lab racing to solve software engineering at scale. We caught up with John live at NeurIPS 2025 to dig into the state of code evals heading into 2026: why SWE-bench went from ignored (October 2023) to the industry standard after Devin's launch (and how Walden emailed him two weeks before the big reveal), how the benchmark evolved from Django-heavy to nine languages across 40 repos (JavaScript, Rust, Java, C, Ruby), why unit tests as verification are limiting and long-running agent tournaments might be the future (CodeClash: agents maintain codebases, compete in arenas, and iterate over multiple rounds), the proliferation of SWE-bench variants (SWE-bench Pro, SWE-bench Live, SWE-Efficiency, AlgoTune, SciCode) and how benchmark authors are now justifying their splits with curation techniques instead of just "more repos," why Tau-bench's "impossible tasks" controversy is actually a feature not a bug (intentionally including impossible tasks flags cheating), the tension between long autonomy (5-hour runs) vs. interactivity (Cognition's emphasis on fast back-and-forth), how Terminal-bench unlocked creativity by letting PhD students and non-coders design environments beyond GitHub issues and PRs, the academic data problem (companies like Cognition and Cursor have rich user interaction data, academics need user simulators or compelling products like LMArena to get similar signal), and his vision for CodeClash as a testbed for human-AI collaboration—freeze model capability, vary the collaboration setup (solo agent, multi-agent, human+agent), and measure how interaction patterns change as models climb the ladder from code completion to full codebase reasoning. 
We discuss: John's path: Princeton → SWE-bench (October 2023) → Stanford PhD with Diyi Yang and the Iris Group, focusing on code evals, human-AI collaboration, and long-running agent benchmarks The SWE-bench origin story: released October 2023, mostly ignored until Cognition's Devin launch kicked off the arms race (Walden emailed John two weeks before: "we have a good number") SWE-bench Verified: the curated, high-quality split that became the standard for serious evals SWE-bench Multimodal and Multilingual: nine languages (JavaScript, Rust, Java, C, Ruby) across 40 repos, moving beyond the Django-heavy original distribution The SWE-bench Pro controversy: independent authors used the "SWE-bench" name without John's blessing, but he's okay with it ("congrats to them, it's a great benchmark") CodeClash: John's new benchmark for long-horizon development—agents maintain their own codebases, edit and improve them each round, then compete in arenas (programming games like Halite, economic tasks like GDP optimization) SWE-Efficiency (Jeffrey Maugh, John's high school classmate): optimize code for speed without changing behavior (parallelization, SIMD operations) AlgoTune, SciCode, Terminal-bench, Tau-bench, SecBench, SRE-bench: the Cambrian explosion of code evals, each diving into different domains (security, SRE, science, user simulation) The Tau-bench "impossible tasks" debate: some tasks are underspecified or impossible, but John thinks that's actually a feature (flags cheating if you score above 75%) Cognition's research focus: codebase understanding (retrieval++), helping humans understand their own codebases, and automatic context engineering for LLMs (research sub-agents) The vision: CodeClash as a testbed for human-AI collaboration—vary the setup (solo agent, multi-agent, human+agent), freeze model capability, and measure how interaction changes as models improve — John Yang SWE-bench: https://www.swebench.com X: https://x.com/jyangballin Chapters 00:00:00 Introduction: John Yang on SWE-bench and Code Evaluations 00:00:31 SWE-bench Origins and Devon's Impact on the Coding Agent Arms Race 00:01:09 SWE-bench Ecosystem: Verified, Pro, Multimodal, and Multilingual Variants 00:02:17 Moving Beyond Django: Diversifying Code Evaluation Repositories 00:03:08 Code Clash: Long-Horizon Development Through Programming Tournaments 00:04:41 From Halite to Economic Value: Designing Competitive Coding Arenas 00:06:04 Ofir's Lab: SWE-ficiency, AlgoTune, and SciCode for Scientific Computing 00:07:52 The Benchmark Landscape: TAU-bench, Terminal-bench, and User Simulation 00:09:20 The Impossible Task Debate: Refusals, Ambiguity, and Benchmark Integrity 00:12:32 The Future of Code Evals: Long Autonomy vs Human-AI Collaboration 00:14:37 Call to Action: User Interaction Data and Codebase Understanding Research
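(For readers new to how unit-test-based verification works in SWE-bench-style evals: a candidate patch is applied to the repository at a pinned commit, the project's tests are rerun, and the task counts as resolved only if the previously failing tests now pass. The sketch below illustrates that loop in miniature; it is not the actual SWE-bench harness, and the repo path, commit, and test IDs are placeholders.)

```python
# Minimal illustration of unit-test-based verification for a model-generated patch.
# Not the real SWE-bench harness; paths, commit, and test IDs below are placeholders.
import subprocess

def sh(args, cwd):
    return subprocess.run(args, cwd=cwd, capture_output=True, text=True)

def evaluate_patch(repo_dir: str, base_commit: str, patch_file: str, fail_to_pass: list[str]) -> bool:
    """Apply the candidate patch at the pinned commit and rerun the target tests."""
    sh(["git", "checkout", "-f", base_commit], cwd=repo_dir)
    applied = sh(["git", "apply", patch_file], cwd=repo_dir)
    if applied.returncode != 0:
        return False                      # patch didn't even apply cleanly
    tests = sh(["python", "-m", "pytest", "-q", *fail_to_pass], cwd=repo_dir)
    return tests.returncode == 0          # resolved iff the failing tests now pass

if __name__ == "__main__":
    ok = evaluate_patch(
        repo_dir="./some-repo",                        # placeholder checkout
        base_commit="abc1234",                         # pinned commit for the task
        patch_file="model_patch.diff",                 # patch produced by the agent
        fail_to_pass=["tests/test_bug.py::test_fix"],  # placeholder test ID
    )
    print("resolved" if ok else "not resolved")
```

CodeClash's point is that this pass/fail check is only a narrow slice of real development, which is why the tournament-style, multi-round setup described above drops the unit-test oracle in favor of competitive arenas.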
Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends
Matthew Fitzpatrick is the CEO at Invisible Technologies. Learn about Invisible.
Salim Ismail is the founder of OpenExO.
Dave Blundin is the founder & GP of Link Ventures.
Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified.
My companies:
Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/
Connect with Peter: X Instagram
Connect with Matthew: LinkedIn
Connect with Dave: X LinkedIn
Connect with Salim: X
Join Salim's Workshop to build your ExO
Connect with Alex: Website LinkedIn X Email
Listen to MOONSHOTS: Apple YouTube
*Recorded on December 16th, 2025
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Is a car that wins a Formula 1 race the best choice for your morning commute? Probably not. In this sponsored deep dive with Prolific, we explore why the same logic applies to Artificial Intelligence. While models are currently shattering records on technical exams, they often fail the most important test of all: **the human experience.**

Why High Benchmark Scores Don't Mean Better AI

Joining us are **Andrew Gordon** (Staff Researcher in Behavioral Science) and **Nora Petrova** (AI Researcher) from **Prolific**. They reveal the hidden flaws in how we currently rank AI and introduce a more rigorous, "humane" way to measure whether these models are actually helpful, safe, and relatable for real people.

---

Key Insights in This Episode:

* *The F1 Car Analogy:* Andrew explains why a model that excels at "Humanity's Last Exam" might be a nightmare for daily use. Technical benchmarks often ignore the nuances of human communication and adaptability.
* *The "Wild West" of AI Safety:* As users turn to AI for sensitive topics like mental health, Nora highlights the alarming lack of oversight and the "thin veneer" of safety training—citing recent controversial incidents like Grok-3's "Mecha Hitler."
* *Fixing the "Leaderboard Illusion":* The team critiques current popular rankings like Chatbot Arena, discussing how anonymous, unstratified voting can lead to biased results and how companies can "game" the system.
* *The Xbox Secret to AI Ranking:* Discover how Prolific uses *TrueSkill*—the same algorithm Microsoft developed for Xbox Live matchmaking—to create a fairer, more statistically sound leaderboard for LLMs.
* *The Personality Gap:* Early data from the **HUMAINE Leaderboard** suggests that while AI is getting smarter, it is actually performing *worse* on metrics like personality, culture, and "sycophancy" (the tendency for models to become annoying "people-pleasers").

---

About the HUMAINE Leaderboard

Moving beyond simple "A vs. B" testing, the researchers discuss their new framework that samples participants based on *census data* (Age, Ethnicity, Political Alignment). By using a representative sample of the general public rather than just tech enthusiasts, they are building a standard that reflects the values of the real world.

*Are we building models for benchmarks, or are we building them for humans?
It's time to change the scoreboard.*

Rescript link: https://app.rescript.info/public/share/IDqwjY9Q43S22qSgL5EkWGFymJwZ3SVxvrfpgHZLXQc

---

TIMESTAMPS:
00:00:00 Introduction & The Benchmarking Problem
00:01:58 The Fractured State of AI Evaluation
00:03:54 AI Safety & Interpretability
00:05:45 Bias in Chatbot Arena
00:06:45 Prolific's Three Pillars Approach
00:09:01 TrueSkill Ranking & Efficient Sampling
00:12:04 Census-Based Representative Sampling
00:13:00 Key Findings: Culture, Personality & Sycophancy

---

REFERENCES:
Paper:
[00:00:15] MMLU https://arxiv.org/abs/2009.03300
[00:05:10] Constitutional AI https://arxiv.org/abs/2212.08073
[00:06:45] The Leaderboard Illusion https://arxiv.org/abs/2504.20879
[00:09:41] HUMAINE Framework Paper https://huggingface.co/blog/ProlificAI/humaine-framework
Company:
[00:00:30] Prolific https://www.prolific.com
[00:01:45] Chatbot Arena https://lmarena.ai/
Person:
[00:00:35] Andrew Gordon https://www.linkedin.com/in/andrew-gordon-03879919a/
[00:00:45] Nora Petrova https://www.linkedin.com/in/nora-petrova/
Algorithm:
[00:09:01] Microsoft TrueSkill https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/
Leaderboard:
[00:09:21] Prolific HUMAINE Leaderboard https://www.prolific.com/humaine
[00:09:31] HUMAINE HuggingFace Space https://huggingface.co/spaces/ProlificAI/humaine-leaderboard
[00:10:21] Prolific AI Leaderboard Portal https://www.prolific.com/leaderboard
Dataset:
[00:09:51] Prolific Social Reasoning RLHF Dataset https://huggingface.co/datasets/ProlificAI/social-reasoning-rlhf
Organization:
[00:10:31] MLCommons https://mlcommons.org/
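To make the TrueSkill idea above concrete, here is a minimal sketch of how pairwise human preference votes could be turned into a skill-based leaderboard using the open-source `trueskill` Python package. The model names and votes are invented for illustration; Prolific's actual pipeline layers census-based, stratified sampling on top of the rating update.

```python
# Minimal sketch: treat each human preference vote between two models as a
# 1v1 match and update TrueSkill ratings (pip install trueskill).
import trueskill

env = trueskill.TrueSkill(draw_probability=0.05)  # allow occasional ties
ratings = {name: env.create_rating() for name in ["model_a", "model_b", "model_c"]}

# Each vote is (winner, loser) from a human pairwise comparison (invented data).
votes = [("model_a", "model_b"), ("model_c", "model_a"), ("model_a", "model_b")]

for winner, loser in votes:
    ratings[winner], ratings[loser] = env.rate_1vs1(ratings[winner], ratings[loser])

# Rank conservatively by mu - 3*sigma, a common TrueSkill leaderboard convention.
leaderboard = sorted(ratings.items(), key=lambda kv: kv[1].mu - 3 * kv[1].sigma, reverse=True)
for name, r in leaderboard:
    print(f"{name}: mu={r.mu:.2f}, sigma={r.sigma:.2f}")
```

Ranking by mu minus three sigma is the usual conservative convention: a model only climbs the board once the system is confident about its rating, which is part of what makes this approach more statistically sound than raw win counts.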
So much for OpenAI's triumphant return Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this episode of The Metrics Brothers, hosts Ray "Growth" Rike and Dave "CAC" Kellogg provide a critical deep dive into the 2025 SaaS Benchmark Report published by High Alpha. Known for their analytical and sometimes "crusty" approach, the Metrics Brothers dissect the data behind 800+ SaaS companies to separate real market trends from report commentary.

Key Highlights & Benchmarks

The brothers break down the report's most significant findings with their signature skepticism regarding "correlation vs. causation."

The AI Growth Premium: Companies with AI at their core are growing significantly faster than those using AI as a supporting feature. For instance, in the $1–5M ARR band, AI-core companies achieved a median growth of 110%, compared to 40% for their peers.

The "Lean Team" Era: Efficiency is surging as headcount falls. Median revenue per employee has jumped to $129K–$173K, with top-tier public companies hitting over $283K. The hosts note that engineering and support have seen the largest headcount reductions due to AI automation.

Venture Rebound (with a Caveat): While quarterly VC deal value has returned to near 2021 levels (~$80B), the capital is highly concentrated. Over half of all VC funding is currently flowing into AI startups, often in massive "mega-rounds."

In-Office vs. Remote: For the second consecutive year, the data suggests that in-office or hybrid teams are growing faster (42% median) than fully remote teams (31% median).

As always, Ray and Dave offer practical advice for founders and GTM leaders:

"Read the data, but watch out for the commentary." While the data is good, some commentary and conclusions in the report imply causation where there is at best some level of correlation, such as why companies stay private longer or how AI "drives" growth.

Retention is King: The strongest growth outcomes are found where high Net Revenue Retention (NRR) meets short CAC payback periods.

Outcome-Based Pricing: The brothers highlight the shift toward outcome-based and hybrid pricing models as a primary driver for best-in-class NRR in 2025.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
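Since the episode leans on revenue per employee, NRR, and CAC payback, here is a minimal sketch of how those figures are typically computed, using invented numbers; exact definitions vary by report, so treat this as an illustration of the arithmetic rather than High Alpha's methodology.

```python
# Illustrative SaaS metric arithmetic with invented numbers (not High Alpha's data).
arr = 4_000_000                    # annual recurring revenue, $
employees = 28
starting_arr_cohort = 3_000_000    # ARR from last year's customers, 12 months ago
current_arr_cohort = 3_450_000     # ARR from those same customers today
new_arr_added = 900_000            # new ARR booked over the period
sales_marketing_spend = 1_200_000  # S&M spend over the same period
gross_margin = 0.78

revenue_per_employee = arr / employees

# Net Revenue Retention: same-cohort ARR now vs. a year ago (expansion minus churn).
nrr = current_arr_cohort / starting_arr_cohort

# CAC payback (months): how long gross profit on new ARR takes to repay S&M spend.
cac_payback_months = sales_marketing_spend / (new_arr_added * gross_margin) * 12

print(f"revenue/employee = ${revenue_per_employee:,.0f}")
print(f"NRR = {nrr:.0%}")
print(f"CAC payback = {cac_payback_months:.1f} months")
```

With these made-up inputs, revenue per employee lands around $143K, NRR at 115%, and CAC payback near 20 months, which shows why the report pairs retention and payback: either number alone says little about growth efficiency.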
US President Trump is to give an address to the nation on Wednesday night, live from the White House at 21:00EST (02:00GMT Thursday). The White House Press Secretary said the address will focus on accomplishments, cover what is to come, and may tease new-year policies.
European bourses are mostly stronger this morning, with US equity futures also posting modest upside.
DXY is firmer; the GBP has been hit after the UK's cooler-than-expected inflation report, which all but cements a BoE cut this week.
Gilts outperform on the UK's data whilst USTs hold a downward bias.
Crude benchmarks reverse Tuesday's losses following the blockade of Venezuelan oil tankers and reports of new Russian energy sanctions if Russia rejects the peace deal; XAU and Copper trade with slight gains.
Looking ahead, highlights include Fed's Waller, Williams & Bostic, supply from the US, earnings from Micron, and New Zealand GDP (Q3).
Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk.
Every week brings a new AI benchmark. Higher scores. Bigger claims. Louder voices insisting this changes everything. And yet, when you put AI in front of a real business problem, none of that noise seems to help. In this episode, Rob and Justin dig into why AI benchmarks often feel strangely meaningless in practice and why that disconnect is the point. Benchmarks aren't useless. They're just answering a different question than the one most businesses are asking. This isn't just random conjecture either. Rob walks through what he's learned building actual AI workflows and why a twenty percent improvement on a leaderboard rarely translates into anything you can feel on the job. They talk about why model choice usually isn't the bottleneck, why swapping models should be easy if you've built things the right way, and why the most successful AI work rarely shows up as a flashy demo. Most of the value is happening quietly, off-screen, inside systems that look a lot more like normal software than artificial intelligence. Rob and Justin also talk about why explaining AI is often harder than building it. The first demo people see tends to stick, even when it's the wrong one. Consumer AI feels magical. Business AI face plants unless it's built with intent, structure, and real context. This episode gives leaders better language for that gap, without hype or panic. If you're done chasing benchmarks and just want a way to think about AI that survives contact with reality, this episode's for you.
This week on The Geek in Review, we sit down with Jennifer McIver, Legal Ops and Industry Insights at Wolters Kluwer ELM Solutions. We open with Jennifer's career detour from aspiring forensic pathologist to practicing attorney to legal tech and legal ops leader, sparked by a classic moment of lawyer frustration, a slammed office door, and a Google search for “what else can I do with my law degree.” From implementing Legal Tracker at scale, to customer success with major clients, to product and strategy work, her path lands in a role built for pattern spotting, benchmarking, and translating what legal teams are dealing with into actionable insights.Marlene pulls the thread on what the sharpest legal ops teams are doing with their data right now. Jennifer's answer is refreshingly practical. Visibility wins. Dashboards tied to business strategy and KPIs beat “everything everywhere all at once” reporting. She talks through why the shift to tools like Power BI matters, and why comfort with seeing the numbers is as important as the numbers themselves. You cannot become a strategic partner if the data stays trapped inside the tool, or inside the legal ops team, or inside someone's head.Then we get into the messy part, which is data quality and data discipline. Jennifer points out the trap legal teams fall into when they demand 87 fields on intake forms and then wonder why nobody enters anything, or why every category becomes “Other,” also known as the graveyard of analytics. Her suggestion is simple. Pick the handful of fields that tell a strong story, clean them up, and get serious about where the data lives. She also stresses the role of external benchmarks, since internal trends mean little without context from market data.Greg asks the question on everyone's bingo card, what is real in AI today versus what still smells like conference-stage smoke. Jennifer lands on something concrete, agentic workflows for the kind of repeatable work legal ops teams do every week. She shares how she uses an agent to turn event notes into usable internal takeaways, with human review still in the loop, and frames the near-term benefit as time back and faster cycles. She also calls out what slows adoption down inside many companies, internal security and privacy reviews, plus AI committees that sometimes lag behind the teams trying to move work forward.Marlene shifts to pricing, panels, AFAs, and what frustrates GCs and legal ops leaders about panel performance. Jennifer describes two extremes, rigid rate programs with little conversation, and “RFP everything” process overload. Her best advice sits in the middle, talk early, staff smart, and match complexity to the right team, so cost and risk make sense. She also challenges the assumption that consolidation always produces value. Benchmarking data often shows you where you are overpaying for certain work types, even when volume discounts look good on paper.We close with what makes a real partnership between corporate legal teams and firms, and Jennifer keeps returning to two themes, communication and transparency, with examples. 
Jennifer's crystal ball for 2026 is blunt and useful: data first, start the hard conversations now, and take a serious look at roles and skills inside legal ops, because the job is changing fast.
Links:
Jennifer McIver's LinkedIn page
Wolters Kluwer ELM Solutions homepage
LegalVIEW Insights reports homepage
LegalVIEW DynamicInsights page
TyMetrix 360° page
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube
[Special Thanks to Legal Technology Hub for sponsoring this episode.]
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
[325] In 2024, Qnity's research arm, the Qnity Institute, conducted a study. At the heart of it was one core question: ‘What does a financially successful salon look like?' In this episode, Tom Kuhn, CEO of Qnity—an organisation offering education and tools for economic empowerment in the beauty industry—shares findings and insights from the subsequent 50-page report, published in June 2025. The ‘2024 Salon P&L Benchmark Study' set out to paint a clearer picture of the industry, establish reliable benchmarks, and deliver actionable insights that could help improve profitability, and this conversation aims to do the same, equipping salon owners with prompts for reflection or conversation starters about the financial health of their own business going into 2026. Find out more about Qnity, and the Qnity Institute. To participate in the 2025 version of the study, click here. Follow Tom Kuhn on Instagram: @tomhkuhn Learn more about the Salon Owners Summit: https://www.salonownersummit.com/ Enjoyed the episode? Leave a rating and review on Apple Podcasts! Click here to subscribe to the PhorestFM email newsletter or here to learn more about Phorest Salon Software. This episode was edited and mixed by Audio Z: Montreal's cutting-edge post-production studio for creative minds looking to have their vision professionally produced and mixed. Great music makes great moments.
Karl-Moritz Hermann, founder of reliant.ai, talks about the journey from DeepMind to his own AI startup. He shares why they deliberately chose B2B over B2C, how they used benchmarks to identify real problems, and why 85% accuracy can sometimes be excellent and sometimes catastrophic. What you'll learn: How to find the right AI product strategy. How to identify real problems. Why benchmarks are decisive. The right mix of research and application. EVERYTHING ABOUT UNICORN BAKERY: https://stan.store/fabiantausch More on Karl-Moritz: LinkedIn: https://www.linkedin.com/in/karlmoritz/ Reliant AI: https://www.reliant.ai/ Join our Founder Tactics Newsletter: Twice a week, get the tactics of the world's best founders delivered straight to your inbox: https://www.tactics.unicornbakery.de/
Justin Nielsen and David Saito-Chung walk through Monday's market action and discuss key stocks to watch in Stock Market Today. Learn more about your ad choices. Visit megaphone.fm/adchoices
MY NEWSLETTER - https://nikolas-newsletter-241a64.beehiiv.com/subscribe
Join me, Nik (https://x.com/CoFoundersNik), as I interview Peter Lohmann (https://x.com/pslohmann). In this episode, we dive into the unsexy but incredibly lucrative world of property management with Peter Lohmann, the founder of RL Property Management in Columbus, Ohio.
Peter reveals how he scaled his business from zero to nearly 700 units and $3 million in revenue after leaving his job as an engineer. Peter also shares his controversial advice on why you should start, not buy, a management company if you are new to business, and details the "operational nightmare" you must navigate to reach success.
We even discuss the massive opportunity for exits, where Private Equity firms are paying 1.5x to 2x top-line revenue for established companies.
Questions This Episode Answers:
1. What are the target benchmarks for revenue per door and profit margins in a successful property management business?
2. Why does Peter recommend building a company from scratch rather than acquiring an existing one?
3. What specific marketing strategies—from Google Business Profiles to meetups—would Peter use to get 100 doors in just six months?
4. At what unit count does a property management company typically become self-sustaining enough to hire full-time staff?
5. How are Private Equity groups valuing these businesses, and what multiples can you expect if you sell?
Enjoy the conversation!
__________________________
Love it or hate it, I'd love your feedback. Please fill out this brief survey with your opinion or email me at nik@cofounders.com with your thoughts.
__________________________
MY NEWSLETTER: https://nikolas-newsletter-241a64.beehiiv.com/subscribe
Spotify: https://tinyurl.com/5avyu98y
Apple: https://tinyurl.com/bdxbr284
YouTube: https://tinyurl.com/nikonomicsYT
__________________________
This week we covered:
00:00 Highlights
00:27 Introduction to Property Management
00:54 Revenue and Profit Margins
02:29 Understanding Clients and Tenants
03:06 Key Responsibilities of Property Managers
03:37 Profiles of Property Owners
04:29 Why Hire a Property Management Company?
05:41 Payment Structures and Fees
06:34 Company History and Growth
10:50 Remote and Local Operations
16:16 Starting a Property Management Business
19:02 Finding Your First Clients
19:29 Early Challenges and Strategies
20:47 Benchmarks in Property Management
21:50 The Importance of Community
23:27 Scaling Your Business
30:57 Operational Complexities
32:21 Opportunities in Property Management
35:38 Conclusion and Final Thoughts
In this episode, I get into all these viral “every man should be able to…” fitness challenges you see on Instagram and YouTube—the ones with the deep-voice narrator rattling off random benchmarks that aren’t age-graded, aren’t grounded in any science, and somehow declare you “top 10% of health.” I talk about why the subject is relevant, what these lists get wrong, and how to think more clearly about real fitness benchmarks—especially as we age. I share the simple 6%-per-decade decline metric from masters track, my own goals in the 400 meters and high jump, and the example my dad set as he gracefully slowed down from walking entire golf courses in his eighties to looping the backyard in his nineties. The big point: we all slow down, but we don’t have to fall apart. From there, I take you through the real markers that matter for longevity and everyday vitality: walking as the centerpiece of aerobic conditioning, maintaining functional muscle strength to stave off sarcopenia and “opia,” and why explosive power—being able to save yourself from a misstep—is literally a one-rep-max that can determine your fate. We get into the dangers of falling, the massive brain-health benefits seen in active seniors, why VO₂ max is just one slice of the picture, and why the best longevity challenge of all is simply doing something you love enough to keep doing it for life. I also make the case for the 400 meters as the ultimate full-body benchmark—an event that forces you to tap every major energy system and says way more about real-world fitness than shuffling through a half marathon or throwing around big weights. If you want a clearer blueprint for staying strong, powerful, mobile, and fully alive deep into your later decades, this one pulls all the pieces together. TIMESTAMPS: Brad questions the validity of some fitness challenges on the internet. They should be age graded. [01:02] The best longevity fitness challenge is doing something you personally enjoy doing and would like to continue doing for the rest of your life. [07:44] There are important benchmarks in the areas of mobility and flexibility as well as strength, power, explosive power, as you age. [12:32] In one study of seniors, it was found that the group who walked at least 4000 steps per day had bigger brains because of their walking habits. [16:47] You have to preserve that functional muscle strength throughout life to avoid the single most prominent marker of accelerated aging, which is sarcopenia, age-related muscle loss. [18:49] Running the 400 meter says volumes about your overall physical fitness and vitality. [21:49] Jogging 2.0 on the YouTube channel shows a typical morning session for Brad. where he shows a variety of fitness drills. [30:24] What is your mile time? Brad talks about a study that predicted longevity more accurately than blood work. [34:20] There are many other benchmarks to measure your fitness. [36:29] “Lift heavy things” means to engage in regular bouts of brief, explosive, high-intensity strength training. [38:22] LINKS: Brad Kearns.com BradNutrition.com B.rad Superdrink – Hydrates 28% Faster than Water—Creatine-Charged Hydration for Next-Level Power, Focus, and Recovery B.rad Whey Protein Superfuel - The Best Protein on The Planet! Brad’s Shopping Page BornToWalkBook.com B.rad Podcast – All Episodes Peluva Five-Toe Minimalist Shoes Outlive, by Peter Attia Jogging 2.0 video We appreciate all feedback, and questions for Q&A shows, emailed to podcast@bradventures.com. 
If you have a moment, please share an episode you like with a quick text message, or leave a review on your podcast app. Thank you! Check out each of these companies because they are absolutely awesome or they wouldn’t occupy this revered space. Seriously, I won’t promote anything that I don't absolutely love and use in daily life: B.rad Nutrition: Premium quality, all-natural supplements for peak performance, recovery, and longevity; including the world's highest quality whey protein! Peluva: Comfortable, functional, stylish five-toe minimalist shoe to reawaken optimal foot function. Use code BRADPODCAST for 15% off! Ketone-IQ Save 30% off your first subscription order & receive a free six-pack of Ketone-IQ! Get Stride: Advanced DNA, methylation profile, microbiome & blood at-home testing. Hit your stride the right way, with cutting-edge technology and customized programming. Save 10% with the code BRAD. Mito Red Light: Photobiomodulation light panels to enhance cellular energy production, improve recovery, and optimize circadian rhythm. Use code BRAD for 5% discount! Online educational courses: Numerous great offerings for an immersive home-study educational experience Primal Fitness Expert Certification: The most comprehensive online course on all aspects of traditional fitness programming and a total immersion fitness lifestyle. Save 25% on tuition with code BRAD! See omnystudio.com/listener for privacy information.
The USMNT isn't alone in feeling good about Group D. Jimmy Conrad, Charlie Davies and Tony Meola discuss how Australia and Paraguay reacted to the World Cup draw and Mike Grella's disparaging comments, and whether Team USA fared better than fellow co-hosts Mexico and Canada. Mauricio Pochettino has to whittle down his player pool, and Tyler Adams sets a lofty benchmark for the Stars and Stripes. A flu-stricken Christian Pulisic saves the day for AC Milan, but can he parlay his club form into an iconic World Cup performance? The Athletic's Paul Tenorio joins to talk about FIFA's water breaks as well as Inter Miami clinching the MLS Cup. What are the next steps for the Herons and the rest of the league? And Lionel Messi follows in Tony's footsteps by scooping the MLS MVP Award! Call It What You Want is available for free on the Audacy app as well as Apple Podcasts, Spotify and wherever else you listen to podcasts. Follow the Call It What You Want team on X: @JimmyConrad, @CharlieDavies9, @TMeola1 Visit the betting arena on CBSSports.com for all the latest in sportsbook reviews and sportsbook promos for betting on soccer For more soccer coverage from CBS Sports, visit https://www.cbssports.com/soccer/ To hear more from the CBS Sports Podcast Network, visit https://www.cbssports.com/podcasts/ Watch UEFA Champions League, UEFA Europa League, UEFA Europa Conference League, UEFA Women's Champions League, EFL Championship, EFL League Cup, Carabao Cup, Serie A, Coppa Italia, CONCACAF Nations League, CONCACAF World Cup Qualifiers, Lamar Hunt U.S. Open Cup, NWSL, Scottish Premiership, AFC Champion League by subscribing to Paramount+ Visit the betting arena on CBS Sports.com: https://www.cbssports.com/betting/ For all the latest in sportsbook reviews: https://www.cbssports.com/betting/news/sportsbook-promos/ And sportsbook promos: https://www.cbssports.com/betting/news/sportsbook-promos/ To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Fresh out of the studio, Karen Hao, investigative journalist and author of "Empire of AI" joined us in a conversation to unravel how companies like OpenAI, Anthropic, and xAI have become modern empires reshaping society, labor, and democracy itself. Karen traces her journey from mechanical engineering at MIT to becoming one of the tech industry's most critical voices, sharing how Silicon Valley's innovation ecosystem has distorted toward self-interest rather than the public good. She unpacks the four characteristics that make AI companies mirror colonial empires: resource extraction through data scraping, labor exploitation of annotation workers, knowledge monopolies where most AI researchers are industry-funded, and quasi-religious quests to build an "AI God." Throughout the conversation, Karen reveals OpenAI's governance dysfunction stemming from its contradictory non-profit-for-profit structure and shares the inspiring story of Chilean water activists who successfully blocked Google's data center from draining their community's freshwater resources. She explains how Sam Altman's plans for 250 gigawatts of data center capacity—equivalent to four dozen New York Cities—would be environmentally catastrophic, while demonstrating how China's export restrictions paradoxically spurred more efficient AI innovation. Last but not least, she argues that empathy-driven journalism remains irreplaceable and calls for global citizens to hold these companies accountable to the broader public interest."These empires are amassing extraordinary amounts of resources by dispossessing a majority of the world. That includes like the data that they're extracting from people by just scraping it from online or intellectual property that they're taking from artists and creators. Most AI researchers now work for the AI industry and/or are funded in part by the AI industry. Even academics that have stayed within universities are often funded by the AI industry, and the effect that that has had on knowledge production is akin to the effect we would imagine if most climate scientists were bankrolled by the fossil fuel industry. I cannot stress enough how much they genuinely believe that they are on the path to creating something akin to an AI god, and that this is going to have cataclysmic shifts on civilization." 
- Karen Hao, Author of Empire of AI
Episode Highlights:
[00:00] Quote of the Day by Karen Hao
[00:47] Introduction: Karen Hao, Author of "Empire of AI"
[01:44] From MIT engineering to investigating AI journalism
[02:51] Silicon Valley distorts innovation toward self-benefit
[04:12] AI companies as modern empires of power
[06:00] Four traits of Empire: extraction, exploitation, monopolies, ideology
[09:01] Quasi-religious movements driving Silicon Valley AI development
[10:04] AGI believers speak specialized fanatical vocabulary
[11:16] OpenAI founding: nonprofit facade, profit ambitions
[13:53] Sam Altman firing: board's failed governance attempt
[17:13] Fragmentation: every billionaire building their own AI
[19:06] China's export controls sparked efficient AI innovation
[21:57] Silicon Valley lacks American democratic values entirely
[25:06] Chilean activists successfully blocked Google's water extraction
[28:51] Sam Altman's 250 gigawatts: four dozen New York cities
[31:21] Scaling continues despite base model asymptote reached
[32:53] Benchmarks faulty: training data unknown, results unreliable
[39:11] Success: sparking conversation about AI's human costs
[39:40] Closing
Profile: Karen Hao, Author of Empire of AI and Investigative Journalist
LinkedIn: https://www.linkedin.com/in/karendhao/
Personal Site: https://karendhao.com/
Podcast Information: Bernard Leong hosts and produces the show. The intro and end music is "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio formats.
This special ChinaTalk cross-post features Zixuan Li of Z.ai (Zhipu AI), exploring the culture, incentives, and constraints shaping Chinese AI development. PSA for AI builders: Interested in alignment, governance, or AI safety? Learn more about the MATS Summer 2026 Fellowship and submit your name to be notified when applications open: https://matsprogram.org/s26-tcr. The discussion covers Z.ai's powerful GLM 4.6 model, their open weights strategy as a marketing tactic, and unique Chinese AI use cases like "role-play." Gain insights into the rapid pace of innovation, the talent market, and how Chinese companies view their position relative to global AI leaders.
Sponsors:
Google AI Studio: Google AI Studio features a revamped coding experience to turn your ideas into reality faster than ever. Describe your app and Gemini will automatically wire up the right models and APIs for you at https://ai.studio/build
Agents of Scale: Agents of Scale is a podcast from Zapier CEO Wade Foster, featuring conversations with C-suite leaders who are leading AI transformation. Subscribe to the show wherever you get your podcasts
Framer: Framer is the all-in-one platform that unifies design, content management, and publishing on a single canvas, now enhanced with powerful AI features. Start creating for free and get a free month of Framer Pro with code COGNITIVE at https://framer.com/design
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
PRODUCED BY: https://aipodcast.ing
CHAPTERS:
(00:00) Sponsor: Google AI Studio
(00:31) About the Episode
(03:44) Introducing Z.AI
(07:07) Zhipu AI's Backstory
(09:38) Achieving Global Recognition (Part 1)
(12:53) Sponsors: Agents of Scale | Framer
(15:15) Achieving Global Recognition (Part 2)
(15:15) Z.AI's Internal Culture
(19:17) China's AI Talent Market
(24:39) Open vs. Closed Source (Part 1)
(24:46) Sponsors: Tasklet | Shopify
(27:54) Open vs. Closed Source (Part 2)
(35:16) Enterprise Sales in China
(40:38) AI for Role-Playing
(45:56) Optimism vs. Fear of AI
(51:36) Translating Internet Culture
(57:11) Navigating Compute Constraints
(01:03:59) Future Model Directions
(01:15:02) Release Velocity & Work Culture
(01:25:04) Outro
First, we break down "the market" by exploring the major indices investors follow every day. From the S&P 500 and Dow Jones Industrial Average to the Nasdaq Composite, we explain what these benchmarks measure, how they're built, and why your portfolio may not always mirror their movements. You'll learn the differences between price-weighted, equal-weighted, and market-cap-weighted indices (illustrated in the short sketch after these notes), plus get insight into the Dow's historic milestones as it inches closer to 50,000.
Then, we shift to Black Friday. With the holiday shopping season kicking off, we dig into the latest projections—how many Americans will shop, where they'll spend, and what trends are shaping this year's deals. Whether you love doorbusters or prefer digital carts, we'll connect the stats to what they could mean for consumers and the broader economy.
Finally, as the year wraps up, we turn to your retirement strategy. We walk through the essentials of year-end IRA planning—from maximizing contributions to handling required minimum distributions and reviewing beneficiaries. We highlight key deadlines, common pitfalls to avoid, and tactics that can help strengthen your long-term savings.
Three conversations, one goal: giving you the clarity and confidence to make informed financial decisions. Tune in!
Join hosts Nick Antonucci, CVA, CEPA, Director of Research, and Managing Associates K.C. Smith, CFP®, CEPA, and D.J. Barker, CWS®, and Kelly-Lynne Scalice, a seasoned communicator and host, on Henssler Money Talks as they explore key financial strategies to help investors navigate market uncertainty.
Henssler Money Talks — November 29, 2025 | Season 39, Episode 48
Timestamps and Chapters
6:46: Benchmarks and Big Numbers
28:50: Black Friday Unwrapped
41:54: Finish Strong: Your Year-End IRA Playbook
Follow Henssler:
Facebook: https://www.facebook.com/HensslerFinancial/
YouTube: https://www.youtube.com/c/HensslerFinancial
LinkedIn: https://www.linkedin.com/company/henssler-financial/
Instagram: https://www.instagram.com/hensslerfinancial/
TikTok: https://www.tiktok.com/@hensslerfinancial?lang=en
X: https://www.x.com/hensslergroup
"Henssler Money Talks" is brought to you by Henssler Financial. Sign up for the Money Talks Newsletter: https://www.henssler.com/newsletters/
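Here is the promised sketch of what the weighting distinction actually means, with invented prices and share counts; real indices like the Dow and S&P 500 also apply divisors and float adjustments that are omitted here.

```python
# Three index weighting schemes on the same made-up market data.
prices = {"AAA": 400.0, "BBB": 50.0, "CCC": 10.0}
shares_outstanding = {"AAA": 1e9, "BBB": 5e9, "CCC": 20e9}
returns = {"AAA": 0.02, "BBB": -0.01, "CCC": 0.05}  # one day's price changes

# Price-weighted (Dow-style): weight by share price.
total_price = sum(prices.values())
pw = sum(returns[t] * prices[t] / total_price for t in prices)

# Market-cap-weighted (S&P 500-style): weight by price * shares outstanding.
caps = {t: prices[t] * shares_outstanding[t] for t in prices}
total_cap = sum(caps.values())
cw = sum(returns[t] * caps[t] / total_cap for t in prices)

# Equal-weighted: every constituent counts the same.
ew = sum(returns.values()) / len(returns)

print(f"price-weighted {pw:.3%}, cap-weighted {cw:.3%}, equal-weighted {ew:.3%}")
```

The same day's moves produce three different index returns, which is one reason a portfolio rarely tracks any single benchmark exactly.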
Wildest week in AI since December 2024.
Gemini 3 is out and it may change the landscape in artificial intelligence. Benchmarks have it performing better than GPT-5 and Google is leaning into its competitive advantages in AI tech. Plus, we talk about the drop in Bitcoin and how Target lost its mojo. Travis Hoium, Rachel Warren, and Jon Quast discuss: - Gemini 3 is out - Anthropic's capital raise - Bitcoin is down, but is it out? - Why Target is falling behind in retail Companies discussed: Alphabet (GOOG, GOOGL), NVIDIA (NVDA), Target (TGT), Bitcoin (BTC), Coinbase (COIN), Circle (CRCL). Host: Travis Hoium Guests: Rachel Warren, Jon Quast Engineer: Dan Boyd Disclosure: Advertisements are sponsored content and provided for informational purposes only. The Motley Fool and its affiliates (collectively, "TMF") do not endorse, recommend, or verify the accuracy or completeness of the statements made within advertisements. TMF is not involved in the offer, sale, or solicitation of any securities advertised herein and makes no representations regarding the suitability, or risks associated with any investment opportunity presented. Investors should conduct their own due diligence and consult with legal, tax, and financial advisors before making any investment decisions. TMF assumes no responsibility for any losses or damages arising from this advertisement. We're committed to transparency: All personal opinions in advertisements from Fools are their own. The product advertised in this episode was loaned to TMF and was returned after a test period or the product advertised in this episode was purchased by TMF. Advertiser has paid for the sponsorship of this episode. Learn more about your ad choices. Visit megaphone.fm/adchoices
WWP NIGHT w/ the DETROIT RED WINGS (Nov. 15th vs. BUF) TICKETS: https://www.gofevo.com/event/WingedWheelPodcast11-15 WWP NIGHT w/ the GRAND RAPIDS GRIFFINS TICKETS ON SALE NOW: https://griffinshockey.com/wwp Are the Detroit Red Wings actually "NHL divisional playoff seed" good, or is this just an early-season mirage? Tune in as we start by discussing the wild shootout win for the Red Wings over Todd McLellan and Cam Talbot's former team, the Los Angeles Kings: Marco Kasper's 2 goals (including an almost high-stick off of Sandin-Pellikka's shot), Alex DeBrincat scoring again, the late collapse and called-off overtime winner due to Fiala's interference, Cam Talbot's massive stops, and Lucas Raymond's shootout winner (4:15). Also, Patrick Kane's injury, Austin Watson being called up over Nate Danielson, what this means for the Grand Rapids Griffins, and when Steve Yzerman and Todd McLellan may choose to bring Danielson into the fold (19:40). Next, Detroit's loss to McTavish and the Anaheim Ducks as John Gibson revisits his former team: more DeBrincat goal-scoring, Raymond, DeBrincat, and Dylan Larkin syncing up, Moritz Seider's phantom "kicking motion", controversial reviews, & more (23:55). After that, we discuss whether the Detroit Red Wings are truly a good team worthy of an NHL playoff seed, what's different this year as Emmitt Finnie and Patrick Kane continue to solve big problems in their top 6, how they might continue through tougher months, and how insanely close the Atlantic Division and Eastern Conference are (36:10). Finally, NHL news & notes (including the Necas contract & Makar's next deal) (51:15) before we take your questions and comments in our Overtime segment (59:15) - enjoy! Head over to wingedwheelpodcast.com to find all the ways to listen, how to support the show, and so much more! This episode is brought to you by Green Light Lending: gogreenlightlending.com #ad This episode is brought to you by Hims. Visit hims.com/wingedwheel for your personalized hair loss treatment options. #ad Support the Jamie Daniels Foundation through Wings Money on the Board: https://www.wingedwheelpodcast.com/wingsmotb