From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic “Latency numbers every programmer should know”
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript
Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.
Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.
Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.
Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together like this.
Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.
Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?
Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.
Alessio Fanelli [00:03:24]: I mean, you and Geoffrey Hinton came up with distillation in 2014.
Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.
Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? Like, you worked on so many ideas that end up being influential, but in the moment, they might not feel that way necessarily. Yeah.
Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.
Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes it might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back. I think the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think that whole capability merging without loss, I feel like some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.
Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed, is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it enables us, for multiple Gemini generations now, to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.
Shawn Wang [00:07:02]: So, Dara asked: the original lineup was Flash, Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?
Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have, and also inference time scaling can be a useful thing to improve the capabilities of the model.
Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.
Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.
Shawn Wang [00:07:50]: No, I mean, just economics-wise, because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. It's in everything.
Jeff Dean [00:08:02]: We're using it more in our search products, in AI Mode and AI Overviews.
Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, I didn't even think about that.
Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do, because you're going to ask now not just "write me a for loop" but "write me a whole software package to do X or Y or Z." And so having low latency systems that can do that seems really important. And Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well, like TPUs; the interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kinds of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.
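The soft-label mechanics Jeff describes (logits from a large teacher coaxing the right behavior out of a smaller student) can be illustrated with a minimal sketch. The temperature, loss weighting, and NumPy setup below are illustrative assumptions, not Gemini's actual recipe.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels, T=2.0, alpha=0.5):
    """Blend of a soft-label term (teacher logits) and hard-label cross-entropy.

    student_logits, teacher_logits: [batch, vocab]
    hard_labels: [batch] integer class ids
    T: temperature that softens the teacher distribution
    alpha: weight on the distillation term
    """
    p_teacher = softmax(teacher_logits, T)                       # soft targets
    log_p_student_T = np.log(softmax(student_logits, T) + 1e-12)
    # Cross-entropy against the soft targets (equals KL up to a constant teacher-entropy term).
    soft_loss = -(p_teacher * log_p_student_T).sum(axis=-1).mean() * (T * T)

    log_p_student = np.log(softmax(student_logits) + 1e-12)
    hard_loss = -log_p_student[np.arange(len(hard_labels)), hard_labels].mean()
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Tiny usage example with random logits over a 10-token "vocabulary".
rng = np.random.default_rng(0)
s = rng.normal(size=(4, 10))       # student logits
t = rng.normal(size=(4, 10)) * 3   # teacher logits (more confident)
y = rng.integers(0, 10, size=4)    # hard labels
print(distillation_loss(s, t, y))
```

The point of the soft targets is that every token in the teacher's distribution carries signal, which is why the student can make many passes over the same data and still keep improving.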
Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like capability saturation: in certain tasks, the Pro model today has saturated some sort of task, so next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true not just of coding, but of, you know, now, can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier ahead of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas in order to make the next generation even better?
Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like, this is what we're building towards? Yeah.
Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think of the best kinds of benchmarks as ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, because it's either the case that you've now achieved that capability, or there's also the issue of leakage of public data or very related kind of data being in your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing, you know, how do we make the model better at these kinds of things? Is it that we need a different kind of data to train on that's more specialized for this particular kind of task?
Do we need, you know, a bunch of architectural improvements or some sort of model capability improvements? You know, what would help make that better?
Shawn Wang [00:12:53]: Is there such an example where a benchmark inspired an architectural improvement? Like, I'm just kind of jumping on that because you just...
Jeff Dean [00:13:02]: I mean, I think some of the long context capability of the Gemini models, that came, I guess, first in 1.5, really was about looking at, okay, we want to have, you know...
Shawn Wang [00:13:15]: Immediately everyone jumped to like completely green charts, and I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.
Jeff Dean [00:13:23]: I mean, I think, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something. You don't actually have, you know, much larger than 128K these days, or so; we're trying to push the frontier of 1 million or 2 million context, which is good, because I think there are a lot of use cases where, you know, putting a thousand pages of text or putting multiple hour-long videos in the context and then actually being able to make use of that is useful. The use cases we're trying to explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context, that sort of better assesses what it is people really want to do with long context, which is not just, you know, can you tell me the product number for this particular thing?
Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is: you have a benchmark, you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's exactly the kind of thing Jason Wei, who used to work at Google, would say. Yeah, you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.
Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen, I don't think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission.
So like your emails, your photos, your docs, your plane tickets. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.
Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.
Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.
Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.
Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles or robots, or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of teaches the model that this is a thing.
Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, like modalities that supersede all the other modalities? So a simple example was vision can, on a pixel level, encode text. And DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms and that's also like a vision-capable thing. So maybe vision is just the king modality and... Yeah.
Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be able to do: interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things. Yeah.
Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.
Jeff Dean [00:19:15]: Yeah. Yeah. I mean, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks.
It was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you get now an 18-row table of that information extracted from the video, which is, you know, not something most people think of as a "turn video into a SQL-like table" kind of task.
Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is maybe much broader in search and span versus the more human one? Yeah.
Jeff Dean [00:20:47]: I mean, I think even pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated sorts of signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the task that the user has asked? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, and you're finding a very small subset of things that are relevant.
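A minimal sketch of the narrowing funnel Jeff describes, from a huge corpus down to roughly 30,000 candidates and then to roughly 117 documents. The stage sizes echo the numbers in the conversation; the scoring functions are hypothetical stand-ins for whatever lightweight and heavier models a real system would use.

```python
from typing import Callable, List

def cascade_retrieve(
    corpus: List[str],
    query: str,
    cheap_score: Callable[[str, str], float],    # e.g. keyword overlap or embedding dot product
    medium_score: Callable[[str, str], float],   # e.g. a small cross-encoder / reranker
    k1: int = 30_000,
    k2: int = 117,
) -> List[str]:
    """Multi-stage narrowing: corpus -> k1 candidates -> k2 documents.

    The expensive model only ever sees the final k2 documents, which is what
    gives the illusion of attending to the whole corpus.
    """
    # Stage 1: very lightweight scoring over everything (cheap, highly parallelizable).
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:k1]
    # Stage 2: somewhat heavier reranker over the surviving candidates.
    stage2 = sorted(stage1, key=lambda d: medium_score(query, d), reverse=True)[:k2]
    return stage2

# Toy usage with a stub scorer (word overlap standing in for both stages).
def overlap(q: str, d: str) -> float:
    return len(set(q.lower().split()) & set(d.lower().split()))

docs = ["solar panel deployment report", "cafe bistro reviews", "solar energy trends"]
top = cascade_retrieve(docs, "solar deployment", overlap, overlap, k1=2, k2=1)
print(top)  # the one document the most capable model would actually read
```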
Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in Google search history that, well, you know, BERT was basically immediately used inside of Google search and that improved results a lot, right? Like I don't have any numbers off the top of my head, but I'm sure you guys do; those are obviously the most important numbers to Google. Yeah.
Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, but really getting at the notion that the topic of this page or this paragraph is highly relevant to this query. Yeah.
Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Yeah. Like it's Google, it's YouTube. YouTube has this semantic ID thing where every token or every item in the vocab is a YouTube video or something that predicts the video using a codebook, which is absurd to me for YouTube's size.
Jeff Dean [00:23:50]: And then most recently Grok also, for xAI, which is like, yeah. I mean, I'll call out, even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.
Shawn Wang [00:24:06]: So do you have like a history of, like, what's the progression? Oh yeah.
Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, sort of, but we went through four or five or six generations of redesigning of the search and retrieval system, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're going to not do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had, you know, a sharded system where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency by which you respond for any particular user query. And then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had say 60 shards and, you know, 20 copies of each shard, we now had 1200 machines with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, you know, things like that, bistro and all these things.
And you can suddenly start really getting at the meaning of the word as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.
Alessio Fanelli [00:26:47]: What are principles that you use to design the systems, especially when, I mean, in 2001 the internet is, like, doubling, tripling every year in size, you know? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this? Yeah.
Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by factors of five or 10, but probably not beyond that, because often what happens is, if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X but all of a sudden at a hundred X makes total sense. So like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines actually can hold a full copy of the index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. So I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.
Shawn Wang [00:28:55]: Yeah.
Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.
Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?
Jeff Dean [00:29:04]: Because all of a sudden, news-related queries, you know, if you've got last month's news index, it's not actually that useful.
Shawn Wang [00:29:11]: News is a special beast. Was there any, like, you could have split it onto a separate system.
Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to also be sort of updated.
Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to, like, classify whether the page is news; you have to decide which pages should be updated and at what frequency.
Oh yeah.
Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.
Shawn Wang [00:29:50]: Yeah, yeah. Well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is Latency Numbers Every Programmer Should Know. Was there just a general story behind that? Did you just write it down?
Jeff Dean [00:30:06]: I mean, this has sort of eight or 10 different kinds of metrics that are like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something?
Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?
Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something for the result page, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? And you can actually do thought experiments in, you know, 30 seconds or a minute with the sort of basic numbers at your fingertips. And then as you build software using higher level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of...
Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...
Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.
Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either, like, on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because, depending on your precision, I think it's like sub one picojoule.
Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.
Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, the parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
Shawn Wang [00:33:40]: Yeah. Yeah. Right.
Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.
Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.
Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.
Shawn Wang [00:33:56]: The best latency.
Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.
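For concreteness, this is the rough energy arithmetic behind that batching point, using the order-of-magnitude figures from the conversation (about 1000 picojoules to move a weight versus about 1 picojoule per multiply). These are illustrative numbers for a back-of-the-envelope calculation, not measured TPU specs.

```python
# Rough energy accounting for one weight used in a matmul.
MOVE_PJ = 1000.0      # ~energy to move one weight into the multiplier unit
MULTIPLY_PJ = 1.0     # ~energy for one multiply-accumulate

def energy_per_useful_multiply(batch_size: int) -> float:
    """Move the weight once, then reuse it for every example in the batch."""
    return (MOVE_PJ + MULTIPLY_PJ * batch_size) / batch_size

for b in (1, 8, 64, 256):
    print(f"batch={b:4d}  ~{energy_per_useful_multiply(b):7.1f} pJ per useful multiply")
# batch=1 pays roughly 1001 pJ per multiply; batch=256 pays roughly 4.9 pJ.
```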
Shawn Wang [00:34:04]: Is there a similar trick like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? Like, to serve at your scale, you probably sort of saw that coming. Like what hardware innovations or insights were formed because of what you're seeing there?
Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. I think for serving some kinds of models, you know, you pay a lot higher cost and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. But if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So this is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto the ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.
Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design architecture team and the sort of higher level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the sort of ML research puck is going, in some sense. Because, you know, as a hardware designer, for ML in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, to take you three, four or five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast changing field. And so having people with interesting ML research ideas of things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to then get interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.
Shawn Wang [00:37:10]: Oh, the cycle time is plus two.
Jeff Dean [00:37:12]: Roughly. Wow. Because, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure this is going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.
Alessio Fanelli [00:37:58]: Is there a reverse of, like, we already committed to this chip design so we cannot take the model architecture that way because it doesn't quite fit?
Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it kind of goes both ways. You know, sometimes you can take advantage of lower precision things that are coming in a future generation. So you might train it at that lower precision, even if the current generation doesn't quite do that. Mm.
Shawn Wang [00:38:40]: Yeah. How low can we go in precision?
Jeff Dean [00:38:43]: Because people are saying like ternary... Yeah, I mean, I'm a big fan of very low precision because I think that saves you a tremendous amount. Right. Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. You know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of those weights.
Shawn Wang [00:39:15]: Scaling, how does it... okay, interesting. So, low precision, but scaled-up weights. Yeah. Huh. Never considered that. Interesting. While we're on this topic, you know, I think the concept of precision at all is weird when we're sampling, you know. At the end of this, we're going to have all these chips that'll do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy-based models and processors. I'm just curious if you've, obviously you've thought about it, but what's your commentary?
Energy-based models are one; you know, diffusion-based models, which don't sequentially decode tokens, are another. You know, speculative decoding is a way that you can get sort of an equivalent, with a very small...
Shawn Wang [00:40:06]: Draft.
Jeff Dean [00:40:07]: ...batch factor, where you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight even, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of energy, real energy, not energy-based models, and also latency and throughput, right? If you look at things from that lens, that sort of guides you toward solutions that are going to be better from, you know, being able to serve larger models, or equivalent size models more cheaply and with lower latency.
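A heavily simplified sketch of the speculative decoding amortization Jeff just described: a cheap draft model proposes several tokens and the big model verifies them in one pass, so the big model's weight movement is amortized over however many tokens get accepted. The function names and acceptance rates below are made up for illustration, and real implementations also resample a correction token when a draft is rejected, which this sketch omits.

```python
import random

def speculative_step(draft_propose, target_verify, k=8):
    """One (simplified) step of speculative decoding.

    draft_propose(k)    -> k candidate tokens from a cheap draft model
    target_verify(toks) -> number of leading tokens the big model accepts,
                           computed in a single big-model pass over all k drafts
    Returns (accepted_tokens, amortization_factor).
    """
    draft = draft_propose(k)
    accepted = target_verify(draft)
    # The big model's weights were moved once but yielded `accepted` tokens,
    # so the per-token weight-movement cost drops by roughly that factor.
    return draft[:accepted], max(accepted, 1)

# Toy usage: a draft model emitting random token ids and a verifier that
# accepts 5 or 6 of the 8 drafts, as in the discussion.
random.seed(0)
propose = lambda k: [random.randint(0, 50_000) for _ in range(k)]
verify = lambda toks: random.choice([5, 6])
tokens, factor = speculative_step(propose, verify)
print(len(tokens), "tokens per big-model pass ->", factor, "x amortization")
```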
Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually, haven't seen it really hit the mainstream, but I do think that there's some poetry in the sense that, you know, we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.
Jeff Dean [00:41:23]: I mean, I think there's also sort of the more exotic things like analog-based computing substrates as opposed to digital ones. I think those are super interesting because they can be potentially low power. But I think you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the sort of boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with much better and specialized hardware for the models we care about.
Shawn Wang [00:42:05]: Yeah.
Alessio Fanelli [00:42:06]: Any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers. Yeah, I guess you have enough.
Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, I think, in terms of research directions, there's a whole bunch of open problems in how do you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to build things that can accomplish much more significant pieces of work collectively than you would ask a single model to do. So that's super interesting. How do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because I think that would broaden out the capabilities of the models: the improvements that you're seeing in both math and coding, if we could apply those to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.
Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said they already proved you can do it with Deep Research. You kind of have it with AI Mode in a way; it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like, what is it? Both are like information retrieval, in a sense. So I wonder if the retrieval is the verifiable part that you can score, or what. How would you model that problem?
Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving: can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system. Yeah.
Shawn Wang [00:44:28]: I do think there is that weird cliff where it feels like we've done the easy stuff and now... but it always feels like that every year. It's like, oh, we know, and the next part is super hard and nobody's figured it out. And exactly with this RLVR thing, where everyone's talking about, well, okay, how do we do the next stage of the non-verifiable stuff? And everyone's like, I don't know, you know, LLM judge.
Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Because I think everyone sort of sees that the models are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those, and seeing which ones actually make a difference, is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now; you're doing IMO and Erdős problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, for other areas, it'd be great if we could make that kind of leap. And you know, we don't exactly see how to do it for some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.
Shawn Wang [00:46:13]: Yeah.
Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.
Shawn Wang [00:46:20]: That would be, as far as content creators go.
Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.
Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
Just to dwell a bit on the IMO gold. I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line, people just said, nope, we'll just do it all in the LLM.
Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and, you know, do chains of thought and roll them back: now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things, and then a completely different way of thinking about those things.
Shawn Wang [00:47:59]: Interesting. Yeah. I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.
Jeff Dean [00:48:06]: I mean, I do think that IMO progression, with, you know, translating to Lean and using Lean, and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013 to '16 era of machine learning, right? It used to be that people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model; or I want to do speech recognition, so I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.
Shawn Wang [00:49:10]: And you don't need domain experts. Like, so I interviewed ETA, who was on that team, and he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that people with this universal skill set of just machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.
Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is like...
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And who knows, Gemini Pro is maybe one to ten trillion parameters, we don't know. But take the Gemma models, for example. A lot of people want open-source local models at that size, and they carry some knowledge that is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything, but when you're distilling down to the small models, you're actually memorizing things that are not useful. So how do we, I guess, extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space. You might prefer something that is more generally useful in more settings than that obscure fact. So there's always a tension there. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are, and it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email. Probably we'd rather have a single model that we can then use, with retrieving from my email as a tool, and have the model reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction. That makes sense.

Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or?

Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Or take robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities it may suffer on, or multimodal reasoning capabilities may suffer, because we didn't get to expose it to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, that can all be knitted together to work in concert and called upon in different circumstances. If I have a health-related thing, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion tokens or a trillion tokens of health data.

Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, they're probably not out there, or you don't have them. I think that's really the question.

Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, appropriately, we don't have access to. But there are a lot of healthcare organizations that want to train models on their own data, data that is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably better than a general model trained on, say, public data.

Shawn Wang [00:55:58]: Yeah. By the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns it.
Jeff Dean [00:56:09]: Oh, yeah. I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context. But then you're putting your whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of text in the world for it, and we're probably not putting all the data from those languages into the base Gemini training. We put some of it, but if you put more of it in, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.
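Jeff's low-resource-language point, that a model can pick up a language like Kalamang purely from reference material placed in a long context, comes down to prompt assembly rather than training. Here is a rough sketch under the same assumption of a hypothetical `generate` callable; the section headers and inputs are placeholders, not the actual setup used for Gemini.

```python
from typing import Callable

def translate_from_context(
    sentence: str,
    grammar_notes: str,
    dictionary_entries: str,
    parallel_examples: str,
    generate: Callable[[str], str],  # assumed wrapper around a long-context LLM API
) -> str:
    """In-context learning of a low-resource language: no fine-tuning, just place
    the available reference material (grammar, word list, example pairs) into
    the prompt and ask for a translation."""
    prompt = (
        "You are translating from a language that was not in your training data.\n"
        "Use only the reference material below.\n\n"
        f"## Grammar notes\n{grammar_notes}\n\n"
        f"## Dictionary\n{dictionary_entries}\n\n"
        f"## Example sentence pairs\n{parallel_examples}\n\n"
        f"Translate into English: {sentence}\n"
    )
    return generate(prompt)
```

The same shape applies to the Somali or Amharic case Jeff mentions: the more reference text that fits in the context window, the better the translation tends to get, without touching the base training mix.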
This is a recap of the top 10 posts on Hacker News on February 11, 2026. This podcast was generated by wondercraft.ai.
(00:30) Claude Code is being dumbed down? Original post: https://news.ycombinator.com/item?id=46978710&utm_source=wondercraft_ai
(01:57) Windows Notepad App Remote Code Execution Vulnerability. Original post: https://news.ycombinator.com/item?id=46971516&utm_source=wondercraft_ai
(03:25) Discord/Twitch/Snapchat age verification bypass. Original post: https://news.ycombinator.com/item?id=46982421&utm_source=wondercraft_ai
(04:52) Amazon Ring's lost dog ad sparks backlash amid fears of mass surveillance. Original post: https://news.ycombinator.com/item?id=46978966&utm_source=wondercraft_ai
(06:20) Chrome extensions spying on users' browsing data. Original post: https://news.ycombinator.com/item?id=46973083&utm_source=wondercraft_ai
(07:48) Fluorite – A console-grade game engine fully integrated with Flutter. Original post: https://news.ycombinator.com/item?id=46976911&utm_source=wondercraft_ai
(09:15) GLM-5: From Vibe Coding to Agentic Engineering. Original post: https://news.ycombinator.com/item?id=46977210&utm_source=wondercraft_ai
(10:43) Why vampires live forever. Original post: https://news.ycombinator.com/item?id=46976443&utm_source=wondercraft_ai
(12:11) Officials Claim Drone Incursion Led to Shutdown of El Paso Airport. Original post: https://news.ycombinator.com/item?id=46972610&utm_source=wondercraft_ai
(13:38) FAA closes airspace around El Paso, Texas, for 10 days, grounding all flights. Original post: https://news.ycombinator.com/item?id=46973647&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Last year at the 2025 AAHKS Annual Meeting, our host William B. Kurtz, MD spoke with the James A. Rand, MD, Young Investigator’s Award recipient Michael E. Neufeld, MD, MSc, FRCSC about his study on Synovial Metal Ions in “Nickel Free” vs. Standard Cobalt-Chrome Containing Total Knee Replacement. Dr. Neufeld shared that the aim of his study was to compare intraarticular synovial fluid levels of metal ions in patients who underwent cemented primary TKA with a hypoallergenic implant vs. a matched cohort of standard cobalt-chromium (Co-Cr) containing implants at a minimum two-year follow-up. A case-controlled study was conducted using prospectively collected data from a single institution – 22 cases and 19 controls. Interestingly, the study uncovered that patients with hypoallergenic implants had intraarticular synovial Ni ion levels 3.6 times higher vs. standard implant controls, contesting the use of this hypoallergenic implant for Ni allergy/hypersensitivity. Listen to the full discussion and make sure you click subscribe! Thanks for listening to AAHKS Amplified! In This Episode: William B. Kurtz, MD Michael E. Neufeld, MD, MSc, FRCSC
In the 1990s, Microsoft and Netscape fought for control of the browser, the gateway between humans and the internet. Netscape went from 90% market share to zero in five years. Now, with over 30 agentic browsers launching in under 18 months, the same war is playing out again, only this time the stakes are higher. This episode breaks down the 90s browser wars, compares the tactics to what's happening today, and explains what website owners should do about it.Key takeawaysThe playbook hasn't changed - Bundling, free products, proprietary lock-in, and distribution deals decided the 90s browser wars. The same tactics are playing out with agentic browsers today.Google is running Microsoft's 1995 playbook - Microsoft embedded IE into Windows to protect its OS monopoly. Google is embedding Gemini into Chrome to protect its search monopoly. The browser is the defensive weapon, not the product.The Chromium trap is deeper than IE bundling ever was - Most agentic browsers (Comet, Atlas, Neon) run on Google's Chromium engine. Even competitors are built on Google's foundation.The prize shifted from attention to transactions - The 90s fight was about what people see. The agentic browser fight is about what AI agents buy, book, and do on your behalf.Your website is the new Netscape - If AI agents mediate every user interaction, your site risks becoming invisible infrastructure rather than a destination.Regulation will be too late - The DOJ took 6 years to settle with Microsoft. Netscape was already dead. The same timeline is playing out with Google's antitrust case.What to do todayDon't optimize for one agentic browser. Build for web standards: semantic HTML, ARIA labels, structured data, server-side rendering.Build direct audience relationships (email, communities, subscriptions) so you're not dependent on browser intermediaries.Make your site worth visiting, not just worth scraping. Offer value an AI agent can't replicate.Treat accessibility as an agent strategy. Screen reader compatibility = AI agent compatibility.Test your site with an agentic browser to see what works and what breaks.Read the full agentic browser landscape breakdown: nohackspod.com/blog/agentic-browser-landscape-2026Chapters00:00 - Introduction01:34 - The First Browser War09:15 - The Agentic Browser Explosion12:48 - Why Is This Happening Now?16:15 - Where the 2026 Version Gets Worse21:27 - What This Means for Your Website23:14 - What to Do About It26:49 - ClosingConnectWebsite: https://nohackspod.comLinkedIn: https://www.linkedin.com/in/slobodanmanic/Newsletter: https://nohackspod.com/subscribeNo Hacks is a podcast about web performance, technical SEO, and the agentic web. Hosted by Slobodan "Sani" Manic.
Jon talks to Eliza Labs Shaw Walters about his work on AI agent frameworks, and why blockchain games are a focus. [1:00] An introduction to Shaw Walters, the creator of AI agent framework ElizaOS.[3:10] "I see myself as a comedian and performance artist. I love the joke that becomes true."[4:48] "What I'm focused on is agents that get you your time back and agents that can play games."[6:06] "Attention is an important part of capitalism. If you're not spicy, pmarca doesn't read your tweet."[7:25] "We're one generation away from ... I don't need better. I need cheaper."[9:25] "I still have skill arbitrage on the average person. But in a year, it won't matter."[11:56] "We're making AI RuneScape. It's going to cost us $80,000 and take 4 months."[13:08] "Now we have dream tech - you can build the uncompromising, exactly what you want."[16:26] Can copyright be enforced in this new world of online, onchain abundance?[18:43] Is P(doom) a scalar or a dynamic multi-dimensional vector?[21:18] "I'm here to build the world I want to see."[22:10] "I've always felt the most important things to do came from the silliest things."[23:35] Explaining AI agent social prediction trading platform Babylon.[26:23] Does Shaw enjoy being a high-profile commentator?[28:40] "I made a Chrome extension that blocks haters. But it also blocked a lot of my friends."[33:35] AI means software companies will have to build their most generous, growth-oriented version.[34:07] Help Shaw build "AI Runescape", an open source project called Hyperscape AI.
The AI labs fighting for attention during the Super Bowl call to mind another iconic Super Bowl moment: Apple's 1984 ad for the Macintosh, which promised that the personal computer would be a source of unbound wonder, freedom, and delight.They were right, but over time, the personal computer has also become cluttered with errands.These “computer errands”—downloading a W-2 when tax season rolls around, hunting for the right coupon code before checkout, or navigating the unholy labyrinth of the Amazon Web Services dashboard just to change one permission setting—have taken over our digital lives. Atlas, OpenAI's agentic browser, sprang from the idea that AI should handle this tedium for you.In this week's episode of AI & I, Dan Shipper sat down with two members of the Atlas team, Ben Goodger and Darin Fisher. Goodger is Atlas's head of engineering, and Fisher is a member of the technical staff. Both are legends of the browser world. They've spent decades building the modern web, working together on Netscape, Firefox, and Chrome before arriving at Atlas. From that vantage point, they told Dan how they think browsing is about to change, why building a browser is harder than it looks, and what it's like to create a new one with AI coding tools like Codex.If you found this episode interesting, please like, subscribe, comment, and share! Want even more?Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.To hear more from Dan Shipper:Subscribe to Every: https://every.to/subscribe Follow him on X: https://twitter.com/danshipper Move fast, don't break thingsMost AI coding tools don't know which line of code will actually break your system. Try Augment Code, which understands your entire codebase, including the repos, languages, and dependencies that actually runs your business, and use their playbook to learn more about their framework, checklists, and assessments. Ship 30% faster with 40% shorter merge times.[Playbook at https://www.augmentcode.com/]Timestamps: 00:01:57 - Introduction00:11:51 - Designing an AI browser that's intuitive to use00:15:24 - How the web changes if agents do most of the browsing00:25:06 - Why traditional websites will not become obsolete00:29:00 - A browser that stays out of the way versus one that shows you around00:39:51 - How the team uses Codex to build Atlas00:44:47 - The craft of coding with AI tools00:52:33 - Why Goodger and Fisher care so much about browsersLinks to resources mentioned in the episode:Ben Goodger: Ben Goodger (@bengoodger) Darin Fisher: Darin Fisher (@darinwf) OpenAI's browser, Atlas: Introducing ChatGPT Atlas
This week go deep with Alex Komoroske, CEO and co-founder of Common Tools, about his vision for a more saner, more intentional tech paradigm in which the historical contingencies that gave us the digital world we have today have been fundamentally reworked.The version of AI most of us have come to accept or reject looks like corporate-owned super-assistants with all your data. Instead, we could have a decentralized ecosystem where software self-assembles around you—private, personal, and prosocial. Alex speaks on this possible world with authority: he spent 13 years at Google as PM Director on Chrome's web platform, Search, and AR, and later led corporate strategy at Stripe before co-founding Common Tools with Bernhard Seefeld.Some of the waypoints in our conversation include: confidential compute, emergent ontologies, where we want friction, the tyranny of the marginal users, the rise of the generalist, the importance of context ownership, and software ephemerality.We can't take a reasonable principled stance on the promises and perils of AI without considering the vast unexplored possibility space that Alex opens in this conversation. I'm grateful that I get to share it with you and help light the way for promising alternatives to what many of us have come to accept as “the way things are.”Links to extensive additional reading and listening below!✨ If you enjoy this podcast, please consider liking, subscribing, and commenting wherever you listen: YouTube • Spotify • Apple Podcasts • Etc.✨ Become a member to support the show and score myriad perks, like our book club: our next call is on Wendell Berry's Standing by Words this Sunday, Feb 15th!✨ Become a founding member for access to my five-week science and philosophy course at Weirdosphere and the raw recordings of every unreleased episode! (Anyone can chat with my course transcripts in a dedicated Google Notebook here.)✨ Browse and buy all of the books we discuss on the show at Bookshop.org✨ Contact me with inquiries or hire me as a consultantReferenced & Related• The FLUX Collective (team project w/ several people mentioned in this episode)• Bits and Bobs (Alex's long-running archive of weekly notes)• Common Ground (Alex's dialogues w/ Aishwarya Khanduja of The Analogue Group)• The Iterative Adjacent Possible (Alex on Medium)• The Runaway Engine of Society (Alex on Medium)• Thinking like a gardener not a builder, organizing teams like slime mold, the adjacent possible, and other unconventional product advice (podcast w/ Lenny Rachitsky)• Media and Machines by Anu Atluru at Working Theorys• Accelerando & Glasshouse & Halting State (three books) by Charles Stross• The Transparent Society by David Brin• The evolution of Covert Signaling by Paul Smaldino• Landscape rules predict optimal superhighways for the first peopling of Sahul by Stefani Crabtree et al.• The Tyranny of the Marginal User by Ivan Vendrov• 1,000 True Fans by Kevin Kelly• Blindsight & Echopraxia (two books) by Peter Watts• The Computer as a Communication Device by J.C.R. Licklider & Bob Taylor• Silicon Valley's quest to remove friction from our lives by Rohit Krishnan• The Most Valuable Commodity in the World is Friction by Kyla Scanlon• Bernhard Seefeld• Situated Software by Clay Shirky• Das Rad (animated short)• Geoffrey West• Mark Pesce• Fred Turner• Robert David SteeleExplore hundreds of related podcast episodes in the archives! This is a public episode. 
If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe
Chrome Federal Credit Union CEO Bob Flanyak Jr. delivers a masterclass in authentic leadership and organizational culture. From collecting loans to building a 220 million dollar credit union recognized as Washington County's best overall business, Bob's journey reveals what truly matters in financial services. He shares powerful insights on succession planning, why employee relationships trump strategy, and how cooperative values create sustainable competitive advantages. Bob's candid discussion about credit union mergers, community impact, and talent development offers both inspiration and practical wisdom. His father's 29 years as a volunteer board member planted seeds that grew into a career philosophy centered on serving people over profits.What You Will Learn in This Episode: ✅ How to build and maintain a strong credit union culture through employee engagement, morning huddles, and living your organizational values rather than simply displaying them on walls.✅ The essential elements of effective succession planning in credit union leadership including balancing internal candidates with external searches while protecting organizational culture and values.✅ Practical strategies for credit union collaboration and financial literacy programs that allow small and medium-sized institutions to compete effectively without merging, including Financial Reality Fairs and student brand development.✅ Why brand building and community impact matter more than traditional metrics, and how cooperative values and relationship-focused leadership create sustainable member loyalty and employee engagement.Subscribe to Credit Union Conversations for the latest credit union trends and insights on loan volume and business lending! Connect with MBFS to boost your credit union's growth today.TIMESTAMPS: 00:00 Intro: Meet Bob Flanyak of Chrome Federal Credit Union06:31 Career journey from CUNA Mutual to retail credit union leadership across multiple states09:45 Chrome Federal Credit Union's history and advice to advance your career in the credit union space14:26 Succession planning strategy and protecting credit union culture during CEO transitions17:36 Addressing credit union mergers and fostering collaboration among small institutions20:54 Financial literacy programs and community impact through Financial Reality FairsKEY TAKEAWAYS:
Ayahuasca is being sold as healing. That's not what she lived.This episode is a real warning from someone who went in.On Wake Up with Miya, I sit down with Christa Black-Gifford, a singer, songwriter, and host of the Head to Heart podcast, to share her journey from growing up as a preacher's kid and touring in the Christian music world to getting pulled into plant medicine and spiritual deception—and how she found her way back to Jesus.We talk about ayahuasca and wachuma, what she encountered in ceremonies, how deception often starts small, and how to build real spiritual discernment when something feels powerful but isn't from God.If you've ever been curious about psychedelics, New Age spirituality, or “plant medicine” healing, this conversation gives you clarity and warning signs you can actually use.Subscribe to Wake Up with Miya for more truth-seeking conversations.For the full extended episode, join me on Patreon for the Plus Side.BUY ME A COFFEE LINKSupport the Show & Stay Connected:Buy Me a Coffee:https://buymeacoffee.com/sensiblehippiehttps://www.youtube.com/@WakeUpWithMiyaJoin My Patreon for ad-free episodes & exclusive content: https://Patreon.com/WakeupwithMiyaIf you're joining Waiola – The Plus Side, please subscribe through a web browser (Safari or Chrome) instead of the app — it directly supports the show.Mahalo nui loa for supporting independent work and helping keep this platform growing.Shop my Amazon Storefront:https://www.amazon.com/shop/profile/amzn1.account.AGYOPCXXGH6MN5RVAKGQWVZUZLEA/list/26B87RB4FZ9W2?ref_=cm_sw_r_cp_ud_aipsflist_6BWRT43TH4MY2NM2XD6XWant to be on the show or have a guest suggestion?Email me at: Miya@wakeupwithmiya.comFollow Me Online:Instagram: https://www.instagram.com/WakeupwithMiyaFacebook: https://www.facebook.com/WakeupwithMiyaExclusive Discount!Shop at LVNTA: https://lvnta.com/lv_IcTq5EmoFKaZfJhTiSUse code OHANA for 20% off!Listen on Your Favorite Platform:Spotify, Apple Podcasts, YouTube, and everywhere podcasts are available!RATE & REVIEW:Apple: https://podcasts.apple.com/us/podcast/wake-up-with-miya/id1627169850Spotify: https://open.spotify.com/show/0UYrXCgma1lJYzf8glnAxyMusic Credits:Beginning: "Echoes in the Shadows" - DK Intro: “At First Light” – LunarehOutro: “Uptown” – PALAEnd Music: “Crazy” - Eko
In this episode, Bryan and Scott gave an overview of their first impressions of 2025 Topps Chrome Formula 1, 2025 Topps Chrome Logofractor, and a brief preview of Sapphire Edition. 0:00 – Intro 10:57 – 2025 Topps Chrome F1 First Impressions 59:58 – 2025 Topps Chrome F1 Logofractor 1:07:55 - 2025 Topps Chrome F1 Sapphire Bryan @Q3Ccards and Scott @P1Castle cover the F1 Sports Card Hobby. We appreciate your support. Please consider leaving a review on Apple Podcasts, Spotify, iHeart, or Amazon Music. Like, subscribe, and enable notifications on YouTube so you never miss a new episode. P1Castle on Fanatics Live https://www.fanatics.live/shops/2ef57b91-f20d-47d6-8aa1-03dbcca54ecc Carbon Cardboard on Apple Podcasts: https://podcasts.apple.com/us/podcast/carbon-cardboard/id1730633164 Carbon Cardboard on YouTube: https://www.youtube.com/@carboncardboardpodcast P1Castle Website: P1Castle.com Q3Cards Website: Q3Cards.com @q3cards https://www.instagram.com/q3cards/
Ivanti zero-days trigger emergency warnings around the globe. Singapore blames a China-linked spy crew for hitting all four major telcos. DHS opens a privacy probe into ICE surveillance. Researchers flag a zero-click RCE lurking in LLM workflows. Ransomware knocks local government payment systems offline in Florida and Texas. Chrome extensions get nosy with your URLs. BeyondTrust scrambles to patch a critical RCE. A Polish data breach suspect is caught eight years later. It's the Monday Business Breakdown. Ben Yelin gives us the 101 on subpoenas. And federal prosecutors say two Connecticut men bet big on fraud, and lost. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Our guest is Ben Yelin, Program Director for Public Policy & External Affairs at the University of Maryland Center for Cyber Health and Hazard Strategies, talking about weaponized administrative subpoenas. Selected Reading EU, Dutch government announce hacks following Ivanti zero-days (The Record) Singapore says China-linked hackers targeted telecom providers in major spying campaign (The Record) Inspector General Investigating Whether ICE's Surveillance Tech Breaks the Law (404 Media) Critical 0-Click RCE Vulnerability in Claude Desktop Extensions Exposes 10,000+ Users to Remote Attacks (Cyber Security News) Payment tech provider for Texas, Florida governments working with FBI to resolve ransomware attack (The Record) Chrome extensions can use unfixable time-channel to leak tab URLs (CyberInsider) BeyondTrust warns of critical RCE flaw in remote support software (Bleeping Computer) Hacker Poland's largest data leaks arrested (TVP World) LevelBlue will acquire MDR provider Alert Logic from Fortra. (N2K Pro Business Briefing) Men charged in FanDuel scheme fueled by thousands of stolen identities (Bleeping Computer) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Bon Jovi – Livin' On A PrayerCheap Trick – SurrenderBerlin – Take My Breath AwayCulture Club – Do You Really Want To Hurt MeBilly Joel – Uptown GirlGeorge Michael – FaithStyx – Too Much Time On My HandsJuice Newton – Queen Of HeartsNeil Diamond – Sweet CarolineJourney – Any Way You Want ItJay & The Americans – Come A Little Bit Close Hosted on Acast. See acast.com/privacy for more information.
Sting – If You Love Somebody Set Them FreeSimple Minds – Don't You (Forget About Me)Tina Turner – We Don't Need Another HeroStevie Wonder – I Just Called To Say I Love YouCyndi Lauper – True ColorsMatchbox Twenty – 3AMMadonna – Crazy For YouSmash Mouth – All StarN'Sync – I Want You BackThe Cardigans – Love Fool CCR Creedence Clearwater Revival – Bad Moon Rising Hosted on Acast. See acast.com/privacy for more information.
Tired of random, low-quality LinkedIn calls or no calls at all? In this episode, Troy Hipolito (a.k.a. the “Not So Boring LinkedIn Guy” and former Swiss-Filipino gamification designer) breaks down how he helps B2B and service-based businesses turn LinkedIn into a consistent pipeline of vetted, high-value meetings. After agency work with Fortune 500 brands dried up, Troy reinvented his business using LinkedIn, combining “slow dating” relationship-building with tight systems and clear daily workflows. The result: a predictable process that leverages videos, content, events, and smart follow-up instead of spammy cold pitches and random hustle. https://youtu.be/Oh-wtfBJAI0 Troy reveals the exact framework behind Skoop, his Chrome extension and dashboard, which boosts first-message-to-booked-meeting conversions from ~3% to 21%, a 6x improvement, using short, raw, hyper-personalized videos embedded directly in LinkedIn DMs. You'll learn how to fix a “jacked up” profile that repels your ideal clients, why you only really make money on LinkedIn in two places (content and DMs), how to structure free/low-ticket/high-ticket offers that feed each other, and how to use VAs, SOPs, and repeatable systems so you only spend 30 minutes a day recording videos while your backend runs like a machine. If you want more (and better) meetings without turning LinkedIn into a full-time job, this conversation gives you the blueprint. Quotes: “There's no single silver bullet. The core is providing value, but you have to distribute that value across multiple channels.” “LinkedIn is not a Facebook marketplace. It's a networking domain. It's about building real relationships, not just ‘buy my stuff.'” “We took that first LinkedIn message from about a 3% conversion to 21%, not to a reply, but to a vetted booked meeting.” “People don't read; they scan. Anything more than three lines is too much. That's why the message has to be context, content, context.” “Slow down and ‘slow date' your clients. The sooner you realize you're not a fit, the sooner you can move on with clarity.” Resources: Troy Hipolito on LinkedIn SKOOP
Subscribe to Throwing Fits on Patreon. Viral movie. This week, Jimmy and Larry are up at the ass crack of dawn to cut a banger on seasonal boot care, cuff gunk, the longest denim circumcision, dog shows and horse racing (handlers vs. jockeys edition), Japan's next new wave is already here, thoughts on Kai Cenat's new brand Vivet and how he is building it, Indian scammers, everything is post-Chrome, James went viral on Twitter which was obviously a nightmare, Timmy wore Ecko but he has bigger problems, the wiggas of old New York, Lawrence's dog is too fucked up for a Super Bowl party, there's no way Bad Bunny doesn't body the halftime show, Winter Olympic fever is here and it's bigger than just hockey gay sex (feat. skeleton phatties and frozen penises), double checking if any fashion folks are in the Epstein Files, whenever a Clinton testifies you know it's gonna be good, Nike reboots ACG again, we prove that Saks is bankrupt because they sucked at buying clothes, and much more.
Our 233rd episode with a summary and discussion of last week's big AI news!Recorded on 01/30/2026Hosted by Andrey Kurenkov and Jeremie HarrisFeel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.aiRead out our text newsletter and comment on the podcast at https://lastweekin.ai/In this episode:Google introduces Gemini AI agent in Chrome for advanced browser functionality, including auto-browsing for pro and ultra subscribers.OpenAI releases ChatGPT Translator and Prism, expanding its applications beyond core business to language translation and scientific research assistance.Significant funding rounds and valuations achieved by startups Recursive and New Rofo, focusing on specialized AI chips and optical processors respectively.Political and social issues, including violence in Minnesota, prompt tech leaders in AI like Ade from Anthropic and Jeff Dean from Google to express concerns about the current administration's actions.Timestamps:(00:00:10) Intro / BanterTools & Apps(00:04:09) Google adds Gemini AI-powered ‘auto browse' to Chrome | The Verge(00:07:11) Users flock to open source Moltbot for always-on AI, despite major risks - Ars Technica(00:13:25) Google Brings Genie 3 'World Building' Experiment to AI Ultra Subscribers - CNET(00:16:17) OpenAI's ChatGPT translator challenges Google Translate | The Verge(00:18:27) OpenAI launches Prism, a new AI workspace for scientists | TechCrunchApplications & Business(00:19:49) Exclusive: China gives nod to ByteDance, Alibaba and Tencent to buy Nvidia's H200 chips - sources | Reuters(00:22:55) AI chip startup Ricursive hits $4B valuation 2 months after launch(00:24:38) AI Startup Recursive in Funding Talks at $4 Billion Valuation - Bloomberg(00:27:30) Flapping Airplanes and the promise of research-driven AI | TechCrunch(00:31:54) From invisibility cloaks to AI chips: Neurophos raises $110M to build tiny optical processors for inferencing | TechCrunchProjects & Open Source(00:35:34) Qwen3-Max-Thinking debuts with focus on hard math, code(00:38:26) China's Moonshot releases a new open-source model Kimi K2.5 and a coding agent | TechCrunch(00:46:00) Ai2 launches family of open-source AI developer agents that adapt to any codebase - SiliconANGLE(00:47:46) Tiny startup Arcee AI built a 400B-parameter open source LLM from scratch to best Meta's LlamaResearch & Advancements(00:52:53) Post-LayerNorm Is Back: Stable, ExpressivE, and Deep(00:58:00) [2601.19897] Self-Distillation Enables Continual Learning(01:03:04) [2601.20802] Reinforcement Learning via Self-Distillation(01:05:58) Teaching Models to Teach Themselves: Reasoning at the Edge of LearnabilityPolicy & Safety(01:09:13) Amodei, Hoffman Join Tech Workers Decrying Minnesota Violence - BloombergSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Malicious Script Delivering More Maliciousness https://isc.sans.edu/diary/Malicious+Script+Delivering+More+Maliciousness/32682 Synectix LAN 232 TRIO Unauthenticated Web Admin CVE-2026-1633 https://www.cisa.gov/news-events/ics-advisories/icsa-26-034-04 Google Chrome Patches https://chromereleases.googleblog.com/2026/02/stable-channel-update-for-desktop.html LookOut: Discovering RCE and Internal Access on Looker (Google Cloud & On-Prem) https://www.tenable.com/blog/google-looker-vulnerabilities-rce-internal-access-lookout
NOTE: When you sign up for Patreon, PLEASE do it through a web browser (Safari, Chrome, etc.) and NOT an app on your iPhone. The Apple app charges 30% !!! If you just click on the link above, it should be fine. In today's episode, Becket Cook welcomes back biblical scholar Denny Burk—professor at Southern Baptist Theological Seminary and president of the Council on Biblical Manhood and Womanhood—to unpack a viral Joe Rogan clip where Texas Democrat U.S. Senate candidate James Talarico claims the Bible is pro-choice. Talarico argues that God asked Mary for consent before the incarnation in Luke 1, framing creation and pregnancy as requiring freedom and bodily autonomy. Burk dismantles this interpretation, showing it's a clear distortion: the Annunciation is God's sovereign announcement, not a negotiation, and Mary's response is humble submission as the Lord's bond-slave. They also refute the outdated "life begins at first breath" claim from Genesis 2:7, highlight the unborn's personhood in Luke (John leaping for joy in Elizabeth's womb), and warn how this eisegesis promotes theological liberalism with destructive consequences for life, marriage, and family. If you're exploring faith, politics, abortion rights, or biblical fidelity amid viral debates, this episode clarifies why Scripture consistently affirms the sanctity of unborn life. Denny Burk's Article: https://wng.org/opinions/a-gifted-manipulator-1753325365 The Becket Cook Show Ep. 229 Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
Hundreds of millions just got an AI glow up and didn't even notice.
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
Conor and Jaeden discuss Google's recent announcements, including Gemini and Chrome, and their implications for the AI landscape. They explore the importance of upskilling organizations with AI mindsets, the innovations in browser technology, and the transformative potential of AI Studio Live. The discussion highlights the shift in user interaction with AI and the future of technology in everyday tasks.Get the top 40+ AI Models for $20 at AI Box: https://aibox.aiConor's AI Course: https://www.ai-mindset.ai/coursesConor's AI Newsletter: https://www.ai-mindset.ai/Jaeden's AI Hustle Community: https://www.skool.com/aihustleWatch on YouTube: https://youtu.be/qrL-bafxNDkChapters00:00 Google's Dominance in AI and Chrome03:06 The AI Mindset and Organizational Transformation06:12 Innovations in Browser Technology08:49 AI Studio Live: A Game Changer11:50 The Future of User Interaction with AI See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
This Day in Legal History: BlockburgerOn February 4, 1932, the United States Supreme Court decided Blockburger v. United States, 284 U.S. 299 (1932), a case that established an enduring rule in American criminal law known as the Blockburger test. This test is used to determine whether two offenses are sufficiently distinct to permit multiple punishments or prosecutions under the Double Jeopardy Clause of the Fifth Amendment.In the case, the defendant was charged with multiple violations of the Harrison Narcotics Act for selling morphine on different occasions. The legal question was whether he could be prosecuted separately for each sale and for selling without proper prescription and for selling not in the original stamped package, even if these occurred during the same transaction.The Court held that each offense requires proof of a fact the other does not. If that's the case, then they are distinct for double jeopardy purposes. This became the “same elements” test, sometimes called the Blockburger test, and it remains a key tool for analyzing double jeopardy claims today.Notably, the test doesn't focus on whether the charges arise from the same conduct or transaction, but on whether each statutory provision requires proof of a fact which the other does not.This legal principle has been cited in thousands of cases, and it continues to shape how prosecutors and courts evaluate overlapping criminal charges.Ryan W. Routh, convicted of attempting to assassinate Donald Trump weeks before the 2024 presidential election, is scheduled for sentencing on Wednesday. Prosecutors are seeking a life sentence, citing months of planning, the use of disguises and multiple cellphones, and Routh's readiness to kill others to carry out the plot. He was arrested near Trump's West Palm Beach golf course in September 2024 after fleeing the scene and leaving behind a rifle and gear resembling body armor. At trial, Routh represented himself, making erratic statements and offering little in the way of a legal defense. He was convicted of five charges, including attempted assassination and illegal firearm possession. Routh claims he did not intend to kill Trump and has requested a 27-year sentence along with psychological treatment. The incident was the second assassination attempt on Trump during the campaign season. Prosecutors emphasized that Routh's actions could have succeeded had it not been for Secret Service intervention. Following the verdict, Routh attempted to stab himself with a pen in court and had to be restrained. Trump praised the conviction, calling Routh “an evil man with an evil intention.”Man convicted of attempting to assassinate Trump to be sentenced | ReutersNetflix Co-CEO Ted Sarandos faced sharp questioning from U.S. senators over the company's proposed $82.7 billion acquisition of Warner Bros Discovery, a deal that could reshape the streaming and entertainment landscape. At a Senate antitrust hearing led by Republican Mike Lee, lawmakers from both parties expressed concern that the merger could reduce competition, limit job opportunities for entertainment workers, and reduce content diversity. Lee warned the deal might let Netflix dominate streaming and steer major Warner Bros franchises away from theaters or rivals. 
Sarandos defended Netflix's position, citing competition from platforms like YouTube, though senators noted YouTube's ad-based model differs from subscription services.The Department of Justice is currently reviewing the merger alongside a competing bid from Paramount Skydance. Paramount's proposal faces financing challenges, and its CEO, David Ellison, has ties to Donald Trump, raising political questions. Democratic Senator Cory Booker questioned Sarandos on whether Trump would influence the deal's approval, a notion Sarandos said he couldn't confirm. Sarandos argued that all viewing time on television is in direct competition, but senators remained skeptical of Netflix's claims that its competition includes ad-supported platforms. The hearing reflects broader unease about consolidation in streaming, and the DOJ's decision will ultimately shape the industry's direction.Netflix co-CEO faces grilling by US Senate panel over Warner Bros deal | ReutersThe U.S. Department of Justice and a majority of state attorneys general are appealing a major antitrust ruling in the case against Google over its dominance in the online search market. Although a federal judge previously determined that Google held a monopoly, he declined to impose significant structural remedies, such as requiring Google to sell its Chrome browser or stop paying Apple to make Google the default search engine on Apple devices. The government's appeal is expected to target this leniency.Google is also appealing the ruling and has requested a delay in compliance with the judge's order to share certain data with competitors while the appeals process is ongoing. The case, originally filed in 2020, marks one of the most significant antitrust challenges against a tech company in decades. The court noted that newer players like OpenAI have recently emerged, potentially altering the competitive landscape.The ruling was widely viewed as a partial win for Google, frustrating regulators who had hoped for broader changes to curb the company's influence in digital advertising and search. The appeal signals continued government efforts to pursue more aggressive antitrust enforcement in the tech sector.US files appeal in Google search antitrust case | Reuters This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
Wanna send us a message? On the show this time we go through the first previews of WWE Topps Chrome 2026, AEW New Arrivals for 2026, go through WARGAMES! in our Coffee Break and discuss the latest in Topps Slam Support the show
Guns N'Roses – November RainHeart – BarracudaDire Straits – Walk Of LifeBryan Adams – (Everything I Do) I Do For YouCorey Hart – Sunglasses At NightGolden Earring – Radar LovePet Shop Boys – West End GirlsDr. Teeth & The Electric Mayhem – Can You Picture That Hosted on Acast. See acast.com/privacy for more information.
Metallica – Master Of PuppetsAC/DC – It's A Long Way To The Top (If You Wanna Rock N' Roll)Bob Seger – ShakedownGeorge Thorogood & The Destroyers – One Bourbon, One Scotch, One BeerTom Petty & The Heartbreakers – RefugeeStevie Nicks – Edge Of SeventeenSteve Miller Band – Abracadabra Eclipse – Ain't No Mountain High Enough Hosted on Acast. See acast.com/privacy for more information.
This year, one of our goals is to catch steelhead through the ice. Winter of 2026 seems to be lining up for good ice. Today, our guest is Mike Durkalec. Mike's full time job is Aquatic Biologist for the Cleveland Metroparks. He is a well known Great Lakes angler. One of his passions is catching salmonid through the ice. In this detailed interview, we get into everything from where and how to find fish in Great Lake harbors, to essential equipment. As with any ice fishing, safety first. No ice is safe, so please be careful. Hopefully, after listening to this podcast you can get out and catch some chrome through the ice. Remember to like, subscribe and follow us on facebook and instagram. Thanks for listening!
The AI Breakdown: Daily Artificial Intelligence News and Discussions
As Meta and Microsoft report earnings, markets are sending a mixed but revealing signal about AI: this doesn't look like a classic bubble fear so much as a judgment about who's winning the AI narrative. Meta is rewarded for aggressive spending paired with visible revenue impact, while Microsoft is punished for caution and slowing cloud growth despite massive backlog demand. The takeaway isn't that investors are fleeing AI—it's that they're increasingly selective about which AI stories they believe will convert spending into growth. In the headlines: SoftBank eyes another $30B into OpenAI, ServiceNow deepens its Anthropic partnership, Microsoft scrambles to respond to Claude Cowork, Google upgrades Chrome with agentic browsing, and Tesla invests $2B into xAI.Brought to you by:KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcastsZencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflowOptimizely Opal - The agent orchestration platform build for marketers - https://www.optimizely.com/theaidailybriefAssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/briefSection - Build an AI workforce at scale - https://www.sectionai.com/LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614Interested in sponsoring the show? sponsors@aidailybrief.ai
Google is adding a tool that will make it easier to generate images using artificial intelligence. Learn more about your ad choices. Visit podcastchoices.com/adchoices
NOTE: When you sign up for Patreon, PLEASE do it through a web browser (Safari, Chrome, etc.) and NOT an app on your iPhone. The Apple app charges 30% !!! If you just click on the link above, it should be fine. In today’s episode, Becket Cook sits down with Jay Dios, who shares his powerful testimony of growing up in a conservative Christian home, struggling with pornography and same-sex attraction from a young age, entering a same-sex marriage, and ultimately experiencing a radical transformation through Jesus Christ. Jay opens up about deconstructing his faith, embracing affirming theology, reading God and the Gay Christian, and even meeting Matthew Vines in person—before God revealed the truth that led him to repentance, freedom, and surrender. Jay explains what ultimately compelled him to divorce his husband, how Scripture reclaimed authority in his life, and why faithfulness to God—not identity or desire—became the turning point. Now rooted at Passion City Church, Jay reflects on obedience, contentment, sanctification, and living for eternity rather than cultural approval. This conversation is for anyone wrestling with faith and sexuality, confused by affirming theology, or seeking clarity on biblical truth, repentance, and redemption through Jesus Christ. The Becket Cook Show Ep. 228 Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
Material 552: The Ghost Inside (Thu, 29 Jan 2026) http://relay.fm/material/552
Andy Ihnatko and Florence Ion. Chrome gets infused with more Gemini in the big update that hit this week. And Google's pushing hard for its AI to be seen as a creative tool.
Links and Show Notes: The new era of browsing: Putting Gemini to work in Chrome · Become the main character in a new city with AI features from Google Arts & Culture · How animators and AI researchers made ‘Dear Upstairs Neighbors' · Star in your own memes: Introducing Me Meme in Google Photos
The company is also previewing a new auto browse feature. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Microsoft just dropped an emergency patch for an Office zero-day being exploited in the wild. A WordPress plugin has a CVSS 10.0 vulnerability — that's the golden goose of hacking. 900,000 Chrome users had their ChatGPT conversations stolen by malicious extensions with Google's Featured badge. And two cybersecurity professionals pleaded guilty to moonlighting as ransomware affiliates. Welcome to 2026. It's gonna be a fun year. In this episode: CVE-2026-21509: Microsoft Office zero-day (security feature bypass) CVE-2026-23550: WordPress Modular DS critical vulnerability Prompt Poaching: Chrome extensions stealing AI conversations Brightspeed breach: Crimson Collective claims 1M+ records Insider threat: Security pros turned BlackCat/ALPHV affiliates Key takeaway: Update your stuff. A patch does you no good if it isn't installed. Subscribe for weekly cybersecurity news, vulnerability breakdowns, and threat intelligence. https://forgeboundresearch.com/podcasts/
Google launches Gemini into the Chrome browser and Amazon ends Palm scanning.Starring Tom Merritt, Sarah Lane, and Jason Howell.Links to stories discussed in this episode can be found here. Hosted on Acast. See acast.com/privacy for more information.
Joel revealed his absolutely chaotic laptop desktop and we needed to know—what's your messy shame that would trigger everyone? A woman on Instagram went viral listing all the types of people who piss her off (looking at you, "hubby" and "nummy" users), so we shared ours including people who say "expresso" and those who constantly say "we have to catch up" but never do. Deni Hines walked out of I'm A Celebrity and we revisited Joel and Chrissy's iconic "we are the show" moment from their season. Harry Styles allegedly has "no loyalty" to Louis Tomlinson after dropping his single on the same day as Louis' album release, and the drama is juicy. We dove into the wildest episodes of My Strange Addiction including people who eat cardboard, snort their meals, and use... unconventional skincare routines.See omnystudio.com/listener for privacy information.
This episode is sponsored by Your360 AI. Get 10% off through January 2026 at Your360.ai with code: INSIDE. On this week's AI Inside, Jeff Jarvis and Jason Howell test Google's new Gemini-powered Auto-Browse Chrome agents, wonder whether Yahoo Scout really matters, question Apple's Gemini-fueled Siri revamp and rumored AI pin, and explore Mozilla's “rebel alliance” bet on open-source AI. Note: Time codes subject to change depending on dynamic ad insertion by the distributor. CHAPTERS: 00:00 - Podcast begins 0:04:30 - Chrome takes on AI browsers with tighter Gemini integration, agentic features for autonomous tasks 0:26:42 - Yahoo Scout looks like a more web-friendly take on AI search 0:38:31 - Apple to Revamp Siri as a Built-In iPhone, Mac Chatbot to Fend Off OpenAI 0:42:59 - Not to be outdone by OpenAI, Apple is reportedly developing an AI wearable 0:47:10 - Mozilla is building an AI ‘rebel alliance' to take on industry heavyweights OpenAI, Anthropic 0:56:14 - Google DeepMind launches AI tool to help identify genetic drivers of disease 0:59:05 - The EU tells Google to give external AI assistants the same access to Android as Gemini has 1:01:07 - Shopify Merchants to Pay 4% Fee on ChatGPT Checkout Sales 1:02:23 - Microsoft announces powerful new chip for AI inference 1:03:50 - EU launches formal investigation of xAI over Grok's sexualized deepfakes Learn more about your ad choices. Visit megaphone.fm/adchoices
It is our favorite time of year: the 2026 Trend Report is here! Caroline, Taryn, and Liz are joined by the Ballard Designs Product Design Team—Hillary Park, and Will Turner—to break down exactly what is coming next in the world of interiors. The team reveals the surprising colors predicted to dominate (including "Green Glow" aka Slime and "Fresh Purple"), why "Builder Khaki" is making a nostalgic comeback, and the specific design aesthetic that bridges the gap between Gen Z and Boomers. They also discuss the move away from gray, the evolution of bouclé, and why your next gallery wall should feature "weird" personal art. Quick Decorating Takeaways: Brown is the New Black: Move over, cool grays. The team confirms that brown—from "Cocoa Powder" to "Builder Khaki"—is the dominant neutral for 2026. It pairs perfectly with the trending warm metals (like nickel) and "dirty" pastels. Embrace "Grandma Crafts": High-tech is out; analog is in. The trend of "Grandma Crafts" is huge, with needlepoint, embroidery, and paint-by-numbers becoming the ultimate way to unwind and decorate. Look for the "North Star": Celestial motifs are having a moment. Look for stars, moons, and zodiac themes in hardware, bedding, and fabrics as people seek direction and meaning in their homes. What You'll Hear on This Episode: 00:00 Welcome to the 2026 Trend Report 01:30 How the team predicts trends (Fashion Snoops, WGSN, Veranda) 04:45 The 5 Big Color Predictions: Transformative Teal, Wax Paper, Fresh Purple, Cocoa Powder, and Green Glow 06:30 The "Slime" Green debate and the board game Hues and Cues 11:00 The resurgence of Khaki and Ralph Lauren nostalgia 14:00 Cornflower Blue: The "Happy" color that isn't going anywhere 16:30 Metals: Why Nickel is overtaking Chrome 20:30 Paint Colors of the Year (Cloud Dancer, Warm Eucalyptus, hidden Gem) 23:00 Material Trends: Leather, colored stains, and the decline of shiny glam 26:00 Is Bouclé over? (Spoiler: It's evolving into skirts) 28:00 The "Nancy Meyers" Aesthetic vs. Maximalism 34:00 Pattern Trends: Lattice, Ribbons, and "Weird" Checks 41:30 Fun Micro-Trends: Cabbage Ware and "Vampire Core" (Oxblood) 43:00 Celestial motifs and the "North Star" theme 54:00 "Weird Art": Why you should frame cigarette packs and personal relics 58:00 The rise of "Grandma Crafts" Also Mentioned: Board Game: Hues and Cues Trend: Nancy Meyers Aesthetic Paint Color: Pantone "Cloud Dancer" Shop Ballard Designs Please send in your questions so we can answer them on our next episode! And of course, subscribe to the podcast in Apple Podcasts so you never miss an episode. You can always check back here to see new episodes, but if you subscribe, it'll automatically download to your phone. Happy Decorating! Learn more about your ad choices. Visit megaphone.fm/adchoices
On this episode of The Staging Area, presented by dcsports87, I'm back with Tory to break down what's really happening in the market right now. We start with the $45,100 Victor Wembanyama White Geometric Auto sale and use it as a lens to talk about something bigger than one card. Are these prices real? Who is actually buying? Why does Topps Chrome basketball keep commanding attention? And what does liquidity really mean in 2026? If you collect modern cards, sell singles, or think about timing and pricing, this conversation will challenge how you look at the market and your own collection. This episode is about clarity, not predictions. A special thank you to dcsports87 for supporting this series. Check out dcsports87 for your eBay consignment needs and visit the dcsports87 eBay store to find great cards ending every night. Get your free copy of Collecting For Keeps: Finding Meaning In A Hobby Built On Hype. Get exclusive content, promote your cards, and connect with other collectors who listen to the pod today by joining the Patreon: Join Stacking Slabs Podcast Patreon [Distributed on Sunday]. Sign up for the Stacking Slabs Weekly Rip Newsletter using this link. Follow dcsports87: | Website | eBay | Instagram | Twitter Follow Stacking Slabs: | Twitter | Instagram | Facebook | Tiktok ★ Support this podcast on Patreon ★
Doombuds, Office 1.0, Telnetd, Chrome, Vishing, Cursed Ralph, PeckBirdy, The Boss, Aaran Leyland, and More on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-550
Edtech Throwdown
Episode 207: Tips and Tricks for 2026: Chrome and Gmail
Welcome to the EdTech Throwdown. This is episode 207, called "Tips and Tricks for 2026: Chrome and Gmail." In this episode, we'll talk about some of our favorite tips for surfing the web with Chrome and sending or receiving emails in Gmail. Hopefully these hacks can help make your day a little easier. This is another episode you don't want to miss. Check it out.
Segment 1: Lost but Now Found
* Restore lost or accidentally closed tabs: Control+Shift+T (Command+Shift+T on a Mac)
* Zoom in and out: Control and +/-
Segment 2: Edtech Tips and Tricks
Google Chrome Tips and Tricks
Nick:
* QR Code: Go to the site > Three Dots > Cast, Save, and Share > Create a QR Code
* Smart PDF Highlighting: Use the "Link to Highlight" feature in Chrome. Right-click any text on a webpage and select "Copy link to highlight." When students click it, they are taken exactly to that sentence on the page.
* Use Google Lens to grab text from images: Go to the three dots > Search with Google Lens > drag around the image with the text you want > choose Copy Text
Guise:
* Add Tab to Group: Right-click on a tab and hit Add to Group.
* Send Tabs Across Devices: If you find a recipe on your laptop but want to take it to the kitchen on your phone, right-click the tab (or the address bar) and select Send to Your Devices. It pops up as a notification on your other device instantly.
* Make the site an app: Go to the site > Three Dots > Save and Share > Install page as app. It will now have its own icon in your Taskbar/Dock and won't get lost in your tabs.
Gmail
Nick:
* Find Large Attachments: Type larger:10m in the search bar to find every email taking up more than 10 MB. It's the fastest way to clear storage space.
* Schedule Send Later
* Templates (Canned Responses): Enable this in Settings > Advanced. Save your standard "Late Work Policy" or "Meeting Request" as a template. To use it, click the three dots in a new draft and hit "Templates."
Guise:
* Undo Send: Go to Settings > See all settings > Undo Send and change it to 30 seconds. It's the ultimate "safety net" for typos.
* Archive
* The "Plus" Addressing Hack: If your email is teacher@gmail.com, you can use teacher+newsletters@gmail.com to sign up for sites. Gmail ignores everything after the +, but you can create a filter to automatically label or skip the inbox for anything sent to that specific "plus" address (see the short sketch after these notes).
Edtech Throwdown: Vote on Twitter @edtechthrowdown and under the pinned post on the...
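The "plus" addressing tip above works because Gmail treats everything between the + and the @ as a tag: mail still lands in the base inbox, but the tagged address survives in the message headers, which is what a filter matches on. As a minimal sketch (not from the episode; plain Python is used only for illustration, and split_plus_address is a made-up helper name), here is one way to recover the base address and tag from a plus-tagged address:

```python
def split_plus_address(address):
    """Split a plus-tagged address into (base address, tag or None).

    Gmail delivers mail for teacher+newsletters@gmail.com to the
    teacher@gmail.com inbox; the "+newsletters" tag only survives in
    the headers, which is what a filter can key on.
    """
    local, _, domain = address.partition("@")   # split local part from domain
    base, _, tag = local.partition("+")         # drop everything after the +
    return base + "@" + domain, (tag or None)

# Example using the address from the show notes:
print(split_plus_address("teacher+newsletters@gmail.com"))
# ('teacher@gmail.com', 'newsletters')
```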
This is the live version of my conversation with Tiffany Haney from the podcast Rooted Frequency. We originally recorded this as a live show, and even though there were some tech hiccups, the conversation itself was really interesting and I wanted to make sure you could still hear it. In this episode, we talk about time travel and future knowledge, The Simpsons and prediction theories, the Ohio Serpent Mound, Chaco Canyon, and other topics that naturally came up during the discussion. On the Plus side, we go deeper into Trump's tweet and the possible code within it, Candace Owens, and the Charlie Kirk incident. If you want to hear the entire conversation, including the extended Plus-side portion, head over to my Patreon Waiola Plus side:
Huey Lewis & The News – Hip To Be Square
U2 – Mysterious Ways
Survivor – Eye Of The Tiger
Ambrosia – Biggest Part of Me
Animotion – Obsession
U2 – I Still Haven't Found What I'm Looking For – Live Choir Version
Styx – Mr. Roboto
Phil Collins – Another Day In Paradise
Jan Hammer – Miami Vice Theme
Hosted on Acast. See acast.com/privacy for more information.
I feel like a campy superhero sidekick could say “sufferin’ scatman!” and it would be funny. Matt: mastodon.cloud/@mattherron Louisa: mastodon.xyz/@Louisa Jeff: Letterboxd.com/jeffjk Please rate, review, and subscribe to our podcast and follow us on Twitter @hackthenetpod or e-mail us at SeeingReddit@gmail.com! Tell your friends if you enjoy the show! Our theme song is Chrome by Podington Bear and is licensed under the Creative Commons Attribution-NonCommercial 3.0 International License.
In this CPQ Podcast episode, Frank Sohn sits down with Vinay Toomu, who leads both ScaleFluidly (CPQ / quote-to-order platform) and CommerceCX (a systems integrator working with Salesforce and Conga). Since Vinay's last appearance in 2023, ScaleFluidly has matured into a full quote-to-order revenue orchestration platform—built on a composable core engine that customers can extend with their own apps. Vinay shares what he sees across real implementations: the biggest wins come from improving adoption, reducing friction for sales teams, and putting the right governance in place. They discuss support for direct sales, partner sales, and ecommerce, ScaleFluidly's low-code/no-code approach, and how their architecture differs for SMB (multi-tenant) versus enterprise (environment separation). The episode also covers newer capabilities like role-based controls, security certifications (ISO 27001 and SOC 2 Type 2), and a Chrome assistant designed to streamline CRM workflows. Finally, they unpack ScaleFluidly's practical view of AI in CPQ—where it works today, what's harder at enterprise scale, and how consolidation in the CPQ market could influence innovation.
Michael Caron - Former PURA commissioner talks electricity! Microsoft OneDrive, TPM talk, RAM talk, old Win 10 PC to be upgraded but… it's 7th gen. Photo downloads, but the HD is full? Should I change my email box? Microsoft commercial for Edge directed users to Chrome!
CISA's acting director assures Congress the agency has “stabilized”. Google and Cisco patch critical vulnerabilities. Fortinet firewalls are being hit by automated attacks that create rogue accounts. A global spam campaign leverages unsecured Zendesk support systems. LastPass warns of attempted account takeovers. Greek authorities make arrests in a sophisticated fake cell tower scam. Executives at Davos express concerns over AI. Pwn2Own Automotive proves profitable. Our guest is Kaushik Devireddy, AI data scientist at Fable Security, with insights on a fake ChatGPT installer. New password, same as the old password. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Today we are joined by Kaushik Devireddy, AI data scientist at Fable Security, discussing their work on "How a fake ChatGPT installer tried to steal my password". Selected Reading CISA Is 'Trying to Get Back on Its Mission' After Trump Cuts (CISA) Google Patches High-Severity V8 Race Condition in Chrome 144 (Beyond Machines) Cisco Patches Actively Exploited Flaw in Unified Communications Products (Beyond Machines) Hackers breach Fortinet FortiGate devices, steal firewall configs (Bleeping Computer) Zendesk ticket systems hijacked in massive global spam wave (Bleeping Computer) LastPass Warns of Phishing Campaign Attempting to Steal Master Passwords (Infosecurity Magazine) Greek Police Arrest Scammers in Athens Using Fake Cell Tower for SMS Phishing Operation (TechNadu) Execs at Davos say AI's biggest problem isn't hype — it's security (Business Insider) Hackers exploit 29 zero-days on second day of Pwn2Own Automotive (Bleeping Computer) Analysis of 6 Billion Passwords Shows Stagnant User Behavior (SecurityWeek) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
NOTE: When you sign up for Patreon, PLEASE do it through a web browser (Safari, Chrome, etc.) and NOT an app on your iPhone. The Apple app charges 30% !!! If you just click on the link above, it should be fine. In today's episode, Becket Cook sits down with Tia Arshad for a deeply moving and powerful testimony of God's radical rescue. Born in Libya to a Christian family, abandoned as a young child in a Pakistani boarding school after her mother's sudden death, and later facing rejection and hardship in the UK, Tia spent eight years living as a lesbian—finding temporary belonging in gay clubs amid heavy drinking, chain-smoking, and profound emptiness. Yet through a series of divine interventions, terrifying dreams, and a life-altering supernatural moment, God dramatically broke through, leading her to encounter Jesus in Scripture and walk away from that life entirely. This raw conversation uncovers the deep impact of trauma, the emptiness of alternative lifestyles, and the unmatched freedom and satisfaction found only in Christ—no going back to “Egypt.” A must-watch if you're wrestling with same-sex attraction, rejection, addiction, doubt in God's love, or wondering how the church can better welcome those leaving LGBTQ+ identities. The Becket Cook Show Ep. 227 Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
Welp. That was wild.