This week on The Bourbon Showdown Podcast, we welcome Bardstown Bourbon Company back to the show as Director of Distillation & Head Distiller Nick Smith joins us in Charlotte, NC to talk all things Bardstown. Nick walks us through his journey to the head distiller chair and shares how Bardstown Bourbon Company has built a reputation for exceptional whiskey through transparency, innovation, and a bold vision from day one. As one of the OGs of this brand, Nick offers firsthand insight into how the brand was built, the philosophy behind their process, and their mission to bring truly outstanding whiskey to whiskey lovers everywhere. It's a behind-the-scenes look at one of the most forward-thinking distilleries in America — and a true whiskey masterclass you won't want to miss.
Swyx joined SAIL! Thank you SAIL Media, Prof. Tom Yeh, 8Lee, Hamid Bagheri, c9n, and many others for tuning into SAIL Live #6 with Nathan Lambert and Sebastian Raschka, PhD. Sharing here for the LS paid subscribers. We covered: This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.latent.space/subscribe
Threat actors break out in under 30 minutes
Claude allegedly hit with distillation attacks
DeFi platform shutting down after crypto theft
Get links to all of today's news in our show notes here: https://cisoseries.com/cybersecurity-news-hacked-in-30-minutes-claude-distillation-defi-shutdown-after-attack/
Thanks to today's episode sponsor, Adaptive Security. This episode is brought to you by Adaptive Security, the first security awareness platform built to stop AI-powered social engineering. Attackers don't need malware anymore; they need trust. Tip: set a simple passphrase for high-risk actions, like wire requests or "urgent" account recovery – especially within finance teams and families. If the caller can't answer it, pause and verify. Adaptive runs deepfake and vishing simulations so employees practice this before it's real. adaptivesecurity.com.
Episode references:
Detecting and preventing distillation attacks
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
OpenAI suddenly thinks intellectual property theft is not cool, actually, amid DeepSeek's rise
AI-augmented threat actor accesses FortiGate devices at scale | Amazon Web Services
Script and presentation: Carlos Cabral
Audio editing: Paulo Arruzzo
Closing narration: Bianca Garcia
Have you ever wondered what trees taste like? That's the thought that spurred our guest, Ben Branson, to launch his latest venture, Sylva. We meet in Essex at his woodland, distillery and lab, where he crafts non-alcoholic drinks that capture the natural flavours of trees. Also the founder of Seedlip, the world's first distilled non-alcoholic spirit, Ben has now turned his focus entirely to nature to make Sylva 'a tree company that makes drinks'. Join us to explore the production process, admire a centuries-old oak that could offer a new flavour, and discover Ben's mission to encourage others to love trees as much as he does. We also learn how Ben's ADHD and autism help fuel his curiosity and innovation, and inspired him to launch The Hidden 20%, a charity, podcast and movement finding and sharing the truth about neurodiversity. Don't forget to rate us and subscribe! Learn more about the Woodland Trust at woodlandtrust.org.uk
Transcript
You are listening to Woodland Walks, a podcast for the Woodland Trust presented by Adam Shaw. We protect and plant trees for people to enjoy, to fight climate change and to help wildlife thrive. Adam: Well, in today's episode of Woodland Walks, I'm off to see a man who invented the world's first distilled non-alcoholic drink. It was called Seedlip and effectively he created it in his kitchen and took it to 40 countries and in the process, I think it's fair to say, helped change the world of adult drinks and it certainly spawned lots of imitators, which you may well know. He also has a different mission. Really, I think it's fair to say his mission is now at least partly to involve the environment in much of what he does. He has, for instance, won two golds at the prestigious Chelsea Flower Show, and last year launched Pollen Projects, which aims to continue to disrupt the drinks industry, in particular, though, by harnessing the flavour powers of trees. And that's something I suppose I've really never thought about. Anyway, so I met him at his farm, his home, his laboratory - they're all more or less the same thing - somewhere in Essex. Ben: My name is Ben Branson. I'm the founder of Sylva. Adam: Fantastic. So first of all, we've lots to talk about. Ben: We do. Adam: Because we are sitting in an amazing, is this a laboratory? Ben: Yeah, I guess. Adam: It doesn't feel like a laboratory, it feels something much more designery. It feels like a Porsche should be made here or something. Anyway, looking over an extraordinary landscape, and we're going to go for a walk through that. But first of all, would you tell me a little bit about what, I suppose, what's brought you here and your history and why, why you're doing what you're doing. Ben: So I grew up on a farm and my family have been farming up in Lincolnshire for 320 years now. So that's nine generations and we're still farming. And I guess that meant I enjoyed a childhood largely outside and trees played a big part of that. Animals did, fields, produce, and hard work, I guess, not from me, but from my family. Adam: Right. So did you ever work on the farm? Ben: I did, yeah. And I would spend summers... Adam: So you got your hands dirty? Ben: Yeah, I would spend summers sitting on a tractor or going and sitting with my grandfather on a combine. Adam: Right. And you came to prominence for something completely different. I mean, not sheep or potatoes or anything, but the non-alcoholic drink called Seedlip. So briefly, how on earth did that happen?
Ben: Yeah, Seedlip was a, I guess, a series of sort of curious accidents. Adam: I suppose we should pause, just tell, for those who don't know, what on earth we're talking about, what is Seedlip? Ben: So Seedlip was the, or is, the world's first distilled non-alcoholic spirit. Adam: Fine. Ben: So botanicals, distilled and made into various different blends that you could have with tonic or in a cocktail. Adam: Will you get annoyed, or is it too crude, to describe it as a non-alcoholic gin? Ben: Too crude in that it is illegal to describe it as a non-alcoholic gin now *both laugh* Adam: Oh okay, not just too crude! Okay, incorrect. But okay, in my mind, I saw it as a... Ben: Akin to. Adam: Akin to. Ben: In that, in that world of a clear, botanical-driven liquid. Adam: Fine. So we know what it is. Ben: We know what it is. I've never worked for somebody else's drinks business before. I've never worked in manufacturing. Yeah, I learned how to distill ingredients partly through YouTube, partly through a book that was originally published in 1651. I have a 1664 copy of it here, called The Art of Distillation, that details apothecaries' experiments for herbal remedies using distillation. So I, this started with me growing herbs at home and going down the rabbit warren of Wikipedia looking at different lists of ingredients. Adam: So why did you want to do that? Ben: Because I'm the kind of guy that has a collection of 4,000 1940s Penguin books. And I'm the kind of guy that learned how to do taxidermy in an ancient church on a family farm in Lincolnshire. And I'm the kind of guy that likes to tinker and experiment and go really deep into things that I'm interested in. Adam: So it was just a, it wasn't, this wasn't, 'I'm going to be a businessman', you just fancied having a go? Ben: Nothing to do with drinks. I was just curious. Adam: Amazing. Ben: And taking mint from my garden and then making a liquid that smells and tastes like that plant was really magical. Adam: I mean, I don't want to talk about Seedlip a lot. Ben: Me neither *laughs* Adam: No, but just, I suppose, the last question on that, there must have come a point at which you've gone, this is a project in my garage to, hey, we could do this. How important was that? How significant a moment was that? Ben: Yeah, so the two key turning points were one, driving into London to go out for dinner, obviously not drinking alcohol and being served the most disgusting, pink, fruity, sweet, horrible mocktail and thinking, why is it so difficult to get a decent option? That was one key moment. And the second key moment was I made 1,000 bottles of Seedlip two years after I'd started distilling and I thought they'd last six months and they sold out in Selfridges in three weeks. And that went from my kitchen to, yeah, 35 countries and set this movement alight, which is all part of, yeah, we're slowly meandering towards now where we've got to today. Adam: So you sold that to Diageo? Ben: I sold the majority of that to Diageo in 2019. Adam: You own a bit of it. Ben: Yes. Adam: And now you're starting a new venture. Ben: Yes. Adam: Which is? Ben: Which is a company called Pollen Projects that has two brands. Seasn, which is a pair of cocktail bitters. So people may know Angostura. Very intense, concentrated, strong liquids that transform your sparkling water or your cocktail. And then my favourite, favourite project, which is Sylva, which is all about trees and making aged non-alcoholic spirits. Adam: So we'll talk about the trees, we'll, let's go for a trip about that.
But before we get into all of those specifics, I think I'm right in saying that you, one of the purposes, one of the sort of foundations of the work you do is a sort of purpose-led business. Again, is that a fair description? Ben: Absolutely. Adam: And the business talks about that. Now, I'm a business journalist. It's now ubiquitous for businesses to go, we're purpose-led and we feel we're strong in the community. And one of the problems for journalists, and I think the public at large, is distinguishing between those who have some sort of genuine purpose here and those who feel we need to add that as our marketing strand. Ben: Yeah. Adam: Do you recognise that issue and if so, how do you overcome that? It must be very difficult to go, no no, I know everyone's saying this, but I really believe it. Ben: I am very, very clear that the reason why I'm doing this is because this is a way in which I can express myself. This isn't work for me. This is how I express myself because it's what I'm interested in. I'm very interested in trees and I'm incredibly curious and I really want to learn. And so I believe that trees are this most incredible, underestimated source of flavour, as well as all the other wonderful things that we already know about trees. And I want people to love trees. And so that is, if I can make a product that meets a need in people's lives and tastes delicious and they want to drink again, that for me is a real win-win rather than, I don't know, any other kind of business purposes or made-up, I'm really clear, like half of this is really selfish. Adam: Selfish in what way? Ben: In the sense that I want to keep working with trees. And I want to explore trees in my working life rather than it being a hobby at the weekends. Adam: And is it about that? Is it about like, I like trees and I want to work in the environment and it's great because I've got a sort of commercial reason to do that. Is there something, I get that, is there something also about social purpose, about feeling that the business should do some good or not? I don't mean to judge it in either way because it's perfectly fine for business not to do that. Ben: Sure. I think it's baked in. Seedlip, Sylva, Seasn. Someone said to me, probably about 12 months ago, they're born good. They don't have any alcohol in them. They are there to offer choice and they are there to include people. That's already baked in, in terms of the product. And so, yeah, there's lots of details we have with Sylva of some of the environmental credentials around our packaging or what we do with our waste, all that, but they are sort of below the surface, as it were. Ultimately, we want people to have a delicious drink and a great option and great choice. Adam: And how important do you think the public feel that sort of role for companies? It feels to me that certainly since COVID, there was a bigger demand for the public to hear companies stand up for something. Do you see that or do you think that was there and has gone away or what's your view on that? Ben: I, or we, are big believers that our brands or the company should have a point of view and part of your company having a point of view is how you're positioned in the market and against your competitors and ultimately what makes you unique and different. Adam: The weather's been so good to us, so I don't want to stay in too long. But I suppose the last question, I read other interviews you've done talking about other business leaders who've inspired you. Who and why? 
Ben: So David Hieatt was one of the team behind howies jeans. And then he went on to, he's a Welshman, he moved back to Wales to a town that used to be famous for making jeans. And over a, I don't know, 10, 12 year period, he got that town making jeans again. And those jeans were typically worn by lots of creative people. I hate jeans, so I never bought a pair. Adam: Sorry, here I am wearing a pair of jeans, you should have said. *both laugh* Ben: I just hate wearing them. I hate wearing them. And I just followed, I can't even remember how I came across him. I followed his work. He then, I was amazed to be included in one of his, he calls them mavericks and makers. I was included in one of his lists of people doing interesting things. And then I was invited to give a talk at his sort of cult-following retreat called the Do Lectures. Adam: The Do Lectures? Ben: The Do Lectures. And it's an amazing retreat on a farm in Wales. Everyone's sort of in wigwams and you kind of, you're in this old, old barn giving this, giving this talk. And I sort of plucked up a bit of courage to actually talk to him, but was quite starstruck actually. And I've just followed his writing, and he came out with a brilliant phrase that we used or adopted or adapted, which was Hiut Denim was an ideas company that made jeans. And I loved that. And we adapted that for Seedlip to be a nature company that made drinks. And I've adapted it again for Sylva to be a tree company that makes drinks. And so we are not just our product and the thing that we make, I guess. So David is, he's a wonderful writer, great thinker, and yeah, I love him. Adam: Okay, brilliant. Which is a good point. You raised the trees, which is why we're here. Let's go hug some. Ben: That's the most important room. That's the wood room. Adam: Okay, so this is, right. Sorry, what's the dog called? Ben: The dog is Pesto. Adam: Pesto, enormous Pesto. Enormous Pesto. So we're in a shed. That's an ultrasound you can hear in the background, and Ben will explain why. So you've got bits of wood with numbers on, so you're just trying out, oh, so you chipped up the wood? Ben: Yeah, we chip up the wood, that's plum. Adam: Right. Ben: So everything's from the UK. That'll be probably apple... So we process all the wood here. Adam: Right. And then, okay, so we can hear the ultrasound next door. So you've got lots of chips of apple, let's say. You dry it out in a domestic oven. You've got a couple of ovens. The point of putting it in this ultrasound is what? Ben: Yeah, so we want to extract the flavour and the character from the wood. So we distill grain in the lab where we just were. And then we fill a keg with the wood chips and the grain distillate. So you've got liquid and wood together. We add lots of oxygen to that to make it a really rich environment. And then we put it in our ultrasound machine. Adam: And the ultrasound does what? Ben: And the ultrasound gets into the wood and forces out all of the aromatic compounds. So we're talking esters, the tannins, the colour, all the bits that taste yummy, we take out, and that's cycling on 28 kilohertz ultrasound at temperature for varying different amounts of time. Adam: And then you have a liquid. Ben: Then we have a liquid. Adam: Which has got flavours in it. Ben: Exactly. And then we're separating the wood from the liquid, and we want all the wonderful flavour from the wood to go into the grain liquid.
Adam: And is that literally just, well, I've got a bit of liquid, I'm going to add a bit of flavour to that? Is that sort of... Ben: Yeah, I mean, this process for me started 14 years ago. There's nobody in the world doing this. I've had to basically develop and create a whole production process. Adam: Wow, amazing. Ben: I'm interested in the whole tree, yeah, and what flavour is there in different parts of the tree, different ages of tree, different growing conditions of trees. I mean, the scope when there's 73,000 tree species is enormous. Adam: So you've got that... Ben: And then we have Silver Walnut, which was a very small run; we only made 300 bottles. All of the wood comes from here. And that was a real, I wanted to try and capture kind of the forest in winter, so a dormant forest. And that uses black walnut wood, sweet chestnut wood, elm wood. We had an elm tree fall down and so we took some of that. And then we sourced some reindeer moss, which is actually a lichen, from Scotland. And so, yeah, Silver Walnut, which comes in packaging made of the forest floor. So mycelium, you get a couple of glasses. Adam: So are you never cutting down a tree? Are you taking bits of it? Ben: I'm not saying we're never cutting down a tree, but we are being very choiceful with how we source and where it comes from. And look, trees are falling down all the time. Adam: Yeah, no, I understand. Ben: And we don't need to... Yeah, we can, basically, we can use a very small amount of wood for a lot of bottles. Adam: And what happens, so you get all this wood, you've chipped it up, you're extracting all the flavours, now you have a lot of wood without any flavour in it. So what happens to all that stuff? Ben: So the spent wood, two things. One, at the moment, because we are small and new and kind of figuring out what we're doing, everything goes back to the forest. So to compost, back to the forest floor. Adam: You just spread it around? Ben: Yeah. What we'd, I mean, we can use it as mulch in the orchard. What we'd love to do is, you know, I know we could dry that wood out and make incense from it, for example. I know that we could dry that wood out and make a surface. And there are lots of, there's a terrazzo-type product called ferrazzo. Adam: I don't know what those are. Ben: Terrazzo is the, you'll see it, it's the speckly kitchen surface that's got bits of ceramic in; well, somebody's launched ferrazzo, with bits of wood in. Adam: *laughs* Okay fair enough. Ben: So yeah, I think there's a lot that we could do in the future. We can't eat wood, because our bodies can't process lignin. But in terms of, yeah, the afterlife of what happens when we've extracted the flavour and the colour, there's going to be options. Adam: So this whole area of using trees for a drink, I've not, I mean, I'm ignorant of loads of stuff, so maybe this is common and I just don't know about it, but how novel is this? Ben: It's, if you think about maple syrup, birch water, we've tapped trees for a long time. And then you think about aged alcoholic spirits, specifically whisky, I guess. And then you think... Adam: And oaked wines. Ben: And oaked wines. Or you think about barbecued food, smoked food. We actually do have this connection and a lot of history in terms of the flavour and power of wood for things that we kind of consume. But in non-alcoholic drinks, yeah, not in the process that we're using or to the breadth of trees that we're working with. Adam: It's quite primal in a way, the way you describe it there.
You can imagine early cooking would have taken place on wood, wouldn't it? Ben: Yes, yes. Adam: So, and then we go, oh, actually that tastes quite nice, that sort of woody flavour to it. Ben: I liken it to, or the picture I have, the most perfect rose-tinted picture I have in my mind is, I am sat by a fire, a wood fire, on a wooden chair, at a wooden table, with a piece of paper, and a pen that uses oak gall ink. Adam: Right, yeah. Ben: And I am drinking, probably from a wooden vessel, some Sylva. Adam: OK. Ben: And that is, you know, that's kind of, that's pretty heavenly, I think, in my head. Adam: Have you ever... Ben: Here's a sweet chestnut tree. Adam: Have you ever written with oak gall? Ben: I haven't. Have you? Adam: Yes. Ben: Have you! And? Adam: Yeah, I mean... *both laugh* We just did it in the forest, so I'm sure you can improve the quality of the ink. But it is extraordinary that you go, I think, I could have got this wrong, but I think Shakespeare wrote with oak gall. Ben: Absolutely. A thousand years of printing history. Adam: Yeah, I mean, it is extraordinary and it sort of worked. You know, it wasn't great, but then we didn't know what we were doing. So it is interesting that you just go, take that off the tree, grind that up, let's write. And you go, it worked. That's extraordinary that that worked at all, really. So yeah, yeah. *both laugh* Ben: All because of a little wasp. I mean, it's kind of... Yeah, it is it is wild. Adam: Amazing. So I know you're running a business and this is both your home and then the business and whatever, but what do you think about the environmental debate? We live in interesting times where, I mean, even really recently, people have been talking about moving to net zero and then lots of very serious political figures talking about, well, no, that's actually not going to work and stepping back from commitments to electric cars, and I think politicians are doing that partly because they feel there isn't the public support for the costs of supporting the environment. What's your take on all this? Ben: I've been probably a few, maybe three, four years ago, I was really hopeful. I think there was some real energy behind COP and there was some just, there was, it felt like there was just a lot happening. And then the last couple of years, I guess, I felt less hopeful in terms of the, sustainability has lost its edge and lost, maybe just lost being a priority. Or we've got bored of it or lots of things have been set up which are brilliant and there is a bit of lack of interest from the public. Or we've stopped worrying so much about the future of the environment because other things have come in for us to worry about. Adam: Right, so you think we have a limited scope for worrying and that's full? Ben: Yeah, I do. So I don't know, I kind of, you know, and it's obviously it's incredibly geopolitical and dependent on the time in terms of who's in charge and therefore what energy this gets given and therefore what then seeps into the media, the narrative, the public discourse on this. And I can't speak for everybody, but if I had a sense, it would probably be, I'm doing my bit now. You told me I need to recycle this or turn this off or get an electric car or I don't know, like I'm doing that. Adam: Yeah. Ben: So what are you guys doing? You know, I don't know. Not that I'm not hopeful, but I feel like the sentiment has become less hopeful. Adam: You think these things change? Ben: Absolutely. I mean, look at the, yeah, I, if I... 
hold on to the last 10 years of seeing our attitudes towards alcohol and the non-alcoholic drinks options to now where we are, things can change. Adam: Yeah. It is interesting. I mean, which way? Because we've got... Ben: I just wanted to draw this wonderful oak tree to your attention. Adam: Oh I see, yes. This is something from Harry Potter, the great Whomping Willow. Ben: The Whomping Willow. Adam: That's right, yeah, which it's not a willow, but go on. Ben: So our woodpecker... Adam: Oh, yes, look at that. Ben: Look at that. I mean, absolutely perfect, perfect hole. So this oak tree, probably at least 400 years old and struck by lightning last year. Adam: Wow, is that what the damage we're seeing? Ben: This natural char. Adam: Yeah, I was going to say, we can see this very charred bit of it. Gosh, and there's bits fallen down, is that from the lightning strike? Ben: This is what we've taken down. Adam: Oh, you've taken that down. Ben: So that is naturally charred. So to me that is... Adam: Ahh, is that flavour? Ben: Yeah. Adam: Everyone else goes, oh my God, the tree got hit by lightning. Ben rushes out and goes, fantastic, a new flavour! Ben: I am, that takes me to A, we've got some of that back at the lab. So we've, we've seen what it tastes like, which is wonderful. B, this is what barrels do to the, you know, it's what they do to the inside of a barrel, they char it. And C, I start thinking, how can you engineer lightning to strike wood? Adam: Right. Ben: Not a tree, but wood. Adam: Right, okay. Ben: To create this natural char. Adam: Okay, amazing. So we might see that in a drink sometime soon. Ben: *laughs* Yeah, we love, we love this tree. Adam: You also run, well not run, you present a podcast about ADHD. Is that correct? Ben: I set up a neurodiversity charity two years ago following my autism and ADHD diagnosis. And yeah, we set up a podcast called The Hidden 20% where on a weekly basis I sit down with everyone from neuroscientists to top researchers, psychologists, celebrities, people running neurodiversity charities. And yeah, we kind of try and get to the truth. Adam: And you having ADHD, is that significant for you? Ben: It's significant in the sense that I didn't know that I was autistic and ADHD until I was 39. That's quite significant, and that's been a big learning. Adam: But whether it was diagnosed or had a label or whatever, is sort of separate from what I was trying to ask, you must have noticed some characteristics? Ben: Oh, I was the last to know, apparently. Adam: Right. You didn't feel, or even looking back on it now, you don't feel that your ADHD has had some sort of influence on what you've done? Ben: It's my brain, so it absolutely has influenced everything that I've done. But given that I saw my first psychiatrist when I was 8, and I've seen multiple psychologists, psychiatrists, you know, I've been in rehab in my early 20s, and no one ever, ever had talked about autism, ADHD. And so to get to 39, and I'm not alone, unfortunately, there's a huge amount of people who've been missed, because we thought it was only little boys. Adam: Right. Do you think it's been, whatever challenges or difficulties that's brought, in looking in retrospect, do you think it's brought some positives as well? Ben: Oh, I think one of the biggest challenges around people and understanding or having more understanding around neurodiversity is that it's not all bad and that it's not a disease.
And there are huge, you know, I have, I'm a synesthete, so I can taste colour and I see flavour and colour. Adam: Ok so that's a very clear benefit isn't it! Ben: Really helpful. I have a pretty photographic memory, which is incredibly helpful when you're analysing or trying to memorise lots of different plants or trees and behaviour around how a tree performs. Adam: Seems to me you're also very focused. Is that fair? Ben: Very focused. Adam: And that's often a symptom, isn't it, super focus? Ben: Yeah, so we talk about, in ADHD, people talk about hyper-focus. And in autism, people talk about special interests. Adam: Right. Ben: And I have both of those *laughs*. And trees, so trees is my special interest and being ADHD allows me to hyper-focus on that. I'm only learning that I can harness it and use it and I have a really good understanding of how my brain works now and that's massively empowering. Adam: Okay, brilliant. All right. Well, you've taken us on a circuit. We're back to, not the shed, that's a terrible... Ben: The lab. Adam: The barn. Very nice barn. So shall we go back in? Is there something to taste? Ben: Yeah, I think we should have a drink. Adam: I shouldn't leave without tasting it. Brilliant. Ben: No *laughs* You can go and see all this apple wood as well. Adam: Oh yeah. Oh look, the apple wood van is leaving. So has he deposited his apple wood? Yes. Okay, that was quick. So while Ben prepares some rather nice non-alcoholic tree tipples for us, I wanted to take this opportunity of thanking you for joining us on this particular podcast. And wherever you are and whenever you do it, I wanted to wish you from all of us, to all of you, some very happy wanderings. Thank you for listening to the Woodland Trust Woodland Walks. Join us next month when Adam will be taking another walk in the company of Woodland Trust staff, partners and volunteers. And don't forget to subscribe to the series on iTunes or wherever you are listening. And do give us a review and a rating. If you want to find out more about our woods and those that are close to you, check out the Woodland Trust website. Just head to the Visiting Woods pages. Thank you.
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

—Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic “Latency numbers every programmer should know”
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript
Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier. Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there. Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together like this. Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah. Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it? Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
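To make the Pareto framing concrete, here is a toy sketch (all model names and numbers are invented for illustration, not Google figures) of how you might select a frontier from a set of models scored on cost and quality: a model survives only if nothing else is simultaneously cheaper and better.

```python
# Toy sketch: the Pareto frontier over (cost, quality) model points.
# A model is dominated if some other model is at least as cheap AND
# at least as good; the frontier is everything left over.

def pareto_frontier(models):
    """models: list of (name, cost, quality); lower cost / higher quality is better."""
    frontier = []
    for name, cost, quality in models:
        dominated = any(
            other_cost <= cost and other_quality >= quality
            and (other_cost, other_quality) != (cost, quality)
            for _, other_cost, other_quality in models
        )
        if not dominated:
            frontier.append((name, cost, quality))
    # Sort by cost so the frontier reads from cheapest to most capable.
    return sorted(frontier, key=lambda m: m[1])

# Hypothetical numbers, purely illustrative:
models = [
    ("flash-lite", 0.10, 62.0),
    ("flash",      0.30, 71.0),
    ("pro",        1.25, 80.0),
    ("old-pro",    1.25, 70.0),  # beaten by "flash" on both axes: dominated
]
print(pareto_frontier(models))   # old-pro drops out; the rest span the frontier
```

The strategic point in the conversation maps onto this picture directly: you want entries at both ends of the sorted list, not just the most capable one.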
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah. Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014. Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah. Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about, in the next generation of models, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah. Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model. Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it. Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed.
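The logits-as-soft-supervision idea Jeff describes here is the formulation from the 2015 Hinton, Vinyals & Dean distillation paper. A minimal PyTorch sketch, with the temperature and mixing weight as illustrative hyperparameters rather than anything from a Google codebase:

```python
# Minimal sketch of knowledge distillation: the student matches the
# teacher's temperature-softened logits in addition to the hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: the teacher's distribution at temperature T carries
    # information in the relative probabilities of the wrong classes.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes match the hard-label term
    # Hard targets: ordinary cross-entropy on the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Example: a batch of 4 examples over a 10-way output.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.tensor([1, 3, 0, 7])
loss = distillation_loss(student, teacher, labels)
loss.backward()
```

This is also why "many passes over the data" pays off, as Jeff notes: the soft targets are a much richer signal per example than a one-hot label.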
You can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow. Shawn Wang [00:07:02]: So, Dara asked: so the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode? Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have, and also inference time scaling can be a useful thing to improve the capabilities of the model. Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day. Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up. Shawn Wang [00:07:50]: No, I mean, it's just the economics-wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's, yeah, it's in everything. Jeff Dean [00:08:02]: We're using it more in our search products, for various AI Mode overviews. Shawn Wang [00:08:05]: Oh, my God. Flash powers the AI mode. Oh, my God. Yeah, that's, yeah, I didn't even think about that. Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do, because you're going to ask now not just write me a for loop, but write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs; the interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.
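A quick back-of-the-envelope on why latency dominates once tasks stop being "write me a for loop": assume, purely illustratively, an agentic task that requires the model to emit around 100k tokens end to end. The 10,000 tokens/sec figure echoes the target mentioned in the show notes.

```python
# Illustrative arithmetic: wall-clock time for a 100k-token agentic task
# at different decode speeds. Numbers are invented for illustration.
task_tokens = 100_000
for tokens_per_sec in (50, 500, 10_000):
    minutes = task_tokens / tokens_per_sec / 60
    print(f"{tokens_per_sec:>6} tok/s -> {minutes:6.1f} minutes")
# 50 tok/s  -> ~33 minutes: unusable interactively
# 500 tok/s -> ~3.3 minutes: tolerable
# 10k tok/s -> ~10 seconds: a qualitatively different UX
```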
Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for, like, the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about the capability as: certain tasks, the Pro model today has saturated. So next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to, like, keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that. Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better. Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or, like, test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you keep pushing the team internally, like, this is what we're building towards? Yeah. Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is the benchmark is trying to assess, and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, because it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have, um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it that we need different kinds of data to train on that are more specialized for this particular kind of task?
Do we need, um, you know, a bunch of architectural improvements or some sort of model capability improvements, you know, what would help make that better? Shawn Wang [00:12:53]: Is there, is there such an example, a benchmark that inspired an architectural improvement? Like, uh, I'm just kind of jumping on that because you just... Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really was about looking at, okay, we want to have, um, you know... Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts, and I was like, how did everyone crack this at the same time? Right. Yeah. Yeah. Jeff Dean [00:13:23]: I mean, I think, um, once you're saturated, I mean, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something, and models actually have, you know, much larger than 128K these days. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The opportunities to explore there are fairly large. But the single needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context, that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing? Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where, yeah, you're going to win short term; longer term, I don't know if that's going to scale. You might have to undo that. Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that with a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. And, you know, on a personal Gemini level, you could attend to all of your personal state, with your permission: your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens, right, in a meaningful way? Yeah.
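The quadratic wall Jeff mentions is easy to make concrete: vanilla self-attention scores every token against every other token, so doubling the context quadruples the work. A rough sketch, orders of magnitude only:

```python
# Back-of-the-envelope: pairwise query-key score count per layer for
# vanilla (dense) attention grows as n^2 in the number of tokens.

def attention_pairs(n_tokens: int) -> float:
    return float(n_tokens) ** 2  # pairwise scores, per layer, per head

for n in (100_000, 1_000_000, 1_000_000_000, 1_000_000_000_000):
    print(f"{n:>16,} tokens -> {attention_pairs(n):.1e} score computations")

# 1M tokens -> 1e12 pairs: expensive but feasible, roughly today's frontier.
# 1T tokens -> 1e24 pairs: hopeless by brute force, which is why the goal
# becomes the *illusion* of attending to trillions of tokens via retrieval.
```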
Shawn Wang [00:16:26]: But by the way, I think I did some math, and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which, like, very comfortably fits. Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos... Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into, like, proteins and whatever else is extremely information dense. Yeah. Yeah. Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from, say, Waymo vehicles, or like robots, or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe it doesn't make sense in terms of trade-offs of, you know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tells the model that this is a thing. Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, like modalities that supersede all the other modalities? So a simple example was, vision can, on a pixel level, encode text. And DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also, like, a vision-capable thing. Like, so, so maybe vision is just the king modality and, like... Yeah. Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion. Well, like video as opposed to static images, because, I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be able to do: interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things. Yeah. Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice. Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks.
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of as, like, turning video into a SQL-like table. Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is maybe much broader in search and span versus the more human one? Yeah. Jeff Dean [00:20:47]: I mean, I think even pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with, you know, maybe 30 million interesting tokens. And then how do you go from that into the 117 documents I really should be paying attention to in order to carry out the task that the user has asked? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that, that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.
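A sketch of the funnel Jeff describes: progressively more expensive scorers narrowing a huge candidate pool down to a final handful. The scorers below are crude stand-ins for the real lightweight and heavyweight rankers, and the corpus is synthetic.

```python
# Illustrative retrieval cascade: cheap scorers prune aggressively, costly
# scorers run only on what survives. Stage sizes echo the conversation
# (30,000 candidates down to 117 documents).
from typing import Callable, List, Tuple

Doc = str
Scorer = Callable[[str, Doc], float]

def cascade(query: str, corpus: List[Doc], stages: List[Tuple[Scorer, int]]) -> List[Doc]:
    """Each stage is (scorer, keep_top_k); later stages are costlier but better."""
    candidates = corpus
    for scorer, keep in stages:
        candidates = sorted(candidates, key=lambda d: scorer(query, d), reverse=True)[:keep]
    return candidates

# Stand-in scorers of increasing cost:
cheap   = lambda q, d: sum(w in d for w in q.split())          # term overlap
medium  = lambda q, d: cheap(q, d) / (1 + abs(len(d) - 40))    # + a length prior
careful = lambda q, d: medium(q, d)  # imagine a cross-attention model here

corpus = [f"doc {i} about solar panel deployment" for i in range(100_000)]
top = cascade("solar panel deployment report", corpus,
              stages=[(cheap, 30_000), (medium, 1_000), (careful, 117)])
print(len(top))  # 117, the handful the most capable model actually reads
```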
Shawn Wang [00:22:47]: Yeah. I often tell people who aren't steeped in Google search history that BERT was put into Google search basically immediately, and that improved results a lot, right? I don't have the numbers off the top of my head, but I'm sure you do; those are obviously the most important numbers at Google.

Jeff Dean [00:23:08]: I mean, going to an LLM-based representation of text and words enables you to get away from the explicit, hard requirement that particular words appear on the page, and really get at the notion that the topic of this page, or this paragraph, is highly relevant to this query.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over these very high-traffic systems. It's Google, it's YouTube. YouTube has this semantic ID scheme where every item in the vocabulary is a YouTube video, predicted via a codebook, which is absurd to me at YouTube's scale. And most recently there's Grok at xAI as well.

Jeff Dean [00:23:50]: I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: Do you have a history of what that progression looked like?

Jeff Dean [00:24:09]: I actually gave a talk at the Web Search and Data Mining conference in 2009. We never actually published papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. One of the things that happened in 2001 was that we were working to scale the system in multiple dimensions. First, we wanted to make our index bigger, so we could retrieve from a larger index, which generally helps quality: if you don't have the page in your index, you're not going to do well. We also needed to scale our capacity, because our traffic was growing quite extensively. So we had a sharded system, where you have more and more shards as the index grows. You have, say, 30 shards, and if you want to double the index size, you make it 60 shards, so that you can bound the latency with which you respond to any particular user query. And then, as traffic grows, you add more and more replicas of each of those shards. We eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we had 1,200 machines with disks, and one copy of the index would actually fit in memory across those 1,200 machines. So in 2001 we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them involved a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. Once you have the whole index in memory, it's totally fine to throw 50 terms into the query from the user's original three- or four-word query, because now you can add synonyms: restaurant and restaurants and cafe and bistro and all these things. And you can suddenly start really getting at the meaning of the words, as opposed to the exact form the user typed in. That was 2001, very much pre-LLM, but it was about softening the strict definition of what the user typed in order to get at the meaning.
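The arithmetic behind that 2001 switch is easy to reproduce. A back-of-the-envelope sketch: the 60 shards and 20 replicas come from the anecdote, while the per-machine RAM and total index size are era-plausible guesses rather than reported figures:

```python
# Back-of-the-envelope for the 2001 in-memory index decision.
# Shard and replica counts are from the anecdote; sizes are guesses.

shards   = 60                      # index split 60 ways to bound per-query latency
replicas = 20                      # copies of each shard to absorb traffic
machines = shards * replicas       # 1,200 machines, each with disks

ram_per_machine = 2 * 1024**3      # assume ~2 GB RAM per machine (guess)
index_size      = 1.5 * 1024**4    # assume ~1.5 TB total index (guess)

total_ram = machines * ram_per_machine
print(f"cluster RAM {total_ram / 1024**4:.1f} TB vs index {index_size / 1024**4:.1f} TB")
assert total_ram >= index_size
# One full copy of the index fits in aggregate RAM, so an expanded 50-term
# query costs memory lookups instead of 50 disk seeks on each of 60 shards.
```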
Alessio Fanelli [00:26:47]: What principles do you use to design these systems, especially when, in 2001, the internet is doubling or tripling every year in size? I think today you see that with LLMs too, where every year the jumps in size and capability are so big. Are there principles you use to think about this?

Jeff Dean [00:27:08]: First, whenever you're designing a system, you want to understand which design parameters are going to be most important. How many queries per second do you need to handle? How big is the internet, and how big is the index you need to handle? How much data do you need to keep for every document in the index, and how are you going to look at it when you retrieve things? What happens if traffic were to double or triple: will the system still work well? A good design principle is to design the system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that, because often what happens is that if you design a system for X and something suddenly becomes a hundred X, a very different point in the design space opens up, one that would not make sense at X but all of a sudden makes total sense at a hundred X. Going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because you then have enough replicas of the on-disk state that those machines can collectively hold a full copy of the index in memory. That suddenly enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, playing with the design space a little, before you actually write a lot of code. But as you said, in the early days of Google we were growing the index quite extensively, and we were growing the update rate of the index. The update rate is actually the parameter that changed the most, surprisingly. It used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in under a minute.

Shawn Wang [00:29:02]: Because that's a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, if you've got last month's news index, it's not actually that useful.

Shawn Wang [00:29:11]: News is a special beast. You could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product. But you also want news-related queries that people type into the main index to be updated.

Shawn Wang [00:29:23]: Interesting. And then you have to classify pages: decide which pages should be updated, and at what frequency.
Jeff Dean [00:29:30]: Oh yeah. There's a whole system behind the scenes trying to decide update rates and the importance of pages. Even if a page's update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they changed might be low, but the value of having them updated is high.

Shawn Wang [00:29:50]: Yeah. This mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: Latency Numbers Every Programmer Should Know. Is there a story behind that? Did you just write it down?

Jeff Dean [00:30:06]: It has eight or ten different kinds of metrics: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands?

Shawn Wang [00:30:21]: Why the Netherlands, by the way?

Jeff Dean [00:30:25]: We had a data center in the Netherlands. I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients, and you can use them to say: okay, if I need to design a system to do image search and thumbnailing of the result page, what would I do? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? You can actually run those thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then, as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to look up something in each particular kind of structure.
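The list itself is widely circulated; a rough rendering of the canonical, circa-2012 figures follows (ballpark orders of magnitude for building intuition, not measurements of any current machine):

```python
# "Latency Numbers Every Programmer Should Know": commonly cited ballparks,
# in nanoseconds, useful only for back-of-the-envelope reasoning.
LATENCY_NS = {
    "L1 cache reference":                        0.5,
    "branch mispredict":                         5,
    "L2 cache reference":                        7,
    "mutex lock/unlock":                        25,
    "main memory reference":                   100,
    "compress 1 KB with Snappy":             3_000,
    "send 1 KB over a 1 Gbps network":      10_000,
    "read 4 KB randomly from SSD":         150_000,
    "read 1 MB sequentially from memory":  250_000,
    "round trip within one datacenter":    500_000,
    "read 1 MB sequentially from SSD":   1_000_000,
    "disk seek":                        10_000_000,
    "read 1 MB sequentially from disk": 20_000_000,
    "packet CA to Netherlands to CA":  150_000_000,
}
```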
Shawn Wang [00:31:51]: ...which is a simple byte conversion; that's nothing interesting. I wonder, if you were to update your list...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference. Often a good way to view that is: how much state will you need to bring in from memory, whether on-chip SRAM, HBM attached to the accelerator, DRAM, or over the network, and how expensive is that data motion relative to the cost of an actual multiply in the matrix-multiply unit? That cost is really, really low: depending on your precision, I think it's sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy.

Jeff Dean [00:32:52]: Yeah. It's all going to be about energy, and how you make the most energy-efficient system. And moving data from the SRAM on the other side of the chip, not even off the chip, but the other side of the same chip, can be a thousand picojoules. So all of a sudden, this is why your accelerators require batching: if you move, say, a parameter of the model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of the thing you moved many, many times. That's where the batch dimension comes in. If you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost, and the compute-cost inefficiency you get, is quite large.

Shawn Wang [00:34:04]: Is there a similar trick to the one you pulled with putting everything in memory? Obviously Groq has caused a lot of waves betting very hard on SRAM. I wonder if that's something you already saw with the TPUs; to serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?

Jeff Dean [00:34:33]: Yeah. TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. For serving some kinds of models, you pay a much higher cost, and more latency, bringing things in from HBM than bringing them in from SRAM on the chip. So if you have a small enough model, you can do model parallelism, spreading it out over lots of chips, and you get quite good throughput and latency improvements from doing that. You're now striping your smallish model over, say, 16 or 64 chips, and if it all fits in SRAM, that can be a big win. So that's not a surprise, but it is a good technique.
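Dean's thousand-to-one ratio makes the batching argument quantitative. A tiny sketch using the round numbers from the conversation (1,000 pJ per weight moved, roughly 1 pJ per multiply); everything else is arithmetic:

```python
# Energy amortization of batching. Moving one parameter from far SRAM to the
# multiplier costs ~1000 pJ; the multiply itself costs ~1 pJ (round numbers).
MOVE_PJ = 1000.0
MAC_PJ  = 1.0

def energy_per_multiply(batch: int) -> float:
    # The move is paid once per parameter, amortized over `batch` multiplies.
    return MOVE_PJ / batch + MAC_PJ

for batch in (1, 8, 64, 256):
    print(f"batch={batch:4d}: {energy_per_multiply(batch):7.1f} pJ per useful multiply")
# batch=1 pays ~1001 pJ per multiply; batch=256 pays ~4.9 pJ.
```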
Alessio Fanelli [00:35:27]: What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50, and is it worth designing a new chip to do that? The extreme is when people say you should burn the model onto an ASIC. How much is worth doing in hardware when things change so quickly? What's the internal discussion?

Jeff Dean [00:35:57]: We have a lot of interaction between the TPU chip-design and architecture team and the higher-level modeling experts, because you really want to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. As a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center; then it has to have a reasonable lifetime as a chip, which takes you three, four, five years out. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field. Having people with interesting ML research ideas, things we think will start to work or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N+2, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Sometimes you can squeeze some changes into N+1, but bigger changes require the chip design to be earlier in its lifecycle. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if they work out, they make something ten times as fast; if they don't work out, well, you burned a tiny amount of chip area, which is not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go.

Alessio Fanelli [00:37:58]: Is there a reverse of that, where you've already committed to a chip design, so you can't take the model architecture a certain way because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. You definitely have cases where you adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So it goes both ways. And sometimes you can take advantage of, say, lower-precision features coming in a future generation: you might train at that lower precision even if the current generation doesn't quite do it.

Shawn Wang [00:38:40]: How low can we go in precision? People are talking about ternary.

Jeff Dean [00:38:43]: I'm a big fan of very low precision, because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of very low-bit-precision representations combined with scaling factors that apply to a whole block of those weights.
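What Dean is describing is block-scaled quantization: store each block of weights in a few bits, plus one shared higher-precision scale factor per block. A minimal NumPy sketch; the block size of 32 and the int4-style range are illustrative choices, not a TPU format:

```python
import numpy as np

def quantize_blockwise(w: np.ndarray, block: int = 32):
    """Quantize to int4-style values in [-7, 7] with one float scale per block."""
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0 + 1e-12  # per-block scale
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_blockwise(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q * scale).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_blockwise(w)
print(f"mean abs error: {np.abs(w - dequantize_blockwise(q, s)).mean():.4f}")
# ~4 bits per weight moves 4x fewer bits than fp16, at picojoules per bit.
```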
Shawn Wang [00:39:15]: Interesting. So, low precision, but with shared scale factors. Huh. Never considered that. While we're on this topic: the concept of precision at all is weird when we're sampling. At the end of all this, we have chips that do very exact math, and then we throw a random-number generator at the output. There's a movement toward energy-based models and processors. You've obviously thought about it; what's your commentary?

Jeff Dean [00:39:50]: Yeah. I mean, there are a bunch of interesting trends there. Energy-based models are one. Diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way you can get an equivalent effect with a very small...

Shawn Wang [00:40:06]: Draft model.

Jeff Dean [00:40:07]: ...batch factor: you predict eight tokens out, which enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a five-X improvement in the amortization of moving weights into the multipliers to do the prediction for those tokens. These are all really good techniques, and I think it's really good to look at them through the lens of energy, real energy, not energy-based models, and also latency and throughput. That lens guides you to solutions that are better at serving larger models, or equivalent-size models, more cheaply and with lower latency.
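The speculative-decoding arithmetic in that answer, made concrete; the draft length of eight and the five-to-six accepted tokens are the example figures from the conversation:

```python
# Speculative decoding amortization: a small draft model proposes 8 tokens,
# the big model verifies them in one batched pass and accepts the matching
# prefix, so one movement of the big model's weights yields several tokens.

def big_model_passes_per_token(avg_accepted: float) -> float:
    # Each verification pass moves the weights once, yielding ~avg_accepted tokens.
    return 1.0 / avg_accepted

vanilla = big_model_passes_per_token(1.0)   # plain autoregressive decoding
specdec = big_model_passes_per_token(5.5)   # accept 5-6 of the 8 drafted tokens
print(f"weight-movement amortization improves {vanilla / specdec:.1f}x")
```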
Shawn Wang [00:41:03]: Yeah. Energy-based approaches are appealing intellectually; I haven't seen them really hit the mainstream. But I do think there's some poetry in the idea that we wouldn't have to do a lot of shenanigans if we fundamentally designed this into the hardware.

Jeff Dean [00:41:23]: Yeah. There are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be very low power. But you often end up wanting to interface them with digital systems, and you lose a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better, specialized hardware for the models we care about.

Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things you can't pursue at Google that you'd be interested in seeing researchers take a stab at? Though I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad. In terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate maybe one model that's using other models as tools, in order to build things that collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? That's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in math and coding to other, less verifiable domains, because we'd come up with RL techniques that enable us to do that effectively, that would really make the models improve quite a lot, I think.

Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said they'd already proved you can do it with deep research. And you kind of have it with AI Mode; in a way, it's not verifiable. Is there a thread you think is interesting there? Both are information retrieval returning JSON, so maybe the retrieval is the verifiable part that you can score. How would you model that problem?

Jeff Dean [00:43:55]: I think there are ways of having other models evaluate the results of what a first model did, maybe even the retrieval itself. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which are the 50 most relevant? Those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic, as opposed to an actual retrieval system.
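A minimal sketch of that critic pattern: the same underlying model, prompted once as a judge of relevance rather than as a retriever. The `generate` and `retrieve_candidates` functions are hypothetical stand-ins, not a real API:

```python
# Same model, two roles: one call retrieves candidates, another scores them.

def generate(prompt: str) -> str:
    raise NotImplementedError("stand-in: plug in your model call here")

def retrieve_candidates(query: str, n: int = 2000) -> list[str]:
    raise NotImplementedError("stand-in: plug in your retriever here")

def critic_score(query: str, doc: str) -> float:
    # The critic is the same model, prompted to judge instead of to answer.
    reply = generate(
        "Rate from 0 to 10 how relevant this document is to the query.\n"
        f"Query: {query}\nDocument: {doc}\nReply with a number only."
    )
    return float(reply.strip())

def most_relevant(query: str, keep: int = 50) -> list[str]:
    docs = retrieve_candidates(query)
    return sorted(docs, key=lambda d: critic_score(query, d), reverse=True)[:keep]
```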
Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and the next part is super hard and nobody's figured it out. Though it always feels like that, every year. And it's exactly that with this RLVR thing: everyone's asking how we do the next stage, the non-verifiable stuff, and everyone's like, I don't know... LLM judge?

Jeff Dean [00:44:56]: I feel like the nice thing about this field is that there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in some areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research side of this field is pushing forward, and that's why it's super interesting. Think about two years ago: we were struggling with GSM8K problems. You know, Fred has two rabbits; he gets three more rabbits; how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now, solving IMO and Erdős problems in pure language. That is a really amazing jump in capability in a year and a half or so. For other areas, it would be great if we could make that kind of leap. We don't exactly see how to do it for some of them, but we do for others, and we're going to work hard on making that better.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.

Shawn Wang [00:46:20]: It would be, as far as content creators go.

Jeff Dean [00:46:22]: I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: People do judge books by their covers, as it turns out. Just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year it was, screw that, we'll just chuck it into Gemini. What's your reflection? The question of merging symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed, neural-net-like representation: lots of different neurons, with activation patterns firing when we see certain things. That's what enables us to reason and plan, to do chains of thought and roll them back; this approach to solving the problem doesn't seem like it's going to work, so I'll try that one. In a lot of ways, we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about those things.

Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago.

Jeff Dean [00:48:06]: I do think that progression is telling: one year, the IMO effort translating to Lean and using Lean, plus a specialized geometry model; this year, switching to a single unified model that is roughly the production model with a bit more inference budget. It's actually quite good, because it shows that the capabilities of the general model have improved dramatically, to the point where you don't need the specialized model. This is very similar to the 2013-to-2016 era of machine learning, when people would train a separate model for each problem: I want to recognize street signs, so I train a street-sign recognition model; I want to decode speech, so I have a speech-recognition model. Now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. They're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he said: I don't know how the IMO works, I don't know where the competition was held, I don't know its rules. I just trained the models. It's interesting that people with this universal machine-learning skill set can be given data and enough compute and tackle almost any task. Which is the bitter lesson, I guess.

Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits it has. And God knows how big Gemini Pro is, one to ten trillion parameters; we don't know. But take the Gemma models. A lot of people want open, local models like that, and those models carry knowledge that isn't necessary; they can't know everything. You have the luxury that the big model can be capable of everything, but when you're distilling down to the small models, you're memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. I think you do want the model to be most effective at reasoning if it can retrieve things, because having the model devote precious parameter space to remembering obscure facts that could be looked up is not the best use of that parameter space. You might prefer something that is more generally useful in more settings than that obscure fact. So there's always a tension there. At the same time, you also don't want your model to be completely detached from knowing things about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are. It maybe doesn't need to know the length of some tiny bridge in a more obscure part of the world, but it does help the model to have a fair bit of world knowledge, and the bigger your model, the more it can hold. But I do think combining retrieval with reasoning, making the model really good at doing multiple stages of retrieval and at reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable.

Shawn Wang [00:51:49]: Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: Right. We're probably not going to train Gemini on my email. We'd rather have a single model that can use retrieval from my email as a tool, have the model reason about it, retrieve from my photos or whatever, then make use of that and have multiple stages of interaction.
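A sketch of that retrieve-and-reason loop, with both the model and the personal-data search stubbed out as hypothetical functions (this is the pattern being described, not a real Gemini API):

```python
# Multi-stage retrieval and reasoning: the model decides what to search next,
# reads the results, and repeats until it can answer. All calls are stand-ins.

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in: plug in your model call here")

def search_email(query: str) -> list[str]:
    raise NotImplementedError("stand-in: plug in your personal-data index here")

def answer_with_tools(question: str, max_rounds: int = 3) -> str:
    notes: list[str] = []
    for _ in range(max_rounds):
        step = llm(
            f"Question: {question}\nNotes so far: {notes}\n"
            "Reply 'SEARCH: <query>' to search my email, or 'ANSWER: <answer>'."
        )
        if step.startswith("ANSWER:"):
            return step.removeprefix("ANSWER:").strip()
        notes.extend(search_email(step.removeprefix("SEARCH:").strip()))
    return llm(f"Question: {question}\nNotes: {notes}\nGive your best answer.")
```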
Alessio Fanelli [00:52:24]: That makes sense. Do you think vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, or the best law LLM, are those short-term stopgaps?

Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, and then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for robotics: we're probably not going to train Gemini on all the robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you'll want to start with that base and then train it on more robotics data. Maybe that hurts its multilingual translation capability but improves its robotics capabilities. We're always making these kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that would displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities that may suffer, or multimodal reasoning capabilities that may suffer, because we didn't expose the model to as much data there while making it really good at multilingual tasks. So I think we'll see some combination of specialized models, and maybe more modular models. It would be nice to have the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. If I have a health-related query, the model should be able to use the health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge. Just download it as a package.

Jeff Dean [00:54:39]: Right. And some of that installable capability can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion tokens or a trillion tokens of health data.

Shawn Wang [00:54:51]: For listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: I guess the question is: how many billions of tokens do you need to outpace the frontier-model improvements? If I have to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? And if I need a trillion healthcare tokens, they're probably not out there.

Jeff Dean [00:55:21]: Well, healthcare is a particularly challenging domain. There's a lot of healthcare data that, appropriately, we don't have access to. But there are a lot of healthcare organizations that want to train models on their own data, data that is not public. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, and probably better than a general model trained on public data.

Shawn Wang [00:55:58]: Yeah. By the way, this is somewhat related to the language conversation. I think one of your favorite examples was that you can put a low-resource language in the context and the model just learns it.
Jeff Dean [00:56:09]: Oh yeah. I think the example we used was Kalamang, which is truly low-resource, because it's spoken by, I think, only about 120 people in the world, and there's essentially no written text.

Shawn Wang [00:56:20]: So you can just do it that way: put the language's whole data set in the context.

Jeff Dean [00:56:27]: Right. And if you take a language like Somali, there is a fair bit of Somali text in the world, or Amharic in Ethiopia. We're probably not putting all the data from those languages into the Gemini base training; we put some of it. But if you put more of it in, you'll improve the model's capabilities in those languages.
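Mechanically, that demo is just prompt assembly: the grammar and example sentences for the language go into a long context, and the model translates from them. A sketch with a stubbed model call (the function names are hypothetical):

```python
# In-context learning of a low-resource language: supply the grammar notes
# and bilingual examples in the prompt itself. `llm` is a stand-in call.

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in: plug in a long-context model here")

def translate_with_corpus(grammar_notes: str,
                          example_pairs: list[str],
                          sentence: str) -> str:
    examples = "\n".join(example_pairs)
    return llm(
        "You are translating into a language absent from your training data.\n"
        f"Grammar notes:\n{grammar_notes}\n\n"
        f"Example sentence pairs:\n{examples}\n\n"
        f"Translate this sentence into that language: {sentence}"
    )
```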
Okay, so, first … yes, of course, water matters. Without water, there can't be Mezcal or Tequila, both because you need it to make the spirits but more importantly, people make these spirits and without water, there are no people. With that out of the way … evolutionary biologist and frequent Roadtripper Daniel Moen checked in to ask, Why does water matter in Tequila and Mezcal? I immediately went to my old trope: About half the bottle or more is water. But then he followed up with, The water you're consuming when you drink Mezcal, that fifty percent, is actually distilled water. And distilled water, by definition, is neutral. So … why does it matter? Or does it not? We wade into the issue in this episode of Agave Road Trip!Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Linda Sullivan of seynasecreto, with wisdom from Daniel Moen, Eduardo Angeles of Lalocura, and Ari Marks and Michael Schallau of is/was Brewing.Episode NotesThanks to Miguel Dimayuga Meneses for the cover photo of El Choforo working on the well project in Atempa, Guerrero — a project funded by SACRED Agave through a grant from 818 Tequila.Shout outs this episode to Fiji water, Felipe Camarena of G4 Tequila, and the bio-digester project of Serafin Morales Cruz of Mezcal Alarii!See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
The team discuss Lagerfest and what it takes to put the unique beer festival together. Held each October, Lagerfest is hosted by McFleshman's Brewery with craft brewers, food trucks, and live music. Learn about their brewing events, community impact, and exciting plans for future festivals.PATREON SUPPORTpatreon.com/respectingthebeerpodcastUncut episodes with bonus contentAccess to exclusive beersAccess to 50-minute video tour of McFleshman'sFACEBOOK GROUPhttps://www.facebook.com/groups/respectingthebeerQUESTIONS?Email us at respectingthebeer@gmail.com--CHAPTERS00:00 Recapping Lagerfest 202501:11 McFleshman's Lagerfest03:02 Organizing a Beer Fest07:32 The Beer Parade!12:54 Future Plans and Innovations14:24 Unique Beers and Experiments18:21 Distillation and Barrel Aging18:48 Exclusive VIP Tasting19:20 The Strength of Gertrude20:52 Beer Festivals and Their Value22:40 Marketing Strategies at Beer Fests24:25 Lagerfest Financials26:40 Future Events and Expansions33:05 Support us on Patreon!--CREDITSHosts:Bobby Fleshman - https://www.mcfleshmans.com/Allison Fleshman -https://www.instagram.com/mcfleshmans/Joel HermansenGary Arndt - https://everything-everywhere.com/everything-everywhere-daily-podcast/Music by Sarah Lynn Huss - https://www.facebook.com/kevin.huss.52/Recorded & Produced by David Kalsow - https://davidkalsow.com/Brought to you by McFleshman's Brewing Co
In the current Predictive AI Quarterly we discuss key developments in predictive AI and share lessons from a concrete LLM project. Topics include TabPFN 2.5, new approaches to explainability, and the growing influence of AI agents on software development. In the hands-on segment we report on a multilingual text-analysis project for the non-profit association Monda Futura, centered on the structured evaluation of roughly 850 visions of the future with the help of LLMs. We close with lessons learned about model choice, costs, and the sensible interplay of humans and AI. **Summary** TabPFN 2.5: scaling, distillation for production use, and higher inference speed ExplainerPFN as an alternative to SHAP for feature importance without access to the original model The trend toward AI agents taking over large parts of software development Use case Monda Futura: analysis of 850 multilingual visions of the future (DE/FR/IT) Pipeline: fragmentation, topic extraction, classification, and scenario generation Effective use of GPT-5-Mini vs. GPT-5.2-Pro depending on task type Key learning: the best results come from a human in the loop rather than full automation **Links** Prior Labs TabPFN-2.5 Model Report https://priorlabs.ai/technical-reports/tabpfn-2-5-model-report ExplainerPFN research paper (zero-shot feature importance) https://arxiv.org/abs/2601.23068 OpenCode – Open Source AI Coding Agent https://opencode.ai/ Monda Futura https://mondafutura.org/ OpenAI API & GPT models overview https://platform.openai.com/docs/models OpenAI Structured Output Guide https://platform.openai.com/docs/guides/structured-outputs
Pastor Alan R. Knapp discusses the topic of "Vignettes I: A Move Toward Distillation" in his series entitled "Rev The Book (2X)". This is Lesson 109, and it focuses on Revelation chapters 6 and 7.
We're back! After a short hiatus, Distil-Nation NZ returns with a major upgrade. This week, Tom and Cameron officially introduce the third member of the team: Ty. We dive into how our new trio dynamic is going to shake up the show before getting stuck into the state of the industry.
Distillation has been a common topic here on Crafted as of late, and we have had the pleasure of speaking with some of the most important names in the American spirits industry. Yet, commercial distillation inevitably limits creativity and innovation.That is not the case for this week's guest, Jesse Wilson, the founder of a pair of companies focused on home distillation, Still It and Chase the Craft. What began as a video series based around a man learning to distill has grown into an exploration of all things spirits innovation - Jesse has now distilled everything from Big Macs to 300 year old absinthe recipes. In this conversation we dive into the history of both companies, the science of distillation, the magic of aging, and the unique insights Jesse has gained from his exploration of supremely small batch spirits production. We Want to Hear from You!Have a topic, craft category, or craft company you'd like to see us cover? Email us here to share those or any other thoughts you have about CRAFTED.RELATED LINKS:Blister Craft CollectiveBecome a BLISTER+ MemberChase the CraftStill It18th-Century AbsintheDistilling Big MacsPotato VodkaBLISTER NEWSLETTER:Get It & Our Weekly Gear GiveawaysTOPICS & TIMES:How Eli Discovered ‘Still It' (3:58)Distillation Adventures & YouTube (7:15)Jesse's First Still (21:03)Chase The Craft (27:41)Absinthe (34:04)Head, Hearts, & Tails (51:10)Charring Barrels (1:04:48)Big Mac Distillation & Meme Spirits (1:09:23)Who Should Try Home Distilling (1:16:30)The Future of Home Distilling (1:21:29)SEE OUR OTHER PODCASTS:Blister CinematicBikes & Big IdeasGEAR:30Blister Podcast Hosted on Acast. See acast.com/privacy for more information.
A toast to legacy, a clash over creativity, and a bottle that could change how we drink. We open with a tribute to Peter Fraser—winemaker, mentor, and quiet force at Yangarra—then step straight into the friction points shaping wine right now: the awards that reward meaningful storytelling, the slogan that sparked a pile-on, and the packaging pivot that's bigger than aesthetics.We unpack the Wine Communicator Awards and why Halliday's podcast comeback matters when trust is hard-won. Max Allen's recognition is a reminder that longform writing still anchors the culture—holding memory, nuance, and accountability. From there, we tackle the Next Crop t-shirt controversy. Was it a bad call? Yes. But the deeper lesson is how to build safe creative lanes for emerging leaders: responsible messaging, clear guardrails, and mentorship that keeps bold ideas alive rather than shutting them down.Innovation takes a more practical turn with Brown Brothers' aluminium wine bottle. Lighter to ship, infinitely recyclable, and container-deposit friendly, it addresses the carbon drag of glass without asking the wine to change. We explore why consumers push back, how category cues evolve, and what it takes to make sustainability feel like an upgrade. Then we zoom out to France, where grower protests signal a global reality: oversupply hurts. Distillation aid and vine pull schemes buy time, but the honest fix is right-sizing plantings, shifting styles, and aligning with demand.For a festive detour, we fact-check Netflix's Champagne Problems—funny, charming, and gloriously wrong on méthode traditionnelle. It's a teachable moment that starts with pop culture and ends in real craft. We wrap with something practical: a standout Aldi Malbec that nails benchmark Argentinian style—dark fruit, firm tannin, bright acid—and doubles as a great learning bottle. And a heads-up: we're about to taste through some of Australia's most iconic wines, from Grange to regional legends, to map where heritage and modern taste meet.If you enjoyed this episode, follow the show, share it with a friend who loves wine (or loves a good debate), and leave a quick review to help more listeners find us. What change do you want to see in wine next? Follow us on instagram @winewithmegandmel
Book a Vedic astrology reading with Isaac by emailing info@plantcunning.com.In this episode we talk with herbalist, teacher and distiller Cathy Skipper, who you can find at https://aromagnosis.com/. Cathy shares her journey into the world of plants and herbalism, influenced by her family's rich history in natural medicine and gardening. She discusses her deep connections to the plant path and how it has become a family tradition. Cathy delves into her work on menopause, influenced by her personal experiences and Carl Jung's depth psychology. She talks about the alchemical journey of menopause and the concept of the 'sovereign woman', emphasizing the importance of embracing the transformation through menopause. Cathy also explores how essential oils and aromatherapy can be powerful tools in this spiritual and emotional journey. In addition to her insights on menopause, Cathy shines light on her distillation practice. She narrates the meticulous process of harvesting and distilling plants, focusing on the significance of using plants from the high desert and maintaining the integrity of essential oils. This episode is packed with wisdom on holistic healing, the power of plant medicine, and the deep connections we share with nature. Whether you're an herbalist, therapist, or simply someone curious about the mysteries of nature and personal transformation, this conversation with Cathy Skipper is sure to inspire and enlighten.00:00 Introduction to the Plant Cunning Podcast00:43 Cathy Skipper's Journey to Herbalism03:09 The Alchemy of Menopause11:11 Aromatherapy and Its Healing Power18:22 The Art and Science of Distillation24:01 Self-Healing and Jungian Concepts30:23 The Integrity of Essential Oil Practitioners31:46 The Importance of Small Distillers32:25 Aromatherapy in Psychedelic Therapy33:53 The Brutality of Industrial Harvesting34:26 The Reality of Monocrops35:52 Personal Experiences with Distillation39:29 Forgotten Plants and Their Significance53:15 The Healing Power of Aromatic Plants57:28 Upcoming Projects and Resources
When it comes to grand gestures, painting your entire city in the one colour is quite something. So much for painting the town red. How about pink? Jaipur beckons as one of India's most enchanting destinations, where the Old City is harmoniously bathed in the same pink hue. The elegant capital of Rajasthan was painted in pink stucco in 1876 to welcome Albert Edward, the Prince of Wales, who was the son of Queen Victoria and later became King Edward VII. Jaipur's Maharaja chose the colour because pink symbolises hospitality in Rajput culture, and the gesture impressed the Prince so much that he nicknamed the city the "Pink City." The name – and the stucco – has stuck ever since. A year later, a law mandated that all future buildings in the Old City must be painted pink. As I swept into Jaipur on a private holiday with Wendy Wu Tours, there's no denying how that welcoming palette sweetly seduces even the most jaded of travellers. Jaipur effortlessly casts you under its spell, as you delve into its rich history and treasury of enticements. There is an unmistakeable magic about the place. Nearly 300 years ago, an enlightened maharajah with a penchant for jewels and a keen eye for architecture built this planned city, wedged between the arid hills of northwest India. Built in the form of a rectangle, Jaipur was divided into nine blocks, seven for public use and two reserved for the state's most prestigious palaces and buildings. The entire city was girdled by a formidable protective wall. Called Jaipur after the city's founder, Jai Singh II, the planned city soon gave rise to astonishing royal palaces and vast workshops of artisans recruited to establish a new commercial hub. These days, gem cutters, jewellery designers and garment-makers are still doing a flourishing trade in Jaipur. And the royals still occupy a wing of the majestic City Palace, while gleaming mid-rise towers and a new subway system anchor Jaipur's forward march. But for all the contemporary progress, it's the architectural grandeur, proud sense of place and thriving craftwork traditions that make this destination so infectious. Street markets are splashed in colour and handicrafts, and Hindu temples can be found nearly every 100 metres. Though the streets heave with beeping and belching traffic, aimless tourists and dung-dropping cows, there is a charm and charisma to the carnival of commotion. Close to City Palace, my wonderful Wendy Wu Tours guide Vipin treated us to some sizzling old-school retail therapy. We walked under the peeling pink porticos of the roadside bazaars which were emblazoned with everything from puppets to pyjama pants; passing by carts of fried chickpea cakes, and marble-lined shrines with statues of Hindu gods. Garment-hunting was high on the agenda, for gifts to take home. Vipin led us to his favourite shop, where an explosion of colourful fabrics heaped in piles and stacked to the ceiling soon greeted us, as attentive staff served us Masala chai. Whether you're after local, authentic t-shirts, shirts, trousers, scarfs, saris, rugs, cushion covers, towels or table-cloths…this is textile and garment-shopping heaven. As my sister snapped up some sensational saris, I haggled down the price on a sublime tablecloth with block-printed red elephants. The other boom retail business is jewellery, because Jaipur is a global centre for cutting and polishing precious and semiprecious gems.
Head to Jewels Emporium's stately white building and take a tour of the workshops, where cutters shape facets, and men at workbenches adorn gold settings with jewels, and rinse the gold dust from their hands in wash basins. That water is later sold to extraction companies. The quality of the workmanship is second to none, truly Rajasthan's abiding legacy. Jaipur's Pink City bragging rights are best epitomised by the Hawa Mahal, or Palace of Winds. We stopped by to pose in front of this five-story palace façade, constructed from pink sandstone. Built in 1799, its iconic facade features 953 small windows which allowed royal women of the court to observe street processions without being seen while also creating a natural cooling effect. The palace's architecture is a blend of Rajput and Mughal styles, ornately designed with protruding bays of lattice stonework and cupolas mimicking Krishna's crown. Could there be a more glorious façade in the world? City Palace is a stirring complex to leisurely explore, peppered with mouth-watering architecture, tranquil courtyards and lush gardens. The prize draw is the Palace of the Breeze, a triumph in building design, whereby the air circulates so efficiently that it keeps the occupants cool even in the extreme summer months, when the mercury can nudge 50 degrees. The on-site museum is studded with royal treasures, costumes and curiosities. Out in the courtyard, I admired some massive silver vessels that carried the Maharaja's drinking water from the Ganges River to London in 1902 for Edward VII's coronation. You can see the royal reception rooms that are still in use, where Jaipur's royal family entertain guests in an ornate dining room and parlour, where chairs have silver lions for armrests. The walls are painted with gold dust and extracts of rubies and emeralds. Vipin pointed out where the royal family reside. There's a lot of tabloid tattle about Jaipur's current Maharaja, Pacho Singh. He's only 27, quite the polo-playing playboy and is currently living in the palace with his French girlfriend. But it's fully expected he cannot marry her, in deference to royal tradition and Rajasthan's adherence to arranged marriages and astrological alignment. Beyond the Old City, no visit to Jaipur is complete without savouring the sky-piercing magnificence of the four-hundred-year-old Amber Fort. (Pronounced Ah-meer.) Sprawling across the upper reaches of a hillside like a scene out of Return of the Jedi, this fortress was previously the seat of power for the Rajput kings from 1599, before relocating to Jaipur's Old City just over a century later. En route to the fort, Vipin led us to the most dreamy viewpoint, on the shores of Lake Maotha, where we gazed up in awe at this hilltop colossus. Brightly dressed elephants lumbered by, readying to carry visitors up the slope to the fortress. I had previously taken an elephant ride up to Amber Fort, but I've put weight on since then and am more mindful of animal welfare. We opted for a jeep ride through the skinny lanes snaking their way up to Sun Gate. From here, we marvelled over the Great Wall-esque fortifications, the Amber Wall, riding across the ridgelines as far as the eye can see. (It's 12km in length.) But this Rajput stronghold hooks in the tourist hordes principally for its gobsmacking array of palatial buildings and extravagant ornamental gardens. In shades of honey and rose stone, white marble and gilt decor, it's a frothy fusion of ornate Hindu and Islamic design.
Amber Fort's exquisite craftsmanship is best exemplified by the Mirror Palace, or Sheesh Mahal. Candlelight dinners would have been next-level. A single lit candle spangles the beautifully cut Belgian glass panels and mirror mosaics that festoon the walls of the banquet room, transforming the space into a night-sky kaleidoscope across the walls and ceiling. Apparently, this room was made by the Maharaja so that the Maharani (queen) could see the stars at night, as she was not allowed to sleep in the open. Other highlights include the many-pillared Diwan-i-Am, the Hall of Public Audience; the Jai Mandir, or Hall of Victory, which features carved marble panels, a mirrored ceiling, and expansive views over the ramparts of the fort. The Sukh Niwas, the Hall of Pleasure, is another drop-dead-gorgeous marble room that was cleverly cooled with water. Here, the Maharaja reportedly relaxed with his ladies. Amber Fort is a resplendent blockbuster, reverberating with the glory days of Rajput rule in Rajasthan. Jaipur residents are rightly proud of their architectural showstoppers. One of the signature attributes to the city is the fact that some historic palaces, no longer required for the affairs of state, have been reimagined as beacons of hospitality. But I stayed at a brand-new luxury build that is a grand triumph of contemporary construction, fully inspired by Jaipur's landmarks and Rajasthani finesse. Introducing Anantara Jewel Bagh Jaipur. Whether it's for a big bling-bling Bollywood wedding or for immersive luxury experiences away from the heaving throng of Jaipur's tourist spots, this hotel delivers a tour de force in lavish Rajasthani living, with a contemporary take. It is splendour defined. As our Wendy Wu Tours driver pulled into the entrance, a troupe of Rajasthani dancers and costumed drummers serenaded our arrival in spectacular, effervescent style. Unfurling over 5.5 lush acres, the grand hotel's money shot is its sensational main façade, drawing rich inspiration from Amber Fort and Rajasthan's royal palaces. Features include majestic arches, intricately designed jharokhas (bay windows), graceful chhajjas (overhanging eaves) and detailed carvings. It was thoughtfully painted in the same colours as Amber Fort's walls during golden hour. Rajputana history and heritage permeates the hotel. You'll notice it in the materials—yellowstone from Jaisalmer, marble from Banswara and Makrana, and locally sourced timber. There's the treasury of sublime artworks, notably including portraits of Rajput warriors and royalty. Peek inside the Rang Mahal ballroom, where the walls are completely covered with celebratory nods to the state's heritage. It was handpainted over two-and-a-half years by third-generation artists. Delicate thikri glasswork, hand-carved wood accents, and intricate gold leaf detailing abound across the hotel. Amer Bagh unfurls like a verdant blanket at the base of the hotel – a stirring outdoor garden venue. Jai Bagh (victory garden) is the main outdoor space for guests, and I was absolutely enthralled delving into the daily bazaar staged here in the afternoon, complete with puppeteers, block printers, bangle makers and astrologers. Anantara's core DNA is to create hotels steeped in local elements, and the Jaipur addition excels at delivering exactly that. You can even go chowk-hopping, vegetable shopping and cooking with local women. The hotel boasts 150 rooms and suites, layered across five categories, with most overlooking the inner courtyard or Jai Bagh.
We stayed in the Anantara One-Bedroom Terrace Suite, which is kitted out with its own outdoor Jacuzzi and expansive terrace. I was transfixed here watching muscular monsoon thunderstorms tear up the sky and soak the land! Guestrooms do not skimp on celebrating the sense of place, with sumptuous comforts and artful design elements, from the zardozi on the pillows and thikri work on the headboards to wooden jharokhas by the window nooks and Mughal miniature art on the walls.
If that's not enough to tempt you, Anantara's first outpost in India will enchant you with its gastronomic verve. Led by Executive Chef Sunil Jajoria, Sheesh Mahal is a pinch-yourself jewel box of a venue, exquisitely fitted out with glittering mirror mosaics dressed in 350,000 pieces of glass, inspired by the legendary Mirror Palace. Could there be a more wondrous place to savour the true essence of Rajasthan's culinary brilliance? The menu marries traditional Rajasthani delicacies with global influences, paired with signature cocktails like the Jewel of Jaipur. Jajoria, a Rajasthan native, has been researching local cuisine for the last seven years. Perfected over 20 trials, his menu dives deep into how maharajas entertained. His tasting menu kicks off with hummus that tastes like Bikaneri bhujia, moving on to ker sangri kebabs, besan kebabs mimicking paneer (as there was no paneer in the history of Rajasthan, says the chef), and Shekhawati maas tacos. The menu is handwritten by the chef himself, on vintage paper he has been collecting since he was a boy. Pair this menu with the ‘Echoes of Distillation' spirit tasting of heritage liquor from the royal family of Mahansar. It's mixology at its finest, with no shortage of artful theatre thrown in. Another cracking experience is Amrit Mahal, the vibrant all-day dining venue, which presents a diverse array of international and Indian specialties. There were too many highlights to recount, but the lamb baos, edamame truffle dimsums, lotus root on fire, Cantonese buttermilk prawns, soba noodles, and Japanese caramel cheesecake were all pleasurably devoured. The buffet breakfast here is like a royal banquet!
The spa experience is a signature pillar of any Anantara property, and resident experts were brought in from Thailand to train the Jaipur team. The treatment repertoire remains consistent with their global spa menus—a mix of Ayurveda, Thai massages, and western therapies. I deployed my sister to the spa for some personal panel-beating and she is still buzzing about the deep tissue massage she savoured. She rates it as the best massage of her life, “life-affirming, age-reducing and liberating.”
Another starring attribute of the hotel is its genuine sense of connection with the community. As a part of their grassroots outreach, the hotel works with local women to make the rotis on their menu on a chulha, or traditional clay stove. You can take an early morning trip to the flower market or a guided farming experience. Even better, take a private visit to Hathi Gaon, a community of rescue elephants residing in their natural habitat. Or partake in a local culinary class. I was particularly impressed that the hotel takes care of their staff's accommodation needs, building nearby apartments to ensure they are well-housed. Hospitality is as sparkling as the palace-like hotel itself - faultless, ultra-attentive, charismatic and truly unforgettable. Treat yourself to a remarkable Jaipur escape at Anantara Jewel Bagh.
You'll be royally treated from the moment you arrive. You will not want to leave. www.anantara.com
Delve into India and the Golden Triangle with the award-winning tour specialists across Asia and beyond. I chose a tailor-made Classic India private holiday, which enables you to optimise your itinerary and accommodation preferences as much as you wish. The itinerary can be as active or as laid back as you are, with full flexibility over included meals and excursions. You'll be in the best of hands with Wendy Wu Tours. www.wendywutours.co.nz/india
From New Zealand, it's just a one-stop connection to a multitude of destinations in India, including New Delhi, with Singapore Airlines, on their various daily services from Auckland and Christchurch to Singapore. Enjoy well-timed connections for an easy transit in Singapore. Across all classes of travel, the award-winning carrier has fostered a world-beating reputation not only for its exceptional customer service and in-flight product, but also for its innovation. Become a KrisFlyer member and enjoy complimentary in-flight WiFi. For the best fares and seats to suit, head to https://www.singaporeair.com
Mike Yardley is our resident traveller on Jack Tame Saturday Mornings.
See omnystudio.com/listener for privacy information.
Uisge beatha is the Gaelic term. We often overlook the water that is not only in our whiskey but also in us. I promise you will love this deep dive. So come on in, the water is fine.
Thewhiskeyshaman.com
Patreon.com/the_whiskeyshaman
Badmotivatorbarrels.com/shop/?aff=3
https://www.instagram.com/zsmithwhiskeyandmixology?utm_source=ig_web_button_share_sheet&igsh=MWZ4dGp2MzlucjVvdw==
Uisge beatha (pronounced approximately "ish-ka-ba-ha" or "ISH-ka BY-ah") is a Gaelic term meaning "water of life," from which the word "whiskey" ultimately derives. This ancient term, originally a translation of the Medieval Latin aqua vitae ("water of life"), was used by Irish monks and later evolved into variations like usquebaugh and ultimately the English "whisky".
Meaning and origin
Latin roots: The Gaelic term is a direct translation of the Latin aqua vitae.
Early use: Irish monks in the Middle Ages first applied the term uisce beatha to distilled spirits.
Evolution: The term evolved phonetically and was anglicized into usquebaugh before becoming the "whisky" we know today.
Significance
"Water of life": The name highlights the perceived life-sustaining and medicinal properties of early distilled spirits, which were believed to have healing abilities.
Direct link to whiskey: It is the direct etymological ancestor of the English words "whiskey" and "whisky".
The journey from aqua vitae to whiskey
The story of uisge beatha begins with the Latin term aqua vitae ("water of life"), which was used throughout medieval Europe to describe distilled alcohol.
Irish monks and early distillation: During the Early Middle Ages, Irish monks who traveled throughout the Mediterranean brought distillation techniques back to Ireland. With no grapes available for wine, they adapted the process to ferment and distill grain and barley instead, creating the earliest form of what we know as whiskey today. The monks translated the Latin aqua vitae into the Gaelic uisce beatha.
Early medicinal use: The initial purpose of the distilled spirit was medicinal. It was used as a cure-all for ailments, a disinfectant for wounds, and a form of anesthesia. The Annals of Clonmacnoise in Ireland record an instance in 1405 where a chieftain died from "taking a surfeit of aqua vitae," implying overconsumption of the spirit.
Spread to Scotland: Irish monks, who were also instrumental in Christianizing Scotland, are believed to have carried the knowledge of distillation with them. By 1494, a record from the Scottish Exchequer noted an order of malt "to Friar John Cor, by order of the King, to make aqua vitae," an entry that confirms the practice was well established by that time.
The anglicization of the name
Over time, as the popularity of the distilled spirit grew, the long Gaelic name was shortened and adapted by English speakers.
Early spellings: Early English forms included uskebeaghe (1581) and usquebaugh (1610), phonetic approximations of the Gaelic pronunciation.
Final abbreviation: Abbreviation and mispronunciation eventually led to "whiskey" (or "whisky"), dropping the second half of the phrase.
The "whiskey" vs. "whisky" divide: The variation in spelling between "whiskey" (Ireland and the United States) and "whisky" (Scotland and most other countries) emerged in the 19th century. During that period, the quality of some Scotch whisky declined, leading Irish distillers to add the "e" to distinguish their perceived higher-quality product in the American market.
Cultural significance
Beyond its medicinal origins, uisge beatha came to play a vital cultural role, reflecting the craftsmanship and resourceful nature of its makers. Farmhouse production: distillation provided a way to use surplus grain, and the resulting spirit could be easily stored or traded.
Today we are joined by Gorkem and Batuhan from Fal.ai, the fastest growing generative media inference provider. They recently raised a $125M Series C and crossed $100M ARR. We covered how they pivoted from dbt pipelines to diffusion models inference, what were the models that really changed the trajectory of image generation, and the future of AI videos. Enjoy!
Full Video Episode
Timestamps
00:00 - Introductions
04:58 - History of Major AI Models and Their Impact on Fal.ai
07:06 - Pivoting to Generative Media and Strategic Business Decisions
10:46 - Technical discussion on CUDA optimization and kernel development
12:42 - Inference Engine Architecture and Kernel Reusability
14:59 - Performance Gains and Latency Trade-offs
15:50 - Discussion of model latency importance and performance optimization
17:56 - Importance of Latency and User Engagement
18:46 - Impact of Open Source Model Releases and Competitive Advantage
19:00 - Partnerships with closed source model developers
20:06 - Collaborations with Closed-Source Model Providers
21:28 - Serving Audio Models and Infrastructure Scalability
22:29 - Serverless GPU infrastructure and technical stack
23:52 - GPU Prioritization: H100s and Blackwell Optimization
25:00 - Discussion on ASICs vs. General Purpose GPUs
26:10 - Architectural Trends: MMDiTs and Model Innovation
27:35 - Rise and Decline of Distillation and Consistency Models
28:15 - Draft Mode and Streaming in Image Generation Workflows
29:46 - Generative Video Models and the Role of Latency
30:14 - Auto-Regressive Image Models and Industry Reactions
31:35 - Discussion of OpenAI's Sora and competition in video generation
34:44 - World Models and Creative Applications in Games and Movies
35:27 - Video Models' Revenue Share and Open-Source Contributions
36:40 - Rise of Chinese Labs and Partnerships
38:03 - Top Trending Models on Hugging Face and ByteDance's Role
39:29 - Monetization Strategies for Open Models
40:48 - Usage Distribution and Model Turnover on FAL
42:11 - Revenue Share vs. Open Model Usage Optimization
42:47 - Moderation and NSFW Content on the Platform
44:03 - Advertising as a key use case for generative media
45:37 - Generative Video in Startup Marketing and Virality
46:56 - LoRA Usage and Fine-Tuning Popularity
47:17 - LoRA ecosystem and fine-tuning discussion
49:25 - Post-Training of Video Models and Future of Fine-Tuning
50:21 - ComfyUI Pipelines and Workflow Complexity
52:31 - Requests for startups and future opportunities in the space
53:33 - Data Collection and RedPajama-Style Initiatives for Media Models
53:46 - RL for Image and Video Models: Unknown Potential
55:11 - Requests for Models: Editing and Conversational Video Models
57:12 - VO3 Capabilities: Lip Sync, TTS, and Timing
58:23 - Bitter Lesson and the Future of Model Workflows
58:44 - FAL's hiring approach and team structure
59:29 - Team Structure and Scaling Applied ML and Performance Teams
1:01:41 - Developer Experience Tools and Low-Code/No-Code Integration
1:03:04 - Improving Hiring Process with Public Challenges and Benchmarks
1:04:02 - Closing Remarks and Culture at FAL
Get full access to Latent.Space at www.latent.space/subscribe
In today's episode, I sat down with Greg Hill, a longtime cider maker at Urban Orchard Cidery and a dedicated adjunct instructor in the brewing program here in Asheville. Greg is wrapping up his PhD work while guiding Urban Orchard through an incredibly tough year—one marked by supply challenges, wild weather events, and a major orchard fire. Together, we dove into his ongoing dissertation about workforce development and experiential learning in the brewing and beverage industries, and what it truly takes to prepare students for a successful career in craft beverage production. Greg Hill is a graduate of the AB Tech Brewing, Distillation, and Fermentation program, and he's now in his twelfth year at Urban Orchard Cidery. As a key educator in beverage management and microbiology, Greg brings a wealth of real-world knowledge, blending hands-on experience with academic rigor. Outside of work, he's a stadium-hopping football fan with broader plans for worldwide sports adventures, and a seeker of new and unusual flavor experiences.
"I wanted to put a bunch of flair in there, make it all pretty and nice, and it's really just idiot proof. Repetitive. So you're saying I am doing this. This is why I'm doing this. I'm going to do this again. And then this is why I'm doing this." ~Greg Hill
Today on Consuming the Craft:
· Experiential learning and hands-on practice are crucial components for real career readiness in the craft beverage industry.
· Greg's PhD research has shifted from standardizing education to focusing on how beverage programs prepare students for the real-world workforce.
· Practical education, like troubleshooting and working directly with equipment, is just as valuable as classroom theory.
· Urban Orchard Cidery faced major challenges over the past year, including hurricane aftermath, a devastating orchard fire, and a hailstorm that damaged crops.
· Loyal customers, especially fellow local businesses like Highland Brewing, played a significant role in Urban Orchard's recovery.
· Cider isn't just for drinking; it can also be a creative ingredient in cooking and cocktails, especially during fall.
· The craft beverage landscape continues to evolve, and keeping up means constant adaptation and diversification, as seen in new products and global influences.
· Despite the doom and gloom in parts of the beverage industry, cideries like Urban Orchard are finding ways to persevere and grow, demonstrating the resilience of the craft community.
Contact for Greg Hill:
Urban Orchard Cidery: https://www.urbanorchardcider.com/
For educational inquiries at AB Tech Brewing, Distillation, and Fermentation: https://www.abtech.edu/programs/academic/ab-tech-craft-beverage-institute-brewing-distillation-fermentation
Resources Mentioned:
o Urban Orchard Cidery
o Highland Brewing Company
o New Belgium Brewing Company (for pilot system donation)
o AB Tech Brewing, Distillation, and Fermentation Program
o Diablo cider from Urban Orchard
o Agua de Bolivia herbal liqueur
This episode is brought to you by… McConnell Farms - Taste the Way You Remember. Enjoy homemade ciders and ice cream made from only the best produce on the market. Visit the McConnell Farms website to learn more about our seasonal inventory and the delicious creations you can make with our homegrown produce.
Consuming the Craft
Thanks for tuning into this week's Consuming the Craft Podcast episode, brought to you by AB Tech's Craft Beverage Institute of the Southeast.
If you enjoyed this episode, please subscribe and leave a review wherever you get your podcasts. Apple Podcasts | Google Play
Be sure to share your favorite episodes on social media to help us reach more craft beverage enthusiasts. To learn more about AB Tech and the Craft Beverage Institute of the Southeast, visit our website.
As a historical researcher, one of the time periods I seem to have the least knowledge of is the history of this latest whiskey boom. As luck would have it, today I get a chance to dive into that history with someone who lived it. Meet Noah Rothbaum, the author of the upcoming book The Whiskey Bible: The Complete Guide to the World's Greatest Spirit. For 25 years, he's seen whiskey go from inconsequential, in an era when whiskey trails were non-existent, to today's whiskey boom.
Motivation that led to Noah getting into whiskey writing
The landscape of whiskey at the turn of this century
Mad Men: myth or truth?
Bourbon's domination of the American story, yet limited brands
The rise of rye and craft
The trip to St. Andrews and where Scotch whisky interest was at the time
The first distillery visit
Canadian, Irish, Japanese, and the development of World whiskies.
Oaklore Distillery (Matthews, NC) WF067 Charlotte. Where is that? North Carolina? South Carolina? You'd be surprised at how many people get tripped up by that. Today, though, let it be known we're going to explore the whiskeys of Oaklore Distillery in Matthews, a town that is definitely in North Carolina. Join me as I chat with Tom and Matt, the founders of Oaklore, who share their passion for whiskey and the process of creating their unique products. They discuss their learning experiences, the importance of quality, and the rich history of whiskey in North Carolina. The conversation also delves into the unique way they produce their spirits, the significance of barrel aging, and the visitor experience at their distillery. The flow goes like this:
The Birth of Oaklore Distillery
The Journey of Learning and Experimentation
North Carolina's Whiskey Landscape
Quality and Craftsmanship in Whiskey
The Power of the Stave
Sourcing and Distillation Process
Creating Flagship Products
Visitor Experience at Oaklore Distillery
There was this video that my friends at Zignum released years ago that suggested that traditional Mezcal was not clean. That was contrasted recently in a conversation I had with master tequilero Felipe Camarena, who suggested that county water with all its chlorine and other chemicals wasn't clean. And then maestro mezcalero Eduardo Angeles has a further take on what "clean" means. So what exactly does it mean in the process of making spirits? How much does it matter? Find out in the filthiest episode yet of Agave Road Trip!
Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Michael Schallau of is/was Brewing with wisdom from Felipe Camarena of G4 Tequila and Eduardo Angeles of Lalocura.
Episode Notes
Shout outs this episode to Sleep Walk Chocolate, David Hammond, Zignum Mezcal, Campanilla, Mezcal Villasuso, and 30A Distilling!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this week's episode of Soul Archaeology, evolutionary astrologer Ali Ofstedal explores a rare and powerful moment: the New Moon in Virgo at 0° (August 22/23, 2025). This is the first of two Virgo New Moons this season, with the second arriving as a Solar Eclipse at 29° Virgo on September 21.
In this episode of Soul Archaeology, we explore why this Virgo New Moon is not just another reset point, but the opening note of an entire season of purification, refinement, and karmic alignment.
You'll learn:
What makes this 0° Virgo New Moon unique and how it sets the tone for the eclipse to come
How the square to Uranus in Gemini shakes up routines and pushes us toward flexible, living systems of care instead of rigid control
How this lunation fits into the larger arc of eclipse season 2025 and what it reveals about the collective
Virgo as an archetype: the healer, the one who discerns, the keeper of refinement and embodied wisdom
How to work with this double Virgo initiation in your own life - seeding intentions now and preparing for karmic closure under the eclipse
This is a threshold moment. What you begin under this first New Moon will be tested, refined, and transformed by the eclipse. Listen in to understand how to align your intentions with the deeper evolutionary current of this Virgo season.
THANK YOU for being here.
Inner Compass: A North + South Node Course - use code "NORTH" for 10% off until August 25th
A Free Evolutionary Archetype Guide
LUME red light therapy box for $260 off
Kamana - use the code ALIO15 for 15% off
Pound Jewelry - use the code ALI20 for 20% off
Earth Archive's Rainforest Serum - use the code ALIOFSTEDAL for 11% off
Use the code "PODCAST" for 10% off an astrocartography or astrology session with me.
Catori Life jewelry
Thank you for being here. Your presence means the world. I would love to hear from you.
Ways you can support Soul Archaeology: Subscribe, rate & review on Apple or Spotify, share on social media or with a friend
Soul Archaeology on Instagram
Ali Ofstedal on Instagram
Work with Ali
Today on Consuming the Craft, I'm joined by Dan Rowe, the head brewer at Yeehaw Brewing Company, who returns for his third appearance. We dive deep into the evolution and expansion of Yeehaw, from its origins a decade ago in Johnson City to its impressive reach across Tennessee, South Carolina, and beyond. Dan walks us through the complexities of scaling up production, adapting to fast-changing markets and regulations, and the behind-the-scenes work of launching high-profile collaborations like the Vols Lager with the University of Tennessee. We also explore how the brewery stays competitive through innovation—ranging from commissioning a state-of-the-art canning line to experimenting with non-alcoholic, CBD, and THC-infused beverages. Dan shares his new venture into teaching at ETSU, where he's helping mentor the next generation of brewers, and sheds light on Yeehaw's upcoming projects, including a new Myrtle Beach taproom and an expanded presence in Virginia.
Dan Rowe is an accomplished brewer with over a decade of experience across several prominent breweries in the Southeast. Now the head brewer at Yeehaw Brewing Company, Dan oversees operations at their bustling Johnson City facility and coordinates distribution throughout the region. His expertise spans from recipe development and quality control to implementing new technology and expanding production capacity. Passionate about education and giving back to the industry, Dan also serves as an instructor at East Tennessee State University, sharing his knowledge with future beverage professionals.
"It's an ever-changing landscape, and beer's not dead—there's just more options and flavors now than ever before." ~Dan Rowe
Today on Consuming the Craft:
· Yeehaw Brewing is celebrating its 10th anniversary and is rapidly expanding, recently increasing its distribution footprint into South Carolina, Virginia, and potentially Alabama.
· Coordinating distribution and production across several states involves complicated logistics, especially with the different alcohol laws and paperwork from state to state.
· The brewery recently upgraded to a 12-head rotary canning line, dramatically increasing efficiency and packaging quality, allowing for record-setting production days.
· The Vols Lager, Yeehaw's collaboration with the University of Tennessee, quickly became the company's largest production undertaking, driving big growth and brand recognition.
· Yeehaw is experimenting with low-dose, nano-emulsified THC and CBD seltzers, navigating the tricky regulatory, quality assurance, and consumer education challenges that come with new beverage categories.
· Dan Rowe is playing an active role in brewing education, teaching at ETSU, and facilitating partnerships for student training in real-world brewing and distillation environments.
· New outlets for Yeehaw are on the horizon, including a taproom in Myrtle Beach (partnered with Ole Smoky Moonshine) and a possible additional Tennessee location.
· Maintaining high quality, streamlining recipes, and keeping grocery-friendly beers on shelves are central to Yeehaw's strategy for sustainable growth.
Contact Dan Rowe:
· Yeehaw Brewing Company: https://www.yeehawbrewing.com/
· ETSU Brewing & Distillation Studies: https://www.etsu.edu/
Resources Mentioned:
· Yeehaw Brewing Company and its new rotary canning line
· Vols Lager, brewed in partnership with the University of Tennessee
· East Tennessee State University Brewing and Distillation program
· Tennessee Hills Brewstillery
· Laird's Apple Brandy (Distillery Permit #1)
· Tennessee Stud Hard Cider
· Ole Smoky Moonshine
· Broadway at the Beach (future Myrtle Beach taproom location)
· McConnell Farms
This episode is brought to you by… McConnell Farms - Taste the Way You Remember. Enjoy homemade ciders and ice cream made from only the best produce on the market. Visit the McConnell Farms website to learn more about our seasonal inventory and the delicious creations you can make with our homegrown produce.
Consuming the Craft
Thanks for tuning into this week's Consuming the Craft Podcast episode, brought to you by AB Tech's Craft Beverage Institute of the Southeast. If you enjoyed this episode, please subscribe and leave a review wherever you get your podcasts.
Apple Podcasts | Google Play
Be sure to share your favorite episodes on social media to help us reach more craft beverage enthusiasts. To learn more about AB Tech and the Craft Beverage Institute of the Southeast, visit our website.
Your nickname is Flatnose? @Glasgowdistillery @flatnosewhisky_islayboysltd #whisky #scotch #podcast #radioshow #host
Co-hosts: Good ol Boy Harmeet, Good ol Boy Justin, Made Man Brent, Made Man Bob
SIPS – Our hosts are untangling the complexities of a lineup featuring the Glasgow 1770 alongside Flatnose whisky. Can you imagine being called Flatnose as a kid? Brent has a breakthrough moment of sensory training bliss when he nails the peat with the smell of a burning tire, specifically a Dunlop tire. With plenty of banter, hilarious antics, and unexpected tasting notes, our Good Ol' Boys navigate through medicinal aromas, fruity cocktails, and the fine line between enjoyable peat and overwhelming smoke. We will be discussing these whiskies and rating them from 1-5, with 5 being the best:
9:01 Glasgow 1770 The Original Single Malt Scotch Whisky 2 SIPS
13:14 Glasgow 1770 Triple Distilled Single Malt Scotch Whisky 3 SIPS
17:18 Glasgow 1770 Peated Single Malt Scotch Whisky 2 SIPS
13:00 Flatnose Blended Scotch Whisky 3 SIPS
27:36 Flatnose Blended Malt Scotch Whisky 4 SIPS
33:12 Barelegs Islay Single Malt Scotch Whisky 3 SIPS
info@sipssudsandsmokes.com
X - @sipssudssmokes IG/FB/Bluesky - @sipssudsandsmokes
Sips, Suds, & Smokes® is produced by One Tan Hand Productions using the power of beer, whiskey, and golf. Available on Apple Podcasts, YouTube Music, Amazon Music, Pandora, iHeart, and nearly anywhere you can find a podcast.
Enjoying that cool Outro Music? It's from Woods & Whitehead – Back Roads. Download your copy here: https://amzn.to/2Xblorc
The easiest way to find this award-winning podcast on your phone is to ask Alexa, Siri or Google, "Play Podcast, Sips, Suds, & Smokes"
Credits:
TITLE: Maxwell Swing / Flapperjack
PERFORMED BY: Texas Gypsies
COMPOSED BY: Steven R Curry (BMI)
PUBLISHED BY: Alliance AudioSparx (BMI)
COURTESY OF: AudioSparx
TITLE: Back Roads
PERFORMED BY: Woods & Whitehead
COMPOSED BY: Terry Whitehead
PUBLISHED BY: Terry Whitehead
COURTESY OF: Terry Whitehead
Post production services: Pro Podcast Solutions
Advertising sales: Contact us directly
Content hosting services: PodBean
Producer: Made Man Bob
Executive Producer: Good ol Boy Mike
Whisky, Scotch Whisky, Glasgow 1770, Single Malt, Blended Scotch, Islay Whisky, Tasting Notes, Spirits, Distillation, Whiskey Reviews, Peat, Triple Distilled, Alcohol By Volume, Non-Chill Filtered, Whiskey Ratings, Craft Distilling, Whiskey Tasting, Flavor Profiles, Whiskey Enthusiasts
A piña for espadín @Reycampero @montelobos @delmagueymezcal @mezcalvago @graciasadiosmezcal #agave #agavespirits #mezcal #podcast #radioshow #host #Espadín
Co-hosts: Good ol Boy Michael, Good ol Boy Mike, and Good ol Gal Cary Ann
SIPS – A Deep Dive into Espadín Mezcals - Join us for a spirited exploration of Espadín Mezcals in this episode of Sips, Suds, & Smokes! A flavorful journey to discover the nuances of each spirit as our hosts discuss tasting notes, production methods, and the cultural significance behind these beloved agave spirits. From the smoky depths of Oaxaca to the unique flavors of each bottle, our hosts share their insights and ratings on these mezcals, revealing the complexity and character that Espadín brings to the table. Whether you're a mezcal novice or a seasoned aficionado, this episode promises to enlighten and entertain as we sip, savor, and celebrate the rich tapestry of mezcal culture. We will be discussing these Mezcals featuring Espadín and rating them from 1-5 with 5 being the best:
7:33 Del Maguey Vida Classico from San Luis del Río SIPS - 4
11:25 Rey Campero Espadín Joven from Candelaria Yegole SIPS - 4
16:20 Montelobos Espadín Joven, Rancho Loma Larga in Santiago Matatlán, Oaxaca SIPS - 5
20:29 Vago - Ensemble en Barro SIPS - 4
29:11 Gracias a Dios Pechuga de Guajolote Espadín Joven SIPS - 5
info@sipssudsandsmokes.com
X - @sipssudssmokes IG/FB/Bluesky - @sipssudsandsmokes
Sips, Suds, & Smokes® is produced by One Tan Hand Productions using the power of beer, whiskey, and golf. Available on Apple Podcasts, YouTube Music, Amazon Music, Pandora, iHeart, and nearly anywhere you can find a podcast.
Enjoying that cool Outro Music? It's from Woods & Whitehead – Back Roads. Download your copy here: https://amzn.to/2Xblorc
The easiest way to find this award-winning podcast on your phone is to ask Alexa, Siri or Google, "Play Podcast, Sips, Suds, & Smokes"
Catch up with Cary Ann on her blog - www.straightup615.com
Mike is always up to something amazing - all of his cocktail books are available online: https://www.mikewolfbooks.com/
Credits:
TITLE: Maxwell Swing / Flapperjack
PERFORMED BY: Texas Gypsies
COMPOSED BY: Steven R Curry (BMI)
PUBLISHED BY: Alliance AudioSparx (BMI)
COURTESY OF: AudioSparx
TITLE: Back Roads
PERFORMED BY: Woods & Whitehead
COMPOSED BY: Terry Whitehead
PUBLISHED BY: Terry Whitehead
COURTESY OF: Terry Whitehead
Post production services: Pro Podcast Solutions
Advertising sales: Contact us directly
Content hosting services: Talk Media Network, Audioport, Earshot, Radio4All, & PodBean
Executive Producer: Good ol Boy Mike
Mezcal, Espadín, Agave, Del Maguey, Rey Campero, Montelobos, Vago Ensemble, Oaxaca, Mezcal Tasting, Spirits, Sips Suds And Smokes, Craft Cocktails, Cocktail Books, Agave Spirits, Pechuga, Distillation, Artisan Mezcal, Flavor Profiles, Tasting Notes, Mezcal Regions
Drunk Leprechaun, Take 4 @Redbreastirishwhiskey #irishwhiskey #scotch #podcast #radioshow #host
Co-hosts: Good ol Boy Justin, Made Man Brent, Made Man Bob
SIPS – Dive into the rich world of Irish whiskey! Expect lively banter, tasting notes, and plenty of laughs as our hosts rate each whiskey on their unique scale. We span the world of barley-based spirits from one side of the planet to the other. Pour yourself a dram and join the fun! Don't miss out on this engaging episode where our hosts remind us that life is too short to drink bad whiskey. Tune in and raise a glass with us! We will be discussing these whiskies and rating them from 1-5 with 5 being the best:
6:00 Redbreast Single Pot Still Irish Whisky 12 Yr 4 SIPS
8:55 Redbreast Single Pot Still Irish Whisky 15 Yr 4 SIPS
13:27 Redbreast Single Pot Still Irish Whisky Missouri Oak 4 SIPS
20:00 Glendronach Allardice Single Malt Scotch Whisky 18 Yr 5 SIPS
25:33 Westland Solum American Single Malt Whiskey 3 SIPS
info@sipssudsandsmokes.com
X - @sipssudssmokes IG/FB/Bluesky - @sipssudsandsmokes
Sips, Suds, & Smokes® is produced by One Tan Hand Productions using the power of beer, whiskey, and golf. Available on Apple Podcasts, YouTube Music, Amazon Music, Pandora, iHeart, and nearly anywhere you can find a podcast.
Enjoying that cool Outro Music? It's from Woods & Whitehead – Back Roads. Download your copy here: https://amzn.to/2Xblorc
The easiest way to find this award-winning podcast on your phone is to ask Alexa, Siri or Google, "Play Podcast, Sips, Suds, & Smokes"
Credits:
TITLE: Flapperjack
PERFORMED BY: Texas Gypsies
COMPOSED BY: Steven R Curry (BMI)
PUBLISHED BY: Alliance AudioSparx (BMI)
COURTESY OF: AudioSparx
TITLE: Back Roads
PERFORMED BY: Woods & Whitehead
COMPOSED BY: Terry Whitehead
PUBLISHED BY: Terry Whitehead
COURTESY OF: Terry Whitehead
Advertising sales: Contact us directly
Content hosting services: Talk Media Network, Audioport, Earshot, Radio4All, & PodBean
Producer: Made Man Bob
Executive Producer & Editor: Good ol Boy Mike
Irish Whiskey, Redbreast, Glendronach, American Single Malt, Whiskey Tasting, Whiskey Review, St. Patrick's Day, Distillation, Whiskey Ratings, Bourbon Cask, Oloroso Sherry, Single Pot Still, Whiskey Enthusiasts, Spirits, Whiskey History, Tasting Notes, Whiskey Cocktails, Whiskey And Food Pairing, Sustainable Distilling, Craft Distillery
Two Days and a Wake-Up until Joanne Rowling Murray's 60th Birthday. In their home stretch conversation, Nick and John fulfill a reader request to discuss the book inside Deathly Hallows (one of three, actually…), ‘Tales of Beedle the Bard,' a text that Albus Dumbledore leaves Hermione in his will for her to read and apply to the Horcrux Hunt. Nick tells the story of Rowling's creation of six hand-written copies as six-of-a-kind gifts for those who brought Harry Potter to life. John dives into the center story of the five tales, ‘The Hairy Heart,' and tells the meaning of Harry's heart to draw out what Rowling meant by describing Beedle as "the distillation" of the Hogwarts Saga. Enjoy!
New to the Lake and Shed Kanreki Birthday series? Here's what we're doing: On 31 July 2025, Joanne Murray, aka J. K. Rowling and Robert Galbraith, will be celebrating her 60th birthday. This celebration is considered a ‘second birth' in Japan, or Kanreki, because it is the completion of the oriental astrological cycle. To mark JKR's Kanreki, Dr John Granger and Nick Jeffery, both Nipponophiles, are reading through Rowling's twenty-one published works and reviewing them in light of the author's writing process, her ‘Lake and Shed' metaphor. The ‘Lake' is the biographical source of her inspiration; the ‘Shed' is the place of her intentional artistry, in which garage she transforms the biographical stuff provided by her subconscious mind into the archetypal stories that have made her the most important author of her age. You can hear Nick and John discuss this process and their birthday project at the first entry in this series of posts: Happy Birthday, JKR! A Lake and Shed Celebration of her Life and Work.
Tomorrow? In the Day Before the Big Day, Nick and John do a deep dive into the Golden Thread of ‘The Lost Child,' a plot point occurring (by one count!) forty times in Rowling-Galbraith's twenty-one books. Stay tuned!
Links to posts mentioned in today's Lake and Shed conversation for further reading:
The Heart is the Human Spiritual Center: Deathly Hallows, Ink Black Heart, and Beedle the Bard
Rowling: Beedle the Bard is the Distillation of Harry Potter Themes
Twelve Answers to Beedle the Bard
Get full access to Hogwarts Professor at hogwartsprofessor.substack.com/subscribe
Dave Selinger is the CEO of Deep Sentinel, an AI-powered video surveillance system. They have raised $38M in funding from Intel Capital, Shasta Ventures, and others. Prior to this, he was the cofounder and CTO of Redfin. And he was also the cofounder of RichRelevance.
Dave's favorite books:
- The Speed of Trust (Author: Stephen Covey)
- Snow Crash (Author: Neal Stephenson)
(00:01) – Origin Story: A Near-Miss and the Broken Security Market
(04:22) – What Deep Sentinel Does and Why It Works
(06:23) – Benefits of Vertical Integration in Security Tech
(10:20) – How Deep Sentinel Tackles False Positives with AI
(14:06) – Balancing Escalation Risk and Deterrence
(17:06) – How Deep Sentinel Processes and Uses Its Data
(19:36) – Positioning Deep Sentinel in the Competitive Landscape
(21:12) – Go-to-Market Learnings for Hardware-Software Companies
(23:39) – Residential vs. Commercial Security: A Comparison
(26:41) – Regulation and Public Sentiment Around Security AI
(29:03) – Insurance, Security, and Incentive Alignment
(31:23) – Company Building and Lessons from 20 Years of Founding
(39:26) – The Role of Distillation and LLMs in Deep Sentinel's Future
(42:27) – Rapid Fire Round
--------
Where to find Dave Selinger:
LinkedIn: https://www.linkedin.com/in/selly/
--------
Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-infinite
X: https://x.com/prateekvjoshi
In today's conversation, I'm joined by Joe Beatrice and Tripp Stimson from Barrell Craft Spirits. We're going to dig into the art of whiskey blending, the unique characteristics of their products, and the intricacies of whiskey tasting. We'll explore the artistic differences between blending and distilling, the importance of maturation, and the flavor profiles of Barrell's whiskeys, including Seagrass and Foundation Single Barrel. Here are the main topics:
Introduction to Barrell Craft Spirits
The Art of Blending Whiskey
Understanding Distillation and Maturation
Tasting Seagrass Rye Whiskey
Exploring Flavor Profiles and Tasting Notes
Foundation Single Barrel and Its Characteristics
Batch 37: Annual Release Insights
The Role of Oak in Whiskey
Marketing and Branding of Barrell Craft Spirits
For an extended version of this episode, join the Whiskey Lore Speakeasy (7-day free trial available) at Patreon.com/whiskeylore.
Finally, I'm getting around to releasing this great episode. Not from not wanting to, but MAN it's been busy. Today we are talking to the Doll of Distilling herself, Erin Lee. I am so excited for this chat; we talked all about the StepUp program. We dove deep into tea. You just have to check it out.
Badmotivatorbarrels.com/shop/?aff=3
https://www.instagram.com/zsmithwhiskeyandmixology?utm_source=ig_web_button_share_sheet&igsh=MWZ4dGp2MzlucjVvdw==
Patreon.com/the_whiskeyshaman
Fewspirits.com
ADI's International Spirits Competition Rising Star in Distilling Shortlist Nominee: Erin Lee
Erin Lee, a graduate of the StepUp Internship program, has transitioned from fashion designer and tea sommelier to distiller in a remarkable journey. Her passion for tea, sparked by a trip to Asia, led her to become a Certified TAC Tea Sommelier in 2021. This newfound appreciation for intricate flavors directed her towards whiskey, culminating in her selection for the inaugural STEP UP Intern class by the American Craft Spirits Association. Erin has trained at renowned distilleries across the U.S. and joined Few Spirits as Head Distiller in 2023. Currently, with support from the Michael James Jackson Foundation, she is pursuing her General Certificate in Distillation from the Institute of Brewing and Distilling. And for these reasons the American Distilling Institute's International Spirits Competition selected Lee as a shortlist nominee for our 2024 Rising Star in Distilling Award.
A grain-to-glass distillery since 2011, FEW Spirits produces award-winning craft whiskey and gin in a tucked-away alley located in the growing Chicago suburb of Evanston, IL.
ALSO KNOWN AS THE COLUMBIAN EXPOSITION, THE 1893 WORLD'S FAIR TOOK PLACE IN CHICAGO, ILLINOIS. IT REPRESENTED THE DAWN OF MODERNISM & INTRODUCED OUR NATION TO PHOSPHORESCENT LIGHTS, MOVING WALKWAYS, AN ELECTRICITY-POWERED WATER FOUNTAIN, AND THE INVENTION OF THE FERRIS WHEEL. IT WAS EVEN THE LAUNCHING GROUND OF A NOTABLE BRAND OF JUICY CHEWING GUM AND A BRAND OF BEER THAT WAS AWARDED A FIRST PLACE BLUE RIBBON. BOTH ARE STILL AROUND TODAY. HOWEVER, IT WAS ALSO THE GOLDEN AGE OF PRE-PROHIBITION WHISKY.
SEVERAL DECADES LATER, EVANSTON, A SUBURB JUST NORTH OF CHICAGO, WOULD BECOME KNOWN AS THE SEAT OF PROHIBITION AND THE ESTABLISHED HEADQUARTERS FOR THE TEMPERANCE MOVEMENT – SPEARHEADED BY A WELL-KNOWN FEMALE FIGURE, WHO ALSO ADVOCATED HEAVILY FOR WOMEN'S SUFFRAGE. PROHIBITION WAS REPEALED IN 1933, BUT EVANSTON WAS A DRY COMMUNITY THAT CLUNG DEAR TO THE CONCEPT OF THE NOBLE EXPERIMENT DECADES AFTER REPEAL.
THESE TWO ICONIC MOMENTS IN TIME AND PLACE BECAME THE INSPIRATION BEHIND FEW SPIRITS. PAUL HLETKO, FOUNDER & DISTILLER, SAW THE POWER OF PEOPLE FOLLOWING THEIR DREAMS. HE TURNED HIS SIGHTS TO MAKING SPIRITS. HE RELENTLESSLY PURSUED A NEW STYLE OF URBAN WHISKEY THAT WOULD BE THE EQUAL TO THAT OF HIS FRIENDS IN KENTUCKY. BUT CREATING WHISKEY IN THE SEAT OF PROHIBITION? SEEMED LIKE A GOOD PLACE TO START A BOLD, PARADIGM-BREAKING WHISKEY BRAND. DOWN A DARK BACK ALLEY IN EVANSTON, YOU WILL FIND AN OLD CHOP-SHOP MAKING FLAVORFUL BOURBON & RYE FROM GRAIN-TO-GLASS WITH A BOLD CHICAGO-STYLE TWIST. Innovative products continue to flow from the team at FEW. PROOF, AS EVER, THAT APPEARANCES CAN BE DECEPTIVE.
Smashing Pumpkins
In an innovative collaboration, FEW Spirits and The Smashing Pumpkins have unveiled a distinctive bourbon, meticulously crafted and brought to bottle strength using Billy Corgan's own Midnight Rose Tea from his tea shop in Chicago, Madame Zuzu's.
This unique partnership – unlike any in the rock-whiskey world before – blends the bold flavors of our exceptional FEW bourbon with the avant-garde musical spirit of The Smashing Pumpkins, known for their influential alternative rock sound showcased in iconic albums like ‘Siamese Dream' and ‘Mellon Collie & The Infinite Sadness.'
Ron Aleman of Taco Mucho knows and loves Tequila. But he's now finding himself drawn to Raicilla. "I've never had one I didn't like," he says. And I have a theory about that. So I run that theory by Marissa Paragano of The Tequiladies in this Jalisco-centric episode of Agave Road Trip!
Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Marissa Paragano of The Tequiladies and Tequila That Cares.
Episode Notes
Marissa is also a board member of Tequila That Cares, a philanthropic organization bringing positive change to the agave spirits industry!
Shout outs this episode to El Bandido Yankee Tequila, G4 Tequila, Tequila Fortaleza, Cazcanes Tequila, Lagunitas Brewing, Dark Matter Coffee, La Venenosa Raicilla, PKGD Group, El Acabo Raicilla, and the DC Chocolate Fest!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this spirited conversation with Maggie Campbell, CEO of American Cane, some of the topics we discuss include:
Maggie's road to rum, which winds through philosophy, spice shops, wineries, and legendary brandy houses before arriving at her break-out role as distiller (and later President) at Privateer Rum
Why sales and distribution are the hardest part of the spirits industry, and how her experience working in the sales trenches and partnering with distributors paved the way for a great adventure working as a project leader for Mt. Gay Rum in Barbados
We touch on the functional aesthetics of both vineyards and cane fields - the ways in which soil health, drainage, and cane-specific indicators like lodging, arrowing, and low MOC can contribute to a healthy and beautiful cane-growing operation
And we also discuss how to think about terroir in spirits - specifically how it's derived not only from landscapes and climate conditions, but also from the perspectives taken and decisions made in these diverse locales over long periods of time
Along the way, we muse on why you can't hide anything in a rum distillation, the multinomial nature of cane dogs, why only foreigners like the monkeys in Barbados, and much, much more.
Today on Consuming the Craft, I'm celebrating the 10-year anniversary of the first graduating class from the Brewing, Distillation, and Fermentation program here at AB Tech. Joining me is John Lyda, a full-time faculty member, brewing industry veteran, and program cornerstone who's been instrumental in driving the evolution of our curriculum. Together, we reflect on how far the program has come—from its beer-centric roots to now embracing a diverse array of beverages like cider, seltzer, non-alcoholic options, wine, and even innovative creations like mushroom wine. We discuss the ever-changing demands of the craft beverage industry, the importance of flexibility and quality assurance, and the ways our program responds to students' varied backgrounds and aspirations, from international students to second-career professionals. We also unpack the realities of teaching distillation, the challenges of non-alcoholic beverage safety, and our ongoing efforts to develop continued education for enthusiasts and industry veterans alike.
John Lyda brings nearly a decade of experience as an educator at AB Tech, following his successful tenure in the professional brewing world. Known for his technical expertise and approachable teaching style, John has seen the program grow in both scope and ambition, guiding students as they tackle everything from traditional lagers to fruit wines and beyond. Highly regarded for his commitment to industry standards and innovation, John champions the importance of adaptability and quality in every phase of craft beverage production.
“Making sure that your quality control and quality assurance is there before it releases to the market… is something that people get very, very excited and want to get to market very, very quickly with their idea, but they want to make sure it's safe.” ~Puff
Today on Consuming the Craft:
· The Brewing, Distillation, and Fermentation program at AB Tech has expanded far beyond its original beer-focused curriculum to include cider, seltzer, wine, and non-alcoholic beverages.
· Student interests now span the full spectrum of craft beverage production, including international and second-career students with diverse backgrounds.
· Quality assurance and safe production protocols—such as pasteurization—are critical, especially in non-alcoholic and new beverage categories.
· The curriculum exposes students to the hands-on production of beer, mead, wine, cider, seltzer, soda (including ginger ale), and even unique experimental fermentations, such as mushroom wine.
· Distillation training is provided within the bounds of federal law, emphasizing consistent and sanitary fermentation as the foundation for high-quality spirits.
· Ongoing program updates include new equipment, such as open-top fermenters, and facility design projects, which help students adapt to industry needs.
· AB Tech is developing continuing education and workforce development options to serve working professionals and enthusiasts seeking to expand their knowledge or acquire industry credentials.
· Flexibility and innovation remain at the heart of the program, preparing students to adapt to industry trends and shifting consumer demands.
Resources Mentioned:
• AB Tech Brewing, Distillation & Fermentation Program
• Craft Beverage Institute of the Southeast
• Canada Dry Ginger Ale (for sensory training and ginger ale/beer discussion)
To learn more or request information about courses, continuing education, or industry partnerships, reach out to jeffreymirvin@abtech.edu.
If you're interested in lifelong learning or supporting your business's growth in the craft beverage field, let us know how we can help!
This episode is brought to you by… McConnell Farms - Taste the Way You Remember. Enjoy homemade ciders and ice cream made from only the best produce on the market. Visit the McConnell Farms website to learn more about our seasonal inventory and the delicious creations you can make with our homegrown produce.
Consuming the Craft
Thanks for tuning into this week's Consuming the Craft Podcast episode, brought to you by AB Tech's Craft Beverage Institute of the Southeast. If you enjoyed this episode, please subscribe and leave a review wherever you get your podcasts.
Apple Podcasts | Google Play
Be sure to share your favorite episodes on social media to help us reach more craft beverage enthusiasts. To learn more about AB Tech and the Craft Beverage Institute of the Southeast, visit our website.
There are people in rural Mexico for whom Mezcal is 100% cultural heritage. And there are people there for whom it's 100% business — sometimes tied to cultural heritage but sometimes tied simply to paying for food, clothing, shelter, and life's aspirations and pleasures. So what does it mean when that business side taps into the cultural heritage side to increase business? It's a thoughtful, precious episode of Agave Road Trip!
Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with Shawn Miller of PKGD Group.
Episode Notes
Shout outs this episode to Ultramundo Mezcal, Mezcal Palomo, Chrissie Hynde, Sleep Walk's award-winning Blanco Jaguar chocolate bar, red Solo cups, DC Chocolate Festival, Linda Sullivan, The Tequiladies, Connie, and Mezcal Lalocura!
I see it all over the news, hear it in conversations: we're drinking less. But then I look at the sales numbers from DISCUS and it makes me wonder, are we really drinking less? Or are we just drinking differently? It's a complex math equation that we try to solve in this episode of Agave Road Trip!
Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with Linda Sullivan of seynasecreto.
Episode Notes
You can check the raw data from DISCUS here.
This week's guest is Katie Black. Born in Austin, Texas, and raised between live music and livestock shows, Katie wrapped up high school in Wichita Falls, a town where ambition has to shout over Friday night football. Katie originally headed to New Jersey on a film and theatre scholarship, clearly chasing a dramatic plot twist, but ended up studying dental assisting instead. Then came Germany—where instead of finding herself, Katie found gastronomy, a word she barely knew but now lives by. Since then, she's racked up more beverage certifications than most sommeliers have opinions, including an A.A.S. in Brewing, Distillation, and Fermentation and enough lab hours to clone a hop. Her career? Think if Anthony Bourdain and Boba Fett had a baby who freelanced. Katie's brewed wild beers, managed cider taprooms, distilled bittersweet amaro, and built full bar programs from scratch. In Texas, she's worked everything from tequila-fueled ACL activations to high-speed Formula 1 service—wrangling crowds, menus, and bartenders like a rodeo queen of spirits. Katie recently competed on Moonshiners: Master Distiller, and is now diving into brand advocacy, education, and throwing events that actually make people care about what's in their glass. @probablykb
A big thank you to Jean-Marc Dykes of Imbiblia for setting up our new website theindustrypodcast.club. Imbiblia is a cocktail app for bartenders, restaurants and drink nerds, built by a bartender with more than a decade of experience behind the bar. Features include the ability to create your own Imbiblia Recipe Cards with the Imbiblia Cocktail Builder, rapidly select ingredients, garnishes, and methods, workshop recipes with a unique visual format, search by taste using flavor profiles unique to Imbiblia, share recipes publicly, and much more… Imbiblia - check it out!
Looking for a Bartending Service? Or a private bartender to run your next corporate or personal event? Need help crafting a bar program for your restaurant? Contact Alchemist Alie for all your bartending needs: @alchemist.alie
Contact the host Kypp Saunders by email at kyppsaunders@gmail.com for products from Elora Distilling, Malivoire Winery and Terroir Wine Imports.
Links
kyppsaunders@gmail.com
@sugarrunbar
@the_industry_podcast
email us: info@theindustrypodcast.club
The fastest-growing spirits category in the USA is Tequila/Mezcal. The most popular cocktail is the Margarita. We love tacos, we love tamales. So … can all of that help us as a country learn to love the people who brought us these things we love?
Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Alba Huerta of Julep Houston.
Episode Notes
Shout outs to Alvin Schultz, Ben Quasius, Gustavo Arellano and his book Taco USA.
This episode was recorded about a month before the immigration raids in Los Angeles. The government's violent response to the peaceful protests against those raids is, I think, evidence of what Alba says in this episode.
This Instagram post from my friends at Lost Lore Tequila says everything I'd want to say in a smarter way than I could.
↳ Why is Anthropic in hot water with Reddit?
↳ Will OpenAI become the de facto business AI tool?
↳ Did Apple make a mistake in its buzzworthy AI study?
↳ And why did Google release a new model when it was already on top?
So many AI questions. We've got the AI answers. Don't waste hours each day trying to keep up with AI developments. We do that for you on Mondays with our weekly AI News That Matters segment.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
OpenAI's Advanced Voice Mode Update
Reddit's Lawsuit Against Anthropic
OpenAI's New Cloud Connectors
Google's Gemini 2.5 Pro Release
DeepSeek Accused of Data Sourcing
Anthropic Cuts Windsurf Claude Access
Apple's AI Reasoning Models Study
Meta's Investment in Scale AI
Timestamps:
00:00 Weekly AI News Summary
04:27 "Advanced Voice Mode Limitations"
09:07 Reddit's Role in AI Tensions
10:23 Reddit's Impact on Content Strategy
16:10 "RAG's Evolution: Accessible Data Insights"
19:16 AI Model Update and Improvements
22:59 DeepSeek Accused of Data Misuse
24:18 DeepSeek Accused of Distilling AI Data
28:20 Anthropic Limits Windsurf Cloud Access
32:37 "Study Questions AI Reasoning Models"
36:06 Apple's Dubious AI Research Tactics
39:36 Meta-Scale AI Partnership Potential
40:46 AI Updates: Apple's Gap Year
43:52 AI Updates: Voice, Lawsuits, Models
Keywords:
Apple AI study, AI reasoning models, Google Gemini, OpenAI, ChatGPT, Anthropic, Reddit lawsuit, Large Language Model, AI voice mode, Advanced voice mode, Real-time language translation, Cloud connectors, Dynamic data integration, Meeting recorder, Coding benchmarks, DeepSeek, R1 model, Distillation method, AI ethics, Windsurf, Claude 3.x, Model access, Privacy and data rights, AI research, Meta investment, Scale AI, WWDC, Apple's AI announcements, Gap year, On-device AI models, Siri 2.0, AI market strategy, ChatGPT teams, SharePoint, OneDrive, HubSpot, Scheduled actions, Sparkify, VO3, Google AI Pro plan, Creative AI, Innovation in AI, Data infrastructure.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Try Google Veo 3 today! Sign up at gemini.google to get started.
At the DISCUS conference a few months ago, I was at a party that was populated by small-brand tasting tables. The one that drew my attention most was the one that was both least visited and most colorful. And my response to everything they were pouring was, "Yes, please!" But most especially, the White Chocolate/Raspberry Tiramisu whiskey! It's an artificially inspired episode of Agave Road Trip! Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Linda Sullivan of seynasecreto, with wisdom from Al Murphy of Mother Murphy's Flavors. Episode Notes: Thanks to Al Murphy of Mother Murphy's Flavors. Shout outs to the Distilled Spirits Council of the US, Runts, Skittles, Hampden Estates Rum, Elevate Innovation, Non-Alcoholic Phony Negroni, Zignum Mezcal, Ancho Reyes Verde, Mexico in a Bottle, and "Outspoken," the monthly storytelling event at Chicago's Sidetrack!
Katie Black has distilled, fermented, bartended, coordinated events, traveled Europe, and so much more. It's hard to guess where she'll turn up next, but one thing is clear: her love of people and hospitality will keep guiding her. Her curiosity drives her, and with every move she practices the art of "intention". And as we post this chat today, she announces the next big chapter of her career and a move to Houston. We chat about dating, produce, sharing, and more.
I thought this episode was going to be about blind tastings. And it is. Or, at least, it starts that way. But when you're talking with Marissa, aka @tequilaencyclopedia, you rarely take as straight a path as you'd expect. So if you want to listen to a conversation about that … or about Costco … or about the Agavos Awards … or how much I paid Marissa to join me on a trip to 818 Tequila, this episode is for you! Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Marissa Paragano of The Tequiladies. Episode Notes: Marissa is also a board member of Tequila That Cares, a philanthropic organization bringing positive change to the agave spirits industry! Shout outs this episode to the Agavos Awards, 818 Tequila, Costco, and Marissa's dream team, the Boston Red Sox!
Chuck Cowdery was called "the dean of American whiskey journalism" by no less than Robert Simonson in the New York Times. So when I decided, more than 250 episodes into this podcast, to finally devote one to Mexican whiskey, I naturally invited my celebrated drinking buddy over. It's (perhaps) the corniest episode of Agave Road Trip! Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Chuck Cowdery and additional wisdom from Ismael Gomez of Laika Spirits, Xaime Navarro of Whisky Juan del Campo and Gracias a Dios Mezcal, Txomin Alcorta of Whisky Prieto y Prieta, and Jonathan Barbieri of Whiskey Maiz Nation! Episode Notes: Check out The Chuck Cowdery Blog! Thanks to Ismael Gomez of Laika Spirits, Xaime Navarro of Gracias a Dios Mezcal and Juan Del Campo Whiskey, Txomin Alcorta of Whisky Prieto y Prieta, and Jonathan Barbieri of Whiskey Maiz Nation for the quotes this episode. You can check out NOM 199 here, and to see how the rules for whiskey in Mexico compare to the rules for Mezcal, Tequila, and everything else, check out this spreadsheet. And if the tangent about feni in Goa got you going, check out Hansel Vaz's Instagram page and Fazenda Cazulo!
Nova Premier is our most advanced AI model yet, featuring a million-token context window and enhanced capabilities at nearly half the cost of competitors. Dive into this update and more with hosts Simon and Jillian. 00:00 - Intro 00:31 - Amazon Nova Premier 02:56 - Analytics 04:46 - Artificial Intelligence 11:02 - Business Applications 11:38 - Cloud Financial Management 11:57 - Compute 12:10 - Contact Center 14:50 - Containers 15:13 - Database 17:52 - Developer Tools 18:08 - Management and Governance 20:25 - Networking 22:48 - Marketplace 24:04 - Security, Identity, and Compliance 26:09 - Storage 27:56 - Outro Show Notes: https://dqkop6u6q45rj.cloudfront.net/shownotes-20250516-191312.html
Last episode we contrasted the good done through the consumption of heritage agave spirits with the damage caused by that same consumption. This episode, we talk about the damage done by the casual consumption of agave spirits. And coffee. And chocolate. And … well, everything. It's a casual-destruction episode of Agave Road Trip! Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Linda Sullivan of seynasecreto, with wisdom from Xaime Navarro of Gracias a Dios Mezcal. Episode Notes: Thanks to Xaime Navarro of Gracias a Dios Mezcal and Juan Del Campo Whiskey for the quote this episode. Shout outs to Palomo Mezcal, Dark Matter Coffee, Chengdu Bistro, Rogue Ales & Spirits, and Chuck Klosterman and his novel Downtown Owl! (But also … go read Sex, Drugs, and Cocoa Puffs: A Low Culture Manifesto. Like, now.)
What do you get when you mix the world's largest teaching distillery with the finest students on the planet? The answer is: "Kentucky WildCask Bourbon!" If you are an astute listener, you've heard Anthony reference the "esteemed institution" and the James B. Beam Institute. The bourbon we taste and review this week was created by a group of students at the University of Kentucky. It might be hard to believe, but UK has a Distillation, Wine, and Brewing Certificate, and students in that program took a class where they learned how to create and market a brand. How does the product taste? You'll have to listen to find out. One thing is for sure: we are so happy to see this release and we hope for many more. -------------------------- Socials IG: https://www.instagram.com/themashupky FB: https://www.facebook.com/themashupky TW: https://twitter.com/themashupky Partnership(s): Visit Bourbonoutfitter.com and enter code THEMASHUP for a special discount or visit bourbonoutfitter.com/THEMASHUP Visit https://woodworkcollective.shop and enter code MASHUP for a 15% discount on your order Music: All the Fixings by Zachariah Hickman Thank you so much for listening!
It's amazing and moving to see mezcalerxs whose lives have clearly improved as a result of the growing interest in agave spirits. But there's another side to that story – or, really, multiple other sides to it: some good, some bad, and some … well, the jury is still out on those. We talk about those various stories in this episode of Agave Road Trip! Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Linda Sullivan of seynasecreto. Episode Notes: This episode's cover is courtesy of Slim Pickens! Thanks, Slim! The episode of "The Hidden Brain" that I was trying to recall is "Relationships 2.0: The Power of Tiny Interactions + Your Questions Answered: Erica Bailey on Authenticity." So good! My friends Jen & Adam Murphy have great Instagram feeds, if you're a gardening nerd or a garden-in-a-cocktail nerd! Shout out this episode to Jason Cox, whose Cinco Sentidos brand of agave spirits is amazing! Equally amazing are his two (count them, two!) brands of Oaxacan rum: Alambique Serrano and Cañada - Aguardiente Oaxaqueño! Shout out this episode to mezcal tour guide Randall Stockton and his amazing pandemic program, Feeding Chichicapam! Shout out this episode to Chuck Klosterman and his novel Downtown Owl! (But also … go read Sex, Drugs, and Cocoa Puffs: A Low Culture Manifesto. Like, now.)
When I first started drinking Mezcal, I was told that there were some agaves that couldn't be farmed — that could only grow wild. And I still hear that narrative some two decades later. And it still doesn't make sense to me. So I did the only logical thing: I asked a farmer. Agave Road Trip is a critically acclaimed, award-winning podcast that helps gringx bartenders better understand agave, agave spirits, and rural Mexico. This episode is hosted by Lou Bank with special guest Tracey Vowell of Three Sisters Garden. Episode Notes: If you're anywhere near Chicago or Kankakee, order your farm-fresh produce (and dry goods like locally grown popcorn, beans, and oats) from Three Sisters Garden!
In this episode of Barrel Room Chronicles, host Kerry Moynahan heads to the University of Kentucky for a behind-the-scenes look at one of the most innovative programs shaping the future of American whiskey. Her journey begins with a conversation with Dr. Janice Fernheimer, Professor and founding faculty member of the University's Distillation, Wine & Brewing Studies program. They discuss her unique path from Jewish Studies into bourbon scholarship, the origins of the "Craft Writing" course, and her groundbreaking work on the Women in Bourbon Oral History Project and the graphic novel America's Chosen Spirit. Later, Kerry tours the James B. Beam Institute for Kentucky Spirits with Associate Director Ilka Balk. Together, they explore the Institute's world-class research distillery and brand-new rickhouse, learning how students are gaining hands-on experience in production, fermentation, and aging. The episode highlights the intersection of education, heritage grains, sustainable practices, and real-world bourbon training — all happening on a dry campus. Whether you're a whiskey enthusiast, a fan of fermentation science, or simply curious about how the next generation of distillers is being educated, this episode offers an inspiring and informative deep dive into the academic side of spirits. To see the full show notes for this episode, visit BarrelRoomChronicles.com. BRC is a production of 1st Reel Entertainment.
By drawing on the outputs of other models, distillation can produce AI that's almost as good, more quickly and more cheaply. WSJ tech reporter Miles Kruppa says that has investors worried about the risks of pouring money into the field's cutting edge. And retail reporter Kate King says that while Amazon may be the champion of online retail, its recent store closures show it hasn't replicated that success in the bricks-and-mortar space. Pierre Bienaimé hosts. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
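For listeners curious about the mechanics behind that episode: in knowledge distillation, a small "student" model is trained to match the output distribution of a larger "teacher" model, so much of the teacher's capability transfers at a fraction of the training cost. Below is a minimal PyTorch sketch of the classic soft-label recipe; the toy models, the temperature value, and the random inputs are illustrative assumptions, not details from the episode or the WSJ story.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy models: a larger "teacher" and a smaller "student".
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution to expose more signal

def distill_step(x: torch.Tensor) -> float:
    with torch.no_grad():
        teacher_logits = teacher(x)  # query the larger model; no gradients needed
    student_logits = student(x)
    # KL divergence between the temperature-softened distributions;
    # scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Unlabeled inputs suffice: the teacher's soft labels are the training targets.
for _ in range(100):
    distill_step(torch.randn(64, 32))
```

The same idea scales from these toy layers to large language models, where a teacher's probabilities or sampled outputs become training targets for a much smaller network, which is what makes the economics the episode describes so unsettling for investors.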