From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic “Latency numbers every programmer should know”
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this latest advance.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help us be able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader use cases. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed.
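For readers who want to see this concretely: below is a minimal sketch, in plain numpy, of the kind of distillation loss Jeff is describing, where the teacher's logits become soft targets for the student alongside the hard labels. The temperature, mixing weight, and toy data are illustrative assumptions, not Gemini's actual recipe.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Blend of (a) cross-entropy against the teacher's softened distribution
    and (b) ordinary cross-entropy against the one-hot hard labels."""
    soft_targets = softmax(teacher_logits, temperature)
    log_student_soft = np.log(softmax(student_logits, temperature) + 1e-12)
    soft_ce = -(soft_targets * log_student_soft).sum(axis=-1).mean()
    soft_ce *= temperature ** 2                 # standard gradient-scale correction
    log_student = np.log(softmax(student_logits) + 1e-12)
    hard_ce = -log_student[np.arange(len(hard_labels)), hard_labels].mean()
    return alpha * soft_ce + (1 - alpha) * hard_ce

# Toy usage: a batch of 2 examples, 4 classes.
student = np.array([[1.0, 0.2, -0.5, 0.1], [0.3, 2.0, 0.0, -1.0]])
teacher = np.array([[2.0, 0.5, -1.0, 0.0], [0.1, 3.0, 0.5, -2.0]])
labels = np.array([0, 1])
print(distillation_loss(student, teacher, labels))
```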
What we've seen is that you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to, for multiple Gemini generations now, make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked: so the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have, and also inference time scaling can be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully it's up.Shawn Wang [00:07:50]: No, I mean, there's just the economics-wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products, in AI Mode and AI Overviews.Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs; the interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about the capability saturation: in certain tasks, the Pro model today has saturated some sort of task. So next generation, that same task will be saturated at the Flash price point.
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example where, uh, a benchmark inspired an architectural improvement? Like, uh, I'm just kind of jumping on that because you just...Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something. Most models don't actually have, you know, much larger than 128K these days, or 200K or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The opportunities to explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context, that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing where, yeah, you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that with a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission.
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. 
It was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description? And so you get like now an 18-row table of that information extracted from the video, which is, you know, not something most people think of as, like, turn a video into a SQL-like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned attending to the whole internet, right? Google is almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you really are searching the internet, and you're finding, you know, a very small subset of things that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, BERT was basically immediately inside of Google search and that improved results a lot, right?
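A toy sketch of the multi-stage funnel Jeff describes above, narrowing from a huge corpus down to roughly 30,000 candidates and then the 117 documents that the most capable model actually reads. The scoring functions here are hypothetical placeholders (say, keyword overlap, a small embedding model, and a frontier-model reranker), not Google's ranking stack.

```python
# Cheap scoring over everything, progressively heavier models over fewer candidates.
def retrieval_funnel(query, corpus, score_cheap, score_medium, score_expensive,
                     k_coarse=30_000, k_fine=117):
    coarse = sorted(corpus, key=lambda d: score_cheap(query, d), reverse=True)[:k_coarse]
    fine = sorted(coarse, key=lambda d: score_medium(query, d), reverse=True)[:k_fine]
    # Only this last handful of documents ever sees the most capable (and costly) model.
    return sorted(fine, key=lambda d: score_expensive(query, d), reverse=True)
```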
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having it updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is “Latency Numbers Every Programmer Should Know.” Uh, was there just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, what would I do? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of...Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
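Using the round numbers Jeff just gave (roughly one picojoule for a low-precision multiply, roughly a thousand to move a weight from SRAM into the multiplier), the amortization argument works out as below. The figures are assumptions for illustration, not specs for any particular chip.

```python
# Back-of-the-envelope energy accounting for batching.
PJ_PER_MULTIPLY = 1.0        # assumed cost of one low-precision multiply-accumulate
PJ_PER_WEIGHT_MOVE = 1000.0  # assumed cost of moving one weight from SRAM to the multiplier

def energy_per_useful_multiply(batch_size):
    """Once a weight is moved, every example in the batch reuses it,
    so the movement cost is amortized across the batch."""
    return PJ_PER_MULTIPLY + PJ_PER_WEIGHT_MOVE / batch_size

for b in (1, 8, 64, 256):
    print(f"batch {b:3d}: ~{energy_per_useful_multiply(b):7.1f} pJ per useful multiply")
# batch   1: ~ 1001.0 pJ   (dominated by data movement)
# batch 256: ~    4.9 pJ   (movement amortized ~256x)
```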
Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, to serve at your scale, uh, you probably sort of saw that coming. Like what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you pay a lot higher cost and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC, and that's kind of like the most extreme thing. How much of it is worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.
Interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip. Yeah. Design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of carefulness. Uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you can, might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that I'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. 
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. 
Effectively, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with Deep Research. Um, you kind of have it with AI Mode in a way; it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if the retrieval is like the verifiable part that you can score, or what. Yeah, yeah. How would you model that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving: can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be, you know, a critic as opposed to an actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there is that weird cliff where like, it feels like we've done the easy stuff, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we do the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and are not as capable as we'd like in those areas. And then coming up with good techniques, trying those, and seeing which ones actually make a difference is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now; you're doing IMO and Erdős problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does, it does matter. People do judge books by their covers as it turns out.
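A concrete way to picture the "same model, prompted differently, as a critic" pattern Jeff describes a few turns up: the critic grades candidates, and those grades can rank retrievals or stand in as a (noisy) reward when there is no programmatic verifier. The `llm` callable and the prompt are hypothetical stand-ins, not any specific Google API.

```python
CRITIC_PROMPT = """You are grading retrieved documents for the query below.
Query: {query}
Document: {doc}
Reply with a single relevance score from 0 (irrelevant) to 10 (essential)."""

def critic_score(llm, query, doc):
    # `llm` is assumed to be a plain text-in, text-out callable.
    reply = llm(CRITIC_PROMPT.format(query=query, doc=doc))
    try:
        return float(reply.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0  # unparseable grades count as irrelevant

def top_k_by_critic(llm, query, docs, k=50):
    # Rank 2000 retrieved candidates and keep the 50 the critic rates highest.
    return sorted(docs, key=lambda d: critic_score(llm, query, d), reverse=True)[:k]
```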
Um, uh, just to dwell a bit on the IMO gold. Um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things. And then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this question about, like, the merger of symbolic systems and LLMs was very much a core belief. And then somewhere along the line, people just said, nope, we'll just do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural net based models. So it never made sense to me to have, like, completely separate discrete symbolic things, and then a completely different way of, you know, thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think that IMO progression, with, you know, translating to Lean and using Lean and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model, or I want to, you know, do speech recognition, so I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like one of my, uh, so I interviewed ETA who was on, who was on that team. Uh, and he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that like people with this universal skill set of just, like, machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh.
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, God knows, Gemini Pro is like one to ten trillion parameters, we don't know. But the Gemma models, for example: a lot of people want open-source local models like that, and they carry some knowledge that is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling and going down to the small models, you're actually memorizing things that are not useful. So I guess, do we want to extract that? Can we divorce knowledge from reasoning, you know?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space. You might prefer something that is more generally useful in more settings than this obscure fact that it has. So I think that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are, right? And it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...

Shawn Wang [00:51:49]: ...and reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email. Probably we'd rather have a single model that we can then use, with retrieving from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, and then make use of that and have multiple stages of interaction.

Alessio Fanelli [00:52:24]: That makes sense. Do you think vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or?

Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say, or for robotics. We're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making those kinds of trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, you know, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer, because we didn't get to expose it to as much data there, but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, makes sense. It would be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. Right? If I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data.

Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where I think there was a little bit of that.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really the question.

Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to, appropriately. But there are a lot of healthcare organizations that want to train models on their own data, data that is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably might be better than a general model trained on, say, public data.

Shawn Wang [00:55:58]: Yeah. And by the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah.

Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context. And I think you can fit your whole data set for it in the context, right?

Jeff Dean [00:56:27]: If you take a language like, uh, Somali or something, there is a fair bit of Somali text in the world, or Ethiopian Amharic or something. You know, we're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.

Jeff Dean [00:56:49]:
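The low-resource-language example above is essentially in-context learning from reference material: instead of fine-tuning, all available documentation for the language is pasted into a long-context prompt. Here is a minimal sketch of that idea; the file names and the `call_llm` helper are hypothetical placeholders, not any specific API.

```python
# Sketch of learning a low-resource language purely in context: available
# reference material goes into the prompt of a long-context model, and the
# model is asked to translate using only that material. No fine-tuning.
# `call_llm` and the file names are assumptions for illustration.

from pathlib import Path

def call_llm(prompt: str) -> str:
    """Hypothetical long-context LLM call; wire this to your provider's API."""
    raise NotImplementedError

def translate_with_in_context_reference(sentence: str) -> str:
    grammar = Path("kalamang_grammar.txt").read_text()      # field grammar notes
    word_list = Path("kalamang_word_list.txt").read_text()  # bilingual word list
    prompt = (
        "Reference grammar:\n" + grammar + "\n\n"
        "Bilingual word list:\n" + word_list + "\n\n"
        "Using only the material above, translate this English sentence "
        f"into Kalamang:\n{sentence}"
    )
    return call_llm(prompt)
```

For a language with a somewhat larger corpus, like Somali or Amharic, the same material would more likely be folded into the pretraining mix instead, which is the trade-off Jeff describes.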
Raymond James' Josh Beck talks with TITV Host Akash Pasricha about Amazon's massive 16,000-person layoff and what to expect from Meta and Microsoft earnings tonight. We also talk with The Information's Aaron Holmes about Microsoft's internal reaction to Anthropic's Claude CoWork as well as Ann Gehan & Theo Wayt about the shutting down of Amazon's grocery store experiments. Finally, we get into the return of speculative SPACs with our Finance Editor Ken Brown.
Articles discussed on this episode:
https://www.theinformation.com/articles/microsoft-races-respond-new-threats-anthropic
https://www.theinformation.com/newsletters/the-information-finance/new-spac-boom-let-last-spac-boom
https://www.theinformation.com/briefings/amazon-cuts-16-000-employees
https://www.theinformation.com/newsletters/the-briefing/amazons-fresh-dream-expires
Subscribe: YouTube: https://www.youtube.com/@theinformation The Information: https://www.theinformation.com/subscribe_h
Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda
TITV airs weekdays on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.
Follow us: X: https://x.com/theinformation IG: https://www.instagram.com/theinformation/ TikTok: https://www.tiktok.com/@titv.theinformation LinkedIn: https://www.linkedin.com/company/theinformation/
In this episode of Tank Talks, Matt Cohen and John Ruffolo unpack Prime Minister Mark Carney's China agreement and his Davos speech, calling out the collapse of the rules-based international order and pushing “middle powers” to coordinate against coercion. John and Matt agree the speech was sharp, but they hammer the real issue: Canada has to build leverage at home (resources, infrastructure, internal trade, and actual execution) or “diversifying” becomes a vibes-only strategy.The conversation then pivots to Trump's Greenland framework, rare earth realities, and why the real choke point is processing, not just “owning minerals.” Finally, they switch lanes into markets, covering the biggest anticipated IPOs of 2026 (SpaceX, OpenAI, Databricks, Stripe, Revolut, Canva), why liquidity could snap back for LPs, and why SPACs are creeping back as a funding path for deep tech, including General Fusion's SPAC and the emergence of the Canadian Rocket Company as Canada tries to repatriate space talent.Canada–China trade reset and what it actually means (02:13)Matt tees up the January 16 China agreement and the idea of trade diversification under U.S. tariff uncertainty. John frames it as a fix for specific trade pain (not a full political pivot) and warns against treating China as a “safe alternative.”Davos speech: “truth bombs” vs real-world action (04:11)They break down Carney's Davos message on coercion, great power tactics, and middle-power coalitions. John calls it “spectacular,” but both stress the gap between rhetoric and measurable outcomes.Canada's leverage problem: “build Canada first” (06:39)John argues Canada can't diversify trade if it has nothing competitive and scalable to trade. The conversation turns into a blunt call for domestic execution: resources, pipelines, and the hard stuff that moves GDP.Matt's frustration: Why no national address to Canadians? (08:06)Matt goes off on the lack of direct, plainspoken communication to Canadians about what has to change, what's coming, and what tradeoffs might be required.Trump and Greenland: Bond markets, politics, and power (12:32)John calls Trump's posture performative and points to constraints that actually matter, including internal GOP pressure and market reactions (he highlights the bond market as the real “adult in the room”).Top anticipated IPOs of 2026: the mega list (19:12)They run through what's being floated as the monster class of potential offerings: SpaceX, OpenAI, Databricks, Stripe, Revolut, Canva (and more speculation). The bigger point: it's not number of IPOs, it's dollar value and liquidity unlock.Canada's space bets: Canadian Rocket Company emerges (21:15)Matt shares CRC's emergence from stealth with $6.2M funding (all Canadian investors including BDC and Garage). Focus: repatriating SpaceX/Blue Origin talent and pushing Canada deeper into the space industrial base.Connect with John Ruffolo on LinkedIn: https://ca.linkedin.com/in/joruffoloConnect with Matt Cohen on LinkedIn: https://ca.linkedin.com/in/matt-cohen1Visit the Ripple Ventures website: https://www.rippleventures.com/ This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com
Reed Albergotti is the technology editor at Semafor. Albergotti joins Big Technology Podcast to break down which companies are best positioned in the coming year. We cover Meta's superintelligence gamble, Google's Gemini push, OpenAI's model race, and the rise of AI companions. We also discuss Tesla's self-driving moment of truth, Nvidia's upside and risks, Microsoft's Copilot dilemma, big media and streaming shake-ups, Anthropic's IPO prospects, SPACs and private equity, quantum, and the strange new love stories people are forming with their bots. Hit play for a fast, prediction-packed tour through the year in tech—and a sharp, entertaining look at where the AI economy and Big Tech are headed next. Learn more about your ad choices. Visit megaphone.fm/adchoices
My interview with Larry Goldberg (aka Tesla Larry) all about the upcoming SpaceX IPO. We go over in depth Bill Ackman's public proposal to help SpaceX with a SPARC offering, essentially allowing the company to go public bypassing the traditional investment banking process. This innovative proposal has created a lot of waves in the Tesla / Elon Musk / SpaceX community as the IPO rumors continue to heat up. Overall I think it's good we have this new alternative to consider, it will help with negotiating with the banks and gives Tesla investors a shot at getting SpaceX shares. Tesla Larry on X: https://x.com/TeslaLarry 0:00 Larry Goldberg aka Tesla Larry 0:42 Bill Ackman's SPARC SpaceX IPO Proposal 1:31 SPACs and Disrupting The IPO Process 6:12 Ackman's Disruptive Proposal on X 24:23 How Tesla Investors Get Early SpaceX Access My X: / gfilche HyperChange Patreon :) / hyperchange Disclaimer: Larry and I are investors in Tesla & SpaceX and nothing in this show is financial advice.
Guest: Matthew Le Merle, CEO & Managing Partner, Blockchain Coinvestors About the Guest: Matthew Le Merle is a leading figure in early-stage venture capital, having previously managed Keiretsu, the world's largest angel network. In 2014, he made a strategic pivot into the digital asset space, co-founding Blockchain Coinvestors. The firm is now dedicated to the vision that digital monies, commodities, and assets are inevitable, and all of the world's financial infrastructure must be upgraded. With investment strategies now in their 12th year, Blockchain Coinvestors has backed a combined portfolio of over 1,250 blockchain companies and projects, including more than 110 blockchain unicorns. In This Episode: Decoding the Future of Finance and Emerging Blockchain Unicorns Join us for a deep dive with Matthew Le Merle as he shares the strategic insights that drove his firm into blockchain investing over a decade ago. We explore the massive shift toward digital assets, the unique mechanics of the fund-of-funds model, and the critical role of tokenomics in the crypto ecosystem. Key Discussion Points:
This episode is a special replay of David George's conversation with Harry Stebbings on 20VC. David is a General Partner on a16z's growth team, and in this discussion he breaks down how he thinks about breakout growth investing: why great business models are now table stakes, where real edge comes from non-consensus views on TAM, and how to underwrite upside in a world of higher prices and increasing competition. They also dig into the mechanics behind the scenes: unit economics at growth, “pull vs push” products, winner-take-most market structures, and how David decides when to double or triple down on a company. Along the way, they touch on SPACs, the rise of crossover funds, single-trigger decision making, and how David manages fear, pressure, and performance over the long arc of an investing career.
Resources:
Learn more about 20VC: https://www.thetwentyminutevc.com/
Watch on YouTube: https://www.youtube.com/@20VC
Follow Harry on X: https://x.com/HarryStebbings
Follow David on X: https://x.com/DavidGeorge83
Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
On this episode, Andrejka Bernatova, Founder and CEO of Dynamix Corporation, shares how investors can evaluate mission-critical energy and infrastructure businesses by focusing on fundamentals such as contract quality, customer durability and operating reality, rather than market sentiment or emerging-tech narratives. Learn about approaches to public-market pathways, including when SPACs offer advantages, what truly makes a company public-ready and the operational value-creation levers that matter most in capital-intensive sectors.
Peter Wright, CEO of McKinley Acquisition Corporation (MKLY), details the current state of the SPAC market, highlighting its "next generation phase" with increased investor appetite and improved deal structures. He emphasizes the "better, faster, and cheaper" advantages of SPACs over traditional IPOs, citing deal certainty, valuation certainty, and capital certainty. Wright also outlines McKinley Acquisition Corporation's non-negotiables when screening companies, focusing on those ready, worthy, and eager to embrace public market transparency and growth.======== Schwab Network ========Empowering every investor and trader, every market day. Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – https://twitter.com/schwabnetworkFollow us on Facebook – https://www.facebook.com/schwabnetworkFollow us on LinkedIn - https://www.linkedin.com/company/schwab-network/ About Schwab Network - https://schwabnetwork.com/about
With Matt George, CEO of Merlin Labs Many firms chasing autonomous ground-vehicles have relied on SPACs to reach the public markets, and now Merlin Labs wants to bring autonomy to the skies. The Boston-based startup is developing an AI-powered “pilot,” the Merlin Pilot, designed to manage full “takeoff-to-touchdown” flights across a wide range of aircraft, from light planes to heavy transports. This week, we talk with Merlin Labs CEO Matt George about why the company sat out the first wave of SPAC-driven aerospace mania, and why partnering with Inflection Point Acquisition Corp. IV (Nasdaq: BACQ) feels like the right moment to go public. Matt explains that, in aviation, the use cases for autonomy can be more immediate and valuable than on the ground, from reducing crew needs on cargo and transport flights to enabling fully uncrewed operations in military and civil aviation. We also explore the broader macro conditions driving demand higher, and where the market for autonomous flight stands today. How fast could it grow, and how high could it fly?
RenMac kicks off Black Friday with a dive into consumer weakness as deGraaf outlines why seasonality is stacked against discretionary stocks, and what recent oversold signals in SPACs, semis, and Bitcoin mean for market trend shifts. Dutta questions the logic of a “hawkish cut” as sentiment, income, and labor data deteriorate, warning the Fed may fall further behind the curve. And Pavlick breaks down rising geopolitical friction from Taiwan to USMCA hearings and evaluates how tariffs, Fed appointments, and ACA subsidies will shape 2026 policy risk. Just in time for the Holidays, RenMac unveils its swag store, supporting a great cause - check it out at www.renmacmerch.com
Andrejka Bernatova joins Diane King Hall at the NYSE to discuss the state of companies going public. She explains the reasons why some choose to go via SPACs and others choose IPOs. She also shares her company Dynamix Corp.'s (ETHM) role in taking The Ether Machine public. Andrejka discusses the importance of a healthy balance sheet when choosing the SPAC route. She addresses the importance of underlying fundamentals helping to buoy the go-to-public space. Specifically on the increase in cryptocurrency companies going public, she thinks the digital asset space will be influential in everyday life.======== Schwab Network ========Empowering every investor and trader, every market day.Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – / schwabnetwork Follow us on Facebook – / schwabnetwork Follow us on LinkedIn - / schwab-network About Schwab Network - https://schwabnetwork.com/about
In this episode of Run the Numbers, CJ sits down with Jeff Bernstein, managing partner at Riveron, to unpack what really happens when a company decides it might be time to go public. Jeff draws on his experience across banking, hedge funds, operating roles, and advisory work to break down IPOs, dual-track processes, and the surprising realities behind price discovery—including why a 20x-oversubscribed book isn't what it seems. He also dives into the re-emergence of SPACs, what's different this time, and the key considerations CFOs should weigh before choosing that route. From IPO-readiness must-haves to building the muscle memory needed for public-company life to the sketchiest EBITDA adjustment he's ever seen, Jeff brings stories, frameworks, and hard-won lessons for any finance leader thinking about the road to the public markets.—LINKS:Jeff Bernstein on LinkedIn: https://www.linkedin.com/in/jeff-bernstein-498a23158/Company: https://riveron.com/CJ on LinkedIn: https://www.linkedin.com/in/cj-gustafson-13140948/Mostly metrics: https://www.mostlymetrics.com—TIMESTAMPS:00:00:00 Preview and Intro00:02:59 Sponsors – Tipalti, Aleph, Fidelity Private Shares00:06:20 The Mechanics of Going Public at Riveron00:09:59 The State of Tech Capital Markets00:11:19 Comparing the Internet, Mobile, and AI Waves00:14:11 Understanding Dual Track Processes00:15:34 Sponsors – Metronome, Mercury, RightRev00:19:35 Why Companies Choose to Go Public or Sell00:23:02 Why Price Discovery Is Harder in Today's Market00:26:05 The Pros and Cons of Direct Listings00:29:16 Balancing Fairness Between Employees and Investors00:30:47 Inside the IPO Pricing Process00:34:26 How Banks and Investors Game Allocations00:41:22 The Return of SPACs and Why They're Back00:43:46 Key Considerations for CFOs Evaluating SPAC Mergers00:47:53 The Most Successful SPACs to Date00:49:00 Building Public Company Readiness00:52:03 Developing Muscle Memory for Quarterly Reporting00:55:31 The CFO's Role as Chief Communicator00:57:47 Long-Ass Lightning Round – Overhyped Metrics and Sketchy EBITDA—SPONSORS:Tipalti automates the entire payables process—from onboarding suppliers to executing global payouts—helping finance teams save time, eliminate costly errors, and scale confidently across 200+ countries and 120 currencies. More than 5,000 businesses already trust Tipalti to manage payments with built-in security and tax compliance. Visit https://www.tipalti.com/runthenumbers to learn more.Aleph automates 90% of manual, error-prone busywork, so you can focus on the strategic work you were hired to do. Minimize busywork and maximize impact with the power of a web app, the flexibility of spreadsheets, and the magic of AI. Get a personalised demo at https://www.getaleph.com/runFidelity Private Shares is the all-in-one equity management platform that keeps your cap table clean, your data room organized, and your equity story clear—so you never risk losing a fundraising round over messy records. Schedule a demo at https://www.fidelityprivateshares.com and mention Mostly Metrics to get 20% off.Metronome is real-time billing built for modern software companies. Metronome turns raw usage events into accurate invoices, gives customers bills they actually understand, and keeps finance, product, and engineering perfectly in sync. That's why category-defining companies like OpenAI and Anthropic trust Metronome to power usage-based pricing and enterprise contracts at scale. Focus on your product — not your billing. 
Learn more and get started at https://www.metronome.comMercury is business banking built for builders, giving founders and finance pros a financial stack that actually works together. From sending wires to tracking balances and approving payments, Mercury makes it simple to scale without friction. Join the 200,000+ entrepreneurs who trust Mercury and apply online in minutes at https://www.mercury.comRightRev automates the revenue recognition process from end to end, gives you real-time insights, and ensures ASC 606 / IFRS 15 compliance—all while closing books faster. For RevRec that auditors actually trust, visit https://www.rightrev.com and schedule a demo.—#RunTheNumbersPodcast #IPOmistakes #CFOinsights #SPACs #FinanceLeadership This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit cjgustafson.substack.com
Today's topics: 0:00 Intro 5:03 Earnings reports 9:07 AI 14:53 Evolution 19:10 Kinnevik 22:13 Gold 30:55 SPACs and the government shutdown 34:35 Eutelsat 37:48 Index 39:12 This week's Fill or Kill www.instagram.com/fillorkillpodden Thanks @savr! www.savr.com
Don and Tom tackle a mix of market mania and listener questions, skewering speculative fads like meme stocks, SPACs, private credit ETFs, and covered-call funds. Don opens with a scam text story before the duo dive into the absurdity of “get-rich” products during a record-breaking market. They stress discipline, diversification, and turning off CNBC — repeatedly. Listener questions include Roth conversions in high tax brackets and funding a home purchase without wrecking retirement plans. The show ends on a hilarious tangent about listeners wearing backpack banners to promote Talking Real Money. 0:04 Scam text from Colorado and the hazards of living alone in a studio 1:09 Market highs and the illusion of perfect timing 2:35 Stock concentration, meme stock mania, and the “Magnificent Seven” dominance 3:34 Listener call: investing in a soccer team partnership promising 15–30% returns 5:12 Why “too good to be true” often is — scams and speculative traps 6:09 Covered-call ETFs (JEPI, GPIQ) explained and debunked 9:39 New private credit ETF (PCR): high fees, low transparency, huge risk 12:49 CNBC hype vs. reality — why turning off financial TV is sound advice 16:21 Listener question: Roth conversions and tax traps in the 30% bracket 19:26 Another listener: funding a new home without derailing retirement 21:47 Don's rant on overpricing homes — “every house sells at the right price” 23:24 Real estate emotion vs. math — the price always tells the truth 24:31 Episode wrap-up: humor, gratitude, and an absurd “wearable banner” promo idea Learn more about your ad choices. Visit megaphone.fm/adchoices
We started the program with discussions about SPACs and space-focused investments, where Andrew shared his expertise on SPAC performance and the UFO ETF's methodology. The discussion explored various aspects of space industry investment trends, including index criteria, the evolution of space technology, and the intersection of nuclear and space technologies. The conversation concluded with insights about the flow of investment capital between AI and space industries, along with discussions about regulatory changes and the future opportunities in space exploration.After the introductions and announcements, Andrew discussed his experience with SPACs in some detail, noting that while some have been successful, others have not performed well. He explained that SPACs are not inherently good or bad but rather depend on how they are structured and managed. Andrew shared his personal interest in SPACs dating back to his early career and mentioned that his firm had considered launching a space-focused SPAC but ultimately decided against it due to market conditions. He advised potential investors to conduct thorough due diligence and emphasized the importance of believing in the team behind a SPAC.Andrew explained the origin of the ETF's name “UFO,” which was chosen for its memorable three-letter ticker and availability. He then discussed the fund's performance, noting that it tracks a rules-based index and has exposure to a diverse range of space-related companies, including both well-known and lesser-known names. Andrew also highlighted the fund's global approach and the changing landscape of the space industry, which has led to new investment opportunities. He mentioned that the fund currently holds about 47 companies, up from 30 at launch, and has seen some new space names enter the public markets recently.The discussion focused on space investment trends and index criteria. Andrew explained that private space investments grew from $1.1 billion with 8 investors from 2000-2005 to $10.2 billion with 93 investors from 2012-2018, noting that foreign governments are increasingly seeking space solutions independently of SpaceX. John Jossy inquired about index criteria, and Andrew clarified that the index evaluates space revenue metrics, market cap, and liquidity, with companies needing either majority space revenue or specific revenue thresholds to qualify. Andrew also explained that companies can be removed or re-added to the index based on meeting methodology standards, using Avio as an example of a company that was removed but later re-added when it met the criteria again.The discussion focused on comparing SPACs and UFO ETFs, with Andrew explaining that UFO tracks the S Network Space Index, a global space index launched in 2019 that focuses on companies with significant space-related revenues. Andrew emphasized that unlike traditional ETFs like QQQ, UFO has minimal overlap with other funds and is managed by former Space Foundation Director of Research Micah Walter-Range, who developed the methodology for quantifying space industry revenues.We put more focus on the UFO index, its methodology, and potential inclusion of private funds like Silver Lake. Andrew explained that the index currently only considers publicly traded securities and does not include private investments. He also discussed trends in commercial space investment, noting the impact of geopolitical events on the industry.
Andrew highlighted how conflicts and political shifts have created both challenges and opportunities for space technology companies, potentially leading to more nationalistic approaches in the industry.Andrew discussed the importance of national security and defense in space, highlighting the potential for U.S. companies to win contracts for projects like Golden Dome and potentially share technologies with allies. He noted a strong investor appetite for space companies, citing improved fundamentals and better access to investment opportunities. David asked about the impact of Artemis' success and the race to the moon on investment trends, to which Andrew responded that the moon's strategic importance could influence access and development, mentioning potential data centers and micro-economies on the moon.Given comments by Dr. Kothari, our discussion focused on the intersection of nuclear and space technologies, with Ajay highlighting the potential for thorium-based molten salt reactors to address both energy and climate challenges, noting significant thorium reserves in the US and China. Andrew acknowledged the potential of these technologies while emphasizing the importance of energy for space exploration and the historical benefit of space technologies transferring to Earth applications. David mentioned the emergence of several potential industries from cislunar development and low Earth orbit manufacturing, emphasizing the need for revenue generation beyond seed capital. Andrew said in some cases the fund lists pre-revenue companies. Don't miss his comments on this topic.We looked at many of the space-focused companies and their inclusion in investment indices. Andrew explained that while pre-revenue companies could be included if publicly traded, they typically need to meet specific metrics and be publicly traded to be considered. John Hunt mentioned a potential investment opportunity with a PE of 25 and a dividend of 0.9%. Andrew emphasized the importance of finding a reliable index methodology when investing in specific industries. The conversation also touched on regulatory changes in the ETF industry and Andrew's advice for young entrepreneurs considering space as an investment opportunity.Andrew summarized the space industry's opportunities and challenges, emphasizing the importance of a capable workforce and diverse investment strategies. He highlighted the potential for unexpected opportunities in the space sector, citing the EchoStar story as an example. The group also touched on the impact of tariffs on the space industry and the shifting investment landscape, with AI being seen as a major competitor for investment dollars.Note that this program is archived both at www.thespaceshow.com and doctorspace.substack.com for audio. The Zoom video is on the same Substack site for this date, Friday, Oct. 10, 2025. Special thanks to our sponsors: Northrop Grumman, American Institute of Aeronautics and Astronautics, Helix Space in Luxembourg, Celestis Memorial Spaceflights, Astrox Corporation, Dr. Haym Benaroya of Rutgers University, The Space Settlement Progress Blog by John Jossy, The Atlantis Project, and Artless Entertainment. Our Toll Free Line for Live Broadcasts: 1-866-687-7223. For real time program participation, email Dr. Space at: drspace@thespaceshow.com. The Space Show is a non-profit 501C3 through its parent, One Giant Leap Foundation, Inc.
To donate via PayPal, use:To donate with Zelle, use the email address: david@onegiantleapfoundation.org.If you prefer donating with a check, please make the check payable to One Giant Leap Foundation and mail to:One Giant Leap Foundation, 11035 Lavender Hill Drive Ste. 160-306 Las Vegas, NV 89135Upcoming Programs:Broadcast 4443 Jack Kingdon | Sunday 12 Oct 2025 1200PM PT Guests: Jack Kingdon. Jack discusses his paper “3 months transit time to Mars for human missions using SpaceX Starship” Get full access to The Space Show-One Giant Leap Foundation at doctorspace.substack.com/subscribe
Ever wondered why companies like Airbnb, Spotify, and WeWork chose such different paths to the public markets? In this episode of Corporate Finance Explained on FinPod, we break down the three main ways companies go public: the traditional IPO, the disruptive Direct Listing, and the volatile SPAC.We'll unpack the mechanics, the trade-offs, and the key factors that drive a company's leadership to choose one door over the others.This episode covers:The IPO: The classic route for raising billions in capital, but we reveal the hidden costs and why it led to Airbnb's "money left on the table" problem.The Direct Listing: The cheaper, faster, and more transparent alternative. We explore why it was the perfect fit for companies like Spotify and Slack who wanted liquidity, not capital.The SPAC: The "wild west" of going public. We explain its appeal for speed and why it's a high-risk gamble that ultimately couldn't save WeWork's flawed business model.By the end of this episode, you'll be able to quickly analyze any public offering and understand the strategic choices behind it.
This week, we sit down with Dynamix Corporation (NASDAQ:ETHM) CEO and Chair Andrejka Bernatova, who is about to list her third SPAC in the past four years with one business combination already completed and another pending. She tells us how it has been quite the ride through the last two SPAC cycles and where she thinks SPACs at times went wrong during the last one. Right now, she is focused on completing Dynamix's combination with crypto treasury firm The Ether Machine. She explains why her team zeroed in on Ethereum as the cryptocurrency with the most promise and how that vision was informed by her team's history of dealmaking in the oil and gas space. How will these crypto treasury plays continue to differentiate as they become more numerous? And, what opportunities does she see around the bend for Dynamix III?
It's the busiest year for SPACs since 2021. The unusual case of rate cuts with bank stocks at record highs, and the President says quarterly reports aren't necessary. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode of Investor Connect, Hall Martin welcomes Abe Kwon, a partner at Lowenstein Sandler, LLP, a renowned national law firm with a strong focus on emerging companies, venture capital funds, and investors. Abe Kwon shares his extensive experience as a startup venture capital lawyer and provides key insights into the critical legal considerations for early-stage fundraising, especially regarding venture capital, SPACs, and alternative capital vehicles. The conversation delves into best practices for governance and board structuring as companies grow, emphasizing the importance of trusted legal counsel in navigating these complex waters. Abe Kwon discusses the growing trend of cross-border investments and the complexities early-stage startups face when hiring contractors or employees abroad. He highlights the resurgence of crypto and digital securities, providing his perspective on evolving legal requirements and the importance of staying updated with regulations. The episode also covers strategies for preparing for M&A and IPO events, stressing the importance of having a solid legal framework from day one to ensure smooth exits. Abe Kwon shares lessons learned from challenging deals and offers practical advice for founders in choosing the right legal partner and preparing for due diligence. The discussion wraps up with an exploration of trends in the ESG and impact investing space and how legal frameworks are adapting to sustainability-based capital. Abe Kwon also touches on his involvement in national innovation ecosystems and the impact on local startup communities. Visit Lowenstein Sandler LLP at Reach out to at ; _______________________________________________________ For more episodes from Investor Connect, please visit the site at: Check out our other podcasts here: For Investors check out: For Startups check out: For eGuides check out: For upcoming Events, check out For Feedback please contact info@tencapital.group Please , share, and leave a review. Music courtesy of .
SPACs have been highly active lately in taking public a new generation of nuclear technology companies, but in order for those companies to meet the power demand being driven by the boom in datacenter deployments, they are going to need a steady supply of uranium. This week, we speak with Mark Mukhidja, CEO of uranium mining firm Eagle Energy Metals, and Chris Sorrells, Chairman and CEO of Spring Valley Acquisition Corp. II (NASDAQ:SVII). The two announced a $312 million combination back in July that would create a unique stock that is a pure play on US-based uranium production. Mark explains how the US went from being a leading uranium producer to one that imports nearly all of its uranium and how Eagle Energy's Aurora project has the potential to start turning that around. Chris tells us why this upstream nuclear play is a logical follow-up to Spring Valley I's successful combination with small nuclear reactor developer NuScale (NYSE:SMR) and why he believes this transaction is structured with both Eagle Energy's short-term and long-term financing needs in mind.
Good morning from Pharma and Biotech Daily: the podcast that gives you only what's important to hear in Pharma and Biotech world.Novartis has increased its commitment to its partnership with Argo BioPharma by an additional $5.2 billion, focusing on RNAi agreements targeting cardiovascular diseases. This highlights the ongoing advancements and challenges in the biopharmaceutical industry. Biotechs are turning to special purpose acquisition companies (SPACs) as a way to go public amid the IPO freeze. Gene therapy, with its potential to cure deadly diseases, is still facing challenges in terms of insurance coverage in the U.S. The industry is seeing a shift with some of the biggest biotech SPACs from the 2021 bubble no longer on the market. Meanwhile, Cytokinetics' cardiac myosin inhibitor, aficamten, has shown promising results in a phase III study for patients with obstructive hypertrophic cardiomyopathy. RFK Jr. has announced plans to reorganize chronic disease programs in the US to address high COVID-19 death rates. Companies like Novartis and Arrowhead are making significant commitments to various programs, while Trump's efforts to shore up the pharma supply chain with U.S. API are being questioned. Novartis continues its cutting spree with layoffs in New Jersey.These developments shed light on the evolving landscape of the biopharmaceutical industry.
In this special edition of the RiskReversal Podcast, Dan Nathan welcomes Kristen Kelly and Jen Saarbach from The Wall Street Skinny. They discuss their content creation journey, initially aimed at demystifying Wall Street sectors for newcomers and evolving to cover broader market dynamics. The conversation covers the Federal Reserve's independence, implications of political influence on monetary policy, and potential market repercussions. They also delve into the resurgence of SPACs, exploring their mechanics and market impact during zero-interest rate environments. The episode wraps up with discussions on generative AI's market role and potential financial crises linked to the Fed's actions. Go follow TWSS! Website: https://thewallstreetskinny.com/ Instagram: https://www.instagram.com/thewallstreetskinny/?hl=en TikTok: https://www.tiktok.com/@thewallstreetskinny — FOLLOW US: YouTube: @RiskReversalMedia Instagram: @riskreversalmedia Twitter: @RiskReversal LinkedIn: RiskReversal Media
In this SPACInsider Podcast REPLAY, we go back to January 2022 when the abrupt end of 2021's SPAC euphoria was setting in, and new strategies were needed to weather the storm. We sat down with Niccolo de Masi of the dMY SPACs to get his takes on how SPACs were going to roll with the punches and his own vision for when a refreshed SPAC cycle would reemerge. Now that SPACs are back, which of these predictions came to fruition and what lessons from the down market have teams brought into the new cycle? Give it a listen
The retail earnings flood hit this week and it told us a lot about consumer spending, plus the market is once again buying into meme stocks and SPACs. Is this time different? Travis Hoium, Jon Quast, and Matt Frankel discuss: - Retail earnings and takeaways for investors - Opendoor's pop - The return of SPACs - Meta's new AI strategy Companies discussed: Meta Platforms (META), Alphabet (GOOG), Dollar General (DG), NXP Semiconductor (NXPI), Walmart (WMT), Target (TGT), Home Depot (HD), Lowe's (LOW), TJ Maxx (TJX), Costco (COST), On Holding (ONON), Nike (NKE). Host: Travis Hoium Guests: Jon Quast, Matt Frankel Engineer: Dan Boyd Disclosure: Advertisements are sponsored content and provided for informational purposes only. The Motley Fool and its affiliates (collectively, “TMF”) do not endorse, recommend, or verify the accuracy or completeness of the statements made within advertisements. TMF is not involved in the offer, sale, or solicitation of any securities advertised herein and makes no representations regarding the suitability, or risks associated with any investment opportunity presented. Investors should conduct their own due diligence and consult with legal, tax, and financial advisors before making any investment decisions. TMF assumes no responsibility for any losses or damages arising from this advertisement. We're committed to transparency: All personal opinions in advertisements from Fools are their own. The product advertised in this episode was loaned to TMF and was returned after a test period or the product advertised in this episode was purchased by TMF. Advertiser has paid for the sponsorship of this episode. Learn more about your ad choices. Visit megaphone.fm/adchoices Learn more about your ad choices. Visit megaphone.fm/adchoices
SPACs, or Special Purpose Acquisition Companies, are back – with aerospace & defense tech startups embracing the moment. Merlin, a startup focused on deploying AI into cockpits, is the latest to do so. The company announced a reverse merger with a SPAC led by Inflection Point Asset Management, valuing the company at $800 million pre-money and raising hundreds of millions of dollars in proceeds. CEO Matt George joins Morgan Brennan to discuss the prospects of going public.
Second quarter earnings results have been littered with slumping sales and disappointing guidance. Walmart turned that narrative on its head when it said it was raising sales guidance for the rest of the year. What's in Walmart's secret sauce? Also, investing lessons from Meta's AI strategic changes, a smorgasbord of market news, and stocks on our radar. Tyler Crowe, Matt Frankel, and Jon Quast discuss: - Walmart's increased sales guidance standing out from its peers - Meta's hiring freeze - Chipotle drone delivery? - Cracker Barrel's rebranding - SPACs are back? Companies discussed: WMT, TGT, META, CMG, CBRL, TRIP, TREX Host: Tyler Crowe Guests: Matt Frankel, Jon Quast Engineer: Dan Boyd Disclosure: Advertisements are sponsored content and provided for informational purposes only. The Motley Fool and its affiliates (collectively, “TMF”) do not endorse, recommend, or verify the accuracy or completeness of the statements made within advertisements. TMF is not involved in the offer, sale, or solicitation of any securities advertised herein and makes no representations regarding the suitability, or risks associated with any investment opportunity presented. Investors should conduct their own due diligence and consult with legal, tax, and financial advisors before making any investment decisions. TMF assumes no responsibility for any losses or damages arising from this advertisement. We're committed to transparency: All personal opinions in advertisements from Fools are their own. The product advertised in this episode was loaned to TMF and was returned after a test period or the product advertised in this episode was purchased by TMF. Advertiser has paid for the sponsorship of this episode. Learn more about your ad choices. Visit megaphone.fm/adchoices
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Agenda: 00:00 – Databricks hits $100B: Bubble or just the beginning? 03:15 – Is Databricks actually undervalued at 25x revenue? 07:40 – Are we on the verge of the biggest IPO wave ever? 11:30 – Can Andreessen's Databricks bet return $30B+? 18:10 – Who really gets rich when mega-unicorns IPO? 19:30 – Is the return of Chamath's SPACs the ultimate bubble signal? 28:00 – Should OpenAI staff be cashing out billions in secondaries? 33:30 – Founder raises $130M… then walks away. Is this the new normal? 36:30 – Nubank's $2.5B profit: The best FinTech in the world? 48:00 – On Running at $15B: Can consumer brands still be VC-backed rockets? 52:00 – CoreWeave takes on $11B in debt: smart bet or ticking time bomb? 1:11:00 – Will AI spend really hit trillions—or is it all hype?
A consumer vibes indicator, in the form of two Q2 earnings reports: TJX (which owns TJ Maxx, HomeGoods, and Marshalls) raised its outlook for the remainder of the year after beating expectations. Over the same period, Target reported declining same-store sales. In this episode, today's consumers are choosing off-price bargain hunting over a big-box staple. Plus: Retailers sneak in price hikes, SPACs make a return, and the labor market's got some regional variation.Every story has an economic angle. Want some in your inbox? Subscribe to our daily or weekly newsletter.Marketplace is more than a radio show. Check out our original reporting and financial literacy content at marketplace.org — and consider making an investment in our future.
Is Altman right about an AI bubble? … Plus, Powell at Jackson Hole… Alphabet (GOOG) and Amazon's (AMZN) latest power deals… Why Target (TGT) is selling off... The government's Intel (INTC) deal... And SPACs are back. In this episode: The PGA Tour: Scheffler's odds are laughable [0:39] Sam Altman says AI is in a bubble—is he right? [3:30] Powell at Jackson Hole: What to expect from the Fed Chair [11:40] Why Google is scaling up its stake in this highly shorted stock [15:05] Amazon's FLEX deal: What are cashless warrants? [21:30] Why Wall Street is punishing Target's executive move [27:55] The government's stake in Intel: Good for taxpayers or slippery slope? [33:50] SPACs are back—will retail investors get screwed again? [47:33] Did you like this episode? Get more Wall Street Unplugged FREE each week in your inbox. Sign up here: https://curzio.me/syn_wsu Find Wall Street Unplugged podcast… --Curzio Research App: https://curzio.me/syn_app --iTunes: https://curzio.me/syn_wsu_i --Stitcher: https://curzio.me/syn_wsu_s --Website: https://curzio.me/syn_wsu_cat Follow Frank… X: https://curzio.me/syn_twt Facebook: https://curzio.me/syn_fb LinkedIn: https://curzio.me/syn_li
Send us a textThis week on The Skinny on Wall Street, Kristen and Jen dive into the resurgence of SPACs and yes, Chamath Palihapitiya, the once-dubbed “SPAC King,” is back with a brand-new deal. We unpack what a SPAC really is, why they exploded in 2020, changes in the new structures and more. From warrants and dilution to sponsor “promotes” and fee double-dips, we break down the mechanics in plain English and debate whether this latest wave feels like opportunity or déjà vu all over again.Alongside the SPAC talk, we zoom out to explore what's happening in broader markets. Jen highlights shifts in global rates, including why European long-term yields are rising even as the ECB cuts, while Kristen points to an unusual disconnect with U.S. credit spreads at 30-year lows. We also touch on a wild market day sparked by headlines about AI pilot program failures, raising the question of whether investors are once again pricing perfection into risk assets.Finally, we share some exciting updates from The Wall Street Skinny itself. We've launched live interactive events like AI-Proof Your Career on LinkedIn and YouTube, giving listeners the chance to learn and engage with us directly. Plus, in the spirit of “back to school,” we're running a limited-time flash sale on our finance courses, from express Excel bootcamps to deep-dive investment banking and private equity technicals. Whether you're a student, a new hire, or just a finance junkie, this episode blends timely Wall Street analysis with practical ways to sharpen your own skills.Find courses HERE and use code AUG25FLASH for 20% off through August 24, 2025!For a 14 day FREE Trial of Macabacus, click HERE For 20% off Deleteme, use the code TWSS or click the link HERE! Our Investment Banking and Private Equity Foundations course is LIVEnow with our M&A course included! Shop our LIBRARY of Self Paced Online Courses HEREJoin the Fixed Income Sales and Trading waitlist HERE Our content is for informational purposes only. You should not construe any such information or other material as legal, tax, investment, financial, or other advice.
LISTEN and SUBSCRIBE on:Apple Podcasts: https://podcasts.apple.com/us/podcast/watchdog-on-wall-street-with-chris-markowski/id570687608 Spotify: https://open.spotify.com/show/2PtgPvJvqc2gkpGIkNMR5i WATCH and SUBSCRIBE on:https://www.youtube.com/@WatchdogOnWallstreet/featured SPACs are back—and so is the stupidity. In this episode of Watchdog on Wall Street, Chris unloads on the latest “American Exceptionalism Acquisition Corporation” and explains why investors keep falling for the same garbage.Here's what you'll hear:The outrageous Trump “no crying in the casino” disclaimer written into a SPAC prospectusHow Chamath Palihapitiya's so-called “SPAC empire” has left investors with losses of 70–98%Why penny stocks and boiler room scams are suddenly popular againThe ugly truth: it's not illegal, just gross—and gullible investors keep lining upWhy chasing “get rich quick” plays will always end the same wayInvestors aren't unlucky. They're just stuck on stupid.
In this episode of Decentralize with Cointelegraph, venture capital investor and Bitcoin advocate Tim Draper joins Cointelegraph reporter Vince Quill for a deep dive into the shifting tides of Bitcoin adoption. From the slow but inevitable embrace by institutions to the macroeconomic headwinds threatening the US dollar, Draper lays out why FOMO, regulatory clarity and technological freedom are converging to push Bitcoin into the mainstream.He also shares his take on whether Bitcoin's famous four-year halving cycle still matters, or if bigger macro forces are now in play. Tune in to hear his takes!(01:00) Why institutions are moving into Bitcoin (02:59) Institutional FOMO and bank custody scramble(04:44) Is Bitcoin FOMO risky? Treasuries, El Salvador and “gunpowder” analogy (07:30) Retail still lagging; “dinosaur” risk for holdouts (08:35) How they buy: boardrooms, SPACs, MicroStrategy, Fidelity (10:17) Why Big Tech rejected BTC treasuries (11:19) “Irresponsible not to own Bitcoin” (12:10) Dollar vs Bitcoin: Inflation, satoshis, escape valve (15:34) Halving cycle damped: Macro drivers take over (17:07) Dollar extinction? Could BTC be a budget fix?This episode was hosted by Vince Quill and produced by Savannah Fortis, @savannah_fortis.Follow Cointelegraph on X @Cointelegraph.Check out Cointelegraph at cointelegraph.com.If you like what you heard, rate us and leave a review!The views, thoughts and opinions expressed in this podcast are its participants alone and do not necessarily reflect or represent the views and opinions of Cointelegraph. This podcast (and any related content) is for entertainment purposes only and does not constitute financial advice, nor should it be taken as such. Everyone must do their own research and make their own decisions. The podcast's participants may or may not own any of the assets mentioned.
Regardless of what uncertainty exists in the market or where it originates, deals continue to get done. However, doing deals during more turbulent times takes some strategic maneuvering. Riveron's Alex Shahidi, Transaction Services Leader, Jeff Bernstein, Equity Capital Markets Leader, and Ryan Gamble, Tax Advisory Leader, explore the current deal environment and the factors affecting public and private dealmaking. They discuss tariffs, capital and debt markets, legislative changes affecting tax, IPOs, SPACs, and other factors that affect the execution of M&A deals in uncertain times.
An in-depth discussion with Sebastian Bea, President & Head of Investments at ReserveOne, and Vik Mittal, Managing Member at Meteora Capital Partners. If there has been one hot summer trend among SPACs, it has been the crypto treasury business combination. To help better understand this new genre of SPAC deal, we sat down with Sebastian Bea, President and Head of Investments at ReserveOne, which announced a $1 billion combination with M3-Brigade Acquisition V Corp (Nasdaq: MBAV) last month. We're also joined by Vik Mittal, Managing Member at Meteora Capital Partners. They discuss why this particular play has come to the forefront of the market now and how the market has reacted as more and more entries of this deal type have been announced. Sebastian also explains how ReserveOne plans to generate returns greater than the value of its underlying assets and how it is earmarking a portion of its portfolio for private investments. How will ReserveOne and other companies of this type continue to differentiate themselves as their cohort grows? And what happens if the US government changes its attitude on crypto once again?
----
IMPORTANT DISCLOSURES: This podcast is for informational purposes only and does not constitute investment advice, a recommendation to buy or sell securities, or a solicitation of any kind. The views and opinions expressed by the guests are their own and do not necessarily reflect the views of their respective firms or affiliates. Past performance discussed is not indicative of future results. All investments involve risk, including potential loss of principal. Any performance figures mentioned have not been independently verified and may not reflect actual client experiences or net returns after fees and expenses. The guests may have financial interests in companies, securities, or investment strategies discussed. Sebastian Pedro Bea is associated with the M3-Brigade V/ReserveOne transaction mentioned in this discussion. Vik Mittal serves as Managing Member of Meteora Capital, LLC and principal of numerous SPACs. These relationships may create conflicts of interest. Nothing in this podcast should be construed as personalized investment advice. Listeners should consult with qualified financial professionals before making investment decisions. Market predictions and forward-looking statements are speculative and subject to significant uncertainty.
Mark Whittington on Tuesday, 7-29-25
I introduced Mark, who discussed the current turmoil at NASA, describing the agency as "rudderless" due to the stalled nomination of billionaire Jared Isaacman as Administrator. Isaacman, known for funding private missions like Inspiration4, was nominated by Donald Trump and had garnered bipartisan support, including backing from former NASA Administrator Jim Bridenstine. However, his nomination unraveled after a post on Truth Social falsely labeled him a Democrat and criticized his connection to Elon Musk. Influenced by low-level staffer Sergio Gor—reportedly motivated by personal grievances—Trump withdrew his support. As a result, the nomination collapsed, and NASA remains without permanent leadership. Transportation Secretary Sean Duffy is currently serving as interim Administrator while also handling his existing responsibilities. Mark talked about Sean, so don't miss his commentary on this subject.
Mark talked about NASA facing significant budget cuts and internal conflict over long-term strategy. Mark mentioned the administration's Artemis plans, potential commercial alternatives, and the fact that Congress is fighting to maintain SLS, Gateway, and more of the original NASA funding. Mark then delved into how personal and political tensions are derailing progress in U.S. space policy. Our guest mentioned the feud between Musk and Trump. In addition, Mark talked about how the administration's Sergio Gor appears to have played a key role in shaping Trump's negative stance toward both Musk and Isaacman, reportedly out of personal jealousy. Our guest said that these internal feuds underscore how politics—rather than merit—are influencing critical space policy decisions.
More was said about Artemis and its program timelines, especially getting back to the Moon by 2028. Mark mentioned China targeting a lunar landing by 2030, which could undermine the U.S. space legacy if successful. As for lunar human landers, Mark discussed both the SpaceX effort and the Blue Origin effort as to which lander will be ready first. Our guest reported rumors suggesting SpaceX may be developing a scaled-down, crew-only version of Starship in response to mission complexity and reliability concerns. If SpaceX continues to struggle with full-scale Starship, NASA may pivot to Blue Origin's Blue Moon lander, which appears to have a more manageable development path in the near term.
The proposed Golden Dome missile defense initiative became a topic of discussion. Our guest said it would depend heavily on commercial space providers for deployment. Companies such as Rocket Lab, United Launch Alliance (ULA), and Blue Origin stand to benefit from potential launch contracts. The project evokes comparisons to Reagan's Strategic Defense Initiative, fueled by rising geopolitical tensions and incorporating AI-based targeting systems. A caller raised concerns about the unchecked expansion of satellite constellations like Starlink, Amazon's Kuiper, and similar efforts from China and Europe. I noted that regulation remains minimal, and key issues—including satellite collisions, space debris, light pollution, and traffic management—are largely unaddressed. While international treaties exist, enforcement is weak. Mark pointed out the risks and said that meaningful regulation may only come after a major incident.
I asked Mark about the growing interest in space-related IPOs and SPACs, with companies like Firefly and Redwire gaining attention.
However, caution was urged, with Mark warning that the sector may be in a speculative bubble reminiscent of the dot-com or AI booms. He predicted a "winnowing out" in which only the strongest companies survive, and he advised listeners to consult financial experts rather than invest based on hype.
Mark was asked about his previous reporting on SpaceX working on a new line of autonomous, reentry-capable space capsules designed for orbital manufacturing and research. These capsules would operate independently in low-Earth orbit and return high-value products, such as microchips, to Earth. Launched via Starship, they could offer cheaper, crewless alternatives to space stations, with the added benefit of protecting intellectual property. SpaceX hopes to begin operations by 2030. The new company effort is named Starfall.
Mark reported a CBS poll showing public interest in lunar and Martian missions is growing across all age groups, with the strongest support coming from younger generations. Livingston and Mark envision immersive experiences for future missions, including virtual reality feeds from astronaut helmets and live Zoom sessions with schoolchildren—potentially turning lunar exploration into a highly engaging and educational global event.
As we were approaching the end of the program, I asked Mark about NOAA cuts. Mark was critical of proposed funding cuts to NOAA, particularly during hurricane season when weather forecasting is most critical. He views the cuts as shortsighted and part of a broader rollback of climate-related policies, such as the decision to stop classifying CO₂ as a pollutant. While he supports continued climate monitoring, he is skeptical of some regulatory changes—such as updated HVAC refrigerant rules—that impose high costs on consumers, especially in warmer states.
Mark said he is writing a new book titled How We Got Back to the Moon, documenting the political and programmatic shifts driving the Artemis program. He argues that past delays were primarily due to politics and poor messaging rather than technological limitations. He supports maintaining the Wolf Amendment, which prohibits NASA-China cooperation, and sees bipartisan momentum around commercial space partnerships as a positive sign. Still, he emphasized that sustainable lunar efforts will require clear goals, stable leadership, and long-term investment.
Special thanks to our sponsors: Northrop Grumman, American Institute of Aeronautics and Astronautics, Helix Space in Luxembourg, Celestis Memorial Spaceflights, Astrox Corporation, Dr. Haym Benaroya of Rutgers University, The Space Settlement Progress Blog by John Jossy, The Atlantis Project, and Artless Entertainment
Our Toll Free Line for Live Broadcasts: 1-866-687-7223
For real-time program participation, email Dr. Space at: drspace@thespaceshow.com
The Space Show is a non-profit 501(c)(3) through its parent, One Giant Leap Foundation, Inc.
To donate via PayPal, use:
To donate with Zelle, use the email address: david@onegiantleapfoundation.org.
If you prefer donating with a check, please make the check payable to One Giant Leap Foundation and mail to: One Giant Leap Foundation, 11035 Lavender Hill Drive Ste. 160-306, Las Vegas, NV 89135
Upcoming Programs:
Broadcast 4408: Hotel Mars with Megan Masterson from MIT | Wednesday 30 Jul 2025, 9:30 AM PT | Guests: John Batchelor, Dr. David Livingston, Megan Masterson. Megan discusses her paper on star-shredding black holes hiding in dusty galaxies.
Broadcast 4409: Andrew Chanin | Friday 01 Aug 2025, 9:30 AM PT | Guest: Andrew Chanin. Andrew returns with Procure, UFO ETF & space investment news.
Broadcast 4410: Michael Gorton, scientist & author | Sunday 03 Aug 2025, 12:00 PM PT | Guest: Michael Gorton. Michael talks physics, science, sci-fi & his new book series, the Tachyon Tunnel series. Be sure to see his full bio on our website.
Live streaming is at https://www.thespaceshow.com/content/listen-live with the following live streaming sites:
Stream Guys: https://player.streamguys.com/thespaceshow/sgplayer3/player.php#
FastServ: https://ic2646c302.fastserv.com/stream
Get full access to The Space Show-One Giant Leap Foundation at doctorspace.substack.com/subscribe
In this July 2025 episode of Yet Another Value Podcast, host Andrew Walker shares his latest market reflections. He opens with sharp takes on the speculative surge in crypto-linked equities and questions about hidden leverage. Andrew dissects the potential rise of a new SPAC bubble and lays out a hedge strategy using SPACs at trust value. He then transitions into a deep dive on pattern recognition in investing: its power, its risks, and when it turns into harmful stubbornness. From Warren Buffett's historical lens to Talen Energy and personal investing biases, Andrew probes how past experiences shape investor behavior. The episode closes with musings on CEO arrogance and the importance of open dialogue. As always, Andrew invites feedback and thoughtful conversation from listeners.
____________________________________________________________
[0:00:00] Intro and episode overview
[0:01:21] Sponsor message and host greeting
[0:02:01] Recording issues and July intro
[0:02:54] Casino market and Bitcoin premiums
[0:08:07] Leverage signs and Tesla example
[0:08:56] SPAC bubble and trust value
[0:10:28] Market views and SPAC options
[0:12:58] Pattern recognition in investing
[0:16:58] Buffett's experience and pattern use
[0:20:58] Pattern vs. stubbornness examples
[0:27:21] Talen Energy hesitation explained
[0:34:03] Overreliance on old investment patterns
[0:37:23] Industry arrogance and founder syndrome
[0:41:36] Why Andrew does these rambles
Links:
Yet Another Value Blog: https://www.yetanothervalueblog.com
See our legal disclaimer here: https://www.yetanothervalueblog.com/p/legal-and-disclaimer
Today's guests are Wes Gray, founder, CEO and Co-CIO of Alpha Architect, and Srikanth Narayan, founder and CEO of Cache, which he started in 2022 after grappling with concentrated stock positions in his own portfolio. Previously, he served in engineering and product leadership positions at Uber and Alphabet. In today's episode, Meb, Wes & Srikanth share some big news about a new idea involving both exchange funds and 351 ETF conversions. Srikanth explains the mechanics of exchange funds, the risks associated with stock concentration, and the launch of his newest initiative. The discussion also touches on tax efficiency, fees, the competitive landscape of asset management, and more. Learn more about 351 ETF Exchanges or email us to chat!
(0:00) Starts
(1:10) Discussion on Section 351
(4:33) Explanation of exchange funds and stock concentration risk
(16:25) Case study of the new exchange fund model
(20:06) Onboarding and cost structure of the new exchange fund
(23:13) Qualifying illiquid assets and cost in tax strategies
(31:03) Use cases and investor education on ETF tax benefits
(34:11) FAQs and the importance of tax alpha
(45:23) Reflections on SPACs and speculative markets
-----
Follow Meb on X, LinkedIn and YouTube. For detailed show notes, click here. To learn more about our funds and follow us, subscribe to our mailing list or visit us at cambriainvestments.com
-----
Follow The Idea Farm: X | LinkedIn | Instagram | TikTok
-----
Interested in sponsoring the show? Email us at Feedback@TheMebFaberShow.com
-----
Past guests include Ed Thorp, Richard Thaler, Jeremy Grantham, Joel Greenblatt, Campbell Harvey, Ivy Zelman, Kathryn Kaminski, Jason Calacanis, Whitney Baker, Aswath Damodaran, Howard Marks, Tom Barton, and many more.
-----
Meb's invested in some awesome startups that have passed along discounts to our listeners. Check them out here!
-----
Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).
-----
To determine if this Fund is an appropriate investment for you, carefully consider the Fund's investment objectives, risk factors, charges and expenses before investing. This and other information can be found in the Fund's full or summary prospectus which may be obtained by calling 855-383-4636 (ETF INFO) or visiting our website at www.cambriafunds.com. Read the prospectus carefully before investing or sending money. The Cambria ETFs are distributed by ALPS Distributors Inc., 1290 Broadway, Suite 1000, Denver, CO 80203, which is not affiliated with Cambria Investment Management, LP, the Investment Adviser for the Fund. Investing involves risk, including potential loss of capital. Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to To the Extent That, presented by the ABA. Today's From Boardroom to Courtroom features Kurt Wolfe, a Partner in Quinn Emanuel's SEC Enforcement group, former securities-podcast co-host, and adjunct securities law professor at the University of Richmond. We'll discuss the SEC's landmark Stable Road/Momentus enforcement action, examine newly minted SPAC accounting and disclosure reforms, and explore the surprising 2025 SPAC resurgence.
(0:00) Intro
(1:14) About the podcast sponsor: The American College of Governance Counsel
(2:00) Start of interview
(2:36) Erik's origin story
(4:14) Discussing Foreign Private Issuers (FPIs): his article "SEC Revisits Foreign Private Issuer Eligibility" (June 2025)
(16:45) The Rise of AI and Its Implications. Discussion on "AI Washing"
(19:30) Distinguishing statutory mandates between the SEC, FTC, and DOJ on regulatory oversight of AI
(20:40) The evolving crypto regulatory landscape: "It's a pretty big sea change." "[Now it's] all about bright line rules (vs flexible standards) and trying to provide a lot more certainty to the market."
(23:24) Cybersecurity Threats and Board Responsibilities. Two requirements from the SEC: 1) public companies must disclose material cybersecurity incidents within four business days after determining that the incident was material, and 2) disclosure in a company's annual report about its risk management strategy and governance around cybersecurity. "The real focus is on the material cybersecurity incident reporting."
(29:43) Current Trends in IPOs, SPACs and M&A (Liquidity Exits)
(32:32) SEC Priorities in 2025 and beyond. "The SEC leadership has underscored a back-to-basics approach. What this means is focusing more on clear fraud and fraud that is scienter-based." "They're [also] going to emphasize much more quantitative materiality rather than qualitative materiality." "[This] is another example of how this SEC is focused on bright line rules."
(36:51) SEC Enforcement in Private Markets. *Mention of the Startup Litigation Digest.
(40:31) The Shift from Delaware to Nevada and Texas, and the Impact of Delaware's SB21
(48:08) Books that have greatly influenced his life:
Against the Gods: The Remarkable Story of Risk, by Peter L. Bernstein (1996)
A Random Walk Down Wall Street, by Burton Malkiel (1973)
The Sound and the Fury, by William Faulkner (1929)
(48:54) His mentors
(50:16) Quotes that he thinks of often or lives his life by
(50:48) An unusual habit or an absurd thing that he loves
(51:13) The living person he most admires
Erik Gerding is a Capital Markets partner at Freshfields advising on securities regulation, financial markets and corporate governance. Until the end of 2024, Erik served as the SEC's Director of the Division of Corporation Finance.
You can follow Evan on social media at:
X: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/
__
To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/
__
Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under an Attribution-Noncommercial-Share Alike 3.0 United States License
In Episode 169 of The Investor Professor Podcast, Dr. Ryan Peckham and co-host Cameron take stock of the wild ride that was the first half of 2025. From a market nose-dive in April to a remarkable recovery and new all-time highs, the duo unpacks the key events that shaped investor sentiment, including political shakeups, tariff turbulence, anchoring bias, and the velocity of market swings. With the S&P 500 now up over 5% year-to-date, they examine how investor psychology and retail participation continue to drive volatility—and how disciplined strategies like dollar-cost averaging remain the antidote to reactive behavior.The episode also touches on Amazon Prime Day, new IPO momentum (including Figma's blockbuster numbers), the return of SPACs, and the looming impact of interest rates and political appointments on the back half of 2025. Beyond markets, Cameron shares personal updates—including his upcoming MBA journey—while Ryan reflects on goals, growth, and the importance of making the most of the year's second half. Whether you're chasing gains or chasing purpose, this episode is a timely reminder: don't mistake activity for achievement.*This podcast contains general information that may not be suitable for everyone. The information contained herein should not be construed as personalized investment advice. There is no guarantee that the views and opinions expressed in this podcast will come to pass. Investing in the stock market involves gains and losses and may not be suitable for all investors. Information presented herein is subject to change without notice and should not be considered as a solicitation to buy or sell any security. Rydar Equities, Inc. does not offer legal or tax advice. Please consult the appropriate professional regarding your individual circumstance. Past performance is no guarantee of future results.
On this week's Stansberry Investor Hour, Dan and Corey welcome their colleague Bryan Beach back to the show. Bryan is the editor of Stansberry Venture Value and a senior analyst on Stansberry's Investment Advisory. Bryan kicks things off by discussing passive investing, the stock market's "relentless bid," and what could derail passive investing in the future. He points out that the total assets invested passively surpassed those invested actively last year. Not only is this an important fundamental change, but Bryan says that this alters the dynamic between investors and Mr. Market that legendary economist Ben Graham outlined 70-plus years ago. Then, using Microsoft as an example, Bryan analyzes whether it's realistic to expect the Magnificent Seven companies to return to lower multiples. (0:47) Next, Bryan talks about all the headwinds Apple has faced in the past six months and why he believes the stock would be down much more than it is today if it weren't receiving so many passively invested dollars. He says the size of the relentless bid reached a critical mass during the pandemic, and now the S&P 500 Index will continue to grind higher indefinitely. The only thing that can offset this natural inertia is bad economic news (such as tariffs), and even that is temporary. As Bryan points out, many passive investors aren't aware of what they're doing, so it would take legal changes to fix the problem. (19:32) Finally, Bryan explains that this relentless bid does not apply to every corner of the market. He says small caps and microcaps are still great places to find value. Plus, Bryan discusses the unique situation Tesla is in today, makes a bullish case for restaurant-operations company PAR Technology, and discusses what he got wrong with special purpose acquisition companies ("SPACs") back in 2022. (42:56)
Episode 611: Neal and Toby dive into FICO factoring consumers' buy now, pay later loans into its credit scores. Also, Novo Nordisk cuts ties with telehealth company Hims & Hers, accusing it of mass selling and promoting Wegovy copycats. Then, job seekers are using AI to churn out resumes at a pace that is starting to overwhelm job screeners. Meanwhile, Toby examines the trend of SPACs becoming bitcoin treasuries. Finally, a rundown of the latest market reactions to the conflict in Iran.
00:00 - Images of the universe
3:30 - Market update from Middle East conflict
5:00 - Buy now, pay later impacts FICO score
8:30 - Novo Nordisk breaks up with Hims & Hers
12:00 - AI resumes flood the job market
17:00 - Toby's Trends: SPAC Bitcoin treasuries
21:30 - Sprint Finish!
Check out https://domainmoney.com/mbdaily and start building your financial plan today. We are current clients of Domain Money Advisors, LLC (Domain). Through Domain's sponsorship of Morning Brew Daily, we receive compensation that included a free plan and thus have an incentive to promote Domain Money. Subscribe to Morning Brew Daily for more of the news you need to start your day. Share the show with a friend, and leave us a review on your favorite podcast app.
Listen to Morning Brew Daily Here: https://www.swap.fm/l/mbd-note
Watch Morning Brew Daily Here: https://www.youtube.com/@MorningBrewDailyShow
Learn more about your ad choices. Visit megaphone.fm/adchoices
(0:00) The Besties welcome Thomas Laffont!
(3:26) State of LA, Hollywood's decline, positivity around GDP growth and AI productivity
(10:19) Zuck on tilt over AI: $100M offers, Scale AI deal, hiring spree
(23:58) Mag 7 AI Showdown: ranking the most likely AI winners, biggest stock divergences, and more
(42:41) Why Apple is fumbling AI and how it can fix it
(57:02) IPOs and M&A heating up in 2025
(1:16:18) State of liquidity: SPACs, Direct Listings, and more
(1:25:40) Amazon's "kingmaker" position, job displacement
(1:37:47) Sacks joins to discuss the GENIUS Act passing the Senate
(1:52:13) Animal trailer
Follow Thomas Laffont: https://x.com/thomas_coatue
Animal Trailer: https://www.youtube.com/watch?v=8NNW5r63oXU
Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg
Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod
Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect
Referenced in the show:
https://techcrunch.com/2025/06/17/sam-altman-says-meta-tried-and-failed-to-poach-openais-talent-with-100m-offers
https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html
https://www.nytimes.com/2025/06/12/technology/meta-scale-ai.html
https://scale.com/blog/scale-ai-announces-next-phase-of-company-evolution
https://www.reuters.com/business/meta-talks-hire-former-github-ceo-nat-friedman-join-ai-efforts-information-2025-06-18
https://techcrunch.com/2012/09/11/mark-zuckerberg-our-biggest-mistake-with-mobile-was-betting-too-much-on-html5
https://www.reuters.com/technology/china-launch-new-40-bln-state-fund-boost-chip-industry-sources-say-2023-09-05
https://x.com/JoannaStern/status/1933564098291048764
https://www.youtube.com/watch?v=wCEkK1YzqBo
https://x.com/chamath/status/1932157508698919320
https://www.renaissancecapital.com/IPO-Center/Stats/Pricings
https://www.aboutamazon.com/news/company-news/amazon-ceo-andy-jassy-on-generative-ai
https://x.com/chamath/status/1935369326321877153
https://x.com/chamath/status/1935740807925100853
https://www.google.com/finance/quote/COIN:NASDAQ
https://www.google.com/finance/quote/SPOT:NYSE
https://x.com/ylecun/status/1935108028891861393
https://x.com/ben_j_todd/status/1934284189928501482
https://apnews.com/article/election-2024-senate-ohio-brown-moreno-74c4b91e5866215d4201377fefcadad0
https://companiesmarketcap.com/microsoft/revenue
https://www.youtube.com/watch?v=8NNW5r63oXU
Apple's WWDC 2025 hits and misses; it's not about being first, it's about being best. Jason Moser and Matt Frankel discuss: - The latest news from Apple's Worldwide Developers Conference. - Are Opendoor's best days behind it? - The two stocks we just bought! Companies discussed: AAPL, OPEN, AMD, WM Host: Jason Moser Guests: Matt Frankel Producer: Anand Chokkavelu Engineer: Dan Boyd Advertisements are sponsored content and provided for informational purposes only. The Motley Fool and its affiliates (collectively, "TMF") do not endorse, recommend, or verify the accuracy or completeness of the statements made within advertisements. TMF is not involved in the offer, sale, or solicitation of any securities advertised herein and makes no representations regarding the suitability, or risks associated with any investment opportunity presented. Investors should conduct their own due diligence and consult with legal, tax, and financial advisors before making any investment decisions. TMF assumes no responsibility for any losses or damages arising from this advertisement. Learn more about your ad choices. Visit megaphone.fm/adchoices
Financial innovations are demonized all the time, and perhaps David should be grateful, because a low regard for capital markets is what motivated the creation of this podcast. But today he looks at SPACs and the criticisms of them as an example of how fallacious arguments for central planning work. He makes the case that protection against fraud is one thing, but protection against loss is another. Ultimately, he argues that a system of robust financial markets produces both gains and losses, and that this is a good thing.
Welcome to The Chopping Block – where crypto insiders Haseeb Qureshi, Tom Schmidt, Tarun Chitra, and special guest David Hoffman break down the biggest stories in crypto. This week: MicroStrategy clones are popping up, with Bitcoin-backed SPACs trying to replay Saylor's playbook. Meanwhile, Trump launches a memecoin for dinner invites, Zora kicks off a new era of “content coins,” and Ethereum faces an existential pivot. David Hoffman joins the crew to debate whether crypto's future is real innovation—or just financial theater. Show highlights