Podcasts about UI

  • 4,315PODCASTS
  • 11,547EPISODES
  • 47mAVG DURATION
  • 1DAILY NEW EPISODE
  • Mar 8, 2026LATEST

POPULARITY

20192020202120222023202420252026

Categories



Best podcasts about UI

Show all podcasts related to ui

Latest podcast episodes about UI

ComixLaunch: Crowdfunding for Writers, Artists & Self-Publishers on Kickstarter... and Beyond!
New Kickstarter Feature Alert: Adding Free Items to Backer Pledges

ComixLaunch: Crowdfunding for Writers, Artists & Self-Publishers on Kickstarter... and Beyond!

Play Episode Listen Later Mar 8, 2026 45:19


In this session, Tyler trains creators on Kickstarter's new feature for gifting free items to backers by adding existing reward items to individual backers or filtered groups via the Backer Report, with some UI quirks requiring multiple clicks and scrolling.

Analysen und Diskussionen über China
China and Russia – a deepening relationship? With Minna Ålander, Filip Rudnik and Eva Seiwert

Analysen und Diskussionen über China

Play Episode Listen Later Mar 6, 2026 29:04


In these uncertain times of historic geopolitical tensions, it is all the more important to understand the nature and elements of the relationship of two of the world's most influential powers: China and Russia.Three researchers that investigate the relationship of these two countries join Johannes Heller-John in this episode: Minna Ålander, Analyst at the Stockholm Centre for Eastern European Studies at the Swedish Institute of International Affairs (UI), Filip Rudnik Senior Specialist at the Centre for Eastern Studies (OSW) in Poland and Eva Seiwert, Senior Analyst at MERICS.All three of them contribute to the China-Russia Dashboard, which aims to foster a better understanding by tracking and analyzing the economic, political, security, and societal dimensions of China-Russia relations and their changing quality over time. The Dashboard is a collaboration of MERICS, OSW and UI, which also includes the Swedish National China Centre.

Tech Deciphered
74 – The Prediction Episode

Tech Deciphered

Play Episode Listen Later Mar 5, 2026 62:52


Who dares to make predictions in the current landscape? We do!  Our Predictions are back. Will our track-record continue on a high or will we be fundamentally wrong? Listen in to our Predictions for 2026 Navigation: Intro What will 2026 be all about? AI, AI and … more AI The big Hardware movements Of Start-ups and VCs Regulatory & Geopolitical Headwinds… and the Wars Fintech, Crypto and Frontier Tech Conclusion Our co-hosts: Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro Our show:   Tech DECIPHERED brings you the Entrepreneur and Investor views on Big Tech, VC and Start-up news, opinion pieces and research. We decipher their meaning, and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news Subscribe To Our Podcast Bertrand Schmitt Introduction Welcome to Tech Deciphered Episode 74. That would be an episode about some predictions about 2026. What will be 2026 all about? I guess this year is probably starting with a bang. We saw the acquisition of xAI by SpaceX. We saw an acquisition from Grok by NVIDIA. What’s your take about what would be the big themes in 2026? I guess it would be for sure about AI and space. Nuno Goncalves Pedro What will 2026 be all about? Yeah. I predict a year that will be a little bit more of a year of reckoning in some way. There will be a lot of things that I think we’ll start seeing through. The fact that we are in the midst of an amazing transformational era for technology, the use of AI, but at the same time, obviously, a ridiculous bubble that is going alongside it as we’ve discussed in previous episodes. I think that we’ll start seeing some early reckonings of that, companies that might start failing, floundering, maybe a couple of frauds along the way, etc. I’ll tell you what I will not make many predictions about today, which is geopolitics. Geopolitics, I will not make predictions at all. Who the hell knows what’s going to happen to the world this year in 2026? I don’t dare making any predictions on that. Back to things where I would make predictions. I think on AI, we’ll have a little bit of reckoning. We’ll talk about it a little bit more in detail during this episode. Interesting elements around the hardware and physical space. Physical space, we just dedicated a full episode to it. We won’t go into a lot of details on that, but definitely on the hardware side, we’ll talk a little bit more about it. The VC landscape is going through an incredible transformation. We’ll talk about it today as well and some of our predictions for this year. What will happen to the asset class? It seems to be transforming itself dramatically. Obviously, that has a very direct impact on startups, so we’ll talk about that as well. And then to close a little bit the chapter on this, we will address some regulatory and geopolitical, let’s call it, headwinds without making maybe too many complex predictions. We shall see. Maybe by that time of the episode, we will be making some predictions. You guys should stay and listen to us, and maybe we will actually make some predictions about the geopolitical transformations that we will see this year in the world. Then last but not the least, we’ll talk about fintech, crypto, frontier tech, and a couple of other areas before concluding the episode. A classic predictions’ episode. We normally have a pretty good track record on some of these, but right now, the world is going a bit interesting, not to say insane. Bertrand Schmitt Yes, and going back to some news, Groq technically was not acquired, but, practically, it’s as if it got acquired. I’m talking about Groq, G-R-O-Q. The AI semiconductor company focused on inference AI, and it was late December. It was a way to end the year. This year, we started again with an acquisition of xAI by its sister company, SpaceX. I guess that’s where we are starting. AI, AI and … more AI We are going to start on AI. That’s definitely the big stuff. Everything these days, I guess, is about AI or has to have some connection with AI, or it doesn’t matter. I think every company in the world has seen that. You have to have the absolute minimum on AI strategy. You better execute on this strategy and show results, I would say. For the companies that were not AI native, you truly have to have a way to transform yourself. I guess at some point, the stretch might be too much, and it’s not really reasonable. Then you maybe better stay on what you are doing, especially if you’re in tech, you better be moving faster to AI. Nuno Goncalves Pedro Just to highlight, and I think throughout the episode, you’ll see that there’re obviously a lot of implications that would manifest themselves into capital markets. I mean, we’ll specifically talk about VCs and startups later on. But the fact that everything needs to be AI, the fact that there’s so much innovation happening right now, in my opinion, and this is maybe the first pre-topic to AI, is we’ll see a tremendous increase in M&A activity this year across the board. I mean, we’ve seen already some big acquihires we mentioned in some of our previous episodes, but we’ll see a lot more activity on M&A this year. Normally, that’s a precursor to the opening of capital markets. I predict also that there will be a reopening of the IPO market that never really reopened last year, to be honest. M&A, a lot more, reopening of the IPO market. Normally, it happens in the second or third quarter of the year. That’s what my M&A friends tell me. First quarter of year, everyone’s figuring out stuff. Then last quarter of the year, things should be more or less closed. Maybe the third quarter is the big quarter. We shall see. But definitely, as a precursor to our conversation today, I think we’ll see a lot of M&A, and we’ll see reopening of the IPO mark. Bertrand Schmitt I guess last year was not as big as you could expect on M&A given the tariff situation announced in April and May. I mean, it became quite tough to do IPO in such market conditions. Definitely, we can hope for something dramatically different in 2026. I guess talking about public markets and IPO, I guess the big one everyone is waiting for is SpaceX. SpaceX getting even more interesting with its xAI acquisition. Nuno Goncalves Pedro Do you think that because of the acquisition, it’s more likely that it will happen this year, or because of the acquisition, it’s less likely that it will happen this year? Bertrand Schmitt That’s a good question. My guess is the acquisition of xAI is all about xAI needing more financing and cheaper financing. This acquisition is a pathway to that. SpaceX being a much bigger company, a company that is also making much more revenues. I could bet that there is higher probability that, actually, SpaceX will go public in order to finance itself. At the same time, will it have enough time to prepare itself for the IPO given this acquisition just happened? Can they do that in 6 months? I mean, if anyone can do it, I guess it’s Elon Musk. It’s a strategy to present an even more attractive company with an even more interesting story, a story of vertical integration from AI to space. I guess the story as it’s presented itself right now, it’s one about having your AI data centers in space. Because in space, you have much better solar energy production with solar panels. You have a perfect cooling situation because you are in space. Thanks to Starlink, you have the mean to communicate between the satellites and with Earth itself. I think if someone can pull up a story like AI data center in space, I guess Elon Musk can. There is, of course, a lot of questions about is it practical? Is it economical? Yes. I certainly agree. I’m not clear on the mass, and can you make it work? Again, I mean, Elon Musk single-handedly, with SpaceX, managed to transform the space market on its head. I mean, they are the biggest satellite launching company in the world. They have the most satellites in the world. I mean, I’m not sure I would bet against him, and I guess I would probably believe that he could pull up something. Time frames, different story. The 2-3 years data center in space for AI as cheap as on Earth, I have more trouble with that one. I mean, it’s a usual suspect with Elon Musk. You promise something unachievable in a few years, but, ultimately, you still manage to reach it in 5 or 10. Again, I would not bet against the strategy. Nuno Goncalves Pedro Yeah. I’ve talked to a couple of space experts, people that have launched rockets, and have worked JPL, NASA, and a couple of other places, etc. For what it’s worth, their feedback is, “No way in hell, and we’re decades away.” We’ll see. I mean, to your point, Elon has pulled very dramatic stuff. Not as fast as he normally says he’s going to pull it, but within a time span that we all see it. Difficult to bet against him. In terms of actually the prediction, maybe to respond to the prediction as well, will SpaceX IPO? I’m going to make a prediction that has a very high likelihood of missing the mark, but I think Tesla’s going to buy and merge them both into it. It’s going to become a public company through Tesla. That’s my hypothesis. Bertrand Schmitt No. That’s supposed to be it. That’s how you solve that. Nuno Goncalves Pedro And Elon controls the whole universe. X, xAI, Tesla, SpaceX, all under one umbrella beautifully run. And SolarCity is well in there, of course, so wonderful. Bertrand Schmitt That’s possible. Certainly, you are not the only one thinking Tesla will acquire or merge with SpaceX. To remind everyone, Tesla is around 1.3, 1.5 trillion market cap. Depending on the day, SpaceX seems to be valued at similar range, 1.2, 1.3 trillion. It looks like it’s the most valued private company at this stage. These are companies of similar size, so that’s one piece of the puzzle. When you think about the combined company, we could be talking about a 3 trillion entity. Playing right here with the biggest companies in the marketplace today. Nuno Goncalves Pedro With a couple of tweets from Elon, it will rapidly get to 4 to 5 trillion. Bertrand Schmitt That’s so tricky. Nuno Goncalves Pedro Yes. On AI and back to AI, one thing I think that we’re about to see is this will probably be the year of agentic AI. Obviously, we predict a lot of growth on that side of the fence, in particular on the enterprise B2B side. We see a lot of opportunities coming through. From our perspective, at least at Chamaeleon, we generally believe that there’s going to be a lot of movements on agentic AI. It’s also going to be probably the year of the first big fails of agentic AI that will be newsworthy. There will be some elements about that loop and how it gets closed that will happen. I think we might see some scandals already. We’re already seeing the social network of bots talking to bots. We will see other scandals going on this year even in the consumer space and in the bot to bot space, which we now can talk about or in the AI agent to AI agent space. My prediction is we will see some move forwards. There’ll be some dramatic funding rounds along the way. We’ll see a couple of really cool things out of the gates coming out that are really impressive, but we’ll also see the first big misses of the technology stack. I don’t think we’ll go fully mainstream yet this year, so it’s probably maybe something more for 2027 along the way. That would be my prediction again. I think enterprise will lead the way. We’ll definitely see a lot of stuff on consumer as well that is cool. Then we’ll all have our own personal assistance in our hands, basically, literally in our phones. Bertrand Schmitt Going back to agentic AI, we also started the year with some pretty dramatic move. I mean, the launch of Clawdbot, renamed OpenClaw. I mean, this stuff took fire in like a week or 2. It was coded by just one person who actually didn’t even code the product but used AI to build the product, 100% used AI, proposing some new ways also to leverage AI to do coding. He has a pretty unique approach. It’s not vibe coding. I would say it’s a better way to do that. Then the surprising evolution with the launch of a social network for AI agents, Moltbook. I mean, this stuff, probably there is some fake in it. But at the same time, I think it’s quite impressive because it’s the first time we see truly 100,000 plus agents communicating directly to each other. Yeah. I mean, that’s the first time we see surfacing the possibility of some sort of hive mind on the Internet. It’s pretty surprising. Right now, all of this is a hack done in a few days. By end of year, by 2 years, 3 years, we might discover that, actually, the best approach to AI might not be the AI assistant like we are doing today, but a combination of hundreds of thousands of AI working closely together. We might be witnessing the first sign of new intelligence in a way. Nuno Goncalves Pedro Things like this social network might either be Skynet, the beginning of Skynet. They might be the beginning of Her, or they might just be a fad and nothing really happens. It’s just interesting to see what these agents are doing. Bertrand Schmitt Totally. Nuno Goncalves Pedro Obviously, there are real and clear and present dangers of some of the integrations of AI we’re seeing in the market. Interesting enough, and I’ll ask you for your prediction a bit, Bertrand. I think we’ll probably see the first big mishap of AI being used in some infrastructural decision in the age of AI. I mean, we’ve seen AI issues in the past and software issues in the past. We talked in previous episodes about that as well. Mishaps of software that have led to people dying. But I think probably the first big mishap will happen this year as well. Very public mishap of the use of AI and serve its interactions with infrastructure or something that’s very platform related, etc, that will have big impact that everyone will notice. That’s my prediction for the year as well. We’ll have the first big oops moment, as I would call it, for AI in this new age of full on AI. Bertrand Schmitt I would say first some perspective. I think today, people are not using AI directly for life and death decision, at least not that I’m aware. We’re not going to let AI fly a plane, for instance, tomorrow so you can be, reassured. At the same time, given there is such a race to AI, there definitely might be some mistakes. We were talking about the social network for AI agents, Moltbook. Apparently, all the keys used to secure the AI were shared by mistake because it was not properly locked down. We can see that indirectly, mistakes will be made for sure. Two, it’s highly probable that some people will trust AI too much to do some stuff, and this stuff might not work and might have some grave consequence. Hopefully, there is not so much of this. Hopefully, it’s mostly AI used for the good. But you’re right. I mean, at some point, the more we use the technology, the more there would be issue. I mean, it’s highly probable. Nuno Goncalves Pedro That will lead me to another prediction, which is, and we’ll talk about more of it later, but it probably will lead to the first significant movement in terms of regulatory environment certainly in the US at some point if it happens in the US in particular, where there will be some movement that will be like, “Hey, you guys can’t do this anymore.” Because this will probably emerge from mismanaged interfaces. From systems having access to stuff that they shouldn’t have access to in the first place. Talking a little bit more about what’s happening in AI. You’ve already mentioned some of the issues that relate actually to security and cybersecurity. We keep talking about AI. We keep talking about all these infrastructure pieces and platforms that are being built. I think we’ll have a lot more incidents like the one you just mentioned where things will be shared that shouldn’t have been shared, where people will break systems and get into it, etc. Let’s see where that takes us, which is a little bit ironic because, obviously, with AI, the promise is that cybersecurity becomes more robust as well because there’re agents working on our behalf on the cybersecurity side. There’s also agents working on the other side. Bertrand Schmitt It’s a constant race. It’s the attackers, defenders. Each time you have new technology, you have a new race to who is going to attack or defend the best. Each new wave of technology, it’s an opportunity to challenge the status quo. Nuno Goncalves Pedro The attackers have been winning, and I feel they’ll continue winning in 2026. I think it’s going to still be a year of attack. We’ll see more and more breaches, more and more stuff that will happen. Bertrand Schmitt I don’t know if they will win. I mean, it’s normal that they win once in a while. For sure, some infrastructure is not updated as it should. Some stuff are not managed as it should, so there will always be breaches. I don’t know if things are dramatically going to change because, again, everyone who cares who is going to update his infrastructure with AI for defense. There is no question that you have no choice. We will see. That I don’t know. For sure, AI will be used to attack directly with AI. Maybe you’re able to do bigger, larger scale attack. Or thanks to AI, you are simply able to create new type of attacks more easily. AI can be used behind the scene as a way to prepare and organise new type of attacks, even if it’s not used directly live in the battle. Nuno Goncalves Pedro One topic that we’ll come back to later is the geopolitics of everything, but maybe more broadly. On the geopolitics of AI, it’s very clear that we have an arms race going on. Obviously, the US on the one hand, China on the other hand is the two extremes, putting tremendous amount of capital into data centers just at the base of that infrastructure. Chipset development, chipset access, a huge theme in terms of the export restrictions, etc, that are being forced by the US. I think it will continue. From a European standpoint, obviously, they’re stuck between a rock and a hard place, to be very honest. Let’s see what happens on that side of the fence. My view of the world is that certainly from a US and China perspective, we’re going to see a lot more movements in 2026, like big movements. The Chinese movements we always see in delay.  It takes us a couple of months, sometimes even more than that to understand exactly what’s going on. I think we’re going to see some huge moves this year in terms of the States, the United States of America, and China really pouring capital into the creation of the next big winners around AI. I think the US is obviously more visible. We see a lot of these companies. We’ve just discussed xAI and its acquisition by SpaceX or merger. I don’t know what they’re calling it exactly. Effectively, on the China side, the movements I think are already very big. As I said, it will take a while to figure out exactly what those moves are. One thing that I propose is that at some point, China will have very little dependency on chipsets from the US. I’m not sure it’s going to happen this year, but I think the writing is on the wall. Irrespective of any other geopolitical issues that is coming to the fore at this moment in time. That’s one of the key areas or in arenas of fight. Bertrand Schmitt It makes sense. If you are China, you will look at what happened. You would think that you cannot just depend on the largest of one country. It makes rational sense, the same way it makes rational sense for the US to limit exports to China because there is value to delay some peer pressure that could use these technologies for good but also for bad. If you were an ally of the US, that would be one thing. But when you are not an ally of the US, that certainly should be a different perspective. Maybe one last point concerning agents, I think there will be a lot that will revolve around coding. We can see OpenAI with Codex. We can see Cloud with code. There was, of course, [inaudible 00:18:28] that was trying to be big on agentic coding. I think agentic coding was one of the big transformation in 2025 and is going to get bigger in 2026. I think for a lot of people who do coding, there was a radical transformation in terms of what you can achieve, what you can do, how much you can trust AI to help you code. I start to think we might see this year, the replacement of not just one AI replace one coder, but one AI replace a full team because of the new ability to manage that at scale. Coding might be a common activity where you are going to think about outcomes, think about objective, think about how you organise, but not really coding by itself anymore. A big change, like you used to code, directly your hand on the stuff, but step by step, everyone is going to become a manager of agent. I think in one year, we saw enough transformation to think that in the coming year, the transformation can be even more dramatic. Nuno Goncalves Pedro The big Hardware movements Now switching gears to hardware. Obviously, a lot of movements in 2025 and over the last few years. One piece of thesis that we’ve had long-standing at Chamaeleon is that we will see the emergence of AI devices. Some of them have been tremendous failures as we discussed in the past. I predict that we’ll have a couple of really interesting full stack AI devices in the market this year. Why does that matter? Because, as many of you know, obviously, there’s compute that can happen in data centers and cloud infrastructure all over the world, but also there’s compute that can happen at the edges. The more you can move to the edges and the more you can create devices that actually allow you to have user experiences that are very distinctive at the edge, the more powerful some of these devices might become. I predict Apple will not be the first to launch anything on this. I predict probably OpenAI, after the acquisition of IO, will maybe not launch something this year, but will announce something this year. I’ll step back on that prediction. They’ll announce something this year, but maybe not launch. But we’ll start seeing some devices that have some interesting value in the market, probably devices that are AI devices, but they are very focused on very specific user flows, and so very much adequate to specific activities. I won’t make a prediction on that, but I think areas that would make sense for that to happen would be obviously around fitness, health, et cetera, et cetera, where we already have the ascendancy of products like Oura Ring and others out there. Definitely, that’s one area that might have quite a lot of developments. I think AI-first devices, devices that are very focused on compute at the edges, providing user flows that are AI-enabled to end users, we’ll see a lot more of that and a lot more activity this year. Again, I don’t think Apple will be necessarily ahead of the game. Again, maybe OpenAI will give us something to at least think about and look forward to. Bertrand Schmitt First, I’m not sure it will be that transformational because if it’s not in your phone, in your pocket, there is only so much you can do with it, and there is only so much computing power you will have. I’m doubtful it would be really impactful this year. Nuno Goncalves Pedro I feel we’ve been discussing this shift of paradigm in input and output. For me, some of these devices could lead to that shift. Because, again, a mobile phone is not a great long-term paradigm for the usage that we have because it’s really constrained by the screen. The screen is really what takes most of the battery life away. If we didn’t have that screen, what could we do? If we have the block that is as big as a mobile phone, and it didn’t have a screen, it was just compute, that’s a mini computer, a microcomputer. Bertrand Schmitt That’s a fair point, but I don’t see that transformation this year. That’s really more my point. I can see that you can have AI-enabled smart glasses, and it’s clear there is a race to AI-enabled smart glasses. My point is more to go beyond the gadget, it would take quite a while. It would need to have cameras. It would need to analyse what you see. It would need to hear what you hear. Again, it might come, but then at some point, it would be okay, what do you do with it? We have the example of the movie Her. That’s showing Her what it could be. There are definitely possibilities. It’s clear that if you take the big VR headset like the Apple Vision Pro, there is a failure from that perspective in the sense that I think it’s a great, amazing device. The big problem is that it’s doing way more that makes sense. I think there will be a clearer separation between your smart AR glasses that has to be light, that has to be always unconnected, and that’s primarily there to help you make sense of the world around you. The true VR headset that doesn’t really require much in terms of AI, and it’s just there to immerse you in a different world. For this, we know, unfortunately, in some ways, that there is not a lot of demand for it. Maybe there is little demand because you are too hidden in your own world. The technology is not working well enough yet. There are a lot of reasons. But I think Apple trying to do both at the same time, AR and VR, with the Vision Pro, was a pretty grave structural mistake. I think we would see a clearer line of separation between the two. There is bigger market opportunity for AR glasses. That, I certainly agree. There is opportunity to connect that to a computing device. As you talk about, your glasses are your screen, your phone becomes something in your pocket connected to your glasses. Nuno Goncalves Pedro For me, Apple has their way of doing things. From the perspective of what you said, they normally really plan their devices. Even if it’s a big shift in terms of a new area, like they tried with the Vision Pro, and we criticised them for launching it as a device that should have been more of a dev device that they really launched as a full-on device, but that’s their playbook, classically. I think Apple needs to change how they put products out and how they experiment with those products, et cetera. I think they have enough money to be doing everything all the time and figuring it out. If they don’t want to put it out, then they need to do a lot more hell of testing internally with their silos, but they should be playing across all these arenas, VR, AR, everything. They just should put devices out that are either ready for prime time, or they should call it something else. They should call it like this is a dev device or whatever it is. Bertrand Schmitt I agree with you. My complaint is more that it was marketed as a consumer device when it was not. It was a true developer device. Two, they tried to mix the two at once, and it made no sense. No one is going to walk in their home or in the street with their Vision Pro on their head. You have to be deranged, quite frankly, to have use cases like this. I think that for me is a crazy mistake from a company like Apple that prides itself in pure UI, pure user interface, very well-designed device for one specific use case, not mixing the two use cases. We still don’t have Macs with a touchscreen, you know?  We still don’t have an iPad with a good OS that makes use of this great hardware. For some strange reason, they decided to mix everything in the Vision Pro with a device that weighs a ton on your head and is so uncomfortable. That’s why, for me, I’m like, “Guys, what is wrong? Why did you let this team run crazy?” I hope at some point, Apple will go back to the drawing board. My understanding is that that’s what they are doing. They are going to have two devices, one smart glasses, an evolution of the Vision Pro, just focus on VR. They might actually abandon the concept of the pure VR-oriented headset. Because, from a market size perspective, it might not be big enough for Apple, quite frankly. Nuno Goncalves Pedro I read on all of the above, and people at this point was like, “Why are then players like Samsung and others not doing it. LG, et cetera?” Because those players historically have not invented new categories. They’re amazing at catching up once the category is invented, and then they scale the hell out of it, and that’s what these companies have been exceptional at. I wouldn’t see a dramatic innovation, I think, in terms of devices coming from any of the big ones on that side of the fence. Not to disrespect them in any way, but I think that’s not been their playbook ever. Again, if the origination doesn’t come from a start-up or from an Apple, I don’t see those guys going after it. My bet is that we’ll see some start-up activity and, again, hopefully, some announcement from IO now within the OpenAI world. Bertrand Schmitt I would slightly disagree with you. I see where you are coming from. But take the Samsung Galaxy Note, that sudden much bigger headphone that no one was doing that was launched by Samsung, at some point, it forced Apple to launch an iPhone Max. Let’s look at the Z Fold that Samsung launched 7 years ago, copied by everyone. Now Samsung launching a trifold. Apple has still not launched their foldable phone. I think there is a mix, actually, of sometimes- Nuno Goncalves Pedro For me, that’s not a proper new category. It’s still a mobile phone. It just happens to have a screen that folds in half. Bertrand Schmitt The iPhone was still a mobile phone, you could argue.  Nuno Goncalves Pedro No. I think the iPhone was…  I could actually agree with you on that point. Maybe Apple is not as innovative in that case. I think what Steve Jobs was exceptionally good at in terms of his ability as this master product manager was to be an exceptional curator of user flows and user experiences, and creating incredible experiences from devices based on that. That was his secret sauce. Could you say, “Wasn’t all of this stuff already around?” It was. You just put it all together very neatly and very nicely. But if you’re talking about significant shifts in how a category is done, the iPhone was a significant shift in how the category was done. The Fold is still an interesting device. I actually have a Fold right now in front of me. The 7 that you highly recommended to me that we both got, the Z Fold 7. I think they do amazing devices. I don’t think they normally are the most innovative players. Then, when they come to innovation, it comes from technology edges. Obviously, they have Samsung Display, there’s a bunch of other things. They had the ability to do foldable screens in-house themselves. Bertrand Schmitt I don’t disagree with you. I think there is an interesting situation where some companies have some strengths, another one has some strengths. My worry with Apple is that this was not demonstrated with the Vision Pro. The Vision Pro was a hot pot of technologies barely integrated together, with use cases absolutely not well-defined and certainly not something that makes sense for most of us. There is a question of has Apple lost it? While Samsung actually keeps doing their own stuff, that, yes, might be more minor improvements, but at least they are doing it. Because it looks like Apple is missing the train on even the minor improvements. By the way, you might not be aware, but Samsung launched its Vision Pro competitor. Interestingly enough, it might be a better product in some ways, being much lighter and much more comfortable. Nuno Goncalves Pedro We should play around with that and report back to our listeners. Of Start-ups and VCs Moving to venture capital and the startup ecosystem and what’s happening there, I think it is very much a bifurcated environment, and it’s bifurcated for both VCs and for startups. If you’re a startup in the AI space, and you have the hottest team since sliced bread, and you can create FOMO at the speed of light, you can raise ridiculous rounds. Five hundred million at the $3 billion, or $4 billion, or $5 billion valuation, and you still haven’t really even started. First round, you can raise 500 million. That’s back to the whole discussion on Bubble and where are we, et cetera. Some of these companies might actually become huge, some of them might not. But definitely, we are seeing really the haves and have-nots on the startup ecosystem with incredible teams raising a lot of money very, very early on or mid-stage if they’ve already existed for a while, and then the rest not being able to raise. We see a lot of non-necessarily AI sectors, some of the areas of SaaS that don’t necessarily have AI in it, or fintech, or the consumer space that are really, really struggling. If you don’t have an AI story for your startup right now, it’s extremely difficult to raise money unless your numbers are just the best numbers ever. That’s, I think, the first part of the element of bifurcation that we’re seeing today. The second element of bifurcation that we’re seeing today in terms of fundraising is for VCs themselves, and really propelled by the large VC firms raising more and more capital in recent orbits, announcing 15 billion across funds raised. Lightspeed, I think, had made an announcement a couple of weeks ago as well. They’ve raised a bunch of money as well. The big guys are all raising a lot of money. At some point in time, the question some of you might ask is, “These VCs are redeploying more and more money if they have a couple of billion for a VC fund. How does that look like? Is that still VC?” My perspective, I’ve shared before in some of our previous episodes, is that that’s no longer venture capital. At that point in time, we’re talking about something else. Private equity hedge funds, if you want to call them, maybe funds that are really driven by growth investment or late-stage investment. If you have a couple of billion under management, you’re not going to make your returns by writing a $3 million check in a series seed and leading that round.  That has implications for everyone in the ecosystem. It has implications for smaller funds that obviously have a lot more difficulty in raising capital. It’s difficult to differentiate. Last but not least, also for startups that really continue searching for that capital that is out there. Andreessen Horowitz, for example, runs Speedrun, which is a great program for companies around consumer in particular. Initially, it was a lot for gaming. But at some point in time, Andreessen Horowitz could decide that they don’t want to invest more in you. They just put money from Speedrun, which is obviously a very small check compared to the very large checks they could write mid to late stage and that will have an effect on you as a startup. What happens at that point in time if Andreessen Horowitz is not backing you up in later stages? More than that, what happens if I can’t get these big funds interested in me? Are the small funds still valuable to me? Punchline, my view is yes. Obviously, we’re a smaller fund, so there’s parochial interest in what I’m saying. Small funds can still create a ton of value for you, also in terms of credibility, ability to accompany you in those first stages of investment, and the ability to bring other larger investors later down the road as well. There’s definitely a big movement happening in terms of the fundraising for VC funds, which we shouldn’t neglect, which is the big guys are raising a lot more capital and are therefore emptying the market to smaller funds that are having more and more difficult raising at this point in time. We had discussed that there would be a need for concentration in the industry, that micro funds would need to concentrate, and we didn’t have the space for so many micro funds as we had around. But the way it’s happening is extremely dramatic at this moment in time. I think it will continue through 2026. Bertrand Schmitt Remember a few years ago, with the rise of AI, there was more and more of the question about, “What’s the point of SaaS at this stage?” Because SaaS was around for 15 years. Basically, how do you come up with something new that was not already tested, validated by the market? How do you bring something new? We say this was reinforced to the power of 10. If your product is not clearly built from the ground up for a new use case enabled by AI, anyone could then might have built your product 5, 10 years ago, and therefore, why now has no clear answer, and it’s a big problem. I’m still surprised myself to still see some entrepreneurs where you talk to them about AI because you don’t see them in the deck, and they explain to you, “It’s not yet there,” and you’re like, “What’s wrong with you guys?” Fine. Do whatever you want. Do a small business and whatever, but don’t think you can come up pitch and raise without an AI story. The second category is people who come with an AI story, but you can feel very quickly, I guess you saw that many times, Nuno, where just a story layered on top with little credibility. It’s not better. It’s not enough to just have a story. Your business needs to be radically built differently or radically proposing some brand-new use cases that were impossible to solve 5 years ago. Nuno Goncalves Pedro To stack up on that, absolutely in agreement. If you’re just adding to the story, and it’s an afterthought, and you’re just trying to make the story somehow gel, once you go into one or two layers of due diligence, your investors will very quickly realise that you’re not really AI-first or dramatically AI-enabled or whatever. It’s just you’re sort of stacking something on top of another thesis. It needs to make sense from the product onwards. It’s not just, let’s just put it together with chewing gum, and magically, people will give you money. It was true also if we remember the good old crypto blockchain days, where everyone’s investing in crypto. A lot of stories that didn’t make much sense. In that sense, it’s not very different. I would go one step further. I think in the world of the VC winter that we’re a little bit in, where it’s more and more difficult if you’re a smaller fund to raise your fund at this moment in time, there’s a lot of sources of distinctiveness still talked about, like proprietary networks, access to deal flow, fast track record, all that stuff that really, really matters. But our bet continues at Chamaeleon continues being that you need to be AI-first as a VC fund yourself. You need to have core advantages in using not only readily-available AI tools or third-party available AI tools, data sources, technology stacks, but actually building your own stack over time, which is what we did with Mantis at Chamaeleon. Again, just to reinforce that, I think we’re at the beginning of that stage. We, Chamaeleon, are ahead of the game, but we think that the rest of the market will have to move towards that as well. Still, to be honest, very surprising to me to see that many significant large players are doing very little still around some of these spaces. They have data scientists. They’re running some tools. They’re running some analysis and all that stuff, but it’s still, again, back to the point I was making for startups, all glued up with chewing gum. It doesn’t all come together nicely, which it does need to from a platform standpoint. Bertrand Schmitt It’s quite surprising. I agree with you that some VC funds might think that they can do business as usual in that brand-new world. It’s difficult to believe. Nuno Goncalves Pedro Maybe moving a little bit toward the capital formation piece. We already discussed the M&A space really accelerating. We’ve also discussed the IPO market and some predictions on that. Secondaries, there’s obviously a lot of liquidity coming from secondaries from mid to late stage. I think it will continue throughout the rest of 2026. A lot of activity in buying, selling in secondaries as some asset managers are becoming more distressed, as some very high net worth individuals and family offices are becoming more distressed as well, at the same time, where there’s a lot of opportunities to potentially arbitrage around some investments. I believe a lot of money will be made and lost this year by decisions made this year, just to be very, very clear in terms of equity, purchases, et cetera. Exciting year ahead of us. Definitely a very, very interesting market ahead of us. Secondaries, M&A, growth, and late-stage investing, also, early-stage investing will continue just for those that were wondering. Last but not least, the public markets, the IPO market as well. Bertrand Schmitt One of the big questions for the IPO market would be, will SpaceX go public? Would it be good for the startup ecosystem? Because suddenly that they go public, it would be to raise money. If they raise money, will there be any money left for anybody else? That would be an interesting test of the market. For sure, it would be proof that market are risk on financing a new IPO like this one. Or as you said, maybe there is no IPO, and it’s a merger with Tesla. Time will tell. Nuno Goncalves Pedro Regulatory & Geopolitical Headwinds… and the Wars Moving maybe to our topic of regulation and geopolitical headwinds, as we’re seeing … definitely not tailwinds. The Google antitrust verdict and, obviously, the remedies are expected to come forward now, and a lot of people are saying, “There are some risks of structural separation.” What do you think? Is it cool, but nothing will happen in the end dramatically? Alphabet or Google? I’m not sure, actually. It’s Google LLC. I think that’s the case. It’s The United States versus Google LLC. Bertrand Schmitt I’m not sure. Personally, I’m not a big fan. I think there needs to be a better way to manage some anticompetitive behavior. I’m not a big fan. There was this temptation to do that for Microsoft 25 years ago. Look at what happened. No one needed to buy Microsoft to leave space for others. I see the same with Google, and I guess they are happy to not be the number 1 in AI today, but to have an open AI in front of them. Even if they are doing a great job, by the way, to move forward and go faster and faster. Personally, quite impressed now with some of what they have released. Gemini 3 is doing great from my perspective. I’m not a big fan of this. I think to be clear, it’s important that bigger companies don’t behave anticompetitively, but at the same time, we need to find the right approach where it’s not about breaking these companies, and it’s also not about forbidding them to do acquisitions. Because then you end up with what NVIDIA just did with a $20 billion acquihire IP licensing type of acquisition, because they didn’t want to have the uncertainties. They didn’t want to wait 1–2 years in order to acquire the people and the technology, so they organised it in a different way. But I don’t like that. I think they should be able to acquire companies without facing so much uncertainty. To be clear, it’s not new. Uncertainty when you are Google, NVIDIA, or others, it happens. It has happened for a decade plus, 2 decades. I think there needs to be, for sure, some safety valves. At the same time, we want an efficient capital market. An efficient capital market need companies that can acquire other companies. If you don’t do that efficiently, it will be worse for the entrepreneurs, it will be worse for the investors, it will be worse for everybody. I think we have not reached a good equilibrium from my perspective. We need more efficient acquisition process. And at the same time, we need to also enforce faster anticompetitive behavior. Because what you talk about concerning Google, this is a case that was what? That is 10 years old. You see what I mean? This is way too long. If you’re a startup, you are dead by then. It’s like the story of Netscape facing Microsoft. They were dead long after the fact. I think we need a different approach. I’m not sure the best answer. I’m not sure we’ll get a better approach. There are probably too many vested interest. My hope is that it will get better with this current administration because, certainly, the past administration was very anti acquisition and efficient markets. Nuno Goncalves Pedro We’ve talked about the European Union AI Act a bunch of times, so I don’t want to spend too many cycles on that. The only effect that I would say is we are seeing in very slow motion the splitting of the Internet. I once had Tim Berners-Lee, by the way, shouting at me that we were going to break the Internet when we were applying for the .mobi top-level domain. I was part of that consortium that eventually did get the .mobi top-level domain, and I had him shouting at us. But, apparently, this is going to split the Internet, Tim. So in case you’re listening. Because it will create all these different rules. If your data is relating to consumers there, then it’s treated in a different way, and The US is… Well, obviously, we have the case of California with its own rules and laws. I don’t know. I feel we’re having a moment of siloing that goes beyond economic and geopolitical siloing. It will also apply to the digital world, and we’ll start having different landscapes around it. We’ll see how this affects global expansion of services, for example, around AI, particularly for consumer, but I don’t foresee anything dramatically positive. Recently, we had the whole deal around TikTok finally having a solution for their US problem where there’s now a US conglomerate magically that owns it. The conglomerate doesn’t magically own it, they just straight up own it for the US. But it was driven by many of these concerns around data ownership. Where’s the data? Where is it based? I think a lot of other concerns that have to do with the geopolitics of China, obviously, being the basis of ByteDance, the owner of TikTok, that still is a significant owner, by the way, in TikTok in US. Then also the interest in the economics of making money out of something as powerful as TikTok, to be honest, in The US. Just to be clear, I don’t think this was all about the best interests of consumers. It was also about money. Just follow the money. Bertrand Schmitt There are for sure, some powerful interest at play. But let’s be clear. I think one is data, as you rightfully said, but the other one is algorithm. It’s not as if China is authorising any competitor on its territory. They have blocked access to most of the Internet platforms from the US, either finding new rules or just trade blocking them. So I don’t think it’s fair competition. You don’t want some of that data in China about the US or European consumer. Three, it’s about the algorithm. If suddenly, you are a foreign power, and you can as we know in China, you better follow what’s required of you from the Chinese Communist Party. You cannot take a chance with influencing other stuff like elections in other countries. It’s fair from the US perspective. One could even argue it’s fair from a Chinese perspective to want that. I think the only one in the middle who doesn’t really know what they want is Europe because on one side, they want to benefit from American platforms, on the other end, they want to have some controls. On the other end, they don’t create the environment for startups to flourish. So in that weird situation where they have to accept some control by the big US providers and either provider of underlying infrastructure or provider of consumer business facing services. Then they try to regulate them. But I think they are misunderstanding the power relationship, and I think some of this regulation would get some blowback, at least by the current administration. Just, I believe, this morning, there was some news around X being under a criminal investigation in France. This is not going to end well for the French startup and VC ecosystem. This is not going to end well for France and Europe when you depend so much from your American friends. Nuno Goncalves Pedro Regulation will be weaponised. Regulation constraints around exports, all of this will be weaponised geopolitically, and the bigger guys will normally win. I think that’s normally what we’ve seen. Just on TikTok just to… And you guys, if you’re listening to us, just see if you see a pattern here, but obviously, 19.9% still owned by ByteDance of the TikTok entity in the US. It was initially said that 80% of the TikTok entity is owned by non-Chinese investors. Initially, people were saying US investors, and then they changed it to non-Chinese because MGX, I think, has 15% of it. MGX is based in the UAE, connected obviously to Mubadala, the Abu Dhabi sovereign wealth fund. Silver Lake is in there, I think, with 15% as well. Oracle as well with 15%. Those three are the big bucket owners together, 45%. Silver Lake having collaborated with MGX before, and I’m sure a lot of connectivity there. Then you still see a pattern in this in terms of shareholders. If you don’t, then just Google it. Dell Family Office, Vastmir Strategic Investments, which is owned by billionaire Jeff Yass, Alpha Wave Partners, obviously involved with a bunch of things like SpaceX and Klarna, Virgoli, Revolution, which is Steve Case’s, a former founder of AOL, is also in there. Meritway, which is managed by partners, I think, of Dragonair. Vinova from General Atlantic, an affiliate of General Atlantic. Also, NJJ Capital, which I believe is Xavier Nil, the French billionaire that founded Iliad. Mostly American, I think, if the math is correct. 80% non-Chinese, which was what mattered, I think, in many cases. But do see if you saw a pattern in most of those investors. I won’t say anything more than that. Maybe moving to other topics, maybe just to finalise on regulation and geopolitics. In geopolitics, we should talk about wars if we predict anything. Not that we are nasty and one want to be negative, but what the hell is going on? Will we have ending to the wars we already have ongoing or not? But before that, the struggles on the App Stores, I think, will continue both for Apple and for Google Play Store. The writing’s on the wall, the EU keeps pushing it dramatically and Apple keeps just doing stuff. I’m on the board of an App Store company. Apple just creates all these things that basically make you not really… It doesn’t work. You can’t provision then an App Store on Apple devices. On iPhones, et cetera. We’ll see how that will continue going, but I feel the writing’s on the wall. Both Apple and Google will have to open up a bit more of their platforms. I’m not sure it will have a huge impact in the medium to long term, but definitely we need to see more openness in access to apps as given by the two big platform owners, Apple and Google, out there. Bertrand Schmitt Let’s be clear. Google is way more open than Apple. We both have Android devices. You can install alternative app stores. It’s a different ballgame by very far. Nuno Goncalves Pedro Google does other nasty stuff. It’s public. You can check which board I’m a part of. You can see what that company has done towards Google over time. But to your point, yes. It is true that Google has been more open than Apple, but Google has done their own things. Just to be very clear, so I’ll just leave that caveat bracketed there for people to think about it and maybe read a little bit about it as well. Bertrand Schmitt I can say that, me, from my perspective, that path of total control that Apple has been going through on all their devices, that includes macOS, pushed me to, over the past 2, 3 years, to completely live and abandon the Apple ecosystem. I just couldn’t accept that level of control, that golden handcuff approach of the Apple ecosystem, each their own obviously, they are golden, their handcuffs, but they are still handcuffs. Personally, that pushed me way more to Linux, Android, Windows, back to Windows after all these years. I just couldn’t stand it anymore. I want to pick my devices. I want to pick what I install on them, and I don’t want to be controlled like this by just one entity for all my tech devices. For me, at some point, it was just not acceptable anymore. It’s still very warm, very golden handcuffs, but for me, they were just handcuffs at this stage. Yes, what they are doing with the App Store is very typical of that mindset. I think it’s quite sad because I think it started with good intention in some ways. “We need a new computing paradigm, we need to make things smoother and safer,” but it has really become a way to control your clients. For me, it has reached a point where it’s just way too much. Nuno Goncalves Pedro There’s obviously the great power comes great responsibility that uncle Ben told Spider-Man or Peter Parker. But there’s also with great power comes shitload of money, and control. So it’s like, “Yeah. Should we open the server? Do we want to delay opening it up?” “Yeah.” Anyway, it is what it is. Maybe let’s end on the more difficult note of the episode, which is going to be around wars. What’s our prediction? Will we have an end to the Gaza situation with Israel? Will we have an end to Ukraine and, obviously, Russia? What will happen in Iran? Those are the three big, big conflicts right now. Then, obviously, if we want to add just bonus points, what’s going to happen to Greenland, and what’s going to happen to Taiwan, and what’s going to happen to Venezuela? Let’s throw the whole basket in there. We’ve never had like… Let’s talk about all these territories and all these countries. At some point in time, I’m saying this in a light manner, but it’s obviously more tragic than it should be light, and people are dying, and there’s a lot of implications of all of that that is happening right now. Do you have any predictions, Bertrand, for this year? Bertrand Schmitt No. It’s tough to predict on an individual basis. I think on a more bigger picture basis is on one side, obviously, the rise of China on one side. You have also the rise of other countries like India, while very indirectly connected to some of these conflicts are still part of the game, buying oil from Russia, for instance. At the same time, I think overall, the US is more clear about with the sheriff in town. I think it’s good because in some ways, you cannot pay for the goods, you cannot have such a massive advantage versus nearly every other country on earth and just not be clear about who is the boss in some ways. As a result, what are the rules of the game and how it should be played? The US is not alone, obviously, you have China, you have Russia, you have India, you have Europe. You have different other countries. But at some point, it’s not good when countries are not rational and are not clear. I think I prefer the current situation where things are more clear and where you have to assume responsibilities about what you are doing. It’s time to be rational again about how the world behave. Yes, the concept of power and balance of power. I think there has been that dream, maybe mostly coming from Europe, about the end of history. I think that’s simply not the case. It’s not the end of history. It’s still about the balance of power. It has always been about the balance of power. If you are dumb enough to think it was not about that anymore, I just have a bridge to nowhere to sell you. I don’t have specific prediction, but I think it’s clear there is a new sheriff in town. There is a new doctrine about the Western Hemisphere that has been in some ways resurrected on the [inaudible 00:51:35] train, and I think we’ll see more of it. I think at this point, the biggest question is for the Europeans. What do they want to do? Because right now, their position of being a dwarf militarily while being a pretty big giant economically, I don’t think it works. Nuno Goncalves Pedro I agreed on everything that you said. I do have predictions. I’ll stick a flag on the ground just with my predictions. Bertrand Schmitt Good luck. Nuno Goncalves Pedro They are mostly positive. I do think we’ll see an end or, for the most, end to the two big conflicts, the one in Gaza and the one in Ukraine. I think Ukraine will end up in readjustment of territory and splitting between Russia and the Ukraine, but the end of hostilities, I think that we will see an end to the conflict in Gaza also with a readjustment on what that will mean for the Palestinian territories and the Palestinians in general. That I’m not sure, but I feel that there will be an end to those two big conflicts. Iran, I have no clue. I will not put a stick on the ground that I have no clue. There are so many things that could go wrong there. I’ve been reading some really interesting thoughts about even some aggressive thoughts that this might be the time to really change regimes in Iran and for the US to have a bit more of an aggressive stance. I really don’t have a perspective. Obviously, there’s a lot at stake there. Then, if we talk about the other parts, Greenland, I will not opine too much on. Maybe we’re done for now. Maybe there’ll be some other concessions to the US that weren’t already there in the ’50s. Taiwan, I won’t bet either. I’m sad to say I think it might happen at some point in time, but I’m not sure when and what would drive it. Last but not the least, Venezuela is my only really negative prediction. I feel it will continue to be a significant dictatorship as it was before managed enough by other people with the difference now that it has a tax to be paid to the US in the form of oil of some sort, etcetera, and maybe gas, maybe other things as well that it didn’t have before. That’s probably my most negative prediction for the coming year on the geopolitical side. Bertrand Schmitt Without going into detail, I would mostly agree with what you shared. At least that makes sense. But as we know, it’s not always what makes sense, but what might happen. I can tell you 100% I would not have guessed this operation against Maduro. This was so well done, well executed, and shocking at the same time that it’s… I think it shows that it’s hard to guess some of this stuff because there are certainly some new ways to wage limited war, for instance. So it’s certainly interesting, and we certainly need to get used to pretty bombastic statements. But for Venezuela, I don’t think it can be worse than what it was before. I’m probably more optimistic that gradually it can get better. Nuno Goncalves Pedro Just to put perspective on why we’re not making predictions on some of these elements, I think this is a funny story, but I was in Madeira. Actually, first time I was in Madeira, although I’m originally from Portugal. I’ve never been to the islands. Obviously, as you guys know, or some of you might know, there’s a lot of connection between Madeira and Venezuela. There’s a lot of immigration from Madeira Islands to Venezuela. One of my Uber or Bolt drivers there in Madeira was Venezuelan. Was born in Venezuela, but Portuguese descent, et cetera. He was telling me this was still last year. Late last year. Because I told him I lived in US, et cetera, and he was like, “Oh, hopefully, Trump will get Maduro out of there.” In my mind, I was like, “Dude.” No disrespect to the gentleman, but it’s like, “Okay. Mike, your perspective on geopolitics is maybe a little bit exaggerated.” And a couple of days later, we know what happened. When geopolitical decisions are better predicted by some probably very astute Uber drivers, you’re like, “Maybe I shouldn’t make a bet. I have no clue what’s going to happen, no clue what’s going to happen in Greenland, et cetera.” Anyway, a couple of predictions on that element. Bertrand Schmitt That’s why it’s so right. You have to be careful with the prediction, but it doesn’t remove the fact that I think nations and companies that have to play a global game have to understand in some ways what is the game, what are the powers in place, what could happen potentially, but also be realistic. Not be about wish and dreams, but more about, what’s the power relationship? Who has the money? Who has the means? Who has the capacity to do this or that? Because if you start that way, at least the scope of what’s possible, what’s reasonable is more and more clear more quickly. Some stuff like happened with Maduro, I would never have predicted, but for sure, if there’s one country that can do this sort of stuff, it’s the US. I’m not sure anyone has a technology and the means in terms of support infrastructure to do something like this. It’s tough to predict what will happen a year from now for any specific country, but I think that even trying to get a better understanding about the forces in play and their capacity and understanding and accepting that at some point, it’s all about real politic and relationship of power, the more your eyes would be wide open about what’s possible versus simple, wishful thinking. Nuno Goncalves Pedro Fintech, Crypto and Frontier Tech Moving maybe to our last section around fintech, crypto, and frontier tech. For me, just two very quick predictions, views of the world. I think on the frontier tech side, I won’t make a prediction. I will just tell you all to go and listen to our episodes, the one on infrastructure, which is immediately prior to this one, and the episodes that we’ve had around a couple of other topics including AI, what’s the future of your children, because I think they illustrate a lot of the points that we’re seeing and manifesting themselves over the next year and over the next 2 or 3 years as well beyond that. I feel those tomes are complete in and out of themselves, so you can just go and listen to them. Then my second comment is on crypto. I feel crypto has become of the essence, particularly under the current administration in the US, very favored. Obviously, we are now in a world where crypto is just part of the economic system, and I think we’ll see more and more of that emerging, and in some ways, crypto is becoming mainstream. Question is what blockchains will be the blockchains of the future? Obviously, there’s a bunch of bets put out there. We, ourselves, as Chamaeleon, have one investment in one of the significant bets in the space. But besides that, who’s going to win or not, we feel that we’re past the crypto winter. It’s now mainstream days, and we’ll see a lot more activity in there. Bertrand Schmitt I must say with crypto, I’m a bit confused. As you say, we are past the crypto winter. There is much less uncertainty in regul

Embedded Insiders
Trends in Embedded: LVGL Innovation & A Countdown to embedded world 2026

Embedded Insiders

Play Episode Listen Later Mar 5, 2026 53:08


Send a textOn this episode of Embedded Insiders, Rich and Vin sit down with Gabor Kiss-Vamosi, the Founder and CEO of LVGL, to discuss the open-source embedded Light and Versatile Graphics Library, designed to support both elaborate and simple UI design. Next, Rich and Axel Sikora, Chairman of the embedded world Technical Committee, discuss some highlights of the upcoming event.But first, we're giving you a glimpse into what Embedded Computing Design has planned for the quickly approaching 2026 embedded world  Exhibition&Conference in Nuremberg, Germany.For more information, visit embeddedcomputing.com

Le Shop des Titans - Le meilleur de Shopify dans un podcast
SimGym : Faut-il Déléguer ses A/B Tests à l'IA ? | Les Actus Shopify

Le Shop des Titans - Le meilleur de Shopify dans un podcast

Play Episode Listen Later Mar 5, 2026 16:53


L'IA est-elle vraiment prête à générer vos A/B tests ? Dans cet épisode du Shop des Titans, on décortique 5 mises à jour majeures du back-office Shopify qui vont impacter votre feuille de route technique. On vous partage d'abord notre retour d'expérience concret sur SimGym, le nouvel outil d'A/B test par IA, après l'avoir éprouvé sur nos clients. Au programme de cette revue d'actus : la migration obligatoire vers les nouveaux comptes clients, l'intégration très attendue des Metafields dans Analytics, l'explosion de la limite d'architecture à 1250 blocs, et les vraies capacités de "vibe coding" de l'assistant Sidekick.Hébergé par Ausha. Visitez ausha.co/politique-de-confidentialite pour plus d'informations.

Game Dev Unchained
0370: Roundtable News | High Guard's Downfall, Marathon's Release, Xbox's Future

Game Dev Unchained

Play Episode Listen Later Mar 3, 2026 82:05


Starting with Wildlight's HighGuard, which drew heavy criticism for inconsistent art direction, weak gameplay choices, lack of public playtesting, poor performance, and rapidly shrinking player counts, leading to major layoffs and a much smaller remaining team. We argue modern live-service shooters face intense competition and need earlier public feedback, noting broader post-COVID overfunding has produced many failed projects and layoffs, alongside debate over what “indie” means amid publisher funding like Tencent. They compare Eastern and Western development strengths, then review Bungie's Marathon playtest (strong gunfeel, UI/readability issues, clearer art direction improvements) and its competition with Arc Raiders. We close on Xbox leadership changes, the viability of consoles vs software-first strategies, exclusives, and industry uncertainty.Connect with us:•

Re:platform - Ecommerce Replatforming Podcast
EP321: Reframing Ecommerce's Build vs. Buy Debate - Practical Uses Of AI To Clean & Optimise Product Data

Re:platform - Ecommerce Replatforming Podcast

Play Episode Listen Later Mar 3, 2026 40:26


“In just an hour, I built a UI to interrogate my data, and it handled most of the heavy lifting for a client project."Chris Marshall, Director & Co-founder, OnState.Optimising Ecommerce Data with AI: Real-World ApplicationsYes, we talk about AI a lot on the podcast. It's inevitable, AI is weaving its way into so many ecommerce processes and tasks.This episode is highly practical.We cover real-world examples of how AI tools are being used to speed-up product data tasks whilst reducing the need to rely on expensive licences for specialist tools.SummaryEcommerce businesses are increasingly turning to AI to enhance their data management processes. The pod explores how AI tools are being used to clean, enrich, and structure product data, providing real-world examples that highlight their practical applications.The Build vs. Buy Dilemma Is Being ReframedBusinesses often face the decision of whether to build custom solutions or purchase existing platforms. In the context of AI for product data, building allows for tailored solutions using tools like Google Sheets and AI models such as ChatGPT for tasks including data transformation and HTML cleaning. On the other hand, buying involves using specialized AI-enabled tools or outsourcing, which can save time but may incur higher costs.Practical AI Strategies Discussed:DIY data cleaning: AI models can automate data cleaning tasks, such as reformatting unstructured HTML and standardising attributes, saving significant manual effort.Automating data structure: AI can analyse complex datasets, infer attribute types, and suggest categorisation rules, streamlining the setup of dynamic product groups.Hybrid approaches: combining DIY methods with outsourcing can optimise resources, allowing businesses to handle unique projects efficiently.Tune in to hear how AI is transforming data migration and management by automating previously manual tasks, increasing speed and allowing for continuous learning. Chapters[00:30] The Build vs. Buy Debate in AI Data Management[03:20] AI in Data Migration: Practical Use Cases[06:15] Transforming Data with AI Tools[09:20] The Role of AI in Content Management[12:20] Engaging with Data Structures[15:00] Building Custom AI Tools for Specific Needs[17:45] Tactical Middleware: A New Approach[20:35] Speeding Up Data Transformation Processes[23:20] Validating AI Outputs and Managing Expectations[26:15] The Future of AI in Ecommerce Data Management

The ST Podcast
#67 (2025) TouchGFX 4.26: After emulated framebuffers, ST brings quality-of-life improvements

The ST Podcast

Play Episode Listen Later Mar 3, 2026 50:31


What's new in TouchGFX 4.26? It includes UI tweaks to help with repetitive or habitual tasks, like copying typographies or reordering and cloning interactions.

MacVoices Video
MacVoices #26087: Live! - iOS Adoption: Truth, Fiction, and Misdirection

MacVoices Video

Play Episode Listen Later Mar 2, 2026 20:20


Apple has debunked media reports claiming low adoption of iOS 26. Chuck Joiner, David Ginsburg, Marty Jencius, Web Bixby, Jim Rea, Eric Bolden, and Jeff Gamet discuss the assertion that the data was skewed by Apple's privacy-driven device misreporting. They review how official numbers show strong uptake among eligible devices and debate whether criticisms of the new interface are overblown. While some UI concerns are acknowledged, the group agrees the release offers meaningful improvements and is far from the failure some headlines suggested.  This edition of MacVoices is brought to you by our Patreon supporters. Get access to the MacVoices Slack and MacVoices After Dark by joining in at Patreon.com/macvoices. Show Notes: Chapters: 0:00 iOS adoption controversy introduced0:28 Claims of low adoption challenged2:04 Telemetry and agent string misreporting3:50 Evaluating Apple's official adoption numbers5:13 Privacy-driven device obfuscation explained6:30 Clickbait and misinterpreted analytics reports7:57 Debate over “liquid glass” interface complaints9:03 Objective UI usability concerns raised12:03 Design philosophy and Apple's UI direction13:18 System Settings and long-term interface frustrations15:49 Improvements and benefits in iOS 2618:35 Broader reflections on Apple UI evolution19:12 Overall assessment: not a disaster, but debated Links: Apple Reveals How Many iPhones Are Running iOS 26https://www.macrumors.com/2026/02/13/apple-shares-ios-26-adoption-stats/ Guests: Web Bixby has been in the insurance business for 40 years and has been an Apple user for longer than that.You can catch up with him on Facebook, Twitter, and LinkedIn, but prefers Bluesky. Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him on Twitter, by email at embolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast. Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more. Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn., @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet. David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud. Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession ‘firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. This is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC). Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon. Support:      Become a MacVoices Patron on Patreon     http://patreon.com/macvoices      Enjoy this episode? Make a one-time donation with PayPal Connect:      Web:     http://macvoices.com      Twitter:     http://www.twitter.com/chuckjoiner     http://www.twitter.com/macvoices      Mastodon:     https://mastodon.cloud/@chuckjoiner      Facebook:     http://www.facebook.com/chuck.joiner      MacVoices Page on Facebook:     http://www.facebook.com/macvoices/      MacVoices Group on Facebook:     http://www.facebook.com/groups/macvoice      LinkedIn:     https://www.linkedin.com/in/chuckjoiner/      Instagram:     https://www.instagram.com/chuckjoiner/ Subscribe:      Audio in iTunes     Video in iTunes      Subscribe manually via iTunes or any podcatcher:      Audio: http://www.macvoices.com/rss/macvoicesrss      Video: http://www.macvoices.com/rss/macvoicesvideorss

MacVoices Audio
MacVoices #26087: Live! - iOS Adoption: Truth, Fiction, and Misdirection

MacVoices Audio

Play Episode Listen Later Mar 2, 2026 20:21


Apple has debunked media reports claiming low adoption of iOS 26. Chuck Joiner, David Ginsburg, Marty Jencius, Web Bixby, Jim Rea, Eric Bolden, and Jeff Gamet discuss the assertion that the data was skewed by Apple's privacy-driven device misreporting. They review how official numbers show strong uptake among eligible devices and debate whether criticisms of the new interface are overblown. While some UI concerns are acknowledged, the group agrees the release offers meaningful improvements and is far from the failure some headlines suggested.  This edition of MacVoices is brought to you by our Patreon supporters. Get access to the MacVoices Slack and MacVoices After Dark by joining in at Patreon.com/macvoices. Show Notes: Chapters: 0:00 iOS adoption controversy introduced 0:28 Claims of low adoption challenged 2:04 Telemetry and agent string misreporting 3:50 Evaluating Apple's official adoption numbers 5:13 Privacy-driven device obfuscation explained 6:30 Clickbait and misinterpreted analytics reports 7:57 Debate over "liquid glass" interface complaints 9:03 Objective UI usability concerns raised 12:03 Design philosophy and Apple's UI direction 13:18 System Settings and long-term interface frustrations 15:49 Improvements and benefits in iOS 26 18:35 Broader reflections on Apple UI evolution 19:12 Overall assessment: not a disaster, but debated Links: Apple Reveals How Many iPhones Are Running iOS 26 https://www.macrumors.com/2026/02/13/apple-shares-ios-26-adoption-stats/ Guests: Web Bixby has been in the insurance business for 40 years and has been an Apple user for longer than that.You can catch up with him on Facebook, Twitter, and LinkedIn, but prefers Bluesky. Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him on Twitter, by email at embolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast. Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more. Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn., @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet. David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud. Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession 'firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. This is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC). Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon. Support:      Become a MacVoices Patron on Patreon      http://patreon.com/macvoices      Enjoy this episode? Make a one-time donation with PayPal Connect:      Web:      http://macvoices.com      Twitter:      http://www.twitter.com/chuckjoiner      http://www.twitter.com/macvoices      Mastodon:      https://mastodon.cloud/@chuckjoiner      Facebook:      http://www.facebook.com/chuck.joiner      MacVoices Page on Facebook:      http://www.facebook.com/macvoices/      MacVoices Group on Facebook:      http://www.facebook.com/groups/macvoice      LinkedIn:      https://www.linkedin.com/in/chuckjoiner/      Instagram:      https://www.instagram.com/chuckjoiner/ Subscribe:      Audio in iTunes      Video in iTunes      Subscribe manually via iTunes or any podcatcher:      Audio: http://www.macvoices.com/rss/macvoicesrss      Video: http://www.macvoices.com/rss/macvoicesvideorss

Zealots of Nerd Entertainment
Megaton Musashi: Heart, Steel, and the Cost of Survival

Zealots of Nerd Entertainment

Play Episode Listen Later Mar 2, 2026 14:15 Transcription Available


What makes a mecha story hit harder than metal-on-metal? We break down Megaton Musashi's secret sauce: a near-extinction battlefield where giant Rogues carry more than missiles, and a lead who throws real punches inside and outside the cockpit. Yamato Ichidachi isn't a clean-cut hero—he's rough, stubborn, and loyal in a way that sets the emotional stakes before the first clash. That humanity is why the fights thump, shock, and linger.We get into the art of impact: why these battles feel heavy, how the CGI supports rather than distracts, and the small production choices that add polish without noise. Clean silhouettes, smart lighting, and UI that stays in its lane make every set piece readable and stylish. The soundtrack does real work too—an opening that primes the pulse and cues that swell at the moment resolve hardens. If you've rolled your eyes at sloppy 3D or same-face character design, this series is a welcome correction.Character threads cut deep. Reiji's coerced path into a cockpit and Kota's life as an android built to be bullied raise tough questions about control, empathy, and what war makes acceptable. A quiet, tender moment of found family with Ryugo re-centers the story on care rather than carnage. And yes, we talk about the absurdity of a broad-daylight assassin wreaking havoc on camera—it's wild, it's pointed, and it says plenty about spectacle culture in a dying world. No spoilers on the twisty bits; the plot earns its turns by forcing choices that leave marks.We land on a clean 8.5, with a nudge to give Megaton Musashi a fair shot if you want mecha with heart, style, and substance. Next up, we're teeing a spoiler-heavy revisit of Gun X Sword, so catch up if you want to ride with us. If you're vibing with the pod, tap follow, share it with a friend who swears they “don't do mecha,” and drop a review—tell us your favorite hard-hitting robot fight and why it stuck. Your support helps us build more for this community.Text us for feedback and recommendations for future episodes!Support the showWe thank everyone for listening to our podcast! We hope to grow even bigger to make great things happen, such as new equipment for higher-quality podcasts, a merch store & more! If you're interested in supporting us, giving us feedback and staying in the loop with updates, then follow our ZONE Social Media Portal to access our website, our Discord server, our Patreon page, and other social media platforms! DISCLAIMER: The thoughts and opinions shared within are those of the speaker. We encourage everyone to do their own research and to experience the content mentioned at your own volition. We try not to reveal spoilers to those who are not up to speed, but in case some slips out, please be sure to check out the source material before you continue listening!Stay nerdy and stay faithful,- J.B.Subscribe to "Content for Creators" on YouTube to listen to some of the music used for these episodes!

Arcade Couch
Pokémon Winds & Waves Reactions + Resident Evil Requiem First Thoughts

Arcade Couch

Play Episode Listen Later Mar 1, 2026 65:20


We talk about everything from the 2026 Pokémon Day, Pokémon Presents, including the reveal of Pokémon Wind & Waves.  Also on the show, we give our impressions on both Resident Evil: Requiem, as well as Marathon, after jumping into the server slam.  SHOW DOT POINTS The Pokémon Presents event celebrated the 30th anniversary of the franchise. New game features and updates were announced, including Pokémon Home support. The 10th generation of Pokémon games is set to release in 2027. Visual performance is a major concern for upcoming Pokémon titles. The hosts hope for a more polished experience with the new games. Bluepoint's potential Bloodborne remake was a hot topic of discussion. The gaming community is eager for more announcements throughout the anniversary year. Voice acting in Pokémon games is a desired feature among fans. The hosts reflect on the impact of leaks on excitement for new games. The conversation highlights the balance between nostalgia and innovation in gaming. Bluepoint's potential projects are still under discussion. The God of War series is generating mixed reactions online. Xbox is facing significant challenges with its branding strategy. Resident Evil Requiem balances horror and action effectively. Marathon's gameplay mechanics are engaging but require teamwork. Overwatch's return has revitalised interest in the franchise. The UI in Marathon is a major point of frustration for players. Upcoming game releases in March include several highly anticipated titles. The hosts emphasise the importance of community in multiplayer games. Resident Evil Requiem is noted for its stunning visuals and sound design. YOUR HOSTS

アシカガCAST
なぜAIとWebサイト制作の相性はいいのか?(第850回)

アシカガCAST

Play Episode Listen Later Mar 1, 2026 17:26


AIとWebサイト制作の相性がいい理由を解説しました。AIにコーディングを任せて感じたことなど実体験をもとにした話もしています。=== 目次 ===00:00:00 はじめに00:01:14 AIがプログラミングが得意な理由00:03:08 UIのパターンがほぼ標準化されている00:05:35 HTML・CSSのコードは著作権が認められにくい00:06:51 昔から他人のコードを日常的に使っている00:08:11 ツールの進化の延長線上00:10:05 最近仕事でもコードを書くのはAI00:13:42 これからWeb受託制作への参入は厳しそう00:16:20 エンドトークnote記事「なぜAIとWebサイト制作の相性はいいのか」https://note.com/ashikagacast/n/nc92d99dd5e3e【感想・質問・取り上げてほしいテーマ大歓迎です】✉️メールアドレスashikagacast@icloud.com

The Courtenay Turner Podcast
Ep.497: Hidden Mechanics of Belief: The Method Behind Modern “Awakening”

The Courtenay Turner Podcast

Play Episode Listen Later Feb 27, 2026 192:40


In this supercharged continuation, Courtenay Turner sits down with Michael King to map a repeatable architecture of influence that operates across eras, institutions, and ideologies. Rather than focusing only on “who did it,” Michael argues for tracking method—how populations get unmoored from inherited certainty, pushed into contradiction, and then offered relief narratives that become default lenses for reality. We explore: skepticism as the first stage of initiation; “controlled dialectics” where visible conflict can be structured toward predictable outcomes; the Enlightenment as a filtering mechanism; the philosophical pipeline from Descartes to Marx; cognitive dissonance as a governance lever; the role of education and “whole child” frameworks; noosphere/Aquarian-era framing as a spiritual UI for consent; and how environments—from the Palais-Royal to modern mass events—can function as initiation-by-ambience. We close with a practical protocol for spotting contradiction traps, definition games, and the soft-power packaging that precedes enforcement. Enjoyed this episode? Check out Part 1 of this discussion: https://courtenayturner.substack.com/p/ep486-unveiling-metaphysics-the-roots Connect with Michael King YouTube: https://www.youtube.com/@michaelking1091 X: https://x.com/miketheking1517 All Courtenay links (Substack, conference, socials, support, sponsors, listen links): https://Courtenay.Show Listen on Apple / Spotify (direct): Apple Podcasts: https://podcasts.apple.com/us/podcast/the-courtenay-turner-podcast/id1545606136Spotify: https://open.spotify.com/show/4VU1A6PFqmFlo5UBq0m37N What stood out most? Drop a comment with one “relief narrative” or “definition game” you've noticed lately. Secure your copy of Courtenay's book “The Final Betrayal: How Technocracy Destroyed America” Here: https://amzn.to/4oWBfDR Courtenay's Podcast is a viewer-supported show. To receive new posts and support my work, consider becoming a free or paid subscriber. Disclaimer: This content is for inspiration & entertainment. We aim to inform, inspire & empower. Guest opinions are their own and do not necessarily reflect the host. Do your own research. ©2026 All Rights Reserved Learn more about your ad choices. Visit megaphone.fm/adchoices

SorareData Podcast
Sorare Has a Competition Problem | SorareAndrews

SorareData Podcast

Play Episode Listen Later Feb 27, 2026 93:26


Sorare keeps adding new competitions — Hot Streaks, Arena, Classic, In-Season, region-specific leaderboards, scarcity splits — and it's starting to feel… confusing.In today's SorareAndrews, we're breaking down:• Why lineup selection feels harder than ever• Whether more competitions actually means more opportunity• If the UI is keeping up with the gameplay changes• How managers can simplify their approach• Whether this is a short-term adjustment… or a long-term problemThere's a ton of action on the platform right now — MLS, Europe, J League, K League, rotating specials — but at what point does “more options” turn into “too much to manage”?

Category Visionaries
Why organic referrals drive 80% of Clockwise's growth after a decade of marketing experiments | Matt Martin

Category Visionaries

Play Episode Listen Later Feb 27, 2026 26:01


Clockwise is pioneering intelligent time management for knowledge workers, addressing the fundamental constraint that limits all knowledge work organizations: how teams allocate their most finite resource. Founded in 2016, the company has spent a decade solving the problem of calendar inefficiency and meeting overload that fragments productive time. In a recent episode of BUILDERS, we sat down with Matt Martin, Co-Founder & CEO of Clockwise, to learn about the company's journey from a three-year build cycle to serving major software organizations through a product-led growth motion, the strategic decisions behind targeting software engineers as their wedge market, and why the time management problem remains largely unsolved despite being obvious to anyone who's worked in a large organization.Topics DiscussedWhy time remains the primary economic constraint in knowledge work despite a decade of tooling evolutionThe three-year pre-launch build period and deliberate four-year path to monetizationTargeting software engineers as the wedge: ROI clarity in heads-down time versus meeting-heavy rolesThe graveyard of calendar productivity startups: UI-focused plays, consumer pivots, and buyer/user misalignmentTransitioning from pure PLG to blended motion with enterprise inbound and pilot programsThe stubborn reality of organic growth: why referrals dominate despite extensive channel experimentationBuilding toward AI-powered personalized time agents that embrace individual complexity//Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.ioThe Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co//Don't Miss: New Podcast Series — How I Hire Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

Software Sessions
Bryan Cantrill on Oxide Computer

Software Sessions

Play Episode Listen Later Feb 27, 2026 89:58


Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off the shelf hardware, how scaling data centers at samsung's scale exposed problems with hard drive firmware, how the values of NodeJS are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack scale approach. This is an extended version of an interview posted on Software Engineering Radio. Related links Oxide Computer Oxide and Friends Illumos Platform as a Reflection of Values RFD 26 bhyve CockroachDB Heterogeneous Computing with Raja Koduri Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide computer company, and he was previously the CTO of Joyent and he also co-authored the DTrace Tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering radio. [00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that, a lot of people who built software, now that the actual computer is abstracted away, they're using AWS or they're using some kind of cloud service. So I thought we could start by talking about, data centers. [00:00:41] Jeremy: 'cause you were. Previously working at Joyent, and I believe you got bought by Samsung and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how, how, how was your experience with that? What, what were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, I mean, so at Joyent, and so Joyent was a cloud computing pioneer. Uh, we competed with the likes of AWS and then later GCP and Azure. Uh, and we, I mean, we were operating at a scale, right? We had a bunch of machines, a bunch of dcs, but ultimately we know we were a VC backed company and, you know, a small company by the standards of, certainly by Samsung standards. [00:01:25] Bryan: And so when, when Samsung bought the company, I mean, the reason by the way that Samsung bought Joyent is Samsung's. Cloud Bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on, on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense. And there's not, was not really a product that Samsung could go buy that would give them that on-prem cloud. Uh, I mean in that, in that regard, like the state of the market was really no different. And so they went looking for a company, uh, and bought, bought Joyent. And when we were on the inside of Samsung. [00:02:11] Bryan: That we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Like Samsung Scale really is, I mean, just the, the sheer, the number of devices, the number of customers, just this absolute size. they really wanted to take us out to, to levels of scale, certainly that we had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure so that we were gonna go buy, we did go buy a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small, we just. You know, I just remember hoping and hope is hope. It was of course, a terrible strategy and it was a terrible strategy here too. Uh, and the we that the problems that we saw at the large were, and when you scale out the problems that you see kind of once or twice, you now see all the time and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems. I mean, many ways, like comically debilitating, uh, in terms of, of showing just how bad the state-of-the-art. Yes. And we had, I mean, it should be said, we had great software and great software expertise, um, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately, you're pretty limited. You go, I mean, you got the problems that you can obviously solve, the ones that are in your own software, but the problems that are beneath you, the, the problems that are in the hardware platform, the problems that are in the componentry beneath you become the problems that are in the firmware. IO latency due to hard drive firmware [00:04:00] Bryan: Those problems become unresolvable and they are deeply, deeply frustrating. Um, and we just saw a bunch of 'em again, they were. Comical in retrospect, and I'll give you like a, a couple of concrete examples just to give, give you an idea of what kinda what you're looking at. one of the, our data centers had really pathological IO latency. [00:04:23] Bryan: we had a very, uh, database heavy workload. And this was kind of right at the period where you were still deploying on rotating media on hard drives. So this is like, so. An all flash buy did not make economic sense when we did this in, in 2016. This probably, it'd be interesting to know like when was the, the kind of the last time that that actual hard drives made sense? [00:04:50] Bryan: 'cause I feel this was close to it. So we had a, a bunch of, of a pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse and there was so much going on in that system. It took us a long time to figure out like why. And because when, when you, when you're io when you're seeing worse io I mean you're naturally, you wanna understand like what's the workload doing? [00:05:14] Bryan: You're trying to take a first principles approach. What's the workload doing? So this is a very intensive database workload to support the, the object storage system that we had built called Manta. And that the, the metadata tier was stored and uh, was we were using Postgres for that. And that was just getting absolutely slaughtered. [00:05:34] Bryan: Um, and ultimately very IO bound with these kind of pathological IO latencies. Uh, and as we, you know, trying to like peel away the layers to figure out what was going on. And I finally had this thing. So it's like, okay, we are seeing at the, at the device layer, at the at, at the disc layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me. I'm like, well, maybe we are. Do we have like different. Different rev of firmware on our HGST drives, HGST. Now part of WD Western Digital were the drives that we had everywhere. And, um, so maybe we had a different, maybe I had a firmware bug. [00:06:20] Bryan: I, this would not be the first time in my life at all that I would have a drive firmware issue. Uh, and I went to go pull the firmware, rev, and I'm like, Toshiba makes hard drives? So we had, I mean. I had no idea that Toshiba even made hard drives, let alone that they were our, they were in our data center. [00:06:38] Bryan: I'm like, what is this? And as it turns out, and this is, you know, part of the, the challenge when you don't have an integrated system, which not to pick on them, but Dell doesn't, and what Dell would routinely put just sub make substitutes, and they make substitutes that they, you know, it's kind of like you're going to like, I don't know, Instacart or whatever, and they're out of the thing that you want. [00:07:03] Bryan: So, you know, you're, someone makes a substitute and like sometimes that's okay, but it's really not okay in a data center. And you really want to develop and validate a, an end-to-end integrated system. And in this case, like Toshiba doesn't, I mean, Toshiba does make hard drives, but they are a, or the data they did, uh, they basically were, uh, not competitive and they were not competitive in part for the reasons that we were discovering. [00:07:29] Bryan: They had really serious firmware issues. So the, these were drives that would just simply stop a, a stop acknowledging any reads from the order of 2,700 milliseconds. Long time, 2.7 seconds. Um. And that was a, it was a drive firmware issue, but it was highlighted like a much deeper issue, which was the simple lack of control that we had over our own destiny. [00:07:53] Bryan: Um, and it's an, it's, it's an example among many where Dell is making a decision. That lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in because it's not one that they've actually designed and they leave it to the customer, the end user, to make these discoveries. [00:08:18] Bryan: And these things happen up and down the stack. And for every, for whether it's, and, and not just to pick on Dell because it's, it's true for HPE, it's true for super micro, uh, it's true for your switch vendors. It's, it's true for storage vendors where the, the, the, the one that is left actually integrating these things and trying to make the the whole thing work is the end user sitting in their data center. AWS / Google are not buying off the shelf hardware but you can't use it [00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud in their own DC The, the product that you buy is the public cloud. Like when you go in the public cloud, you don't worry about the stuff because that it's, it's AWS's issue or it's GCP's issue. And they are the ones that get this to ground. [00:09:02] Bryan: And they, and this was kind of, you know, the eye-opening moment. Not a surprise. Uh, they are not Dell customers. They're not HPE customers. They're not super micro customers. They have designed their own machines. And to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper and the frustration that we had kind of at Joyent and beginning to wonder and then Samsung and kind of wondering what was next, uh, is that, that what they built was not available for purchase in the data center. [00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a different, a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it. [00:09:53] Bryan: And there are a bunch of reasons for doing that. Uh, one in the one we we saw at Samsung is economics, which I think is still the dominant reason where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency. [00:10:07] Bryan: There are a bunch of reasons why one might wanna to own one's own infrastructure. But, uh, that was very much the, the, so the, the genesis for oxide was coming out of this very painful experience and a painful experience that, because, I mean, a long answer to your question about like what was it like to be at Samsung scale? [00:10:27] Bryan: Those are the kinds of things that we, I mean, in our other data centers, we didn't have Toshiba drives. We only had the HDSC drives, but it's only when you get to this larger scale that you begin to see some of these pathologies. But these pathologies then are really debilitating in terms of those who are trying to develop a service on top of them. [00:10:45] Bryan: So it was, it was very educational in, in that regard. And you're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale. [00:10:57] Jeremy: Yeah, because I, I think as software engineers, a lot of times we, we treat the hardware as a, as a given where, [00:11:08] Bryan: Yeah. [00:11:08] Bryan: Yeah. There's software in chard drives [00:11:09] Jeremy: It sounds like in, in this case, I mean, maybe the issue is not so much that. Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to, to deal with the, the consequences of that. [00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. I mean, I think that they, it's not like they're making a deliberate decision to kind of ship garbage. It's just that they are making, I mean, I think it's exactly what you said about like, not thinking about the hardware. It's like, what's a hard drive? [00:11:47] Bryan: Like what's it, I mean, it's a hard drive. It's got the same specs as this other hard drive and Intel. You know, it's a little bit cheaper, so why not? It's like, well, like there's some reasons why not, and one of the reasons why not is like, uh, even a hard drive, whether it's rotating media or, or flash, like that's not just hardware. [00:12:05] Bryan: There's software in there. And that the software's like not the same. I mean, there are components where it's like, there's actually, whether, you know, if, if you're looking at like a resistor or a capacitor or something like this Yeah. If you've got two, two parts that are within the same tolerance. Yeah. [00:12:19] Bryan: Like sure. Maybe, although even the EEs I think would be, would be, uh, objecting that a little bit. But the, the, the more complicated you get, and certainly once you get to the, the, the, the kind of the hardware that we think of like a, a, a microprocessor, a a network interface card, a a, a hard driver, an NVME drive. [00:12:38] Bryan: Those things are super complicated and there's a whole bunch of software inside of those things, the firmware, and that's the stuff that, that you can't, I mean, you say that software engineers don't think about that. It's like you, no one can really think about that because it's proprietary that's kinda welded shut and you've got this abstraction into it. [00:12:55] Bryan: But the, the way that thing operates is very core to how the thing in aggregate will behave. And I think that you, the, the kind of, the, the fundamental difference between Oxide's approach and the approach that you get at a Dell HP Supermicro, wherever, is really thinking holistically in terms of hardware and software together in a system that, that ultimately delivers cloud computing to a user. [00:13:22] Bryan: And there's a lot of software at many, many, many, many different layers. And it's very important to think about, about that software and that hardware holistically as a single system. [00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say like, well, this one's not working, so maybe we'll just replace the hardware. What, what was the thought process when you were working at that smaller scale and, and how did these issues affect you? UEFI / Baseboard Management Controller [00:13:58] Bryan: Yeah, at the smaller scale, you, uh, you see fewer of them, right? You just see it's like, okay, we, you know, what you might see is like, that's weird. We kinda saw this in one machine versus seeing it in a hundred or a thousand or 10,000. Um, so you just, you just see them, uh, less frequently as a result, they are less debilitating. [00:14:16] Bryan: Um, I, I think that it's, when you go to that larger scale, those things that become, that were unusual now become routine and they become debilitating. Um, so it, it really is in many regards a function of scale. Uh, and then I think it was also, you know, it was a little bit dispiriting that kind of the substrate we were building on really had not improved. [00:14:39] Bryan: Um, and if you look at, you know, the, if you buy a computer server, buy an x86 server. There is a very low layer of firmware, the BIOS, the basic input output system, the UEFI BIOS, and this is like an abstraction layer that has, has existed since the eighties and hasn't really meaningfully improved. Um, the, the kind of the transition to UEFI happened with, I mean, I, I ironically with Itanium, um, you know, two decades ago. [00:15:08] Bryan: but beyond that, like this low layer, this lowest layer of platform enablement software is really only impeding the operability of the system. Um, you look at the baseboard management controller, which is the kind of the computer within the computer, there is a, uh, there is an element in the machine that needs to handle environmentals, that needs to handle, uh, operate the fans and so on. [00:15:31] Bryan: Uh, and that traditionally has this, the space board management controller, and that architecturally just hasn't improved in the last two decades. And, you know, that's, it's a proprietary piece of silicon. Generally from a company that no one's ever heard of called a Speed, uh, which has to be, is written all on caps, so I guess it needs to be screamed. [00:15:50] Bryan: Um, a speed has a proprietary part that has a, there is a root password infamously there, is there, the root password is encoded effectively in silicon. So, uh, which is just, and for, um, anyone who kind of goes deep into these things, like, oh my God, are you kidding me? Um, when we first started oxide, the wifi password was a fraction of the a speed root password for the bmc. [00:16:16] Bryan: It's kinda like a little, little BMC humor. Um, but those things, it was just dispiriting that, that the, the state-of-the-art was still basically personal computers running in the data center. Um, and that's part of what, what was the motivation for doing something new? [00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or it's the The BIOS or UF UEFI component, what are the actual problems that people are seeing seen? Security vulnerabilities and poor practices in the BMC [00:16:51] Bryan: Oh man, I, the, you are going to have like some fraction of your listeners, maybe a big fraction where like, yeah, like what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things who are, did like their heads already hit the desk being like, what are the problems? [00:17:06] Bryan: Like what are the non problems? Like what, what works? Actually, that's like a shorter answer. Um, I mean, there are so many problems and a lot of it is just like, I mean, there are problems just architecturally these things are just so, I mean, and you could, they're the problems spread to the horizon, so you can kind of start wherever you want. [00:17:24] Bryan: But I mean, as like, as a really concrete example. Okay, so the, the BMCs that, that the computer within the computer that needs to be on its own network. So you now have like not one network, you got two networks that, and that network, by the way, it, that's the network that you're gonna log into to like reset the machine when it's otherwise unresponsive. [00:17:44] Bryan: So that going into the BMC, you can are, you're able to control the entire machine. Well it's like, alright, so now I've got a second net network that I need to manage. What is running on the BMC? Well, it's running some. Ancient, ancient version of Linux it that you got. It's like, well how do I, how do I patch that? [00:18:02] Bryan: How do I like manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. So it's like, this is not you've, and now you've gotta go deal with all of the operational hair around that. How do you upgrade that system updating the BMC? I mean, it's like you've got this like second shadow bad infrastructure that you have to go manage. [00:18:23] Bryan: Generally not open source. There's something called open BMC, um, which, um, you people use to varying degrees, but you're generally stuck with the proprietary BMC, so you're generally stuck with, with iLO from HPE or iDRAC from Dell or, or, uh, the, uh, su super micros, BMC, that H-P-B-M-C, and you are, uh, it is just excruciating pain. [00:18:49] Bryan: Um, and that this is assuming that by the way, that everything is behaving correctly. The, the problem is that these things often don't behave correctly, and then the consequence of them not behaving correctly. It's really dire because it's at that lowest layer of the system. So, I mean, I'll give you a concrete example. [00:19:07] Bryan: a customer of theirs reported to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue with their, their temperature sensors were broken. Um, and the thing would always read basically the wrong value. So it was the BMC that had to like, invent its own ki a different kind of thermal control loop. [00:19:28] Bryan: And it would index on the, on the, the, the, the actual inrush current. It would, they would look at that at the current that's going into the CPU to adjust the fan speed. That's a great example of something like that's a, that's an interesting idea. That doesn't work. 'cause that's actually not the temperature. [00:19:45] Bryan: So like that software would crank the fans whenever you had an inrush of current and this customer had a workload that would spike the current and by it, when it would spike the current, the, the, the fans would kick up and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part. [00:20:08] Bryan: And ultimately over a very long time, in a very painful investigation, it's customer determined that like my fans are cranked in my data center for no reason. We're blowing cold air. And it's like that, this is on the order of like a hundred watts, a server of, of energy that you shouldn't be spending and like that ultimately what that go comes down to this kind of broken software hardware interface at the lowest layer that has real meaningful consequence, uh, in terms of hundreds of kilowatts, um, across a data center. So this stuff has, has very, very, very real consequence and it's such a shadowy world. Part of the reason that, that your listeners that have dealt with this, that our heads will hit the desk is because it is really aggravating to deal with problems with this layer. [00:21:01] Bryan: You, you feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor. Your vendor is telling you that like, boy, I don't know. You're the only customer seeing this. I mean, the number of times I have heard that for, and I, I have pledged that we're, we're not gonna say that at oxide because it's such an unaskable thing to say like, you're the only customer saying this. [00:21:25] Bryan: It's like, it feels like, are you blaming me for my problem? Feels like you're blaming me for my problem? Um, and what you begin to realize is that to a degree, these folks are speaking their own truth because the, the folks that are running at real scale at Hyperscale, those folks aren't Dell, HP super micro customers. [00:21:46] Bryan: They're actually, they've done their own thing. So it's like, yeah, Dell's not seeing that problem, um, because they're not running at the same scale. Um, but when you do run, you only have to run at modest scale before these things just become. Overwhelming in terms of the, the headwind that they present to people that wanna deploy infrastructure. The problem is felt with just a few racks [00:22:05] Jeremy: Yeah, so maybe to help people get some perspective at, at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you're just have a few racks or [00:22:22] Bryan: do you have a couple racks or the, or do you wonder or just wondering because No, no, no. I would think, I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just again, just to get this thing working at all. [00:22:39] Bryan: It is so it, it's so hairy and so congealed, right? It's not designed. Um, and it, it, it, it's accreted it and it's so obviously accreted that you are, I mean, nobody who is setting up a rack of servers is gonna think to themselves like, yes, this is the right way to go do it. This all makes sense because it's, it's just not, it, I, it feels like the kit, I mean, kit car's almost too generous because it implies that there's like a set of plans to work to in the end. [00:23:08] Bryan: Uh, I mean, it, it, it's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. Just architecturally, it's painful at the small scale then, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale are the things that are, are worse than just like, I can't, like this thing is a mess to get working. [00:23:31] Bryan: It's like the, the, the fan issue that, um, where you are now seeing this over, you know, hundreds of machines or thousands of machines. Um, so I, it is painful at more or less all levels of scale. There's, there is no level at which the, the, the pc, which is really what this is, this is a, the, the personal computer architecture from the 1980s and there is really no level of scale where that's the right unit. Running elastic infrastructure is the hardware but also, hypervisor, distributed database, api, etc [00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run. Is elastic infrastructure, a cloud. Because the other thing is like we, we've kinda been talking a lot about that hardware layer. Like hardware is, is just the start. Like you actually gotta go put software on that and actually run that as elastic infrastructure. [00:24:16] Bryan: So you need a hypervisor. Yes. But you need a lot more than that. You, you need to actually, you, you need a distributed database, you need web endpoints. You need, you need a CLI, you need all the stuff that you need to actually go run an actual service of compute or networking or storage. I mean, and for, for compute, even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is by far, I would say the simplest of the, of the three. When you look at like networks, network services, storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So it, I mean, it is painful at more or less every LE level if you are trying to deploy cloud computing on. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what, what does that do in the context of this system? [00:25:16] Bryan: So control plane is the thing that is, that is everything between your API request and that infrastructure actually being acted upon. So you go say, Hey, I, I want a provision, a vm. Okay, great. We've got a whole bunch of things we're gonna provision with that. We're gonna provision a vm, we're gonna get some storage that's gonna go along with that, that's got a network storage service that's gonna come out of, uh, we've got a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: We've got a, a whole bunch of things we need to go do for that. For all of these things, there are metadata components that need, we need to keep track of this thing that, beyond the actual infrastructure that we create. And then we need to go actually, like act on the actual compute elements, the hostos, what have you, the switches, what have you, and actually go. [00:25:56] Bryan: Create these underlying things and then connect them. And there's of course, the challenge of just getting that working is a big challenge. Um, but getting that working robustly, getting that working is, you know, when you go to provision of vm, um, the, all the, the, the steps that need to happen and what happens if one of those steps fails along the way? [00:26:17] Bryan: What happens if, you know, one thing we're very mindful of is these kind of, you get these long tails of like, why, you know, generally our VM provisioning happened within this time, but we get these long tails where it takes much longer. What's going on? What, where in this process are we, are we actually spending time? [00:26:33] Bryan: Uh, and there's a whole lot of complexity that you need to go deal with that. There's a lot of complexity that you need to go deal with this effectively, this workflow that's gonna go create these things and manage them. Um, we use a, a pattern that we call, that are called sagas, actually is a, is a database pattern from the eighties. [00:26:51] Bryan: Uh, Katie McCaffrey is a, is a database reCrcher who, who, uh, I, I think, uh, reintroduce the idea of, of sagas, um, in the last kind of decade. Um, and this is something that we picked up, um, and I've done a lot of really interesting things with, um, to allow for, to this kind of, these workflows to be, to be managed and done so robustly in a way that you can restart them and so on. [00:27:16] Bryan: Uh, and then you guys, you get this whole distributed system that can do all this. That whole distributed system, that itself needs to be reliable and available. So if you, you know, you need to be able to, what happens if you, if you pull a sled or if a sled fails, how does the system deal with that? [00:27:33] Bryan: How does the system deal with getting an another sled added to the system? Like how do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap where this is gonna run as part of the computer. [00:27:49] Bryan: So there are, it, it is fractally complicated. There, there is a lot of complexity here in, in software, in the software system and all of that. We kind of, we call the control plane. Um, and it, this is the what exists at AWS at GCP, at Azure. When you are hitting an endpoint that's provisioning an EC2 instance for you. [00:28:10] Bryan: There is an AWS control plane that is, is doing all of this and has, uh, some of these similar aspects and certainly some of these similar challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like say VMware or Hyper V or Proxmox, are those in the same category? [00:28:32] Bryan: Yeah, I mean a little bit. I mean, it kind of like vSphere Yes. Via VMware. No. So it's like you, uh, VMware ESX is, is kind of a key building block upon which you can build something that is a more meaningful distributed system. When it's just like a machine that you're provisioning VMs on, it's like, okay, well that's actually, you as the human might be the control plane. [00:28:52] Bryan: Like, that's, that, that's, that's a much easier problem. Um, but when you've got, you know, tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity and you know, you need to pick which sled you land on. You need to be able to move these things. You need to be able to update that whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So, you know, some of these things have kind of edged into a control plane, certainly VMware. Um, now Broadcom, um, has delivered something that's kind of cloudish. Um, I think that for folks that are truly born on the cloud, it, it still feels somewhat, uh, like you're going backwards in time when you, when you look at these kind of on-prem offerings. [00:29:29] Bryan: Um, but, but it, it, it's got these aspects to it for sure. Um, and I think that we're, um, some of these other things when you're just looking at KVM or just looks looking at Proxmox you kind of need to, to connect it to other broader things to turn it into something that really looks like manageable infrastructure. [00:29:47] Bryan: And then many of those projects are really, they're either proprietary projects, uh, proprietary products like vSphere, um, or you are really dealing with open source projects that are. Not necessarily aimed at the same level of scale. Um, you know, you look at a, again, Proxmox or, uh, um, you'll get an OpenStack. [00:30:05] Bryan: Um, and you know, OpenStack is just a lot of things, right? I mean, OpenStack has got so many, the OpenStack was kind of a, a free for all, for every infrastructure vendor. Um, and I, you know, there was a time people were like, don't you, aren't you worried about all these companies together that, you know, are coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for like a company? Like, companies don't get along. By the way, it's like having multiple companies work together on a thing that's bad news, not good news. And I think, you know, one of the things that OpenStack has definitely struggled with, kind of with what, actually the, the, there's so many different kind of vendor elements in there that it's, it's very much not a product, it's a project that you're trying to run. [00:30:47] Bryan: But that's, but that very much is in, I mean, that's, that's similar certainly in spirit. [00:30:53] Jeremy: And so I think this is kind of like you're alluding to earlier, the piece that allows you to allocate, compute, storage, manage networking, gives you that experience of I can go to a web console or I can use an API and I can spin up machines, get them all connected. At the end of the day, the control plane. Is allowing you to do that in hopefully a user-friendly way. [00:31:21] Bryan: That's right. Yep. And in the, I mean, in order to do that in a modern way, it's not just like a user-friendly way. You really need to have a CLI and a web UI and an API. Those all need to be drawn from the same kind of single ground truth. Like you don't wanna have any of those be an afterthought for the other. [00:31:39] Bryan: You wanna have the same way of generating all of those different endpoints and, and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example. What kind of tools existed for that versus how much did you have to build in-house for as far as the hypervisor and managing the compute and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in house. I mean, what you have is, um, and I think, you know, over time we've gotten slightly better tools. Um, I think, and, and maybe it's a little bit easier to talk about the, kind of the tools we started at Oxide because we kind of started with a, with a clean sheet of paper at oxide. [00:32:16] Bryan: We wanted to, knew we wanted to go build a control plane, but we were able to kind of go revisit some of the components. So actually, and maybe I'll, I'll talk about some of those changes. So when we, at, For example, at Joyent, when we were building a cloud at Joyent, there wasn't really a good distributed database. [00:32:34] Bryan: Um, so we were using Postgres as our database for metadata and there were a lot of challenges. And Postgres is not a distributed database. It's running. With a primary secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. Um, when we were coming to oxide, you have much better options to pick from in terms of distributed databases. [00:32:57] Bryan: You know, we, there was a period that now seems maybe potentially brief in hindsight, but of a really high quality open source distributed databases. So there were really some good ones to, to pick from. Um, we, we built on CockroachDB on CRDB. Um, so that was a really important component. That we had at oxide that we didn't have at Joyent. [00:33:19] Bryan: Um, so we were, I wouldn't say we were rolling our own distributed database, we were just using Postgres and uh, and, and dealing with an enormous amount of pain there in terms of the surround. Um, on top of that, and, and, you know, a, a control plane is much more than a database, obviously. Uh, and you've gotta deal with, uh, there's a whole bunch of software that you need to go, right. [00:33:40] Bryan: Um, to be able to, to transform these kind of API requests into something that is reliable infrastructure, right? And there, there's a lot to that. Uh, especially when networking gets in the mix, when storage gets in the mix, uh, there are a whole bunch of like complicated steps that need to be done, um, at Joyent. [00:33:59] Bryan: Um, we, in part because of the history of the company and like, look. This, this just is not gonna sound good, but it just is what it is and I'm just gonna own it. We did it all in Node, um, at Joyent, which I, I, I know it sounds really right now, just sounds like, well, you, you built it with Tinker Toys. You Okay. [00:34:18] Bryan: Uh, did, did you think it was, you built the skyscraper with Tinker Toys? Uh, it's like, well, okay. We actually, we had greater aspirations for the Tinker Toys once upon a time, and it was better than, you know, than Twisted Python and Event Machine from Ruby, and we weren't gonna do it in Java. All right. [00:34:32] Bryan: So, but let's just say that that experiment, uh, that experiment did ultimately end in a predictable fashion. Um, and, uh, we, we decided that maybe Node was not gonna be the best decision long term. Um, Joyent was the company behind node js. Uh, back in the day, Ryan Dahl worked for Joyent. Uh, and then, uh, then we, we, we. [00:34:53] Bryan: Uh, landed that in a foundation in about, uh, what, 2015, something like that. Um, and began to consider our world beyond, uh, beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. Um, and so indeed the name of the company is, is a tip of the hat to the language that we were pretty sure we were gonna be building a lot of stuff in. [00:35:16] Bryan: Namely Rust. And, uh, rust is, uh, has been huge for us, a very important revolution in programming languages. you know, there, there, there have been different people kind of coming in at different times and I kinda came to Rust in what I, I think is like this big kind of second expansion of rust in 2018 when a lot of technologists were think, uh, sick of Node and also sick of Go. [00:35:43] Bryan: And, uh, also sick of C++. And wondering is there gonna be something that gives me the, the, the performance, of that I get outta C. The, the robustness that I can get out of a C program but is is often difficult to achieve. but can I get that with kind of some, some of the velocity of development, although I hate that term, some of the speed of development that you get out of a more interpreted language. [00:36:08] Bryan: Um, and then by the way, can I actually have types, I think types would be a good idea? Uh, and rust obviously hits the sweet spot of all of that. Um, it has been absolutely huge for us. I mean, we knew when we started the company again, oxide, uh, we were gonna be using rust in, in quite a, quite a. Few places, but we weren't doing it by fiat. [00:36:27] Bryan: Um, we wanted to actually make sure we're making the right decision, um, at, at every different, at every layer. Uh, I think what has been surprising is the sheer number of layers at which we use rust in terms of, we've done our own embedded firmware in rust. We've done, um, in, in the host operating system, which is still largely in C, but very big components are in rust. [00:36:47] Bryan: The hypervisor Propolis is all in rust. Uh, and then of course the control plane, that distributed system on that is all in rust. So that was a very important thing that we very much did not need to build ourselves. We were able to really leverage, uh, a terrific community. Um. We were able to use, uh, and we've done this at Joyent as well, but at Oxide, we've used Illumos as a hostos component, which, uh, our variant is called Helios. [00:37:11] Bryan: Um, we've used, uh, bhyve um, as a, as as that kind of internal hypervisor component. we've made use of a bunch of different open source components to build this thing, um, which has been really, really important for us. Uh, and open source components that didn't exist even like five years prior. [00:37:28] Bryan: That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent, you had tried to build this in, in Node. What were the, what were the, the issues or the, the challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. again, we, I kind of had higher hopes in 2010, I would say. When we, we set on this, um, the, the, the problem that we had just writ large, um. JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. I mean, I, I, that's a, that's a laudable goal. [00:38:09] Bryan: That is the goal ultimately of such as it is of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Ike never actually wrote a book. so that there is not a canonical, you've got kind of Doug Crockford and other people who've written things on JavaScript, but it's hard to know kind of what the original intent of JavaScript is. [00:38:27] Bryan: The name doesn't even express original intent, right? It was called Live Script, and it was kind of renamed to JavaScript during the Java Frenzy of the late nineties. A name that makes no sense. There is no Java in JavaScript. that is kind of, I think, revealing to kind of the, uh, the unprincipled mess that is JavaScript. [00:38:47] Bryan: It, it, it's very pragmatic at some level, um, and allows anyone to, it makes it very easy to write software. The problem is it's much more difficult to write really rigorous software. So, uh, and this is what I should differentiate JavaScript from TypeScript. This is really what TypeScript is trying to solve. [00:39:07] Bryan: TypeScript is like. How can, I think TypeScript is a, is a great step forward because TypeScript is like, how can we bring some rigor to this? Like, yes, it's great that it's easy to write JavaScript, but that's not, we, we don't wanna do that for Absolutely. I mean that, that's not the only problem we solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software and it's actually okay if it's a little harder to write rigorous software that's actually okay if it gets leads to, to more rigorous artifacts. Um, but in JavaScript, I mean, just a concrete example. You know, there's nothing to prevent you from referencing a property that doesn't actually exist in JavaScript. [00:39:43] Bryan: So if you fat finger a property name, you are relying on something to tell you. By the way, I think you've misspelled this because there is no type definition for this thing. And I don't know that you've got one that's spelled correctly, one that's spelled incorrectly, that's often undefined. And then the, when you actually go, you say you've got this typo that is lurking in your what you want to be rigorous software. [00:40:07] Bryan: And if you don't execute that code, like you won't know that's there. And then you do execute that code. And now you've got a, you've got an undefined object. And now that's either gonna be an exception or it can, again, depends on how that's handled. It can be really difficult to determine the origin of that, of, of that error, of that programming. [00:40:26] Bryan: And that is a programmer error. And one of the big challenges that we had with Node is that programmer errors and operational errors, like, you know, I'm out of disk space as an operational error. Those get conflated and it becomes really hard. And in fact, I think the, the language wanted to make it easier to just kind of, uh, drive on in the event of all errors. [00:40:53] Bryan: And it's like, actually not what you wanna do if you're trying to build a reliable, robust system. So we had. No end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, um, again coming out of operating systems development and so on. And we want, we brought some of that rigor, if strangely, to JavaScript. So one of the things that we did is we brought a lot of postmortem, diagnos ability and observability to node. [00:41:18] Bryan: And so if, if one of our node processes. Died in production, we would actually get a core dump from that process, a core dump that we could actually meaningfully process. So we did a bunch of kind of wild stuff. I mean, actually wild stuff where we could actually make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: Um, and things that we thought were really important, and this is the, the rest of the world just looks at this being like, what the hell is this? I mean, it's so out of step with it. The problem is that we were trying to bridge two disconnected cultures of one developing really. Rigorous software and really designing it for production, diagnosability and the other, really designing it to software to run in the browser and for anyone to be able to like, you know, kind of liven up a webpage, right? [00:42:10] Bryan: Is kinda the origin of, of live script and then JavaScript. And we were kind of the only ones sitting at the intersection of that. And you begin when you are the only ones sitting at that kind of intersection. You just are, you're, you're kind of fighting a community all the time. And we just realized that we are, there were so many things that the community wanted to do that we felt are like, no, no, this is gonna make software less diagnosable. It's gonna make it less robust. The NodeJS split and why people left [00:42:36] Bryan: And then you realize like, I'm, we're the only voice in the room because we have got, we have got desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software. It's time to actually move on. And in fact, actually several years after, we'd already kind of broken up with node. [00:42:55] Bryan: Um, and it was like, it was a bit of an acrimonious breakup. there was a, uh, famous slash infamous fork of node called IoJS Um, and this was viewed because people, the community, thought that Joyent was being what was not being an appropriate steward of node js and was, uh, not allowing more things to come into to, to node. [00:43:19] Bryan: And of course, the reason that we of course, felt that we were being a careful steward and we were actively resisting those things that would cut against its fitness for a production system. But it's some way the community saw it and they, and forked, um, and, and I think the, we knew before the fork that's like, this is not working and we need to get this thing out of our hands. Platform is a reflection of values node summit talk [00:43:43] Bryan: And we're are the wrong hands for this? This needs to be in a foundation. Uh, and so we kind of gone through that breakup, uh, and maybe it was two years after that. That, uh, friend of mine who was um, was running the, uh, the node summit was actually, it's unfortunately now passed away. Charles er, um, but Charles' venture capitalist great guy, and Charles was running Node Summit and came to me in 2017. [00:44:07] Bryan: He is like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. Like, this is the, the, you don't want, I'm the last person you wanna keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. You're like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about, like, you should talk about the Joyent breakup with NodeJS. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy that I gave, 'cause it was a very important talk for me personally. Uh, called Platform is a reflection of values and really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: And the problem is that the values that Node had for itself and the values that we had for Node are all kind of positives, right? Like there's nobody in the node community who's like, I don't want rigor, I hate rigor. It's just that if they had the choose between rigor and making the language approachable. [00:45:09] Bryan: They would choose approachability every single time. They would never choose rigor. And, you know, that was a, that was a big eye-opener. I do, I would say, if you watch this talk. [00:45:20] Bryan: because I knew that there's, like, the audience was gonna be filled with, with people who, had been a part of the fork in 2014, I think was the, the, the, the fork, the IOJS fork. And I knew that there, there were, there were some, you know, some people that were, um, had been there for the fork and. [00:45:41] Bryan: I said a little bit of a trap for the audience. But the, and the trap, I said, you know what, I, I kind of talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself and how they were different. [00:45:53] Bryan: And, you know, and I'm like, look in, in, in hindsight, like a fracture was inevitable. And in 2014 there was finally a fracture. And do people know what happened in 2014? And if you, if you, you could listen to that talk, everyone almost says in unison, like IOJS. I'm like, oh right. IOJS. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide and is a tweet from a guy named TJ Holloway, Chuck, who was the most prolific contributor to Node. And it was his tweet also in 2014 before the fork, before the IOJS fork explaining that he was leaving Node and that he was going to go. And you, if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious because the community had never really come, had never really confronted why TJ left. Um, there. And I went through a couple folks, Felix, bunch of other folks, early Node folks. That were there in 2010, were leaving in 2014, and they were going to go primarily, and they were going to go because they were sick of the same things that we were sick of. [00:47:09] Bryan: They, they, they had hit the same things that we had hit and they were frustrated. I I really do believe this, that platforms do reflect their own values. And when you are making a software decision, you are selecting value. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That is, those are, that's way more important than other things that people look at. I think people look at, for example, quote unquote community size way too frequently, community size is like. Eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, node. I've been in super small open source communities like AUMs and RAs, a bunch of others. there are strengths and weaknesses to both approaches just as like there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time because the small community is almost always self-selecting based on values and just for the same reason that I like working at small companies or small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. It's not to say that large communities are valueless, but again, long answer to your question of kind of where did things go south with Joyent and node. They went south because the, the values that we had and the values the community had didn't line up and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And, and given that you mentioned how, because of those values, some people moved from Node to go, and in the end for much of what oxide is building. You ended up using rust. What, what would you say are the, the values of go and and rust, and how did you end up choosing Rust given that. Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah, I mean, well, so the value for, yeah. And so go, I mean, I understand why people move from Node to Go, go to me was kind of a lateral move. Um, there were a bunch of things that I, uh, go was still garbage collected, um, which I didn't like. Um, go also is very strange in terms of there are these kind of like. [00:49:17] Bryan: These autocratic kind of decisions that are very bizarre. Um, there, I mean, generics is kind of a famous one, right? Where go kind of as a point of principle didn't have generics, even though go itself actually the innards of go did have generics. It's just that you a go user weren't allowed to have them. [00:49:35] Bryan: And you know, it's kind of, there was, there was an old cartoon years and years ago about like when a, when a technologist is telling you that something is technically impossible, that actually means I don't feel like it. Uh, and there was a certain degree of like, generics are technically impossible and go, it's like, Hey, actually there are. [00:49:51] Bryan: And so there was, and I just think that the arguments against generics were kind of disingenuous. Um, and indeed, like they ended up adopting generics and then there's like some super weird stuff around like, they're very anti-assertion, which is like, what, how are you? Why are you, how is someone against assertions, it doesn't even make any sense, but it's like, oh, nope. [00:50:10] Bryan: Okay. There's a whole scree on it. Nope, we're against assertions and the, you know, against versioning. There was another thing like, you know, the Rob Pike has kind of famously been like, you should always just run on the way to commit. And you're like, does that, is that, does that make sense? I mean this, we actually built it. [00:50:26] Bryan: And so there are a bunch of things like that. You're just like, okay, this is just exhausting and. I mean, there's some things about Go that are great and, uh, plenty of other things that I just, I'm not a fan of. Um, I think that the, in the end, like Go cares a lot about like compile time. It's super important for Go Right? [00:50:44] Bryan: Is very quick, compile time. I'm like, okay. But that's like compile time is not like, it's not unimportant, it's doesn't have zero importance. But I've got other things that are like lots more important than that. Um, what I really care about is I want a high performing artifact. I wanted garbage collection outta my life. Don't think garbage collection has good trade offs [00:51:00] Bryan: I, I gotta tell you, I, I like garbage collection to me is an embodiment of this like, larger problem of where do you put cognitive load in the software development process. And what garbage collection is saying to me it is right for plenty of other people and the software that they wanna develop. [00:51:21] Bryan: But for me and the software that I wanna develop, infrastructure software, I don't want garbage collection because I can solve the memory allocation problem. I know when I'm like, done with something or not. I mean, it's like I, whether that's in, in C with, I mean it's actually like, it's really not that hard to not leak memory in, in a C base system. [00:51:44] Bryan: And you can. give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So it's like that is a solvable problem. There are other challenges with that, but like, when you are developing a really sophisticated system that has garbage collection is using garbage collection. [00:51:59] Bryan: You spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You are like, I've got this thing. I know it's garbage. Now I need to use these like tips and tricks to get the garbage collector. I mean, it's like, it feels like every Java performance issue goes to like minus xx call and use the other garbage collector, whatever one you're using, use a different one and using a different, a different approach. [00:52:23] Bryan: It's like, so you're, you're in this, to me, it's like you're in the worst of all worlds where. the reason that garbage collection is helpful is because the programmer doesn't have to think at all about this problem. But now you're actually dealing with these long pauses in production. [00:52:38] Bryan: You're dealing with all these other issues where actually you need to think a lot about it. And it's kind of, it, it it's witchcraft. It, it, it's this black box that you can't see into. So it's like, what problem have we solved exactly? And I mean, so the fact that go had garbage collection, it's like, eh, no, I, I do not want, like, and then you get all the other like weird fatwahs and you know, everything else. [00:52:57] Bryan: I'm like, no, thank you. Go is a no thank you for me, I, I get it why people like it or use it, but it's, it's just, that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. but I, there are things I didn't like about C too. I was looking for something that was gonna give me the deterministic kind of artifact that I got outta C. But I wanted library support and C is tough because there's, it's all convention. you know, there's just a bunch of other things that are just thorny. And I remember thinking vividly in 2018, I'm like, well, it's rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into rust. And, uh, I hope I like it because if it's not this, it's gonna like, I'm gonna go back to C I'm like literally trying to figure out what the language is for the back half of my career. Um, and when I, you know, did what a lot of people were doing at that time and people have been doing since of, you know, really getting into rust and really learning it, appreciating the difference in the, the model for sure, the ownership model people talk about. [00:53:54] Bryan: That's also obviously very important. It was the error handling that blew me away. And the idea of like algebraic types, I never really had algebraic types. Um, and the ability to, to have. And for error handling is one of these really, uh, you, you really appreciate these things where it's like, how do you deal with a, with a function that can either succeed and return something or it can fail, and the way c deals with that is bad with these kind of sentinels for errors. [00:54:27] Bryan: And, you know, does negative one mean success? Does negative one mean failure? Does zero mean failure? Some C functions, zero means failure. Traditionally in Unix, zero means success. And like, what if you wanna return a file descriptor, you know, it's like, oh. And then it's like, okay, then it'll be like zero through positive N will be a valid result. [00:54:44] Bryan: Negative numbers will be, and like, was it negative one and I said airo, or is it a negative number that did not, I mean, it's like, and that's all convention, right? People do all, all those different things and it's all convention and it's easy to get wrong, easy to have bugs, can't be statically checked and so on. Um, and then what Go says is like, well, you're gonna have like two return values and then you're gonna have to like, just like constantly check all of these all the time. Um, which is also kind of gross. Um, JavaScript is like, Hey, let's toss an exception. If, if we don't like something, if we see an error, we'll, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. Um, and you look, you'll get what Rust does, where it's like, no, no, no. We're gonna have these algebra types, which is to say this thing can be a this thing or that thing, but it, but it has to be one of these. And by the way, you don't get to process this thing until you conditionally match on one of these things. [00:55:35] Bryan: You're gonna have to have a, a pattern match on this thing to determine if it's a this or a that, and if it in, in the result type that you, the result is a generic where it's like, it's gonna be either the thing that you wanna return. It's gonna be an okay that contains the thing you wanna return, or it's gonna be an error that contains your error and it forces your code to deal with that. [00:55:57] Bryan: And what that does is it shifts the cognitive load from the person that is operating this thing in production to the, the actual developer that is in development. And I think that that, that to me is like, I, I love that shift. Um, and that shift to me is really important. Um, and that's what I was missing, that that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during the development to think about these issues because whether it's garbage collection or it's error handling at runtime when you're trying to solve a problem, then it's much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. I, and I just think that like, why also, like if it's software, if it's, again, if it's infrastructure software, I mean the kinda the question that you, you should have when you're writing software is how long is this software gonna live? How many people are gonna use this software? Uh, and if you are writing an operating system, the answer for this thing that you're gonna write, it's gonna live for a long time. [00:57:18] Bryan: Like, if we just look at plenty of aspects of the system that have been around for a, for decades, it's gonna live for a long time and many, many, many people are gonna use it. Why would we not expect people writing that software to have more cognitive load when they're writing it to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now conversely, you're like, Hey, I kind of don't care about this. And like, I don't know, I'm just like, I wanna see if this whole thing works. I've got, I like, I'm just stringing this together. I don't like, no, the software like will be lucky if it survives until tonight, but then like, who cares? Yeah. Yeah. [00:57:52] Bryan: Gar garbage clock. You know, if you're prototyping something, whatever. And this is why you really do get like, you know, different choices, different technology choices, depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load that is upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Um, and although I think, I think the thing that is really wild that is the twist that I don't think anyone really saw coming is that in a, in an LLM age. That like the cognitive load upfront almost needs an asterisk on it because so much of that can be assisted by an LLM. And now, I mean, I would like to believe, and maybe this is me being optimistic, that the the, in the LLM age, we will see, I mean, rust is a great fit for the LLMH because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can for other environments. [00:58:48] Jeremy: Yeah, that is a interesting point in that I think when people first started trying out the LLMs to code, it was really good at these maybe looser languages like Python or JavaScript, and initially wasn't so good at something like Rust. But it sounds like as that improves, if. It can write it then because of the rigor or the memory management or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: absolutely. I, it, it gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly. I mean, th there are certain classes of errors that you don't have, um, that you actually don't know on a C program or a GO program or a, a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp. Maybe we've already seen it, this kind of great bifurcation in the software that we writ

Bitcoin for Millennials
Millennials Will Be Stuck In Debt Forever If They Don't Act Today | Adam O'Brien | BFM235

Bitcoin for Millennials

Play Episode Listen Later Feb 26, 2026 46:42


Adam O'Brien is a serial entrepreneur and CEO of Bitcoin Well, a non-custodial Bitcoin platform on a mission to enable worldwide independence.› https://x.com/adamobrienPARTNERS

The Talk Show With John Gruber
441: ‘Serious Opinionators', With Adam Engst

The Talk Show With John Gruber

Play Episode Listen Later Feb 25, 2026 130:46


Adam Engst returns to the show to talk, in detail, about certain of the UI changes in iOS 26 and Apple's version 26 OSes overall. In particular, the new Unified view in the Phone app, and the Filter pop-up menu in both the Phone and Messages apps. Also: a shoutout to Balloon Help.

Coffee with Butterscotch: A Game Dev Comedy Podcast
[Ep561] Indie Devs Discuss "Mewgenics"

Coffee with Butterscotch: A Game Dev Comedy Podcast

Play Episode Listen Later Feb 25, 2026 62:52


In episode 561 of 'Coffee with Butterscotch,' the brothers dig into Mewgenics, exploring its development history, gameplay quirks, and the ways players actually engage with it. The game becomes a springboard for a broader look at how UI, quality-of-life choices, and genre-blending shape player trust and expectations. The conversation closes on the realities of indie launch windows, where timing can matter just as much as design when it comes to standing out.Support How Many Dudes!Official Website: https://www.bscotch.net/games/how-many-dudesTrailer Teaser: https://www.youtube.com/watch?v=IgQM1SceEpISteam Wishlist: https://store.steampowered.com/app/3934270/How_Many_Dudes00:00 Cold Open00:25 Introduction and Welcome01:13 Exploring Mewgenics: A Game Overview02:58 Nailed It or Whiffed It: Game Critique06:29 Player Engagement and Game Longevity10:18 Quality of Life Issues in Gameplay12:23 User Experience vs. Developer Intent15:28 Cognitive Load and Player Frustration18:34 The Role of UI in Game Design26:32 Humor and Theme in Game Design30:17 Developer Insights and Future Improvements39:27 The Disconnect in Game Development Quality41:57 Trust and Player Expectations in Game Design46:02 The Balance of Jank and Fun in Multiplayer Games51:26 The Impact of UI on Game Accessibility57:25 Launch Strategies and Market Timing for Indie GamesTo stay up to date with all of our buttery goodness subscribe to the podcast on Apple podcasts (apple.co/1LxNEnk) or wherever you get your audio goodness. If you want to get more involved in the Butterscotch community, hop into our DISCORD server at discord.gg/bscotch and say hello! Submit questions at https://www.bscotch.net/podcast, disclose all of your secrets to podcast@bscotch.net, and send letters, gifts, and tasty treats to https://bit.ly/bscotchmailbox. Finally, if you'd like to support the show and buy some coffee FOR Butterscotch, head over to https://moneygrab.bscotch.net. ★ Support this podcast ★

Frekvenca X
Parmy Olson: Umetna inteligenca skrenila s poti za dobro dobička, ne človeštva

Frekvenca X

Play Episode Listen Later Feb 25, 2026 45:36


Začelo se je s plemenito vizijo o tehnologiji za dobrobit človeštva, končalo pa z mastnim zaslužkom največjih tehnoloških velikanov. Tako nekako lahko strnemo osrednjo idejo knjige Prevlada avtorice Parmy Olson o orodjih umetne inteligence, ki so v zadnjih letih obrnila svet na glavo. Prisluhnite intervjuju z njo, v katerem strnemo zgodbo ustanoviteljev podjetij DeepMind in OpenAI Demisa Hassabisa in Sama Altmana, ki stojita za orodji, kot sta Chat GPT in AlphaGo, razmišljamo pa tudi o tem, ali lahko takšna tehnologija sploh kdaj zares uide korporativnim interesom. Gostja: Parmy Olson, novinarka (Bloomberg) in avtorica knjige 'Prevlada: umetna inteligenca, ChatGPT in tekma, ki bo spremenila svet'. Knjiga je v prevodu Sama Kuščerja dostopna tudi v slovenskem jeziku. V Xpertizi (39:31) se predstavlja Anita Bolčevič, raziskovalka na področju turizma, FKBV UM. Avtorstvo fotografije na naslovnici podkasta: Kim Farinha     Poglavja: 00:00:01 Uvod 00:01:53 Parmy Olson in kaj jo je navdušilo za poročanje o tehnologiji 00:05:38 Kdo sta Sam Altman in Demis Hassabis 00:11:24 Na prizorišče stopita Google in Microsoft 00:14:41 Kakšna je bila vloga Elona Muska? 00:16:43 Google in njegov Goljatov paradoks 00:17:45 Kitajska noče zaostajati 00:20:55 Kakšna je dejanska tržna vrednost umetne inteligence 00:24:39 Zakaj je regulacija umetne inteligence tako težavna? 00:30:06 Negotov položaj novopečenih diplomantov ali kdo bo opravljal prakso? 00:33:30 Umetna inteligenca, njena 'empatija' in skriti interesi v ozadju 00:36:27 UI uporabljamo za preverjanje lastnih idej, ne njihovo generiranje 00:39:31 Xpertiza: Anita Bolčevič

Citizen Central
Star Citizen: 2 Steps Forward, 1 Step Back (Again) - (Olli43, Morphologis, MrKraken, Tom Beckhauser)

Citizen Central

Play Episode Listen Later Feb 25, 2026 111:04


Is Star Citizen in a better place than it was a few years ago, or are we still stuck in the “2 steps forward, 1 step back” cycle? In this episode of the roundtable Citizen Central podcast, I'm joined by Olli43, Morphologis, MrKraken, and Tom Beckhauser for a community roundtable on what's improving, what's regressing, and what's missing for the game to truly *click*.We dig into crafting and the “era of industry,” why crafting only matters if the economy and item sinks exist, the state of inventory and UI friction, Pyro's shortcomings, and whether Squadron 42 will bring in new players that the PU can actually keep.Today's Guests:Olli43YouTube: https://www.youtube.com/user/Olli43Twitch: https://www.twitch.tv/olli43MorphologisYouTube: https://www.youtube.com/morphologisTwitch: https://www.twitch.tv/morphologisMrKrakenYouTube: https://www.youtube.com/c/MrKrakenTwitter: https://x.com/RealMrKrakenTom BeckhauserYouTube: https://www.youtube.com/@UCUpW5imB8Qi9cOjotpLfp-g Twitch: https://www.twitch.tv/tombeckhauserToC:00:00 Introductions05:15 Why Do You Play?21:40 Is Star Citizen Getting Better?29:00 Immersion and “Gameification”44:00 Are We Finally Moving Beyond Combat?54:00 Is Crafting A Big Deal?01:32:00 Squadron 42 in 2026Watch on YouTube: https://www.youtube.com/playlist?list=PLvpiPXCO7OVJOlBIclW9tbpb2g29gur3ISupport This Podcast:Patreon Paypal Ko-FiFollow Space Tomato on social media:Website  Youtube  My Other YoutubeInstagram Twitter Facebook Discord

The Digital Story Photography Podcast
Snapseed Sprouts a New Camera, and It's Beautiful - TDS Photography Podcast

The Digital Story Photography Podcast

Play Episode Listen Later Feb 24, 2026 32:22


This is The Digital Story Podcast 1,040, Feb. 24, 2026. Today's theme is, "Snapseed Sprouts a New Camera, and It's Beautiful" I'm Derrick Story. Just when you think it's dead, Snapseed springs to life with additional editing tools, a refreshed UI, and a new camera app. And just like with some of our favorite mirrorless brands, we can capture images choosing from a variety of film simulations. And just like that Snapseed is relevant again. More about that, plus other interesting stories, on today's TDS Photography Podcast. thenimblephotographer.com, click the box next to Donating a Film Camera, and let me know what you have. In your note, be sure to include your shipping address. Affiliate Links - The links to some products in this podcast contain an affiliate code that credits The Digital Story for any purchases made from B&H Photo and Amazon via that click-through. Depending on the purchase, we may receive some financial compensation. Red River Paper - And finally, be sure to visit our friends at Red River Paper for all of your inkjet supply needs. See you next week! You can share your thoughts at the TDS Facebook page, where I'll post this story for discussion.

TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
AI Test Automation: Ship Twice as Fast with 10x Coverage with Karim Jouini

TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation

Play Episode Listen Later Feb 24, 2026 42:21


AI test automation is evolving fast — but most tools still generate brittle code that breaks with every UI change. See it for yourself now: https://links.testguild.com/Thunders In this episode of the TestGuild Podcast, Joe Colantonio sits down with Karim Jouini, founder of Thunders, to explore a radically different approach to AI testing: executing test automation in plain English without generating Selenium or Playwright code. Instead of "auto-healing selectors," Thunders interprets natural language directly — allowing teams to: Ship twice as fast Achieve 10x test coverage with the same resources Reduce regression cycles from weeks to days Eliminate massive automation maintenance overhead Karim shares real-world case studies, including: A European bank that reduced a 3-year core banking upgrade testing effort to 4 months A SaaS company that transitioned from a traditional QA team to AI-assisted product-led testing We also discuss: Whether AI test agents replace QA roles How QA managers must shift from individual contributors to AI managers The risks of adopting AI without a defined success metric The future of shift-left testing in the AI era If you're a software tester, automation engineer, QA lead, or DevOps leader trying to understand what's hype versus real ROI in AI testing — this episode breaks it down. Try it for yourself and see how AI testing fits into your pipeline. Get personal demo: https://links.testguild.com/Thunders  

In Touch with iOS
409 - Home Cameras & Missing Person Cases — Safety or Surveillance? Vision Pro F1 & Apple's March Mystery

In Touch with iOS

Play Episode Listen Later Feb 24, 2026 81:28


The latest In Touch With iOS with Dave he is joined by Jill McKinley, Chuck Joiner, Jeff Gamet, Eric Bolden, Marty Jencius, Guy Serle. Apple teases a mysterious March 4 event as rumors swirl about colorful MacBooks and M5 updates. We break down VisionOS 26.4 beta, iOS 26.4 AI features, CarPlay updates, Rosetta 2 warnings, and Apple's expanding sports lineup — including MLS now free on Apple TV+. Plus, Emergency SOS via satellite saves skiers in Lake Tahoe. The show notes are at InTouchwithiOS.com  Direct Link to Audio  Links to our Show Give us a review on Apple Podcasts! CLICK HERE we would really appreciate it! Click this link Buy me a Coffee to support the show we would really appreciate it. intouchwithios.com/coffee  Another way to support the show is to become a Patreon member patreon.com/intouchwithios Website: In Touch With iOS YouTube Channel In Touch with iOS Magazine on Flipboard Facebook Page BlueSky Mastodon X Instagram Threads Summary In episode 409 of In Touch With iOS, Dave and the panel dive into Apple's newly announced "special experience" event scheduled for March 4 in New York, London, and Shanghai. With no official details revealed, speculation runs high. Could we see colorful, lower-cost MacBooks powered by A-series chips? M5 Pro and Max MacBook Pros? Updated iPads? The panel debates whether Apple may stage a staggered release week or unveil everything in a single coordinated announcement. The discussion shifts to Vision Pro, where rumors suggest Apple could demonstrate immersive Formula 1 experiences just days before the 2026 F1 season begins. With Apple's expanding sports footprint, including IMAX screenings of F1 races, the possibility of spatial sports broadcasting feels closer than ever. The panel also reviews VisionOS 26.4 beta updates, including refined UI elements, reorganized settings, early foveated streaming support for developers, and expanded 8K playback capabilities on newer hardware. iOS 26.4 beta brings one of the busiest update cycles in recent memory. Highlights include AI-powered playlist creation in Apple Music, enhanced podcast video playback directly inside the Apple Podcasts app, CarPlay integration with third-party AI tools like ChatGPT, Claude, and Gemini, improved hotspot usage visibility, battery charge limit automation through Shortcuts, and Stolen Device Protection becoming enabled by default. The panel weighs in on whether security features should be opt-in or automatically enforced. On the Mac side, Rosetta 2 warnings now alert users when launching Intel-based apps, signaling Apple's continued push toward full Apple Silicon adoption. The conversation explores legacy software challenges and developer responsibility during platform transitions. Additional stories include Toyota adding Apple Wallet car key support, Tesla's rumored CarPlay integration delays, and a powerful real-world example of Emergency SOS via satellite saving skiers in a Lake Tahoe avalanche. Finally, Apple's sports strategy takes center stage as MLS Season Pass becomes free for Apple TV+ subscribers, joining F1 and Friday Night Baseball in Apple's expanding live sports ecosystem. Breaking News Apple Announces Special Event in New York, London, and Shanghai on March 4 Apple Event on March 4: Here's What to Expect Upcoming Low-Cost MacBook May Come in Yellow, Green, Blue, and Pink F1 races to screen live in IMAX theatres in 2026 as Apple TV unveils new US viewing experience Topics and Links In Touch With Vision Pro this week.  Could Apple Demo Immersive F1 on Vision Pro at Its March 4 Event? visionOS 26.4 Beta Release Notes visionOS 26.4 unlocks new 'foveated streaming' feature for apps and games Beta this week. iOS 26.4 Beta 1 was released this week.  Apple Seeds First Betas of iOS 26.4 and iPadOS 26.4 to Developers Everything New in iOS 26.4 Beta 1 iOS 26.4 Adds Average Bedtime Metric and Restores Blood Oxygen to Health App Vitals Graph Apple Removes iTunes Movies and TV Shows Apps in tvOS 26.4 iOS 26.4 Brings CarPlay Support for ChatGPT, Claude and Gemini In Touch With Mac this week First macOS Tahoe 26.4 Beta Now Available for Developers Apple Releases First watchOS 26.4, tvOS 26.4 and visionOS 26.4 Betas  macOS Tahoe 26.4 Displays Warnings for Apps That Won't Work After Rosetta 2 Support Ends Other Topics Android-to-iPhone AirDrop Transfers Now Supported on Pixel 9 Tesla's CarPlay Plans Delayed by Apple Maps Compatibility Issue  Jeff met with Omni Group and reviews their 2026 road plan for OmniGraffle and OmniFocus for iPad and iPhone.  Omni Links News Toyota Rolling Out Apple Wallet Car Keys on iPhone  iPhone's Emergency SOS via Satellite Feature Helped Rescue Skiers Caught in Lake Tahoe Avalanche Apple TV Sports Content Including F1, MLS, and Friday Night Baseball Coming to Bars and Restaurants MLS 2026 Season Begins February 21 on Apple TV With Free Access for Subscribers Announcements Macstock X is here celebrating its 10th anniversary! With Three Full Days of expert-led Presentations and Workshops, Macstock's sessions are crammed full of productivity-enhancing content. NEW this year is a partnership with sponsor Ecamm. Ecamm Creator Camp: Mac Edition on July 9, 2026 there are only 100 tickets available for the bundle. There are 2 passes available: Macstock weekend pass July 10,11,12, 2026 or the Macstock Ecamm Bundle starting July 9 (only 100 tickets available)  Come join us. Register HERE Our Host Dave Ginsburg is an IT professional supporting Mac, iOS and Windows users and shares his wealth of knowledge of iPhone, iPad, Apple Watch, Apple TV and related technologies. Visit the YouTube channel https://youtube.com/intouchwithios follow him on Mastodon @daveg65, , BlueSky @daveg65  and the show @intouchwithios   Our Regular Contributors Jeff Gamet is a podcaster, technology blogger, artist, and author. Previously, he was The Mac Observer's managing editor, and Smile's TextExpander Evangelist. You can find him on Mastadon @jgamet Pixelfed @jgamet@pixelfed.social and Bluesky @jgamet.bsky.social‬ Podcasts The Context Machine Podcast  Retro Rewatch Retro Rewatch His YouTube channel https://youtube.com/jgamet Marty Jencius, Ph.D., is a professor of counselor education at Kent State University, where he researches, writes, and trains about using technology in teaching and mental health practice. His podcasts include Vision Pro Files, The Tech Savvy Professor and Circular Firing Squad Podcast. Find him at jencius@mastodon.social  https://thepodtalk.net  Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him by email at eabolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast.   Jill McKinley works in enterprise software, server administration, and IT A lifelong tech enthusiast, she started her career with Windows but is now an avid Apple fan. Beyond technology, she shares her insights on nature, faith, and personal growth through her podcasts—Buzz Blossom & Squeak, Start with Small Steps, and The Bible in Small Steps. Watch her content on YouTube at @startwithsmallsteps and follow her on X @schmern. Find all her work at http://jillfromthenorthwoods.com  Chuck Joiner is the host of MacVoices and hosts video podcasts with influential members of the Apple community. Make sure to visit macvoices.com and subscribe to his podcast. You can follow him on Twitter @chuckjoiner and join his MacVoices Facebook group. Guy Serle is one of the hosts of the new The Gmen Show along with GazMaz and email GMenshow@icloud.com  @MacParrot and @VertShark on X  Vertshark on YouTube, Google Voice +1 Area code  703-828-4677

UXpeditious: A UserZoom Podcast
How staff designers can lead without being managers with Catt Small

UXpeditious: A UserZoom Podcast

Play Episode Listen Later Feb 23, 2026 44:10


Episode web page: https://bit.ly/4tH0nSl Leading without the title: The real power of the staff designer What does it take to grow your impact as a designer—without becoming a manager? In this episode of Insights Unlocked, host Jason Giles sits down with Catt Small, staff product designer, game maker, and author of The Staff Designer, to unpack the evolving role of senior individual contributors in design organizations. Catt shares her unconventional journey from creating digital dress-up dolls as a kid to shaping products at Etsy and Asana—and how those experiences shaped her perspective on leadership, influence, and creative confidence. At the heart of the conversation: a mindset shift. Moving from being told what to design to diagnosing what matters most. What you'll learn in this episode The misunderstood role of the staff designer: Catt explains why the staff-level IC role often feels ambiguous—and how influence, not authority, becomes your primary tool. She breaks down what “building influence” actually means in practice and why it's more intentional than mystical. Invisible work and strategic impact: From relationship building to cross-team alignment, much of a staff designer's impact happens behind the scenes. Catt explores how to prioritize the work that truly moves the business forward—and avoid getting stuck in “glue work” that doesn't drive career growth. From craft to communication: Design leadership at the IC level requires a shift from pixel perfection to clarity of thinking. Catt shares why low-fidelity diagrams and conceptual artifacts often create better alignment than polished UI—and how to coach teams away from jumping into high fidelity too soon. Navigating politics with integrity: If you've ever felt “allergic to politics,” this conversation reframes the idea. Catt explains how understanding motivations, fears, and power dynamics is less about manipulation—and more about empathy, curiosity, and emotional intelligence. Managing energy like a product: Influence takes energy. Catt shares practical strategies for auditing your calendar, designing your workweek intentionally, and partnering with your manager to balance short-term execution with long-term strategy. AI as a tool, not a replacement: AI is another tool in the designer's toolkit—but you're still the creative director. Catt discusses how to use AI to accelerate research and exploration without outsourcing your thinking or critical judgment. A key takeaway: Leadership is a mindset One of the most powerful themes in this episode is confidence. Staff-level designers aren't waiting for permission—they step into leadership by trusting their experience, sharing their perspective, and partnering across the organization. As Catt reflects, the transition is uncomfortable at first. But the shift from execution to influence starts with believing you belong in the room. Resources & links Catt Small on LinkedIn (https://www.linkedin.com/in/cattsmall/) Catt's website (https://cattsmall.com/) Catt's Maven page (https://maven.com/catt-small/staff-designer) The Staff Designer book page — 20% off with code UserTesting until Feb 28, 2026 (https://rosenfeldmedia.com/books/the-staff-designer/) Nathan Isaacs on LinkedIn (https://www.linkedin.com/in/nathanisaacs/) Learn more about Insights Unlocked: https://www.usertesting.com/podcast

HomeTech.fm Podcast
Episode 563 - Auto-Man

HomeTech.fm Podcast

Play Episode Listen Later Feb 21, 2026


On this week's show: Ring's Super Bowl ad fallout keeps getting worse as Search Party, Flock, Axon, and leaked emails raise bigger surveillance questions, Fire TV gets its biggest UI update ever, Eufy promises five-year motion sensors, and Third Reality drops new Zigbee gear. Ubiquiti goes industrial with a new Cloud Gateway, Shelly leaves garage doors wide open (literally), and OpenAI picks up the founder of OpenClaw. All of this, a pick of the week, project updates, and so much more!

The Real Python Podcast
Exploring MCP Apps & Adding Interactive UIs to Clients

The Real Python Podcast

Play Episode Listen Later Feb 20, 2026 69:18


How can you move your MCP tools beyond plain text? How do you add interactive UI components directly inside chat conversations? This week on the show, Den Delimarsky from Anthropic joins us to discuss MCP Apps and interactive UIs in MCP.

狗熊有话说
545 /《Refactoring UI》:UI 做不好,往往不是审美问题

狗熊有话说

Play Episode Listen Later Feb 20, 2026 11:50 Transcription Available


如果只推荐一本书给开发者和创作者,我几乎不会犹豫,答案是 Refactoring UI。不是因为它教你“什么风格好看”,而是因为它直接告诉你:什么地方一定是错的。很多人卡在 UI 上,并不是审美差,而是不知道问题出在哪。界面看着别扭,却只能凭感觉反复修改。这一期里,我聊的是《Refactoring UI》如何用大量前后对照,把 UI 从“玄学”拉回到可判断、可修正的层面。如果你做产品、写代码、独立开发,或者经常需要在没有设计师的情况下做界面,这一期会非常实用。 • 为什么我会把《Refactoring UI》推荐给开发者和创作者 • UI 为什么是产品设计里最容易被误解的一块 • “界面不对劲,但说不出来”的根源是什么 • 《Refactoring UI》的核心方法:错误示例 vs 正确示例 • 为什么它更像一次 UI 的 code review • 为什么我不建议从头到尾把这本书当小说读 • 这本书真正的价值在于反复翻和对照 • 当你带着书里的例子去看熟悉产品时,会发生什么 • 高级感背后,其实是非常朴素的原则 • 颜色、字体、间距和留白的真实作用 • UI 的基本功,为什么比风格更重要 • 从单个页面,转向系统性的思考 • 设计原则看起来简单,但执行并不容易 • 从功能出发,而不是从布局出发 • 限制选择,而不是不断增加选项 • 这些“默认决定”,为什么不是设计选择 • UI 改进为什么几乎每天都能用得上 • 把 60 分的界面,稳定拉到 80 分 • 最后的判断:少犯错,本身就是专业度Support this podcast at — https://redcircle.com/beartalk/donationsAdvertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy

ui refactoring ui
Bigdata Hebdo
Episode 226 : Starlake.AI avec Hayssam Saleh

Bigdata Hebdo

Play Episode Listen Later Feb 20, 2026 55:40


Vincent Heuschling reçoit Hayssam Saleh, créateur de **Starlake**, une plateforme data open source française née de la factorisation de projets clients depuis 2017-2018. L'épisode intervient dans un contexte de consolidation du marché (rachat de DBT et de SQLMesh par Fivetran), qui invite à challenger les solutions établies.Starlake se distingue par une approche **entièrement déclarative** (YAML + SQL natif, sans Jinja) couvrant toute la chaîne data engineering : ingestion, transformation, orchestration et qualité des données. L'outil s'appuie sur les moteurs sous-jacents des plateformes cibles (Snowflake, BigQuery, Spark) et génère automatiquement les DAGs pour les orchestrateurs du marché (Airflow, Dagster, Snowflake Tasks).Parmi les fonctionnalités marquantes : le **data branching** (branches de données à la manière de Git), l'inférence automatique de schémas YAML à partir de fichiers sources, un **transpiler SQL** multi-plateformes, et l'extraction du lineage depuis du SQL brut sans annotation. L'intégration récente de **DuckLake** ouvre la voie à des architectures on-premise souveraines à coût maîtrisé (sous 300 €/mois sur OVH, Scaleway, Clever Cloud).Le modèle économique repose sur le support, la formation, et le consulting : Starlake s'installe dans le cloud du client, avec mise à jour automatique gérée par l'équipe, sans accès aux données.**Chapitres****00:00:27** – Introduction : consolidation du marché data (rachat de DBT et SQLMesh par Fivetran) et présentation de l'épisode**00:03:13** – Hayssam et la genèse de Starlake : parcours Spark/Scala, POC à 4 000 formats de fichiers (2017-2018)**00:09:51** – Architecture et philosophie : load, transform, orchestration unifiés en déclaratif (YAML + SQL natif, pas de Jinja)**00:00:18:18** – Starlake vs DBT : différences philosophiques, composabilité, fonctionnalités 100 % open source**00:00:22:20** – Data branching, Starlake Labs (pipe syntax, transpiler SQL, lineage) et expérience développeur (DuckDB local, UI point-and-click)**00:36:35** – Modèle open source et économique : licence Apache, support, formation, marketplace cloud souveraine**00:43:42** – DuckLake : alternative on-premise/cloud souverain (OVH, Scaleway, Clever Cloud) et comment contribuer / démarrer**Le BigdataHebdo**Le BigdataHebdo est le podcast Francophone de la Data et de l'IA.Retrouvez plus de 200 épisodes https://bigdatahebdo.comRejoignez la communauté sur le Slack https://join.slack.com/t/bigdatahebdo/shared_invite/zt-a931fdhj-8ICbl9dbsZZbTcze61rr~Q

B2B Marketers on a Mission
Ep. 208: How AI Agents are Disrupting the AdTech Landscape

B2B Marketers on a Mission

Play Episode Listen Later Feb 19, 2026 38:27 Transcription Available


How AI Agents are Disrupting the AdTech Landscape Semantic content classification driven by AI agents is currently transforming digital advertising and B2B content monetization as we know it. When leveraged the right way, marketers can classify B2B content into actionable signals and find the most relevant content across the open web. This shift toward AI-native advertising allows for a more sophisticated approach to targeting that moves beyond traditional cookies. So, how can brands strategically implement these tools to generate impactful results, and what does the rise of autonomous agents mean for the future of your digital marketing strategy? That's why we're talking to Brendan Norman (Co-Founder and CEO, Classify), who shares his expertise and experience on how AI agents are disrupting the AdTech landscape. During our conversation, Brendan discussed the evolution of digital advertising and the critical integration of AI and cloud-based tools to automate manual tasks and improve campaign optimization. He also elaborated on the massive shift from human-centric to agent-centric traffic, predicting that agent traffic will surpass human traffic within 18-24 months. Brendan also explained why he believes that the future belongs to marketers who can blend audience and contextual signals to monetize human and agent attention. He highlighted how new AI-native tools are democratizing advanced ad tech, significantly reducing costs and improving efficiency for large and small advertisers. https://youtu.be/yVobWZTmwco Topics discussed in episode: [03:01] Beyond Keywords: How semantic understanding allows advertisers to target the nuance of a page (like “snow removal” vs. just “winter”) rather than broad categories.  [06:46] Optimizing for AI Agents: Why “Generative Engine Optimization” (GEO) complements traditional SEO, and how brands must prepare for agents retrieving information instead of humans.  [12:34] The Shift in Web Traffic: The prediction that agent traffic will surpass human traffic on the web in the next 6 to 24 months.  [15:50] The Power of Context + Audience: Why the best advertising strategy combines who the user is (audience) with what they are consuming in the moment (context).  [20:47] Democratizing Ad Tech: How AI agents and new frameworks will allow smaller brands with smaller budgets to access sophisticated programmatic advertising tools.  [26:54] High-Fidelity Curation at Scale: How AI reduces the cost of processing massive data sets, making real-time optimization and curation accessible and sustainable.  [33:44] The “Middleman Tax”: A look at the inefficiency of current ad tech where only 35 cents of every dollar reaches the publisher, and how AI can fix this.  Companies and links mentioned: Brendan Norman on LinkedIn  Classify  Bluefish AI Agentic Advertising Org  IAB Tech Lab Transcript Brendan Norman – Classify, Christian Klepp Brendan Norman – Classify  00:00 I think overall, jobs will change. I think that people will have to spend a lot less time doing a lot of the manual, rote tasks that they’re doing today. You know, kind of in parallel with what we’re seeing in terms of vibe coding and people’s ability to build product really quickly, design new web pages really quickly, like get ship things out quickly. I think a lot of the infrastructure layer tools, or just call them like, like, chatGPT style, cloud based tools, LLMs (Large Language Models), we’ll see a lot deeper integration into existing advertising product. And what that does is it helps democratize the whole ecosystem. So I think it frees up people’s time, you know, to not have to do a lot of the basic administrative, you know, reporting, manual, campaign, optimization type stuff, and it will help service a lot better insights. Ultimately, I think the industry grows, and I think it scales even faster and cautiously, optimistically. I think that we, we will have back to building on the curation piece, and, you know, the advertiser, outcomes piece, publisher monetization piece, user experience piece, I think that all those things will increase. Christian Klepp  01:07 When done the right way and leveraging the right approach and technology, you can classify B2B content into actionable insights and find the most similar content across the open web. So how can this be done the right way, and what role do B2B Marketers play? Welcome to this episode of the B2B Marketers in the Mission podcast, and I’m your host, Christian Klepp. Today, I’ll be talking to Brendan Norman about this. He’s the Co-Founder and CEO of Classify, a software that organizes the world’s digital content, making a privacy, safe, searchable and monetizable. Tune in to find out more about what this B2B Marketers Mission is, and off we go. I’m gonna say Mr. Brendan Norman, welcome to the show. Brendan Norman – Classify  01:49 Thanks for having me, Christian. Christian Klepp  01:51 Great to have you on. I’m really looking for this conversation because, man, like you know, in our previous discussion, besides talking about snow and bad weather, we did have, we did have we did have some interesting discussions around, I’m going to say, AI machine learning, and how that all has some kind of like strong correlation to content. So let’s just dive in. I’m going to start with the first question here. So you’re on a mission to help publishers increase monetization potential and advertisers target the most relevant, curated inventory. So for this conversation, I’m going to focus on the following topic, and we can unpack it from there. So how B2B brands can optimize their own content. And you know, let’s be honest. Brendan, who the heck doesn’t want to do that, right? So your company classify, if I remember correctly. It’s a software that organizes the world’s digital content, making it privacy, safe, searchable and monetizable. So here’s the two-pronged question I’m happy to repeat. So first one is, walk us through how your software does that and B, how does this approach benefit? B2B companies looking to optimize their own content? Brendan Norman – Classify  03:01 Historically, how a lot of content gets categorized, classified, organized, it’s fairly unsophisticated, and it’s been fairly unsophisticated for a long time, just because, you know, the technology is difficult to do, and we haven’t really had the foundational ability to understand it in a way like a human understands it until fairly recently, and do it at Deep scale. So good analogy for this question is like, if you were having a we were having a conversation just a minute ago about the snow, you know, happening in Canada, and how cold it was and how much snow you got, and, you know, also around the fact that, like you had to shovel your driveway, you have a snow blower you were putting the snow. There’s a lot of different nuance to that conversation. I as a human, and most humans, are able to interpret all of that nuance and kind of positively negatively, understand that there’s a snow blower involved in that snow blower was used to remove the snow historically that conversation, you know, if it was just a blob of text, or if it were a web page, the the basic technology to understand it would have reduced it down to a category like snow or maybe winter, and that’s it, and that’s all the targeting that would have happened to that page. So our conversation, you know, gets transcribed. It gets put on a blog, or it gets put on a news site. The only thing that a machine could understand about it was, you know, snow and then potentially a keyword, tagged snow blower. And that’s all so we took a very different one. One of the reasons why you know that that makes it challenging for advertisers and also for publishers. If you’re the publisher of that content, you’re not able to help advertisers really understand the nuance to like, what are we talking about here? Because maybe an advertiser wants to sell snow blowers for that specific site. Maybe they’re looking to sell ski and since we were talking about removing snow from a driveway, probably not the best application to go sell skis on. What is helpful is to deeply understand all the nuance to like we were talking about a driveway. We were talking about removing snow from that driveway. So we invented, you know, a much better, more sophisticated way to scrape content, classify it according to all of the different, you know, nuances semantic understanding much more like a human would, and then embed all of those different, you know, semantic understandings into, you know, this, this, this file, and then we organize that in a way that makes it searchable and kind of understands all the relationships very quickly. And what that does is it helps advertisers, like if you know, I’m Honda selling snow blowers, which they make, arguably the best snow blower in the market, if they’re looking to reach people that are talking about snow removal from the driveway, they can very quickly see the list of all the different URLs across the internet, and they can build, you know, a deal ID, or they can build a targeting, contextual targeting segment to specifically pinpoint those very specific web pages. And that’s kind of how the technology works, and then also, also why it’s relevant to advertisers. Christian Klepp  06:21 Thanks so much for sharing that Brendan that definitely helps us give, you know, some perspective into, like, what your software does. And you know, just, I’m asking you this from, from somebody who probably has learned to write one or two lines of code, and that’s as far as my dev skills go. But like, how, how is your software different from like GEO (Generative Engine Optimization), or is there some kind of overlap? Brendan Norman – Classify  06:46 It’s fairly complementary. I mean, the problem that GEO, you know, is trying to solve, and we’ve got good friends, advisors, you know, like at Blue Fish AI and like, a really cool company, Andre, I worked with him at live rail. He was the co-founder back then, before we got acquired by Facebook, you know. And I think that the problem that they’re trying to solve is going back to that it was just stay on Honda snowblowers. They’re trying to help Honda understand how they’re represented inside of, inside of an LLM or inside of a chat bot. And what they also do is they help these companies restructure their pages for, you know, better representation inside of the other end of like a chatGPT or a cloud answer. So it is kind of SEO (Search Engine Optimization), but for the generative world where we sit on is kind of on a different side of that. It’s very complimentary, though, and we’re deeply understanding content at scale, and that’s helping, you know, the advertiser understand where to position their ad. We’re also just, you know, very quickly, moving into this new space of, traditionally, advertising technology is focused on a human going to a web page, reading that content, reading the article, watching a video, you know, whatever that content looks like, and then helping the right advertisers show up in a contextually relevant way, so that the human will click on that ad, and they’ll go to another web page, they’ll buy the thing, whatever somebody wants to sell. A very recent development, so back up a year or so, you know, chatGPT Claude when they’re out and their agents and their bots are scraping like going out to the web and they’re retrieving information. They’re doing it to train their models to make their models better at answering questions. But now, you know, fast forward to today. They’re actually spending more time just going to content and then using that content to answer a specific question. So like, what’s the best recipe for, you know, creating soft shell craps. It’ll query a couple different web pages. It’ll find that, it’ll retrieve that information and bring it back that that is not being monetized today. And there’s a really interesting thing that we’re, you know, we’re starting to work on, which is monetizing the attention of an agent. And, you know, it’s, there’s a lot to figure out, but it’s kind of like the early days of a web browser, and like early days of search, when humans would go, you know, to a search engine, they would pop in some keywords, or, like, right out of search, and then, you know, Google would look at their entire index of the web, which was an algorithm that was weighted based on the number of different contextual relevancy plus the number of connections between web pages. So a web page that I might have published in geocities.com that nobody else would link to, Christian Klepp  09:50 wow, GeoCities like… Brendan Norman – Classify  09:54 Throwing way back remember the days of like writing like HTML and you know, creating that, you know, looping in some type of image because nobody else had linked to that, like personalized page that you built, it would never get shown up. And, you know, the top 20 or 30 or probably even couple 1000, or maybe even 100,000 search results. So their algorithm was about contextual relevancy, plus the number of links that other pages that had to your page. And then they started to include advertising in that. So early days of ads in search were literally anything, you know, it’s any advertiser that wanted to advertise to you, and they were just kind of choosing the highest price, trying to figure out, you know, how do we make money? And then it evolved into much more contextually relevant ads and sponsored post or sponsored advertisements. So now you know, if you’re searching for, like, what’s the best, you know, LLM or chat bot, you’re probably going to see a sponsored ad from, you know, Claude and Perplexity and chatGPT. Now you’re also going to see the search results underneath those. What’s changing about that kind of rapidly is how we’re influencing because humans are spending less time going there and doing that, and also within Google, Gemini is also surfacing some AI summary quickly and kind of superseding that, creating a chatGPT experience inside of Google, which is a brilliant way to do it also. But a lot of human interaction with the web now is humans going to chatGPT going to cloud asking questions and kind of treating it like we used to treat search back in the day. So influencing that, influencing that agent, going out to the web and sitting in between. That is another really interesting way that you can help an advertiser tell that story, not necessarily to a human but to the agent who’s retrieving the information and then bringing it back to the human, Christian Klepp  11:56 Right, right, right? And if we’re talking about content, it’s, you know, doing it in such a way that the content shows up in the AI search. Brendan Norman – Classify  12:04 Exactly. Christian Klepp  12:05 Because everybody, everybody’s got those now, right, like Google or Bing, or whatever, they’ve got the, they’ve got the AI summary at the at the very top of the page, right when you, when you, when you key in something. Brendan Norman – Classify  12:17 Yeah. Christian Klepp  12:18 Okay, fantastic. I’m gonna move us on to the next question about because we’re on the topic of optimizing content. So what are some of the key pitfalls that like B2B Marketers and their content teams? What should they be mindful of, and what should they be doing instead? Brendan Norman – Classify  12:34 That would be actually a better question for some of the GEO companies and something like more SEO focused companies about how to specifically optimize like your content. It’s a great question. I haven’t spent as much time, you know, deeply thinking through that. And the problem that we’re trying to solve is more of, you know, at scale, what is the semantic understanding of like, how somebody has built their page and or construct the video, as opposed to advising them on what they should do? You know, to think about it in a way that’s either more engaging. I would pivot that question more to the Geo and SEO focused folks, yeah, but super high level. I mean realizing that now web has two primary users of traffic. There’s humans who are bouncing or reading a, you know, web page or watching a video. But there’s also agents. And now the scale is like, changing very, very quickly. So you know, in the next year, two years, everybody will have lots of agents, kind of doing things on the back end for them. And, you know, we believe that, you know, in the next what, 6,12,18,24 months, Agent traffic will surpass human traffic on the web. So realizing that there’s these kind of two layers that one, humans see a web page and nice pretty pictures, and, you know, they see the layout great, but also having a web page that’s optimized in HTML, markdown, JSON, in ways that agents consume that, and then also knowing the different types of agents. So the cool thing that we’re building right now, in addition to this content graph of all the content, which is effectively like a understanding all the context between the content. It’s a mouthful, an agent graph that helps to inform this is an agent coming to my site. So in a lot of ways, it’s very similar to the folks who over the last decade or so, have built these identity graphs or audience graphs, and they know that like you, Christian versus me, Brendan, they’ve got some profiling on us. They understand our search history, our retargeting, our purchase intent, a lot of things that they’re appending to like you as a specific profile or an IP address. The rapid evolution of all this is mapping out the land. Landscape of different agents, where they come from, and then the personalization of these agents, and basically applying a lot of the similar logic that we’ve used for identity graphs and for audience graphs towards agents to help understand, how do you modify the content on the back end that humans never see, so that when they’re retrieving information, interacting with the content they’re doing it, you’re presenting in a really thoughtful way that drives like the answers and the results that you want to Christian Klepp  15:33 right, right? No, absolutely, absolutely. And in our previous conversation, you talked a little bit about contextual versus audience targeting. So and I mean, I’ve asked you this back then, but do you think one is better than the other, or do you think that they can work together? Brendan Norman – Classify  15:50 They should absolutely work together. Christian Klepp  15:52 And why? Brendan Norman – Classify  15:54 The reason, the reason is, you know, knowing who you are is a very important piece to the puzzle. Like, and if you even take a step back, like, what’s the whole point of advertising? Like, the whole point of advertising is storytelling, so that a brand or a service or a company can help market their brand service to the right person they’re trying to sell them something. The cool thing about the internet is we all now have this, you know, basic shared awareness that, like, there are certain things that are paid for on the internet, certain types of content that are gated. I might buy a subscription to The Economist, you know, I pay Claude a certain amount of money, a lot to be able to use it, you know, a lot and chatGPT, and then a lot of the web is free. Facebook is free, Tiktok is free, Instagram is free, LinkedIn is free. But the economics, it’s very expensive to run these businesses, so they have to, you know, support it through advertising. Ideally, you know, there’s a couple of ways to think about it, and there’s one camp of people on the internet who think that advertising is a necessary evil or a last resort, you know, we just cram it in there and make some money. There’s another camper of folks who actually think that it can be additive to the experience. And one of the reasons why, you know, it’s kind of a meme, and you always hear people talking about, you know, I didn’t need this thing, but I saw an ad for it on Instagram, and just had to buy it because it was really cool. The reason why that exists is that their advertising is phenomenal, and the targeting and optimization is phenomenal. And why it’s phenomenal on the back end is it knows a lot about you know me, who I am, what I’m interested in, based on my history, what I’ve been engaging with, where I’m spending time, you know, what I’m looking at, but it also knows specifically when I’m looking at that thing, you know, it might have a framework of saying, Brendan, really, you know, likes these types of skis, you know, he’s interested in, You know, a couple other, couple other interesting products, but the best time to serve each one of those products might be different, and it’s different depending on what I’m looking at, what I’m thinking about in that exact moment. And to kind of align these, these different graphs, graphs of intent, contextual understanding, and then audience, you know, the best time to serve me an ad for a new pair of skis is when I’m reading an article about skiing or something about the mountains. You know, it’s not necessarily when I’m reading about the Warriors, because I’m not really thinking about skiing when I’m reading about basketball. So to your point, the most effective ads are when you’re combining those two sets. It’s great for the advertiser, because I’m much more likely to click on it and go check out the skis. It’s also giving me a better experience, because it feels more native to the overall content that I’m reading. And that’s why it’s so important. It shouldn’t be an afterthought or a necessary evil or a last resort. It should be something that is intentionally thought about the entire design, because it can, it can actually be a cool experience. Christian Klepp  19:06 Absolutely, absolutely. I mean, you know, you’re talking to somebody that started his career in the in the advertising industry, so, yeah, I’ve heard that one before, and what you’ve been describing in the past couple of minutes sounds to me a little bit like time of day marketing too, right? Because you’re you know, are you the had a guest on, like, a year ago who talked about this? Right? Is, is Brendan, the same guy at eight in the morning and one one in the afternoon and seven in the evening? Right? There’s different different times of the day, different mindset, different motivation, different reason for being on your device or looking at, looking at specific type of content, right? But it is interesting, right? And it’s interesting and sometimes a little bit scary, how, um, how quickly the algorithm picks, picks this stuff up, right? Like, for example, last year, I was researching a lot on Japan, because we went there, right? Family trip and whatnot. And. And that’s what I kept seeing on Instagram, right? Like, because I was looking up specific temples and whatnot and and today I got another push. Like, would you like to invest in a temple that’s an on island in the Sea of Japan, right? Brendan Norman – Classify  20:12 Like, sorry, did you invest? Christian Klepp  20:17 No, I did not. But it was just, it was just funny that I got that ad right, like, it’s, like, Okay, interesting, but like, it’s so like it not, was not on my radar at all, right, Brendan Norman – Classify  20:29 Yeah, Christian Klepp  20:29 Okay, great. From your experience, and you talked a little bit about it now in the past couple of minutes, but like, from your experience, how can leveraging AI agents improve efficiency and save marketing leaders time? Brendan Norman – Classify  20:47 Ooh, there’s a couple different ways to think about that. So you know, part of it is this new agentic framework for how existing tools, you know, advertising and marketing tools, will communicate with each other today. You know, it’s fairly complex. You know, if I wanted to go build a contextual targeting segment to help one of our brands that we work with find the right contextual or inventory to target contextually, I would have to work with them. We build a targeting segment. We would upload that into our one of our SSPs, we would build a deal ID, you know, they would connect it back. And there’s a lot of different pieces that happen along the way. And each one of those pieces you have to go to, you know, a UI, I’ve got to go to a dashboard, I’ve got to push that thing in. Some of it happens through an API, but a lot of it happens like going to a whole bunch of different web pages to make sure this stuff all works. So stuff all works. What’s cool about agents? And I’ll unpack this, and then I’ll go to the more of the consumer focus side too. But what’s really cool about agents using, you know, things like the ACP framework from the Agentic Advertising Org., the ARTF (Agentic Real Time Framework) from IAB Tech Lab is they’re kind of built on some of the existing frameworks that allow humans to use natural language to communicate between these different systems. So there’s still the back end pipes of API pushing data or pulling data from one system to another. But on top of that is more of an agentic framework that allows, you know, a human just to use some prompting, like in chatGPT, to make a request, you know, that talks to a back end system. So that’s one part of the agentic framework for like, you know, how to think about this through the lens of advertising and marketing. And then the other side is, you know, more of the consumer focused. There are so many interesting and very quickly growing tools you know, that you can start to plug in, into Cloud, into Cloud code, and to building things that just rapidly accelerate development of different products and your ability to analyze data quickly. I think in the next, you know, 6 to 12 months, we’re going to have a totally different landscape for how people are buying like trading media also, you know, one more final thought about all of this is that a lot of the sophisticated tooling and pipes that we have are only accessible towards the largest advertisers today. And I think that you’ll pretty quickly see a democratization of the ability for anybody to just buy programmatic ads, whether you’ve got a $20 a month budget or a $20 million a month budget. Now, the ability to similar types of tools to access the right content across the web will start to be available towards a lot more folks outside of the existing, you know, kind of ad tech ecosystem. Christian Klepp  23:55 And I might be stating the obvious when I say this here, but that’s a good thing, isn’t it, because, I mean, I, again, I came out of this industry, and I know that, like, you know, if you wanted to advertise in the New York Times, for example, right? Like, how expensive that would be, or, or anything that was print, right? And then they migrated all that to digital, and then it still wasn’t, it still wasn’t affordable. It was, it was cheaper than print, but still not like, exactly like, you know, yeah, I wonder, wonder if they’ll be worth the investment or not. And then now you have this, this push towards the democratization of all of this through AI and machine learning and, and I do think that you know, for all the the scare mongering that you know people are doing now with, with, oh, you know, all this stuff around AI, I do think that that part certainly will be advantageous to to B2B companies and to marketing in general. Brendan Norman – Classify  24:49 Great. I mean, yeah, optimistically, I think I’m excited about the entire landscape changing because it does a couple things. It allows for much more contextually relevant ads. I know right now there’s only, let’s call it to the magnitude of like, 1000s, 10s of 1000s, maybe hundreds of 1000s, of campaigns and or brands that are able to use these pipes to reach the largest publishers. And all of a sudden you expand that out. You know, I think between meta and Google, they each have somewhere between 15 to 20 million unique advertisers on their platforms, and what that means is, you get really hyper specific ads. And it also means that, like, I might get a local ad for my hometown here for some restaurant that’s launching a promotion that I might only get here, and I might only get to your point, maybe not in the morning, but I’ll get in the evening. There’s a lot of different data sets around my identity, you know, the psychographic profile, contextual understanding of what I’m reading at that exact moment. And what it does a lot of things. It helps smaller brands get more traction, get more visibility. It also just helps improve the publisher experience, and like publishers, make more money. And then the user who’s consuming that content, reading the web page, watching a video, also has just a better experience. And then the other layer of that will continue to just go on, this narrative of agentic, tension, but the agents who are reading that content, watching that video for an end user. On the other side, are also able to interact with advertising content that’s very contextually relevant to the content that they’re consuming again, and it’s good for the storytelling of the advertiser and good for monetization of that publisher too. Christian Klepp  26:38 Absolutely, absolutely. Okay. So how can high fidelity curation? This is the next question, right? How can high fidelity curation make B2B companies more sustainable? And if you can just provide an example, Brendan Norman – Classify  26:54 Curations like, it’s such an interesting term, but you know, effectively, it’s just, it’s helping to use the word and the definition, the definition in the word, curate the right inventory to run an ad campaign on, and curate the right inventory and audiences. So it’s a really important part of the business. I think it involves a couple things. It involves front end targeting, of knowing who’s the back to that question, who’s the audience, and then what’s the right content, and then it also involves a lot of ongoing optimization. And I’ll say that there are some some interesting companies that that are really good at curation, who are building out the right automatic tools to think about more real time optimization, and it’s something that the really big social media companies do very well, like they’re constantly looking at lots and lots of signals when they’re running a campaign, and they’re looking at inventory and stitching together based on the signals that they’re acquiring around. Why certain campaigns do well, to your point, you know, when we’re testing that, selling that pair of skis to Christian, we’re testing a lot of things. We’re testing what he’s reading, you know, we’re testing maybe time of day. We’re testing, you know, where he is. There’s a lot of different elements on the back end that they will ingest and understand and then refeed into that targeting and optimization algorithm. And I think that that is one of the cool things that AI to use, like the air quotes, AI will help enable the processing of a lot of this data to just be a lot faster, be a lot more cost effective, and a lot of these systems that you know previously have been not accessible to the ad tech ecosystem, just because we we operate at such a crazy scale of 10s, hundreds of billions of requests and impressions and transactions that happen every single day. It’s very cost expensive if you’re processing all of that data and all these different signals, with the advancement of how the model cost is getting a lot less expensive, very quickly, not just from an LLM perspective, but then the foundational layers and the infrastructure layers, like we’re doing contextual intelligence as an infrastructure layer. There are inference layers that all kind of sit underneath the LLM and help inform an LLM understanding of that content. As those costs start to decrease, you’ll start to see a lot better performance from curation, just because, you know, it’s not as cost prohibitive, and we’ll be able to find that balance in terms of economics. Christian Klepp  29:45 Yeah, yeah, you hit the nail on the head there. Because, you know, I was just writing this down. You said faster, more cost effective and in my head, and you said it, it’s like, and at scale, like, you can scale this stuff faster, like, when I when I think back, like, years ago, when we, when we launched an ad campaign, and, you know, just the amount of effort, like, for the print and then the cost into, you know, the media placements and all of that and and just alone for like, one city, just just the amount of investment that was involved in all of that, right? Just think, thinking about that. It’s like, gosh, and then now you can scale all of that, like, even faster, because it’s because it’s digital, right? So it’s just such an incredible evolution. Like, I’m getting just as excited as you are man, I’m like, for this next question. Brendan, I’m not sure if you’re the type that likes to do this, but I need you to look into the crystal ball for a second here, right? Because we’re looking at, like, stuff that is, you know, the events that are yet to come, if I’m gonna that, make it sound a little bit suspenseful, but, um, the future of digital advertising, like, how do you think that could become less fragmented and more optimized with everything that we’ve talked about in this conversation. Brendan Norman – Classify  31:04 Yeah, I caution against, like, having any, any specific predictions, and more of, like, a framework for, I mean, for me, at least, yeah, more of a framework for how I think overall, jobs will change. I think that people will have to spend a lot less time doing a lot of the manual, rote tasks that they’re doing today. And, you know, kind of in parallel with what we’re seeing in terms of vibe coding and people’s ability to build product really quickly, design new web pages really quickly. Like, get ship things out quickly. I think a lot of the the infrastructure layer tools, or just call them like, you know, the like, chatGPT style, cloud-based tools, LLMs, we’ll see a lot deeper integration into existing advertising product. And what that does is it helps democratize the whole ecosystem. So I think it frees up people’s time to not have to do a lot of the basic administrative, reporting, manual, campaign, optimization type stuff, and it will help service a lot better insights. Ultimately, I think the industry grows, and I think it scales even faster. And, you know, cautiously, optimistically, I think that we, we will have back to building on the curation piece, and, you know, the advertiser, outcomes piece, publisher, monetization piece, user experience piece, I think that all those things will increase, and I I’m hopeful that with the integration of just better technology, embedding AI into a lot of these systems, it’s going to help steer us towards having better experiences across any type of Publisher content. I think that the advertisers will see better outcomes. I think that the people that are in this industry will get to think more creatively about how they’re, you know, building better creative storytelling, better reaching the right people with those stories. And my hope is that it just continues to expedite and grow the overall industry. Brendan Norman – Classify  33:17 That will be my hope as well. All right, get up on your soapbox here for a little bit. What is a status quo in your area of expertise? So anything that we’ve talked about now in this conversation, what’s the status quo that you passionately disagree with and why? Oh, you must have a ton. Brendan Norman – Classify  33:44 I definitely do. I mean, you know, Christian Klepp  33:48 just name one, just one, Brendan Norman – Classify  33:50 Like in any industry, you know, there’s always, there’s always the early adopters, you know, there’s always the kind of like the middle stack, you know, there’s always, like, the laggards. There’s definitely, you know, a smaller, but growing quickly, minority of folks who are really leaning into, you know, I’ll just call it AI, and then the agentic web, and there’s a lot of discussion right now in ad tech around like, what that means? I’m still hearing that. There’s a lot of skeptics who are kind of making fun of it, or, you know, trash talking about different protocols. Fine, like those are the folks that are absolutely going to get left behind. And I think a lot of those folks on the soapbox in the next 6 to 12 months will look back at, you know what they said, and we’ll all kind of say that didn’t age well, and you were not building this stuff. You weren’t fingers on keyboard or hands on keyboard. Vibe marketing, vibe targeting, building stuff like shipping new product and testing and iterating. What I what I don’t think, is that the really big platforms are just able to be super nimble and adapt to a lot of these new frameworks quickly, totally like the pipes will continue to stay there. I think that there will be startups that are more nimble, that can build and ship things, you know, proof of concepts, prototypes, get things out, learn from them, fail, iterate, and then start to scale meaningful businesses without having to rely on a lot of the existing infrastructure that exists today. Do I think the trade desk is, you know, going anywhere? No, do I think that they will, like, continue to be a valuable piece in this ecosystem, absolutely. And I think that they will ship things. I think that they’ll enable the industry like to build on top of of the pipes that they’ve already built. And at the same time, I think a lot of that rapid advancement will come from startups who are kind of proving that, like they don’t necessarily need the existing pipes and channels to be able to at the end of the day, you know, this whole ecosystem is about helping an advertiser surface their ad against the right content for a human or for an agent. And there have been a lot of folks kind of sitting in the middle for that space for a long time. One of my favorite stats, soapboxy stats, is that if an advertiser puts $1 in to the open web with a programmatic web, 35 cents comes out to a publisher, so 65 cents is being taken by some combination of middlemen, you know, who are collecting a margin for, you know, different services, also some version of fraud. There’s a lot of things that happen in between that and what I’m again, cautiously optimistic about, you know, like the big picture, AI, of facilitating, is the ability to reduce that margin so that, you know, advertiser puts $1 in. A lot more of that dollar comes out towards the publisher, I think big social media, you know, it’s around 70 cents comes out. So they take, you know, somewhere between 25 to 30 cents, which is kind of the value exchange of providing the services, all the targeting, all the technology that goes into supporting that, you know, as a more fair exchange. So I think what a lot of the folks on more of the startup on more of like the front end of the frontier tech in the space we’re excited about is getting to reduce a lot of that inefficiency and a lot of that margin in the middle, and helping more of that dollar show up towards the publisher where it should. Christian Klepp  37:34 Boom and there you have it. Man Brendan, this has been awesome conversation, so thanks again for your time, please. Quick intro to yourself and how folks out there can get in touch with you. Brendan Norman – Classify  37:45 Yeah. Brendan Norman, CEO co-founder at Classify, please. You know, hit me up on LinkedIn or shoot me an email. Check out our website, which is, you know, www.tryclassify.com. I’m happy to connect. You know, if you have questions about advertising from a publisher side, from an advertiser side. Love to chat about it. Christian Klepp  38:06 Sounds good. Sounds good once again. Brendan, thanks for your time. Take care, stay safe and talk to you soon. Brendan Norman – Classify  38:13 Cool. Thanks, Christian. Christian Klepp  38:14 All right. Bye for now.

Supermanagers
AI Launches a Business in 40 Minutes with Samruddhi Mokal of Pace Labz

Supermanagers

Play Episode Listen Later Feb 19, 2026 36:55


This episode is a full “build a business in 40 minutes” demo showing how AI collapses what used to take teams (creative production + sales ops + support) into a handful of prompts. Samruddhi generates a high-production video ad in Google AI Studio using a JSON-style prompt framework, then spins up a working voice sales/support agent in Vapi via Claude Desktop + MCP—so the agent is created from a single prompt instead of clicking through the UI. The conversation also covers why “interfaces matter less” in an agent-first world, why workflow tools (like n8n) still have a role, and how memory layers like Mem0 unify context across channels (email/WhatsApp/etc.) so you can take actions without hunting.Timestamps0:00 — “Single person billion-dollar company” belief + AI driving 10x execution speed1:57 — Plan: create the ad in Google AI Studio (Veo 3.1) + build a voice agent using Vapi MCP via Claude Desktop2:42 — Smithery: marketplace for MCP servers3:39 — MCP for non-technical listeners: “like an API, but agents use it to talk to external services”4:22 — Inside Vapi MCP: tool list = APIs the agent can choose from5:06 — AI Studio setup: video generation playground + select Veo 3.16:16 — JSON prompting framework begins (structure → production-level output)6:28 — Keys: description, style, camera, lighting, environment, elements, motion, ending, text9:05 — Prompts/scripts can be AI-generated (humans provide guardrails)10:41 — Need an API key to generate videos in AI Studio10:54 — Ad review: strong realism; last segment looks AI-ish → iterate prompt13:05 — Install Vapi MCP via npx from Smithery + add Vapi API key13:46 — Claude Desktop: Vapi MCP appears under Connectors/Tools (not Claude web)14:05 — Prompt the agent build: “Fresh Pause” + role, tasks, FAQs, call flows18:23 — Testing: “Talk to assistant” starts a live call simulation19:20 — Deployment: assign a phone number; Vapi provides free/test numbers (up to a limit)21:57 — Mem0 / Supermemory: memory layer across apps/agents to keep context24:13 — Why memory layers help: fewer MCPs → less slowdown/hallucination; no need to specify where to search26:36 — MCPs + slide decks: mention of Gamma MCP via Claude27:34 — Future of n8n/Zapier: they persist, but prompting increasingly generates workflows31:38 — Prediction market trading algos (Kalshi/Polymarket) + AI improves speed/decision-making36:02 — Closing vision: help orgs 10x execution speed, especially non-technical leaders (40+) with domain expertiseTools & technologies mentionedGoogle AI Studio (Video Generation Playground) — Generate an 8-second video ad.Veo 3.1 — Google video model used for “production-level” output.JSON Prompting Framework — Structured key/value prompts for story, visuals, camera, lighting, motion, ending frame.Claude Desktop — Runs connectors/tools (including MCP servers).MCP (Model Context Protocol) — Lets agents call external services/tools based on intent.Smithery — Directory/marketplace for MCP servers.Vapi — Voice agent platform; create agents + assign phone numbers.Vapi MCP Server — Enables Claude to operate Vapi via prompts (create/list/configure).npx — Installs MCP server quickly from the terminal.API Keys — Required for AI Studio generation + Vapi authentication.Mem0 / Supermemory — Cross-channel memory layer to retrieve context automatically.Knowledge Graph — Underlying structure for semantic retrieval across interactions.Glean — Referenced as a comparison point for search/context retrieval.Gamma MCP — Example of generating slide decks via MCP.n8n / Zapier — Workflow automation tools discussed in an MCP-first future.OpenClaw — Mentioned as agent tooling that can help with steps like obtaining API keys.Kalshi / Polymarket — Prediction markets referenced in the trading/AI speed discussion.Subscribe at⁠ thisnewway.com⁠ to get the step-by-step playbooks, tools, and workflows.

Startup Inside Stories
Verifactu 2027 + Agentes IA: nace la “mega gestoría” (¿y muere el backoffice?)

Startup Inside Stories

Play Episode Listen Later Feb 19, 2026 74:19


Este episodio cuenta con el apoyo de Softwariza3Softwariza3 acompaña a las empresas en su día a día, ayudándolas a simplificar la gestión, automatizar tareas y ahorrar tiempo, con soluciones digitales siempre adaptadas a la normativa vigente.Implantación profesional, soporte experto y una forma honesta de hacer las cosas.Conoce más sobre Softwariza3: https://softwariza3.es/podcast-itnig/En esta tertulia nos sentamos con Eloy Montaña (CEO de Softwariza3 y Clavei) para hablar de cómo la regulación (Ley Antifraude / Verifactu) puede reventar, para bien o para mal, la operativa de miles de empresas: facturación “casi en tiempo real”, trazabilidad tipo “cadena” y el fin de los trucos de las facturas intermedias. También comentamos el coste oculto de los retrasos y cómo eso frena producto e innovación justo cuando el mercado va a mil. Luego entramos a lo que de verdad viene fuerte en 2026–2027: IA en asesorías y backoffice.Desde la idea de la “mega gestoría” (consolidación + procesos + IA) hasta por qué hay partes que van a caer antes que otras (y por qué nóminas en España es tan complejo). Y por el camino: cómo modernizas un ERP de décadas para llevarlo a cloud, y por qué el futuro de SaaS apunta más a cobrar por valor que por “asientos”.Cerramos con la parte incómoda: agentes que hacen cosas de verdad… y los riesgos reales. Hablamos de controles, seguridad, prompt injection y qué pasa cuando un agente “lee” internet y alguien intenta colarle instrucciones maliciosas (sí, incluyendo pagos). Y cómo la regulación europea puede ser a la vez freno y ventaja en esta carrera.

Horror Movie Talk
Iron Lung Review with Gina Teeters

Horror Movie Talk

Play Episode Listen Later Feb 18, 2026 64:14


Synopsis Iron Lung is about Markiplier in a submarine. That's the one thing that I can confidently say. there are a lot of other details, but none of them seem as salient. Sure all of the galaxy's, universe’s(?) suns have gone out, and there are factions of the remaining humans fighting for resources but that is really window dressing on Markiplier being in a submarine. Also, he's exploring an ocean of blood on some moon. Now you may ask me Bryce, how could they possibly functions as a society without a sun? How are they making new oxygen? Doesn't blood congeal or separate or something? how did they fine this moon without light? shut up nerd, Markilpier’s in a sub, now sit back and be scared. Review or Iron Lung before i go further, let me answer the main question first, yes this is better than Shelby Oaks. I will say that it's not as bad as I expected, but i wasn't blow. away either. The movie is basically all shot in one room, so that limitation let all the energy go into the story and the performance. It dod hold my attention for the most part, but it didn't deserve a two hour runtime. They could have edited out 40 mins and lose almost nothing. Markipliers performance was better than expected, but definitely leaned heavily into the melodramatic verging on overacting. The production design was well done except for the fact that it faithfully replicated the cartoonish UI of the video game, which I felt was a lazy choice. As far as delivering on suspense it did well. the movie was atmospheric and moody throughout. The ending goes full cosmic horror and feels like a good payoff. Score 4/10

Analytic Dreamz: Notorious Mass Effect
"OVERWATCH SEASON 1 (2026) - SALES & REVIEW ROUND-UP"

Analytic Dreamz: Notorious Mass Effect

Play Episode Listen Later Feb 18, 2026 18:18 Transcription Available


Linktree: ⁠⁠https://linktr.ee/Analytic⁠⁠Join The Normandy For Additional Bonus Audio And Visual Content For All Things Nme+! Join Here: ⁠⁠https://ow.ly/msoH50WCu0K⁠⁠In this segment of Notorious Mass Effect, Analytic Dreamz delivers a concise analytical breakdown of Overwatch Season 1 (2026), Blizzard Entertainment's relaunch dropping "Overwatch 2" for a unified title with annual Season-1 cycles. Available on PC (Battle.net, Steam), PS4/5, Xbox One/Series X|S, and Nintendo Switch—with Switch 2 upgrade planned—it launched February 10, 2026, at 11 a.m. PST (2 p.m. EST, 7 p.m. GMT, etc.).The free-to-play title exploded with a Steam peak of 165,651 concurrent players—over 2x the prior 75,608 record—averaging 30,000+ post-launch, ranking #17 on Newzoo (Jan 2026), surpassing Call of Duty, Battlefield 6, and Marvel Rivals.Steam reviews shifted from "Overwhelmingly Negative" (27% positive) toward "Mixed," praising hero influx, content refresh, and resurgence amid minor UI/balance bugs (e.g., Domina laser) fixed by Feb 13.Core 5v5 PvP features payload/control objectives in the "Reign of Talon" year-long arc (6 seasons). Season 1 adds Conquest meta-event (5 weeks: Overwatch vs. Talon factions, 75+ loot boxes, exclusive Echo skins); 5 new heroes (Tank: Domina—photon beam, shield regen; Damage: Emre—burst rifle, Emre—fire fans, burn amp; Support: Mizuki—ricochet blade, Jetpack Cat—permanent flight, biotic projectiles); sub-role passives (e.g., Tank Bruiser crit reduction); Stadium 6v6 mode; 3D UI/lobby; Mythics (Mercy Celestial, Juno Star Shooter, Mei); Hello Kitty crossover (Feb 10–23).Analytic Dreamz unpacks competitive meta disruption from 5-hero drop (10 planned for 2026), rapid dev pipeline (4–5 months/hero), story integration (map damage, cinematics), and roadmap (Season 2: 10th anniversary; Season 3: Japan Night map). This ecosystem reset boosts engagement, narrative immersion, and counters rivals via content velocity.Tune in for strategic takeaways on player retention and genre dominance.Support this podcast at — https://redcircle.com/analytic-dreamz-notorious-mass-effect/exclusive-contentPrivacy & Opt-Out: https://redcircle.com/privacy

Open Source Startup Podcast
E192: Creating Browser Use, Navigating Hyper Growth & Building in the Competitive Browser Automation Space

Open Source Startup Podcast

Play Episode Listen Later Feb 18, 2026 41:13


In our latest Open Source Startup Podcast episode, co-hosts Robby and Tim talk with Magnus Müller, the Co-Founder & CEO of Browser Use - the platform that makes web agents come to life. Their open source, browser-use, has almost 80K stars on GitHub and is widely adopted. This episode dives into the unexpected rise of an open-source browser automation project that took off during Y Combinator - while many similar projects before and after it never gained traction. The founder reflects on why: delivering a “magical moment” fast. Early demos showing AI controlling a browser, inspired by trends like OpenAI's Operator, and immediately clicked with people. What began as a developer-only Python library evolved into a hosted product as non-technical users - from sales teams to startups - wanted access. Along the way, the team leaned into controversial but compelling use cases, like AI applying for jobs on your behalf, which sparked conversation and accelerated growth. The core challenge they focused on solving was reliability: unlike deterministic automation scripts, AI agents can behave unpredictably, making trust and repeatability central problems to overcome.The long-term vision goes beyond UI automation toward agents that can skip the browser entirely and interact directly with website servers through structured actions. But the conversation isn't just about infrastructure. The founder admits that early growth came mostly from building and talking to users, while recent months have been dedicated to storytelling and marketing rather than coding. A personal through-line emerges as well: learning to replace defensiveness with curiosity - questioning assumptions, staying open to feedback, and continuously refining both the technology and the narrative around it.

Product for Product Management
EP 148 - AI Tools: V0, Replit and more with Adir Traitel

Product for Product Management

Play Episode Listen Later Feb 18, 2026 59:48


We're keeping the AI Tools series rolling with Adir Traitel, entrepreneur, product leader, and early adopter of just about every vibe coding tool out there. Adir joins Matt and Moshe to share hard‑won lessons from building real apps with v0, Bolt, Replit, Figma Make, and more, all while running his own startup and consulting on product builds across industries.From his early days in project management and mobile app startups, through work with companies like Moovit and across FinTech, AgTech, and credit scoring, Adir has consistently been the “try it first” person for new build tools. In this episode, he breaks down what these platforms actually do well, where they fall short, and how product managers can use them responsibly for experiments, prototypes, and beyond.Join Matt, Moshe, and Adir as they explore:Adir's journey from PM and founder to heavy user of vibe coding tools in his current startupHis 3-layer view of the ecosystem: AI dev assistants (Cursor, Antigravity, Claude Code), front-end mockup tools (v0, Figma Make), and full‑product builders (Lovable, Base44, Bolt, Replit)V0: where it shines for quickly building functional UIs (like his electricity consumption app) and where it starts to crackLovable: great for sites and simple flows, but not ideal for complex SaaS or CRM‑like productsBolt: fun and fast for concepts, but why it never got him close to productionReplit: stronger agents and capabilities, but weaker UI output and surprising backend defaults that can get very expensive very quicklyFigma Make and Google Stitch: when design quality trumps everything else, especially for SaaS interfacesThe real costs of vibe coding: AI token spend, hosting/pricing traps, and why production economics matter as much as build speedWhat his “dream product” would look like, including multi‑agent environments, better security/privacy, and built‑in QA and CI/CDHow all this is reshaping the product management role, and why curiosity and tool fluency are becoming must‑have skillsAnd much more!Want to connect with Adir or learn more?LinkedIn: https://www.linkedin.com/in/adirtraitel/ Website: https://adirtraitel.com/You can also connect with us and find more episodes:Product for Product Podcast: http://linkedin.com/company/product-for-product-podcastMatt Green: https://www.linkedin.com/in/mattgreenproduct/Moshe Mikanovsky: http://www.linkedin.com/in/mikanovskyNote: Any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way.Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️

SaaS Sessions
S10E2 - From Harvard Law to SaaS CEO: Decoding the "Paperless" Future ft Shashank Bijapur, Spotdraft

SaaS Sessions

Play Episode Listen Later Feb 17, 2026 31:00


Shashank Bijapur, co-founder and CEO of Spotdraft, explores the transition from the archaic, manual world of legal practice to the high-velocity domain of B2B SaaS. In this episode, we strip away the jargon surrounding "LegalTech" to reveal how Spotdraft powers the invisible infrastructure of global commerce - from airport leases to ride-sharing agreements. Shashank provides a masterclass on finding product-market fit in the mid-market, the reality of AI's role in high-stakes legal workflows, and the strategic pivot from technical perfection to market-driven iteration.Key Takeaways1. The "Aha Moment": Identifying Stagnation in Essential Industries- Digital Lag: While photography (Adobe) and accounting (Intuit) underwent digital revolutions decades ago, legal innovation peaked in 1993 with Microsoft Word's "Track Changes."- The Opportunity Gap: Identifying ubiquitous, paper-heavy processes that remain manual despite technological advancements is the strongest signal for a SaaS disruption.- Democratic Software: The goal isn't just to replace a lawyer; it's to turn complex legal processes into software that is as accessible and intuitive as a consumer app.2. GTM Strategy: The Power of Mid-Market Focus- Avoid the "Gambler's Fallacy": Shashank emphasizes the importance of trashing unusable early products rather than doubling down on a failing idea.- Homogeneity Matters: The US is the primary target for Indian SaaS due to its massive, homogeneous market, which allows for a repeatable ecosystem and faster flywheels.- The Mid-Market Sweet Spot: Avoiding the high-churn "small business" trap and the "unobtainable enterprise" early on leads to a focused GTM where legal teams (the true buyer persona) have decision-making power.3. The Founder's Dilemma: Accuracy vs. Speed- Legal Training vs. Startup Reality: Lawyers are trained for 100% accuracy; founders must embrace "fail fast." Overcoming the urge to pursue a "perfect product" is essential to gathering user feedback.- Technical Maturity: In 2017, the promise of AI exceeded the technology's capability. Spotdraft pivoted to building robust workflows first, capturing the data needed to make today's LLM integrations effective.- The Talent Moat: When a founder lacks specific functional knowledge (like GTM or engineering), the solution is "talent density"—hiring highly motivated experts who believe in the mission.4. The Future of AI in High-Stakes Legal- The End of "Form Filling": UI is shifting from manual data entry to conversational interfaces where users describe an outcome, and the AI configures the workflow.- Context is King: General LLMs lack company-specific context. AI's value in SaaS comes from mapping global laws against a company's specific historical data and standards.- Humans in the Loop: AI will handle "grunt work" and pattern recognition, but $1M+ deals will still require a human handshake and strategic negotiation for at least the next decade.About Spotdraft:Spotdraft is an AI-driven, end-to-end contract automation platform designed to clear the "madness from quote to cash." It helps businesses of all sizes—from startups to giants like Uber and Airbnb—create, manage, and analyze contracts seamlessly.Chapters:00:10 - Introduction00:50 - Journey from Lawyer to SaaS CEO03:34 - The "Aha Moment" for LegalTech07:09 - Spotdraft's Hidden Role in Everyday Life11:34 - GTM Strategy: Building from India for the US18:24 - Balancing Legal Risk with Founder Speed22:56 - How LLMs are Changing Legal Workflows30:22 - Lightning Round: Lessons Learned & AI ToolsVisit our website - https://saassessions.com/Connect with me on LinkedIn - https://www.linkedin.com/in/sunilneurgaonkar/

MLOps.community
Rethinking Notebooks Powered by AI

MLOps.community

Play Episode Listen Later Feb 13, 2026 26:13


Vincent Warmerdam is a Founding Engineer at marimo, working on reinventing Python notebooks as reactive, reproducible, interactive, and Git-friendly environments for data workflows and AI prototyping. He helps build the core marimo notebook platform, pushing its reactive execution model, UI interactivity, and integration with modern development and AI tooling so that notebooks behave like dependable, shareable programs and apps rather than error-prone scratchpads.Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletterMLOps GPU Guide: https://go.mlops.community/gpuguide// AbstractVincent Warmerdam joins Demetrios fresh off marimo's acquisition by Weights & Biases—and makes a bold claim: notebooks as we know them are outdated.They talk Molab (GPU-backed, cloud-hosted notebooks), LLMs that don't just chat but actually fix your SQL and debug your code, and why most data folks are consuming tools instead of experimenting. Vincent argues we should stop treating notebooks like static scratchpads and start treating them like dynamic apps powered by AI.It's a conversation about rethinking workflows, reclaiming creativity, and not outsourcing your brain to the model.// BioVincent is a senior data professional who worked as an engineer, researcher, team lead, and educator in the past. You might know him from tech talks with an attempt to defend common sense over hype in the data space. He is especially interested in understanding algorithmic systems so that one may prevent failure. As such, he has always had a preference to keep calm and check the dataset before flowing tonnes of tensors. He currently works at marimo, where he spends his time rethinking everything related to Python notebooks.// Related LinksWebsite: https://marimo.io/Coding Agent Conference: https://luma.com/codingagentsHyperbolic GPU Cloud: app.hyperbolic.ai~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]MLOps GPU Guide: https://go.mlops.community/gpuguideConnect with Demetrios on LinkedIn: /dpbrinkmConnect with Vincent on LinkedIn: /vincentwarmerdam/Timestamps:[00:00] Context in Notebooks[00:24] Acquisition and Team Continuity[04:43] Coding Agent Conference Announcement![05:56] Hyperbolic GPU Cloud Ad[06:54] marimo and W&B Synergies[09:31] marimo Cloud Code Support[12:59] Hardest Code to Generate[16:22] Trough of Disillusionment[20:38] Agent Interaction in Notebooks[25:41] Wrap up

a16z
Anish Acharya: Is SaaS Dead in a World of AI?

a16z

Play Episode Listen Later Feb 12, 2026 81:34


In this episode from 20VC, Harry Stebbings talks with Anish Acharya, general partner at a16z, about the future of SaaS in an AI world. Anish argues that software is completely oversold and that the general story about vibe coding everything is flat wrong. They discuss why SaaS switching costs are actually going down thanks to coding agents, where startups versus incumbents will win, and whether the apps layer or foundation models will capture more value. They also cover agent overhype, the changing UI paradigm, what defensibility looks like now, and why boring wins versus weird wins in this product cycle. Resources:Follow Anish Acharya on X:  https://twitter.com/illscienceFollow Harry Stebbings on X:  https://twitter.com/HarryStebbings Stay Updated:If you enjoyed this episode, be sure to like, subscribe, and share with your friends!Find a16z on X: https://twitter.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zListen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYXListen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures. Stay Updated:Find a16z on XFind a16z on LinkedInListen to the a16z Show on SpotifyListen to the a16z Show on Apple PodcastsFollow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply togetherShow Notes:* Gemma 3 Paper* Gemma 3* Gemini 2.5 Report* Jeff Dean's “Software Engineering Advice fromBuilding Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)* Latency Numbers Every Programmer Should Know by Jeff Dean* The Jeff Dean Facts* Jeff Dean Google Bio* Jeff Dean on “Important AI Trends” @Stanford AI Club* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)—Jeff Dean* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555* X: https://x.com/jeffdeanGoogle* https://google.com* https://deepmind.googleFull Video EpisodeTimestamps00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models00:01:31 — Frontier models vs Flash models + role of distillation00:03:52 — History of distillation and its original motivation00:05:09 — Distillation's role in modern model scaling00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources00:07:46 — Flash model economics & wide deployment00:08:10 — Latency importance for complex tasks00:09:19 — Saturation of some tasks and future frontier tasks00:11:26 — On benchmarks, public vs internal00:12:53 — Example long-context benchmarks & limitations00:15:01 — Long-context goals: attending to trillions of tokens00:16:26 — Realistic use cases beyond pure language00:18:04 — Multimodal reasoning and non-text modalities00:19:05 — Importance of vision & motion modalities00:20:11 — Video understanding example (extracting structured info)00:20:47 — Search ranking analogy for LLM retrieval00:23:08 — LLM representations vs keyword search00:24:06 — Early Google search evolution & in-memory index00:26:47 — Design principles for scalable systems00:28:55 — Real-time index updates & recrawl strategies00:30:06 — Classic “Latency numbers every programmer should know”00:32:09 — Cost of memory vs compute and energy emphasis00:34:33 — TPUs & hardware trade-offs for serving models00:35:57 — TPU design decisions & co-design with ML00:38:06 — Adapting model architecture to hardware00:39:50 — Alternatives: energy-based models, speculative decoding00:42:21 — Open research directions: complex workflows, RL00:44:56 — Non-verifiable RL domains & model evaluation00:46:13 — Transition away from symbolic systems toward unified LLMs00:47:59 — Unified models vs specialized ones00:50:38 — Knowledge vs reasoning & retrieval + reasoning00:52:24 — Vertical model specialization & modules00:55:21 — Token count considerations for vertical domains00:56:09 — Low resource languages & contextual learning00:59:22 — Origins: Dean's early neural network work01:10:07 — AI for coding & human–model interaction styles01:15:52 — Importance of crisp specification for coding agents01:19:23 — Prediction: personalized models & state retrieval01:22:36 — Token-per-second targets (10k+) and reasoning throughput01:23:20 — Episode conclusion and thanksTranscriptAlessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this slittily advanced.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make UNOS able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google, we use the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that. One or the other is useful. They're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Jeffrey came up with the solution in 2014.Jeff Dean [00:03:28]: Don't forget, L'Oreal Vinyls as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today we're instead of having an ensemble of 50 models. We're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed. Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between. Chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the proto Flash distillation, kind of like one generation delayed? I almost think about almost like the capability as a. In certain tasks, like the pro model today is a saturated, some sort of task. So next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired in architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say that needed single needle and a half. Hey, stack benchmark is really saturated for at least context links up to 1, 2 and K or something. Don't actually have, you know, much larger than 1, 2 and 8 K these days or two or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where. Yeah. You know, putting a thousand pages of text or putting, you know, multiple hour long videos and the context and then actually being able to make use of that as useful. Try to, to explore the über graduation are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a trillion tokens, let alone, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You would have attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to the form for a single video, but across many videos, you know, on a personal Gemini level, you could attend to all of your personal state with your permission. So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are? What when the date is when they happened? And a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn video into sequel like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right? Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. With a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus. Other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model is the thing that looks. So the 117 things that might be your most capable model. So I think it has to, it's going to be some system like that, that is really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was. Like he was like basically immediately inside of Google search and that improves results a lot, right? Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to this reminds me of one of your classics, which I have to bring up, which is latency numbers. Every programmer should know, uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache mistake? How long does branch mispredict take? How long does a reference domain memory take? How long does it take to send, you know, a packet from the U S to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumb nailing or something of the result page, you know, how, what I do that I could pre-compute the image thumbnails. I could like. Try to thumbnail them on the fly from the larger images. What would that do? How much dis bandwidth than I need? How many des seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM or HBM from the accelerator. Attached memory or DRAM or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picodule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the off chip, but on the other side of the same chip can be, you know, a thousand picodules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, the parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picodules. So you better make use of that, that thing that you moved many, many times with. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picodules in order to do your one picodule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously NVIDIA has caused a lot of waves with betting very hard on SRAM with Grok. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to. Uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost. Uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but as if you do that and it all fits in. In SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it? Is it worth doing an hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML and in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict two to six years out where, what ML computations will people want to run two to six years out in a very fast changing field. And so having people with interest. Interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip. Yeah. Design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of carefulness. Uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you can, might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that I'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. Uh, effectively, that would, that would really make the models improve quite a lot. I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part. That you can score or what are like, yeah, yeah. How, how would you model that, that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, is this things, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes I can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we. the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, Ellen judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those. And seeing which ones actually make a difference is sort of what the whole research aspect of this field is, is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM, eight K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can, and now you're doing IMO and Erdos problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, in a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers as it turns out. Um, uh, just to draw a bit on the IMO goal. Um, I'm still not over the fact that a year ago we had alpha proof and alpha geometry and all those things. And then this year we were like, screw that we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this, this question about. Like the merger of like symbolic systems and like, and, and LMS, uh, was a very much core belief. And then somewhere along the line, people would just said, Nope, we'll just all do it in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural net, like in some way of lots of different neurons. And activation patterns firing when we see certain things and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back now that, that approach for solving the problem doesn't seem like it's going to work. I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural net based models. So it never made sense to me to have like completely separate. Uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it's maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think like that IMO with, you know, translating to lean and using lean and then the next year and also a specialized geometry model. And then this year switching to a single unified model. That is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good because it shows you that the capabilities of that general model have improved dramatically and, and now you don't need the specialized model. This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for lots of different, each different problem, right? I have, I want to recognize street signs and something. So I train a street sign. Recognition recognition model, or I want to, you know, decode speech recognition. I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like one of my, uh, so I interviewed ETA who was on, who was on that team. Uh, and he was like, yeah, I, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models, the training models. Yeah. Yeah. And it's kind of interesting that like people with these, this like universal skill set of just like machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh. There's this concept of like, uh, maybe capacity of a model, like abstractly a model can only contain the number of bits that it has. And, uh, and so it, you know, God knows like Gemini pro is like one to 10 trillion parameters. We don't know, but, uh, the Gemma models, for example, right? Like a lot of people want like the open source local models that are like that, that, that, and, and, uh, they have some knowledge, which is not necessary, right? Like they can't know everything like, like you have the. The luxury of you have the big model and big model should be able to capable of everything. But like when, when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so like, how do we, I guess, do we want to extract that? Can we, can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space. To remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always attention at the same time. You also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like it's probably useful to know how long the golden gate be. Bridges just as a general sense of like how long are bridges, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a, a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like we're not going to train Gemini on my email. Probably we'd rather have a single model that, uh, we can then use and use being able to retrieve from my email as a tool and have the model reason about it and retrieve from my photos or whatever, uh, and then make use of that and have multiple. Um, you know, uh, stages of interaction. that makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are like, uh, interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps or?Jeff Dean [00:52:37]: No, I mean, I think, I think vertical models are interesting. Like you want them to start from a pretty good base model, but then you can sort of, uh, sort of viewing them, view them as enriching the data. Data distribution for that particular vertical domain for healthcare, say, um, we're probably not going to train or for say robotics. We're probably not going to train Gemini on all possible robotics data. We, you could train it on because we want it to have a balanced set of capabilities. Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would. It's multilingual translation capability, but improve its robotics capabilities. And we're always making these kind of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Pearl programming, you know, it'll still be good at Python programming. Cause we'll include it. Enough. Of that, but there's other long tail computer languages or coding capabilities that it may suffer on or multi, uh, multimodal reasoning capabilities may suffer. Cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I, I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download as a, as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma three end paper where they, there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better healthcare and the main. Gemini model is still improving. Do I need 50 billion tokens? Can I do it with a hundred, if I need a trillion healthcare tokens, it's like, they're probably not out there that you don't have, you know, I think that's really like the.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data. That is not public healthcare data, uh, not public health. But public healthcare data. Um, so I think there are opportunities there to say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I, I believe, uh, by the way, also this is like somewhat related to the language conversation. Uh, I think one of your, your favorite examples was you can put a low resource language in the context and it just learns. Yeah.Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Calamon, which is truly low resource because it's only spoken by, I think 120 people in the world and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. But I think your whole data set in the context, right.Jeff Dean [00:56:27]: If you, if you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world that, uh, or Ethiopian Amharic or something, um, you know, we probably. Yeah. Are not putting all the data from those languages into the Gemini based training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.Jeff Dean [00:56:49]:

Category Visionaries
Why Portnox's CEO refuses to measure Net Promoter Score | Denny LeCompte

Category Visionaries

Play Episode Listen Later Feb 11, 2026 18:01


Portnox is an enterprise access control platform that eliminates passwords and enforces zero trust security. The company was bootstrapped for over a decade, plateauing at a few million in ARR before investors brought in Denny LeCompte as CEO four years ago. Since then, Portnox has grown 8x. But this episode isn't about that growth story. Denny, a former cognitive scientist and professor who taught psychometrics, uses his scientific background to systematically dismantle Net Promoter Score—explaining why it's methodologically flawed, how it misleads organizations, and which metrics actually correlate with business performance. This is a contrarian take grounded in measurement science, not marketing opinion. Topics Discussed: The fundamental psychometric flaws in NPS: why single-item questionnaires are unreliable and why throwing out 7s and 8s violates basic statistical principles How NPS scores fluctuate based on survey UI presentation independent of actual customer sentiment Why NPS creates incentive structures that encourage gaming rather than improving customer outcomes The case for gross revenue retention and net revenue retention as the only ungameable metrics that matter How measuring human behavior changes that behavior (the Heisenberg principle applied to business metrics) Why investors care about retention rates above 90% but don't ask about NPS scores GTM Lessons For B2B Founders: Single-item questionnaires violate measurement principles: Denny's background in psychometrics immediately flagged NPS as unreliable. One-item measures lack the redundancy needed for reliability, and the methodology of throwing out middle responses (7s and 8s) then subtracting detractors from promoters is statistically nonsensical. At a previous company with thousands of data points, he observed NPS scores drop and rise based solely on how the survey rendered on the page—no business changes, just UI differences. When presentation affects your metric independent of the underlying construct, your instrument is broken. Founders with technical backgrounds should trust their instincts when measurement methodology feels scientifically unsound. Compensation drives behavior more than metric accuracy: Portnox structures customer success compensation as 50% gross revenue retention and 50% net revenue retention. These are determined by finance and can't be manipulated. Denny had to rein in his CS team when they became overly focused on time-to-value because any number you give a team becomes their obsession. With NPS, teams game survey timing, cherry-pick recipients, and optimize for score rather than outcome. This is the Heisenberg principle applied to business: measuring changes the behavior. Choose metrics where gaming the number aligns with improving actual business outcomes. Investors evaluate retention rates, not satisfaction surveys: When Denny presents gross retention above 90%, investors don't ask about NPS. Renewal behavior reveals actual satisfaction—customers voting with budget rather than survey responses. The test for any metric: "What are we doing differently if this number is up versus down?" If it doesn't drive distinct actions or reveal information not already visible in financials, eliminate it. NPS often becomes a number that exists because "we've always measured it," inherited from previous leadership without questioning its utility. Question inherited practices ruthlessly: NPS gained adoption through Harvard Business Review credibility in 2003 and consulting firms building practices around it. The promise of "one number you need" appeals to executives wanting simple solutions. But herd behavior—"everyone else measures it"—perpetuates bad methodology. Denny's advice to founders stuck with NPS: give your team something else to focus on (gross retention is straightforward: don't let customers churn), then stop doing it. Sometimes you need to point to external validation to break internal momentum. The question isn't whether NPS correlates somewhat with growth—it's whether better alternatives exist that can't be gamed. // Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co // Don't Miss: New Podcast Series — How I Hire Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPLSMFimtv0riPyM

Lenny's Podcast: Product | Growth | Career
Getting paid to vibe code: Inside the new AI-era job | Lazar Jovanovic (Professional Vibe Coder)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Feb 8, 2026 102:30


Lazar Jovanovic is a full-time professional vibe coder at Lovable. His job is to build both internal tools and customer-facing products purely using AI, while not having a coding background. In this conversation, he breaks down the tactics, workflows, and framework that let him ship production-quality products using only AI.We discuss:1. Why having no coding background can be an advantage when building with AI2. Why most of your time should go to planning and chat mode, not prompting3. What to do when you get stuck: his 4x4 debugging workflow4. The PRD and Markdown file system that keeps AI agents aligned across complex builds5. Why kicking off four or five parallel prototypes is the best way to clarify your thinking6. Why design skills and taste are going to be the most important skills in the future7. His “genie and three wishes” mental model for making the most of AI's limitations8. How product, engineering, and design roles are converging—and what that means for your career—Brought to you by:Strella—The AI-powered customer research platform: https://strella.io/lennySamsara—Saving lives with AI built for physical operations: https://samsara.com/lennyWorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs: https://workos.com/lenny—Episode transcript: https://www.lennysnewsletter.com/p/getting-paid-to-vibe-code—Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0—Where to find Lazar Jovanovic:• X: https://x.com/lakikentaki• LinkedIn: https://www.linkedin.com/in/lazar-jovanovic• YouTube: https://www.youtube.com/@50in50challenge• Starter Story course: https://build.starterstory.com/build/ai-build-accelerator?via=lazar (code LAZAR15 for 15% off)—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Lazar and professional vibe coding(04:53) What a professional vibe coder actually does day-to-day(09:26) Why non-technical backgrounds can be an advantage(12:24) The importance of self-awareness(14:42) His “genie and three wishes” mental model(17:43) Developing taste and judgment in the age of AI(21:46) The parallel project approach for better outcomes(29:30) Creating dynamic context windows with PRDs(36:56) Why elite vibe coders focus on planning, not coding(44:43) Creating MD files to guide AI development(50:57) Why prototyping still matters(56:50) Why “good enough” is no longer good enough(01:00:53) The future of engineering in an AI world(01:05:14) What to do when you get stuck: his 4x4 debugging workflow(01:14:27) Helping agents learn from their mistakes(01:15:35) Why watching agent output is more important than code(01:19:08) The incredible pace of AI development(01:22:55) Why emotional intelligence will become more valuable(01:28:30) How to become a professional vibe coder(01:30:10) Why building in public is the fastest path to opportunities(01:37:03) Final thoughts on focusing on quality over tech stack—Referenced:• The new AI growth playbook for 2026: How Lovable hit $200M ARR in one year | Elena Verna (Head of Growth): https://www.lennysnewsletter.com/p/the-new-ai-growth-playbook-for-2026-elena-verna• Elena Verna on how B2B growth is changing, product-led growth, product-led sales, why you should go freemium not trial, what features to make free, and much more: https://www.lennysnewsletter.com/p/elena-verna-on-why-every-company• The ultimate guide to product-led sales | Elena Verna: https://www.lennysnewsletter.com/p/the-ultimate-guide-to-product-led• 10 growth tactics that never work | Elena Verna (Amplitude, Miro, Dropbox, SurveyMonkey): https://www.lennysnewsletter.com/p/10-growth-tactics-that-never-work-elena-verna• Lovable: https://lovable.dev• Lovable + Shopify: https://lovable.dev/shopify• Everyone's an engineer now: Inside v0's mission to create a hundred million builders | Guillermo Rauch (founder and CEO of Vercel, creators of v0 and Next.js): https://www.lennysnewsletter.com/p/everyones-an-engineer-now-guillermo-rauch• Mobbin: https://mobbin.com• Dribbble: https://dribbble.com• 21st.dev: https://21st.dev• Lovable base prompt generator: https://chatgpt.com/g/g-67e1da2c9c988191b52b61084438e8ee-lovable-base-prompt• Lovable PRD generator: https://chatgpt.com/g/g-67e1e85fbeac8191a69b95c6d5c42ef6-lovable-prd-generator• Felix Haas's newsletter: https://designplusai.com• Bauhaus: https://en.wikipedia.org/wiki/Bauhaus• Glassmorphism: https://www.figma.com/community/plugin/1197106608665398190/glassmorphism• UI style guide: http://uistyle.lovable.app• Cloudflare: https://www.cloudflare.com• Ben Tossell on X: https://x.com/bentossell• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell• Peter Thiel says AI will be ‘worse' for math nerds than for writers: https://www.businessinsider.com/peter-thiel-ai-worse-for-math-professionals-than-writers-2024-4• Andrej Karpathy on X: https://x.com/karpathy• The 100-person AI lab that became Anthropic and Google's secret weapon | Edwin Chen (Surge AI): https://www.lennysnewsletter.com/p/surge-ai-edwin-chen• Why experts writing AI evals is creating the fastest-growing companies in history | Brendan Foody (CEO of Mercor): https://www.lennysnewsletter.com/p/experts-writing-ai-evals-brendan-foody• Slumdog Millionaire: https://www.imdb.com/title/tt1010048—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

Unchained
Uneasy Money: How the Increasingly Better AI Agents Are Being Used Onchain

Unchained

Play Episode Listen Later Feb 7, 2026 82:43


Thank you to our sponsors! Fuse: The Energy Network MultiChain Advisors Vitalik Buterin just dropped a bombshell: the L2 vision no longer makes sense. Meanwhile, AI coding agents are going parabolic. In this monster episode of Uneasy Money, Ethereum Foundation Head of Developer Growth Austin Griffith and Optimism co-founder Karl Floersch join hosts Kain Warwick and Taylor Monahan to unpack the reasoning behind Vitalik's remarks and debate whether Ethereum needs L2s to pull institutions. They also take a deep dive into the OpenClaw and Moltbook craze and Austin shares how he has different agents running on different machines, including one that texts his wife good morning everyday. Is “AI the new UI?” Hosts: Kain Warwick, Founder of Infinex and Synthetix Taylor Monahan, Security Expert, Metamask Guests: Austin Griffith, AI Lead at Ethereum Foundation Karl Floersh, CTO of OP Labs Links: Vitalik Rethinks Ethereum's L2 Playbook, Calls for Shift Toward Native Rollups How the x402 Standard Is Enabling AI Agents to Pay Each Other Learn more about your ad choices. Visit megaphone.fm/adchoices

Infinitum
Kukičam memorije

Infinitum

Play Episode Listen Later Feb 7, 2026 90:19


Ep 277Western governments BUILT the backdoors China walked through. They are called "lawful intercept" systems.Apple's new iPhone and iPad security feature limits cell networks from collecting precise location data | TechCrunchFlorian Roth: Notepad++ hacked. This is bad. Putty level bad.iPhone 5s Gets New Software Update 13 Years After LaunchWindows 11 ima 1 milijardu aktivnih korisnika.Announcing msgvault: lightning fast private email archive and search system, with terminal UI and MCP server, powered by DuckDB – Wes McKinneyMake Finder Window Columns Resize to Fit Filenames - TidBITSApple Propelled to Record Q1 2026 Financials by iPhone and Services - TidBITSSdW (re-)joins Apple.Steve Moser: I'm not sure which is better news: Alan Dye leaving Apple or Sebastiaan joiningBasic Apple Guy: Nature is healing.Renaud Lienhart: Sounds like one of Steve Lemay's first task after Dye's departure is to try to hire back all the designers who were alienated & departed over the past decade. This is great.Shipping at Inference-Speed | Peter SteinbergerClawdbot / Moltbot / OpenClaw — Personal AI AssistantClawdbot Showed Me What the Future of Personal AI Assistants Looks LikeMoltbookI Spent 40 Hours Researching Clawdbot.Clawd disaster incomingAndrej Karpathy: A few random notes from claude coding quite a bit last few weeks.This white hat is providing over-eager AI builders a much-needed wake up call.ClawCon ?!2 nedelje za C compiler koji radi.i've made a tragic discovery using clawdbot. there simply aren't that many tasks in my personal life that are worth automatingDušan Dž.: Tim robota mi programira u Claude Code. OpenClaw mi radi istraživanje tržišta. Robot-usisivač pere pod. A ja? Ja slažem veš. Nisam se nadao ovakvoj budućnosti.Apple WINS AI because INTEL and MICROSOFT got it wrong.Apple Just Made Its Second-Biggest Acquisition Ever After BeatsXcode 26.3 unlocks the power of agentic codingApple introduces new AirTag with expanded range and improved findability10+ Things to Know About the New AirTag 2The chime has changed from the note "F" to the note "G".Oliur / ASUS just beat Apple to it.ROG Strix 5K XG27JCG 5K-GPU Supported Refresh Rate ListApple has landed the rights to turn ‘MISTBORN' into a film franchise & ‘THE STORMLIGHT ARCHIVE' into a TV series.Researcher builds bizarre 128-byte USB drive the size of a dinner plate using ancient pre-semiconductor magnetic core memory technology — data disappears once it is read, requiring special handlinghollywood.computerZahvalniceSnimano 6.2.2026.Uvodna muzika by Vladimir Tošić, stari sajt je ovde.Logotip by Aleksandra Ilić.Artwork epizode by Saša Montiljo, njegov kutak na Devianartu

Where It Happens
Claude Opus 4.6 vs GPT-5.3 Codex: Live Build, Clear Winner

Where It Happens

Play Episode Listen Later Feb 6, 2026 48:54


I sit down with Morgan Linton, Cofounder/CTO of Bold Metrics, to break down the same-day release of Claude Opus 4.6 and GPT-5.3 Codex. We walk through exactly how to set up Opus 4.6 in Claude Code, explore the philosophical split between autonomous agent teams and interactive pair-programming, and then put both models to the test by having each one build a Polymarket competitor from scratch, live and unscripted. By the end, you'll know how to configure each model, when to reach for one over the other, and what happened when we let them race head-to-head. Timestamps 00:00 – Intro 03:26 – Setting Up Opus 4.6 in Claude Code 05:16 – Enabling Agent Teams 08:32 – The Philosophical Divergence between Codex and Opus 11:11 – Core Feature Comparison (Context Window, Benchmarks, Agentic Behavior) 15:27 – Live Demo Setup: Polymarket Build Prompt Design 18:26 – Race Begins 21:02 – Best Model for Vibe Coders 22:12 – Codex Finishes in Under 4 Minutes 26:38 – Opus Agents Still Running, Token Usage Climbing 31:41 – Testing and Reviewing the Codex Build 40:25 – Opus Build Completes, First Look at Results 42:47 – Opus Final Build Reveal 44:22 – Side-by-Side Comparison: Opus Takes This Round 45:40 – Final Takeaways and Recommendations Key Points Opus 4.6 and GPT-5.3 Codex dropped within 18 minutes of each other and represent two fundamentally different engineering philosophies — autonomous agents vs. interactive collaboration. To use Opus 4.6 properly, you must update Claude Code to version 2.1.32+, set the model in settings.json, and explicitly enable the experimental Agent Teams feature. Opus 4.6's standout feature is multi-agent orchestration: you can spin up parallel agents for research, architecture, UX, and testing — all working simultaneously. GPT-5.3 Codex's standout feature is mid-task steering: you can interrupt, redirect, and course-correct the model while it's actively building. In the live head-to-head, Codex finished a Polymarket competitor in under 4 minutes; Opus took significantly longer but produced a more polished UI, richer feature set, and 96 tests vs. Codex's 10. Agent teams multiply token usage substantially — a single Opus build can consume 150,000–250,000 tokens across all agents. The #1 tool to find startup ideas/trends - https://www.ideabrowser.com LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/ The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/ FIND ME ON SOCIAL X/Twitter: https://twitter.com/gregisenberg Instagram: https://instagram.com/gregisenberg/ LinkedIn: https://www.linkedin.com/in/gisenberg/ Morgan Linton X/Twitter: https://x.com/morganlinton Bold Metrics: https://boldmetrics.com Personal Website: https://linton.ai

Bankless
AI on Ethereum: ERC-8004, x402, OpenClaw and the Botconomy | Austin Griffith & Davide Crapis

Bankless

Play Episode Listen Later Feb 5, 2026 97:18


AI agents aren't “coming” to Ethereum—they're already here, spinning up on dedicated machines, clicking through wallets, deploying contracts, and even building apps for themselves. In this episode, Ryan and David sit down with Davide Crapis and Austin Griffith to map the emerging agent stack: ERC-8004 as a decentralized identity + reputation layer, x402 as payment rails for agent-to-agent commerce, and the real-world “Clawdbot” experiments that show what happens when an agent gets a wallet, a codebase, and a mandate. Along the way: prompt-injection risks, why agents read calldata like it's their native language, and why it may be the best time in history to be a solo builder—even as it gets harder to be a junior dev. ---

Major Nelson Radio
Overwatch - 10 Years, More Heroes, Big Updates | Official Xbox Podcast

Major Nelson Radio

Play Episode Listen Later Feb 4, 2026 29:39


In this episode of the Official Xbox Podcast, we're so excited to have the Overwatch team in studio with us! We're talking about game's 10-year anniversary, diving deep into the new heroes, and getting information on the story. We're also looking ahead to what will be a massive year for both the franchise and Blizzard as a whole.00:00 Introduction01:09 Overwatch is having its 10th anniversary this year. How is the Overwatch team there at Blizzard feeling about the last ten years and about this big year to come? 02:47 There's a name change with Overwatch 2 going back to Overwatch? What was the thought behind it? How's this going to work? 03:28 Along with that change, there are some other changes as well when it comes to how you're handling seasons moving forward, right? 03:55 Season 1 is launching on February 10th, and that's starting off something huge for Overwatch, right? 04:17 Talon is seeing some leadership changes at the top. Doomfist is no longer in control, right? 04:57 New heroes coming to Overwatch. This year we're getting 10 new heroes overall, which is more than triple what we get normally in the year. Five of those are dropping during season 1? 06:53 Domina deep dive 09:07 Two new DPS, let's start with Emre. 10:43 Anran deep dive 12:34 Mizuki deep dive 14:54 Jet Pack Cat deep dive 17:51 We know our roles of DPS, Tank, and Support, but you guys are rolling out Sub-Roles and Passives as well? 20:37 Visually, the game is getting a bit of an update with the UI and some engine advancements. 21:43 Plus new Post-Match Accolades? 22:48 What can you share about the new cosmetics, and if you can, which one is your favorite? 23:36 For you two, personally, what's the thing you're most excited for players to get to experience over this upcoming year? 26:15 Final thoughts? 28:00 Do you have favorite heroes or villains? 29:12 Outro FOLLOW XBOXFacebook: https://www.facebook.com/Xbox​​​ Twitter: https://www.twitter.com/Xbox​​​ Instagram: https://www.instagram.com/Xbox

Syntax - Tasty Web Development Treats
975: What's Missing From the Web Platform?

Syntax - Tasty Web Development Treats

Play Episode Listen Later Feb 2, 2026 50:58


Scott and Wes run through their wishlist for the web platform, digging into the UI primitives, DOM APIs, and browser features they wish existed (or didn't suck). From better form controls and drag-and-drop to native reactivity, CSS ideas, and future-facing APIs, it's a big-picture chat on what the web could be. Show Notes 00:00 Welcome to Syntax! Wes Tweet 00:39 Exploring What's Missing from the Web Platform 02:26 Enhancing DOM Primitives for Better User Experience 03:59 Multi-select + Combobox. Open-UI 04:49 Date Picker. Thibault Denis Tweet 07:18 Tabs. 08:01 Image + File Upload. 09:08 Toggles. 10:23 Native Drag and Drop that doesn't suck. 12:03 Syntax wishlist. 12:06 Type Annotations. 15:07 Pipe Operator. 16:33 APIs We Wish to See on the Web 18:31 Brought to you by Sentry.io 19:51 Identity. 21:33 getElementByText() 24:09 Native Reactive DOM. Templating in JavaScript. 24:48 Sync Protocol. 25:52 Virtualization that doesn't suck. 27:40 Put, Patch, and Delete on forms. Ollie Williams Tweet SnorklTV Tweet 28:55 Text metrics: get bounding box of individual characters. 29:42 Lower Level Connections. 29:50 Bluetooth API. 30:47 Sockets. 31:29 NFC + RFID. 34:34 Things we want in CSS. 34:40 Specify transition speed. 35:24 CSS Strict Mode. 36:25 Safari moving to Chromium. 36:37 The Need for Diverse Browser Engines 37:48 AI Access. 44:49 Other APIs 46:59 Qwen TTS 48:07 Sick Picks + Shameless Plugs Sick Picks Scott: Monarch Wes: Slonik Headlamp Shameless Plugs Scott: Syntax on YouTube Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads