Who dares to make predictions in the current landscape? We do! Our Predictions are back. Will our track record continue on a high, or will we be fundamentally wrong? Listen in to our Predictions for 2026.

Navigation:
Intro
What will 2026 be all about?
AI, AI and … more AI
The big Hardware movements
Of Start-ups and VCs
Regulatory & Geopolitical Headwinds… and the Wars
Fintech, Crypto and Frontier Tech
Conclusion

Our co-hosts:
Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt
Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro

Our show: Tech DECIPHERED brings you the Entrepreneur and Investor views on Big Tech, VC and Start-up news, opinion pieces and research. We decipher their meaning and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news.

Subscribe To Our Podcast

Bertrand Schmitt

Introduction

Welcome to Tech Deciphered, Episode 74, an episode of predictions about 2026. What will 2026 be all about? This year is starting with a bang. We saw the acquisition of xAI by SpaceX. We saw the acquisition of Groq by NVIDIA. What's your take on what the big themes of 2026 will be? I guess it will be, for sure, AI and space.

Nuno Goncalves Pedro

What will 2026 be all about?

Yeah. I predict a year that will be a little bit more of a year of reckoning in some ways. There are a lot of things that I think we'll start seeing through. We are in the midst of an amazing transformational era for technology and the use of AI, but at the same time, obviously, a ridiculous bubble is running alongside it, as we've discussed in previous episodes. I think we'll start seeing some early reckonings of that: companies that might start failing or floundering, maybe a couple of frauds along the way, etc.
I'll tell you what I will not make many predictions about today, which is geopolitics. On geopolitics, I will not make predictions at all. Who the hell knows what's going to happen to the world in 2026? I don't dare make any predictions on that. Back to things where I would make predictions: on AI, I think we'll have a little bit of a reckoning. We'll talk about it in more detail during this episode. There are interesting elements around hardware and the physical space. Physical space, we just dedicated a full episode to it, so we won't go into a lot of detail on that, but definitely on the hardware side, we'll talk a little bit more. The VC landscape is going through an incredible transformation. We'll talk about that today as well, and about some of our predictions for this year. What will happen to the asset class? It seems to be transforming itself dramatically. Obviously, that has a very direct impact on startups, so we'll talk about that as well. And then, to close the chapter on this, we will address some regulatory and geopolitical, let's call it, headwinds, without making maybe too many complex predictions. We shall see. Maybe by that point in the episode, we will be making some predictions. You should stay and listen to us, and maybe we will actually make some predictions about the geopolitical transformations we will see this year in the world. Then, last but not least, we'll talk about fintech, crypto, frontier tech, and a couple of other areas before concluding the episode. A classic predictions episode. We normally have a pretty good track record on some of these, but right now, the world is getting a bit interesting, not to say insane.

Bertrand Schmitt

Yes. And going back to some news: Groq technically was not acquired, but, practically, it's as if it got acquired. I'm talking about Groq, G-R-O-Q, the AI semiconductor company focused on inference, and it was late December. It was a way to end the year.
This year, we started again with the acquisition of xAI by its sister company, SpaceX. I guess that's where we are starting.

AI, AI and … more AI

We are going to start with AI. That's definitely the big stuff. Everything these days is about AI, or has to have some connection with AI, or it doesn't matter. I think every company in the world has seen that. You have to have, at the absolute minimum, an AI strategy. You had better execute on that strategy and show results, I would say. Companies that were not AI-native truly have to find a way to transform themselves. I guess at some point the stretch might be too much and not really reasonable; then maybe you had better stay with what you are doing. But especially if you're in tech, you had better be moving faster to AI.

Nuno Goncalves Pedro

Just to highlight, and I think throughout the episode you'll see this, there are obviously a lot of implications that will manifest themselves in capital markets. We'll specifically talk about VCs and startups later on. But the fact that everything needs to be AI, the fact that there's so much innovation happening right now, in my opinion, and this is maybe the first pre-topic to AI, means we'll see a tremendous increase in M&A activity this year across the board. We've already seen some big acquihires, which we mentioned in some of our previous episodes, but we'll see a lot more M&A activity this year. Normally, that's a precursor to the opening of capital markets. I predict also that there will be a reopening of the IPO market, which never really reopened last year, to be honest. A lot more M&A, and a reopening of the IPO market. Normally, it happens in the second or third quarter of the year. That's what my M&A friends tell me. In the first quarter of the year, everyone's figuring things out. Then, in the last quarter of the year, things should be more or less closed. Maybe the third quarter is the big quarter. We shall see.
But definitely, as a precursor to our conversation today, I think we'll see a lot of M&A, and we'll see a reopening of the IPO market.

Bertrand Schmitt

I guess last year was not as big as you could have expected on M&A, given the tariff situation announced in April and May. It became quite tough to do an IPO in such market conditions. Definitely, we can hope for something dramatically different in 2026. Talking about public markets and IPOs, I guess the big one everyone is waiting for is SpaceX. SpaceX is getting even more interesting with its xAI acquisition.

Nuno Goncalves Pedro

Do you think that because of the acquisition, it's more likely to happen this year, or less likely?

Bertrand Schmitt

That's a good question. My guess is that the acquisition of xAI is all about xAI needing more financing, and cheaper financing. This acquisition is a pathway to that, SpaceX being a much bigger company, and one that is also making much more revenue. I would bet there is a higher probability that SpaceX will actually go public in order to finance itself. At the same time, will it have enough time to prepare itself for the IPO, given this acquisition just happened? Can they do that in 6 months? If anyone can do it, I guess it's Elon Musk. It's a strategy to present an even more attractive company with an even more interesting story, a story of vertical integration from AI to space. The story, as it's presented right now, is one about having your AI data centers in space. Because in space, you have much better solar energy production with solar panels. You have a perfect cooling situation because you are in space. And thanks to Starlink, you have the means to communicate between the satellites and with Earth itself. If someone can pull off a story like AI data centers in space, I guess Elon Musk can. There are, of course, a lot of questions: is it practical? Is it economical?
Yes. I certainly agree. I'm not clear on the mass, and can you make it work? Then again, Elon Musk, with SpaceX, single-handedly managed to turn the space market on its head. They are the biggest satellite-launching company in the world. They have the most satellites in the world. I'm not sure I would bet against him, and I would probably believe he could pull off something. Time frames are a different story. Data centers in space for AI, as cheap as on Earth, within 2 to 3 years: I have more trouble with that one. It's the usual pattern with Elon Musk. He promises something unachievable in a few years, but, ultimately, still manages to reach it in 5 or 10. Again, I would not bet against the strategy.

Nuno Goncalves Pedro

Yeah. I've talked to a couple of space experts, people who have launched rockets and have worked at JPL, NASA, and a couple of other places. For what it's worth, their feedback is, "No way in hell, and we're decades away." We'll see. To your point, Elon has pulled off very dramatic stuff. Not as fast as he normally says he will, but within a time span where we all see it. Difficult to bet against him. In terms of the actual prediction, to respond on whether SpaceX will IPO: I'm going to make a prediction that has a very high likelihood of missing the mark, but I think Tesla is going to buy and merge them both into it. It will become a public company through Tesla. That's my hypothesis.

Bertrand Schmitt

No. That's supposed to be it. That's how you solve that.

Nuno Goncalves Pedro

And Elon controls the whole universe. X, xAI, Tesla, SpaceX, all under one umbrella, beautifully run. And SolarCity is in there as well, of course, so wonderful.

Bertrand Schmitt

That's possible. Certainly, you are not the only one thinking Tesla will acquire or merge with SpaceX. To remind everyone, Tesla is around 1.3 to 1.5 trillion in market cap.
Depending on the day, SpaceX seems to be valued in a similar range, 1.2 to 1.3 trillion. It looks like the most highly valued private company at this stage. These are companies of similar size, so that's one piece of the puzzle. When you think about the combined company, we could be talking about a 3 trillion entity, playing right up there with the biggest companies in the marketplace today.

Nuno Goncalves Pedro

With a couple of tweets from Elon, it will rapidly get to 4 or 5 trillion.

Bertrand Schmitt

That's so tricky.

Nuno Goncalves Pedro

Yes. On AI, and back to AI: one thing I think we're about to see is that this will probably be the year of agentic AI. Obviously, we predict a lot of growth on that side of the fence, in particular on the enterprise B2B side. We see a lot of opportunities coming through. From our perspective, at least at Chamaeleon, we generally believe there are going to be a lot of movements in agentic AI. It's also probably going to be the year of the first big, newsworthy failures of agentic AI. There will be some elements about that loop, and how it gets closed, that will happen. I think we might see some scandals already. We're already seeing the social network of bots talking to bots. We will see other scandals this year, even in the consumer space and in the bot-to-bot space, or as we can now call it, the AI-agent-to-AI-agent space. My prediction is we will see some moves forward. There will be some dramatic funding rounds along the way. We'll see a couple of really cool, really impressive things coming out of the gate, but we'll also see the first big misses of the technology stack. I don't think it will go fully mainstream this year; that's probably something more for 2027. That would be my prediction again. I think enterprise will lead the way. We'll definitely see a lot of cool stuff on the consumer side as well.
Then we'll all have our own personal assistants in our hands, literally in our phones.

Bertrand Schmitt

Going back to agentic AI, we also started the year with a pretty dramatic move: the launch of Clawdbot, renamed OpenClaw. This thing caught fire in a week or two. It was coded by just one person, who actually didn't even code the product by hand but built it 100% with AI, proposing some new ways to leverage AI for coding. He has a pretty unique approach. It's not vibe coding; I would say it's a better way to do it. Then there was the surprising evolution with the launch of a social network for AI agents, Moltbook. There is probably some fakery in it, but at the same time, I think it's quite impressive, because it's the first time we see truly 100,000-plus agents communicating directly with each other. It's the first time we see surfacing the possibility of some sort of hive mind on the Internet. It's pretty surprising. Right now, all of this is a hack done in a few days. By the end of the year, or in 2 or 3 years, we might discover that the best approach to AI is not the single AI assistant like we have today, but a combination of hundreds of thousands of AIs working closely together. We might be witnessing the first signs of a new intelligence, in a way.

Nuno Goncalves Pedro

Things like this social network might be the beginning of Skynet. They might be the beginning of Her. Or they might just be a fad, and nothing really happens. It's just interesting to see what these agents are doing.

Bertrand Schmitt

Totally.

Nuno Goncalves Pedro

Obviously, there are real, clear, and present dangers in some of the integrations of AI we're seeing in the market. Interestingly enough, and I'll ask you for your prediction in a bit, Bertrand, I think we'll probably see the first big mishap of AI being used in some infrastructural decision in the age of AI.
We've seen AI issues in the past, and software issues in the past; we talked in previous episodes about mishaps of software that have led to people dying. But I think the first big mishap will probably happen this year as well: a very public mishap of AI in its interactions with infrastructure, or something very platform-related, that will have a big impact everyone will notice. That's my prediction for the year as well. We'll have the first big "oops" moment, as I would call it, for AI in this new age of full-on AI.

Bertrand Schmitt

First, some perspective. I think today, people are not using AI directly for life-and-death decisions, at least not that I'm aware of. We're not going to let AI fly a plane tomorrow, for instance, so you can be reassured. At the same time, given there is such a race to AI, there definitely might be some mistakes. We were talking about the social network for AI agents, Moltbook. Apparently, all the keys used to secure the AI were shared by mistake because it was not properly locked down. We can see that, indirectly, mistakes will be made for sure. Two, it's highly probable that some people will trust AI too much to do certain things, and those things might not work and might have some grave consequences. Hopefully, there is not much of this. Hopefully, AI is mostly used for good. But you're right: the more we use the technology, the more there will be issues. It's highly probable.

Nuno Goncalves Pedro

That leads me to another prediction, which we'll talk more about later: it will probably lead to the first significant movement in the regulatory environment, certainly in the US, in particular if such a mishap happens there, where there will be some movement along the lines of, "Hey, you guys can't do this anymore." Because this will probably emerge from mismanaged interfaces.
From systems having access to things they shouldn't have access to in the first place. Talking a little bit more about what's happening in AI: you've already mentioned some of the issues that relate to security and cybersecurity. We keep talking about AI. We keep talking about all these infrastructure pieces and platforms being built. I think we'll have a lot more incidents like the one you just mentioned, where things will be shared that shouldn't have been shared, where people will break into systems, etc. Let's see where that takes us. It's a little bit ironic because, obviously, with AI, the promise is that cybersecurity becomes more robust as well, since there are agents working on our behalf on the cybersecurity side. But there are also agents working on the other side.

Bertrand Schmitt

It's a constant race between attackers and defenders. Each time you have new technology, you have a new race over who is going to attack or defend the best. Each new wave of technology is an opportunity to challenge the status quo.

Nuno Goncalves Pedro

The attackers have been winning, and I feel they'll continue winning in 2026. I think it's still going to be a year of attack. We'll see more and more breaches, more and more things happening.

Bertrand Schmitt

I don't know if they will win. It's normal that they win once in a while. For sure, some infrastructure is not updated as it should be. Some things are not managed as they should be, so there will always be breaches. I don't know if things are dramatically going to change because, again, everyone who cares is going to update their infrastructure with AI for defense. There is no question; you have no choice. We will see. That, I don't know. For sure, AI will be used to attack directly. Maybe you're able to do bigger, larger-scale attacks. Or, thanks to AI, you are simply able to create new types of attacks more easily.
AI can be used behind the scenes as a way to prepare and organise new types of attacks, even if it's not used directly, live, in the battle.

Nuno Goncalves Pedro

One topic we'll come back to later is the geopolitics of everything, but maybe more broadly, on the geopolitics of AI, it's very clear that we have an arms race going on. The US on one hand and China on the other are the two poles, putting tremendous amounts of capital into data centers, just at the base of that infrastructure. Chipset development and chipset access are a huge theme in terms of the export restrictions, etc., that are being enforced by the US. I think that will continue. From a European standpoint, obviously, they're stuck between a rock and a hard place, to be very honest. Let's see what happens on that side of the fence. My view of the world is that, certainly from a US and China perspective, we're going to see a lot more movements in 2026, like big movements. The Chinese movements we always see with a delay. It takes us a couple of months, sometimes even more, to understand exactly what's going on. I think we're going to see some huge moves this year in terms of the United States and China really pouring capital into the creation of the next big winners around AI. The US is obviously more visible. We see a lot of these companies. We've just discussed xAI and its acquisition by SpaceX, or merger, I don't know what they're calling it exactly. On the China side, the movements, I think, are already very big. As I said, it will take a while to figure out exactly what those moves are. One thing I propose is that at some point, China will have very little dependency on chipsets from the US. I'm not sure it's going to happen this year, but I think the writing is on the wall, irrespective of any other geopolitical issues coming to the fore at this moment in time. That's one of the key arenas of this fight.
Bertrand Schmitt

It makes sense. If you are China, you will look at what happened. You would think that you cannot just depend on the largesse of one country. It makes rational sense, the same way it makes rational sense for the US to limit exports to China, because there is value in delaying a peer power that could use these technologies for good but also for bad. If you were an ally of the US, that would be one thing. But when you are not an ally of the US, that certainly calls for a different perspective. Maybe one last point concerning agents: I think a lot will revolve around coding. We can see OpenAI with Codex. We can see Anthropic with Claude Code. There was, of course, [inaudible 00:18:28] that was trying to be big on agentic coding. I think agentic coding was one of the big transformations of 2025, and it is going to get bigger in 2026. For a lot of people who code, there was a radical transformation in terms of what you can achieve, what you can do, how much you can trust AI to help you code. I am starting to think we might see, this year, not just one AI replacing one coder, but one AI replacing a full team, because of the new ability to manage that at scale. Coding might become an activity where you think about outcomes, think about objectives, think about how you organise, but don't really code by yourself anymore. A big change: you used to code directly, hands on the code, but step by step, everyone is going to become a manager of agents. In one year, we saw enough transformation to think that in the coming year, the transformation can be even more dramatic.

Nuno Goncalves Pedro

The big Hardware movements

Now switching gears to hardware. Obviously, a lot of movements in 2025 and over the last few years. One long-standing piece of our thesis at Chamaeleon is that we will see the emergence of AI devices. Some of them have been tremendous failures, as we discussed in the past.
I predict we'll have a couple of really interesting full-stack AI devices in the market this year. Why does that matter? Because, as many of you know, there's compute that can happen in data centers and cloud infrastructure all over the world, but there's also compute that can happen at the edge. The more you can move to the edge, and the more you can create devices that allow very distinctive user experiences at the edge, the more powerful some of these devices might become. I predict Apple will not be the first to launch anything here. I predict OpenAI, after the acquisition of IO, will maybe not launch something this year, but will announce something this year. I'll step back on that prediction: they'll announce something this year, but maybe not launch. But we'll start seeing some devices with real value in the market, probably AI devices that are very focused on specific user flows, and so well suited to specific activities. I won't make a prediction on which, but areas where that would make sense are obviously fitness, health, et cetera, where we already have the ascendancy of products like the Oura Ring and others. Definitely, that's one area that might see quite a lot of development. AI-first devices, devices focused on compute at the edge, providing AI-enabled user flows to end users: we'll see a lot more of that, and a lot more activity, this year. Again, I don't think Apple will necessarily be ahead of the game. Maybe OpenAI will give us something to at least think about and look forward to.

Bertrand Schmitt

First, I'm not sure it will be that transformational, because if it's not in your phone, in your pocket, there is only so much you can do with it, and only so much computing power you will have. I'm doubtful it will be really impactful this year.
Nuno Goncalves Pedro

I feel we've been discussing this paradigm shift in input and output. For me, some of these devices could lead to that shift. Because, again, a mobile phone is not a great long-term paradigm for the usage we have; it's really constrained by the screen. The screen is what takes most of the battery life away. If we didn't have that screen, what could we do? If we had a block as big as a mobile phone that didn't have a screen and was just compute, that's a mini computer, a microcomputer.

Bertrand Schmitt

That's a fair point, but I don't see that transformation this year. That's really my point. I can see that you can have AI-enabled smart glasses, and it's clear there is a race to AI-enabled smart glasses. My point is that going beyond the gadget will take quite a while. They would need cameras. They would need to analyse what you see. They would need to hear what you hear. It might come, but then, at some point, the question is: okay, what do you do with it? We have the example of the movie Her. That's showing what it could be. There are definitely possibilities. If you take a big VR headset like the Apple Vision Pro, there is a failure from that perspective, in the sense that I think it's a great, amazing device; the big problem is that it's doing way more than makes sense. I think there will be a clearer separation between smart AR glasses, which have to be light, always connected, and primarily there to help you make sense of the world around you, and the true VR headset, which doesn't really require much in terms of AI and is just there to immerse you in a different world. For the latter, we know, unfortunately, that there is not a lot of demand. Maybe there is little demand because you are too hidden in your own world, or because the technology is not working well enough yet. There are a lot of reasons.
But I think Apple trying to do both at the same time, AR and VR, with the Vision Pro was a pretty grave structural mistake. I think we will see a clearer line of separation between the two. There is a bigger market opportunity for AR glasses; that, I certainly agree with. There is an opportunity to connect them to a computing device. As you said, your glasses become your screen, and your phone becomes something in your pocket connected to your glasses.

Nuno Goncalves Pedro

For me, Apple has their way of doing things. From the perspective of what you said, they normally really plan their devices, even when it's a big shift into a new area, like they tried with the Vision Pro. We criticised them for launching it as a full-on device when it should have been more of a dev device, but that's their classic playbook. I think Apple needs to change how they put products out and how they experiment with those products. They have enough money to be doing everything all the time and figuring it out. If they don't want to put something out, then they need to do a hell of a lot more testing internally within their silos, but they should be playing across all these arenas: VR, AR, everything. They should only put devices out that are ready for prime time, or they should call them something else, like a dev device or whatever it is.

Bertrand Schmitt

I agree with you. My complaint is that it was marketed as a consumer device when it was not; it was a true developer device. Two, they tried to mix the two at once, and it made no sense. No one is going to walk around their home or in the street with a Vision Pro on their head. You have to be deranged, quite frankly, to have use cases like this. That, for me, is a crazy mistake from a company like Apple, which prides itself on pure UI, pure user interface: very well-designed devices for one specific use case, not mixing two use cases.
We still don't have Macs with a touchscreen, you know? We still don't have an iPad with a good OS that makes use of that great hardware. For some strange reason, they decided to mix everything in the Vision Pro, with a device that weighs a ton on your head and is so uncomfortable. That's why, for me, I'm like, "Guys, what is wrong? Why did you let this team run crazy?" I hope at some point Apple will go back to the drawing board. My understanding is that that's what they are doing. They are going to have two devices: one, smart glasses; the other, an evolution of the Vision Pro focused just on VR. They might actually abandon the concept of the pure VR-oriented headset, because, from a market size perspective, it might not be big enough for Apple, quite frankly.

Nuno Goncalves Pedro
Agreed on all of the above, and people at this point were like, "Why, then, are players like Samsung and others not doing it? LG, et cetera?" Because those players historically have not invented new categories. They're amazing at catching up once the category is invented, and then they scale the hell out of it; that's what these companies have been exceptional at. I wouldn't expect dramatic innovation in terms of devices coming from any of the big ones on that side of the fence. Not to disrespect them in any way, but I think that's never been their playbook. Again, if the origination doesn't come from a start-up or from an Apple, I don't see those guys going after it. My bet is that we'll see some start-up activity and, again, hopefully, some announcement from io, now within the OpenAI world.

Bertrand Schmitt
I would slightly disagree with you. I see where you are coming from. But take the Samsung Galaxy Note, that suddenly much bigger phone that no one was doing, which Samsung launched and which, at some point, forced Apple to launch an iPhone Max. Look at the Z Fold that Samsung launched 7 years ago, copied by everyone. Now Samsung is launching a trifold.
Apple has still not launched their foldable phone. I think there is a mix, actually, of sometimes-

Nuno Goncalves Pedro
For me, that's not a proper new category. It's still a mobile phone. It just happens to have a screen that folds in half.

Bertrand Schmitt
The iPhone was still a mobile phone, you could argue.

Nuno Goncalves Pedro
No. I think the iPhone was… I could actually agree with you on that point. Maybe Apple is not as innovative in that case. I think what Steve Jobs was exceptionally good at, in terms of his ability as this master product manager, was being an exceptional curator of user flows and user experiences, and creating incredible experiences from devices based on that. That was his secret sauce. Could you say, "Wasn't all of this stuff already around?" It was. He just put it all together very neatly and very nicely. But if you're talking about significant shifts in how a category is done, the iPhone was a significant shift in how the category was done. The Fold is still an interesting device. I actually have a Fold right now in front of me, the 7 that you highly recommended to me, that we both got, the Z Fold 7. I think they do amazing devices. I don't think they are normally the most innovative players. Then, when it comes to innovation, it comes from technology edges. Obviously, they have Samsung Display, and there's a bunch of other things. They had the ability to do foldable screens in-house themselves.

Bertrand Schmitt
I don't disagree with you. I think there is an interesting situation where some companies have some strengths, and another one has other strengths. My worry with Apple is that this was not demonstrated with the Vision Pro. The Vision Pro was a hodgepodge of technologies barely integrated together, with use cases absolutely not well-defined and certainly not something that makes sense for most of us. There is a question of: has Apple lost it?
While Samsung actually keeps doing their own stuff that, yes, might be more minor improvements, at least they are doing it. Because it looks like Apple is missing the train on even the minor improvements. By the way, you might not be aware, but Samsung launched its Vision Pro competitor. Interestingly enough, it might be a better product in some ways, being much lighter and much more comfortable.

Nuno Goncalves Pedro
We should play around with that and report back to our listeners.

Of Start-ups and VCs

Moving to venture capital and the startup ecosystem and what's happening there, I think it is very much a bifurcated environment, and it's bifurcated for both VCs and for startups. If you're a startup in the AI space, and you have the hottest team since sliced bread, and you can create FOMO at the speed of light, you can raise ridiculous rounds: $500 million at a $3 billion, $4 billion, or $5 billion valuation, and you still haven't really even started. First round, you can raise $500 million. That's back to the whole discussion on the bubble and where are we, et cetera. Some of these companies might actually become huge; some of them might not. But definitely, we are seeing the haves and have-nots in the startup ecosystem, with incredible teams raising a lot of money very, very early on, or mid-stage if they've already existed for a while, and then the rest not being able to raise. We see a lot of not-necessarily-AI sectors, some of the areas of SaaS that don't necessarily have AI in them, or fintech, or the consumer space, that are really, really struggling. If you don't have an AI story for your startup right now, it's extremely difficult to raise money unless your numbers are just the best numbers ever. That's, I think, the first part of the element of bifurcation that we're seeing today.
The second element of bifurcation that we're seeing today in terms of fundraising is for VCs themselves, really propelled by the large VC firms raising more and more capital recently, with announcements of $15 billion across funds raised. Lightspeed, I think, made an announcement a couple of weeks ago as well. They've raised a bunch of money as well. The big guys are all raising a lot of money. At some point in time, the question some of you might ask is, "These VCs are redeploying more and more money. If they have a couple of billion for a VC fund, what does that look like? Is that still VC?" My perspective, which I've shared before in some of our previous episodes, is that that's no longer venture capital. At that point, we're talking about something else: private equity, hedge funds, if you want to call them that, maybe funds that are really driven by growth investment or late-stage investment. If you have a couple of billion under management, you're not going to make your returns by writing a $3 million check in a series seed and leading that round. That has implications for everyone in the ecosystem. It has implications for smaller funds, which obviously have a lot more difficulty raising capital; it's difficult to differentiate. Last but not least, it also has implications for startups that continue searching for the capital that is out there. Andreessen Horowitz, for example, runs Speedrun, which is a great program for companies around consumer in particular; initially, it was a lot about gaming. But at some point in time, Andreessen Horowitz could decide that they don't want to invest more in you. They just put in money from Speedrun, which is obviously a very small check compared to the very large checks they could write mid to late stage, and that will have an effect on you as a startup. What happens at that point if Andreessen Horowitz is not backing you up in later stages? More than that, what happens if I can't get these big funds interested in me?
Are the small funds still valuable to me? Punchline: my view is yes. Obviously, we're a smaller fund, so there's parochial interest in what I'm saying. Small funds can still create a ton of value for you, also in terms of credibility, the ability to accompany you in those first stages of investment, and the ability to bring other, larger investors later down the road as well. There's definitely a big movement happening in terms of fundraising for VC funds, which we shouldn't neglect, which is that the big guys are raising a lot more capital and are therefore emptying the market for smaller funds, which are having more and more difficulty raising at this point in time. We had discussed that there would be a need for concentration in the industry, that micro funds would need to consolidate, and that there wasn't space for as many micro funds as we had around. But the way it's happening is extremely dramatic at this moment in time. I think it will continue through 2026.

Bertrand Schmitt
Remember, a few years ago, with the rise of AI, there was more and more of the question, "What's the point of SaaS at this stage?" Because SaaS had been around for 15 years. Basically, how do you come up with something new that was not already tested and validated by the market? How do you bring something new? I'd say this has been reinforced to the power of 10. If your product is not clearly built from the ground up for a new use case enabled by AI, anyone might have built your product 5 or 10 years ago, and therefore "why now?" has no clear answer, and that's a big problem. I'm still surprised myself to see some entrepreneurs where you ask them about AI, because you don't see it in the deck, and they explain to you, "It's not there yet," and you're like, "What's wrong with you guys?" Fine. Do whatever you want. Do a small business or whatever, but don't think you can come pitch and raise without an AI story.
The second category is people who come with an AI story, but you can feel very quickly, and I guess you've seen that many times, Nuno, that it's just a story layered on top, with little credibility. That's not better. It's not enough to just have a story. Your business needs to be radically built differently, or to radically propose some brand-new use cases that were impossible to solve 5 years ago.

Nuno Goncalves Pedro
To stack on top of that, I'm absolutely in agreement. If you're just adding AI to the story, and it's an afterthought, and you're just trying to make the story somehow gel, once you go into one or two layers of due diligence, your investors will very quickly realise that you're not really AI-first or dramatically AI-enabled or whatever; you're just sort of stacking something on top of another thesis. It needs to make sense from the product onwards. It's not just, let's put it together with chewing gum, and magically, people will give you money. The same was true, if we remember the good old crypto blockchain days, when everyone was investing in crypto: a lot of stories that didn't make much sense. In that sense, it's not very different. I would go one step further. In the VC winter that we're a little bit in, where it's more and more difficult for a smaller fund to raise at this moment in time, there are a lot of sources of distinctiveness still talked about, like proprietary networks, access to deal flow, past track record, all that stuff that really, really matters. But our bet at Chamaeleon continues to be that you need to be AI-first as a VC fund yourself. You need to have core advantages in using not only readily-available AI tools or third-party AI tools, data sources, and technology stacks, but actually building your own stack over time, which is what we did with Mantis at Chamaeleon. Again, just to reinforce that, I think we're at the beginning of that stage.
We, Chamaeleon, are ahead of the game, but we think that the rest of the market will have to move towards that as well. Still, to be honest, it's very surprising to me to see that many significant, large players are still doing very little around some of these spaces. They have data scientists. They're running some tools. They're running some analysis and all that stuff, but it's still, back to the point I was making for startups, all glued up with chewing gum. It doesn't all come together nicely, which it does need to from a platform standpoint.

Bertrand Schmitt
It's quite surprising. I agree with you that some VC funds might think they can do business as usual in this brand-new world. It's difficult to believe.

Nuno Goncalves Pedro
Maybe moving a little bit toward the capital formation piece. We already discussed the M&A space really accelerating. We've also discussed the IPO market and some predictions on that. Secondaries: there's obviously a lot of liquidity coming from secondaries from mid to late stage. I think it will continue throughout the rest of 2026. A lot of buying and selling activity in secondaries, as some asset managers are becoming more distressed, as some very high net worth individuals and family offices are becoming more distressed as well, and at the same time as there are a lot of opportunities to potentially arbitrage around some investments. I believe a lot of money will be made and lost by decisions made this year, just to be very, very clear, in terms of equity purchases, et cetera. Exciting year ahead of us. Definitely a very, very interesting market ahead of us. Secondaries, M&A, growth and late-stage investing, and also early-stage investing will continue, just for those that were wondering. Last but not least, the public markets, the IPO market as well.

Bertrand Schmitt
One of the big questions for the IPO market would be: will SpaceX go public? Would it be good for the startup ecosystem?
Because if they suddenly go public, it would be to raise money. If they raise money, will there be any money left for anybody else? That would be an interesting test of the market. For sure, it would be proof that markets are risk-on, financing a new IPO like this one. Or, as you said, maybe there is no IPO, and it's a merger with Tesla. Time will tell.

Nuno Goncalves Pedro

Regulatory & Geopolitical Headwinds… and the Wars

Moving maybe to our topic of regulation and geopolitical headwinds, as we're seeing … definitely not tailwinds. The Google antitrust verdict and, obviously, the remedies are expected to come forward now, and a lot of people are saying, "There are some risks of structural separation." What do you think? Is it all noise, and nothing dramatic will happen in the end? Alphabet or Google? I'm not sure, actually. It's Google LLC, I think that's the case: it's United States versus Google LLC.

Bertrand Schmitt
I'm not sure. Personally, I'm not a big fan. I think there needs to be a better way to manage some anticompetitive behavior. There was this temptation to do that to Microsoft 25 years ago. Look at what happened: no one needed to break up Microsoft to leave space for others. I see the same with Google, and I guess they are happy not to be the number 1 in AI today, but to have OpenAI in front of them, even if they are doing a great job, by the way, moving forward and going faster and faster. Personally, I'm quite impressed with some of what they have released; Gemini 3 is doing great from my perspective. So I'm not a big fan of this. To be clear, it's important that bigger companies don't behave anticompetitively, but at the same time, we need to find the right approach, where it's not about breaking up these companies, and it's also not about forbidding them to do acquisitions.
Because then you end up with what NVIDIA just did: a $20 billion acquihire, IP-licensing type of acquisition, because they didn't want the uncertainty. They didn't want to wait 1–2 years in order to acquire the people and the technology, so they organised it in a different way. But I don't like that. I think they should be able to acquire companies without facing so much uncertainty. To be clear, it's not new. Uncertainty, when you are Google, NVIDIA, or others, happens. It has happened for a decade plus, 2 decades. I think there need to be, for sure, some safety valves. At the same time, we want an efficient capital market, and an efficient capital market needs companies that can acquire other companies. If you don't do that efficiently, it will be worse for the entrepreneurs, worse for the investors, worse for everybody. I think we have not reached a good equilibrium, from my perspective. We need a more efficient acquisition process, and at the same time, we need to enforce faster against anticompetitive behavior. Because what you talk about concerning Google, this is a case that is, what, 10 years old? You see what I mean? This is way too long. If you're a startup, you are dead by then. It's like the story of Netscape facing Microsoft: the remedy came long after they were dead. I think we need a different approach. I'm not sure of the best answer, and I'm not sure we'll get a better approach; there are probably too many vested interests. My hope is that it will get better with the current administration because, certainly, the past administration was very much against acquisitions and efficient markets.

Nuno Goncalves Pedro
We've talked about the European Union AI Act a bunch of times, so I don't want to spend too many cycles on that. The only thing that I would say is that we are seeing, in very slow motion, the splitting of the Internet.
I once had Tim Berners-Lee, by the way, shouting at me that we were going to break the Internet when we were applying for the .mobi top-level domain. I was part of the consortium that eventually did get the .mobi top-level domain, and I had him shouting at us. But, apparently, this is going to split the Internet, Tim. So, in case you're listening. Because it will create all these different rules: if your data relates to consumers there, then it's treated in a different way, and the US is… Well, obviously, we have the case of California with its own rules and laws. I don't know. I feel we're having a moment of siloing that goes beyond economic and geopolitical siloing. It will also apply to the digital world, and we'll start having different landscapes around it. We'll see how this affects the global expansion of services, for example, around AI, particularly for consumers, but I don't foresee anything dramatically positive. Recently, we had the whole deal around TikTok finally having a solution for their US problem, where there's now a US conglomerate that magically owns it. The conglomerate doesn't magically own it; they just straight up own it for the US. But it was driven by many of these concerns around data ownership. Where's the data? Where is it based? I think also a lot of other concerns that have to do with the geopolitics of China, obviously, being the base of ByteDance, the owner of TikTok, which is still a significant owner, by the way, of TikTok in the US. Then also the interest in the economics of making money out of something as powerful as TikTok, to be honest, in the US. Just to be clear, I don't think this was all about the best interests of consumers. It was also about money. Just follow the money.

Bertrand Schmitt
There are, for sure, some powerful interests at play. But let's be clear. I think one is data, as you rightfully said, but the other one is the algorithm. It's not as if China is authorising any competitor on its territory.
They have blocked access to most of the Internet platforms from the US, either by finding new rules or just straight blocking them. So I don't think it's fair competition. Two, you don't want some of that data in China about US or European consumers. Three, it's about the algorithm. If you are a foreign power, and, as we know, in China you had better follow what's required of you by the Chinese Communist Party, you cannot take the chance of it influencing other stuff, like elections in other countries. It's fair from the US perspective. One could even argue it's fair from a Chinese perspective to want that. I think the only one in the middle who doesn't really know what they want is Europe, because on one side, they want to benefit from American platforms; on the other hand, they want to have some controls. And on the other hand, they don't create the environment for startups to flourish. So they're in that weird situation where they have to accept some control by the big US providers, either providers of underlying infrastructure or providers of consumer-facing services, and then they try to regulate them. But I think they are misunderstanding the power relationship, and I think some of this regulation will get some blowback, at least from the current administration. Just this morning, I believe, there was some news around X being under criminal investigation in France. This is not going to end well for the French startup and VC ecosystem. This is not going to end well for France and Europe when you depend so much on your American friends.

Nuno Goncalves Pedro
Regulation will be weaponised. Regulation, constraints around exports, all of this will be weaponised geopolitically, and the bigger guys will normally win. I think that's what we've normally seen. Just on TikTok, and you guys, if you're listening to us, just see if you see a pattern here: obviously, 19.9% of the TikTok entity in the US is still owned by ByteDance.
It was initially said that 80% of the TikTok entity is owned by non-Chinese investors. Initially, people were saying US investors, and then they changed it to non-Chinese, because MGX, I think, has 15% of it. MGX is based in the UAE, connected, obviously, to Mubadala, the Abu Dhabi sovereign wealth fund. Silver Lake is in there, I think, with 15% as well, and Oracle as well with 15%. Those three are the big bucket owners: together, 45%. Silver Lake has collaborated with MGX before, and I'm sure there's a lot of connectivity there. Then you still see a pattern in the other shareholders. If you don't, then just Google it: the Dell Family Office; Vastmir Strategic Investments, which is owned by billionaire Jeff Yass; Alpha Wave Partners, obviously involved with a bunch of things like SpaceX and Klarna; Virgoli; Revolution, which is Steve Case's, a co-founder of AOL; Meritway, which is managed by partners, I think, of Dragonair; Vinova, an affiliate of General Atlantic; and also NJJ Capital, which I believe is Xavier Niel's, the French billionaire who founded Iliad. Mostly American, I think, if the math is correct: 80% non-Chinese, which was what mattered, I think, in many cases. But do see if you spot a pattern in most of those investors. I won't say anything more than that. Maybe moving to other topics, maybe just to finalise on regulation and geopolitics. In geopolitics, we should talk about wars if we predict anything. Not that we are nasty and want to be negative, but what the hell is going on? Will we have an ending to the wars we already have ongoing, or not? But before that, the struggles on the App Stores, I think, will continue, both for Apple and for the Google Play Store. The writing's on the wall: the EU keeps pushing it dramatically, and Apple keeps just doing stuff. I'm on the board of an App Store company. Apple just creates all these things that basically make you not really… It doesn't work.
You can't then provision an App Store on Apple devices, on iPhones, et cetera. We'll see how that continues, but I feel the writing's on the wall. Both Apple and Google will have to open up a bit more of their platforms. I'm not sure it will have a huge impact in the medium to long term, but definitely we need to see more openness in access to apps as given by the two big platform owners, Apple and Google, out there.

Bertrand Schmitt
Let's be clear: Google is way more open than Apple. We both have Android devices. You can install alternative app stores. It's a different ballgame by very far.

Nuno Goncalves Pedro
Google does other nasty stuff. It's public. You can check which board I'm a part of, and you can see what that company has done towards Google over time. But to your point, yes, it is true that Google has been more open than Apple, but Google has done their own things. Just to be very clear, I'll just leave that caveat bracketed there for people to think about, and maybe read a little bit about as well.

Bertrand Schmitt
I can say that, from my perspective, that path of total control that Apple has been going down on all their devices, and that includes macOS, pushed me, over the past 2, 3 years, to completely leave and abandon the Apple ecosystem. I just couldn't accept that level of control, that golden handcuff approach of the Apple ecosystem. To each their own, obviously; they are golden, these handcuffs, but they are still handcuffs. Personally, that pushed me way more to Linux, Android, Windows, back to Windows after all these years. I just couldn't stand it anymore. I want to pick my devices. I want to pick what I install on them, and I don't want to be controlled like this by just one entity for all my tech devices. For me, at some point, it was just not acceptable anymore. They're still very warm, very golden handcuffs, but for me, they were just handcuffs at this stage.
Yes, what they are doing with the App Store is very typical of that mindset. I think it's quite sad, because I think it started with good intentions in some ways: "We need a new computing paradigm; we need to make things smoother and safer." But it has really become a way to control your clients. For me, it has reached a point where it's just way too much.

Nuno Goncalves Pedro
There's obviously the "with great power comes great responsibility" that Uncle Ben told Spider-Man, or Peter Parker. But there's also: with great power comes a shitload of money, and control. So it's like, "Yeah. Should we open the server? Do we want to delay opening it up?" "Yeah." Anyway, it is what it is. Maybe let's end on the more difficult note of the episode, which is going to be around wars. What are our predictions? Will we have an end to the Gaza situation with Israel? Will we have an end to Ukraine and, obviously, Russia? What will happen in Iran? Those are the three big, big conflicts right now. Then, obviously, if we want to add bonus points: what's going to happen to Greenland, what's going to happen to Taiwan, and what's going to happen to Venezuela? Let's throw the whole basket in there. We've never had like… Let's talk about all these territories and all these countries. I'm saying this in a light manner, but it's obviously more tragic than light; people are dying, and there are a lot of implications of all of that that is happening right now. Do you have any predictions, Bertrand, for this year?

Bertrand Schmitt
No. It's tough to predict on an individual basis. On a bigger-picture basis, you have, on one side, obviously, the rise of China. You also have the rise of other countries like India which, while very indirectly connected to some of these conflicts, are still part of the game, buying oil from Russia, for instance. At the same time, I think overall, the US is clearer about who is the sheriff in town.
I think it's good, because in some ways, you cannot pay for the goods, you cannot have such a massive advantage versus nearly every other country on earth, and just not be clear about who is the boss in some ways, and, as a result, what the rules of the game are and how it should be played. The US is not alone, obviously: you have China, you have Russia, you have India, you have Europe. You have different other countries. But at some point, it's not good when countries are not rational and are not clear. I prefer the current situation, where things are clearer and where you have to assume responsibility for what you are doing. It's time to be rational again about how the world behaves. Yes, there's the concept of power and the balance of power. I think there has been that dream, maybe mostly coming from Europe, about the end of history. That's simply not the case. It's not the end of history. It's still about the balance of power. It has always been about the balance of power. If you are dumb enough to think it was not about that anymore, I have a bridge to nowhere to sell you. I don't have specific predictions, but I think it's clear there is a new sheriff in town. There is a new doctrine about the Western Hemisphere that has been in some ways resurrected on the [inaudible 00:51:35] train, and I think we'll see more of it. At this point, the biggest question is for the Europeans: what do they want to do? Because right now, their position of being a dwarf militarily while being a pretty big giant economically, I don't think it works.

Nuno Goncalves Pedro
I agree with everything that you said. I do have predictions. I'll stick a flag in the ground with my predictions.

Bertrand Schmitt
Good luck.

Nuno Goncalves Pedro
They are mostly positive. I do think we'll see an end, or for the most part an end, to the two big conflicts, the one in Gaza and the one in Ukraine.
I think Ukraine will end with a readjustment of territory and a split between Russia and Ukraine, but an end to hostilities. I think we will also see an end to the conflict in Gaza, with a readjustment of what that will mean for the Palestinian territories and the Palestinians in general. Of that I'm not sure, but I feel there will be an end to those two big conflicts. Iran, I have no clue. I will not put a stick in the ground there; I have no clue. There are so many things that could go wrong. I've been reading some really interesting thoughts, even some aggressive thoughts, that this might be the time to really change regimes in Iran and for the US to take a bit more of an aggressive stance. I really don't have a perspective. Obviously, there's a lot at stake there. Then, if we talk about the other parts: Greenland, I will not opine on too much. Maybe we're done for now; maybe there'll be some other concessions to the US that weren't already there in the '50s. Taiwan, I won't bet on either. I'm sad to say I think it might happen at some point in time, but I'm not sure when or what would drive it. Last but not least, Venezuela is my only really negative prediction. I feel it will continue to be a significant dictatorship, as it was before, managed enough by other people, with the difference now that it has a tax to be paid to the US in the form of oil of some sort, et cetera, and maybe gas, maybe other things as well, that it didn't have before. That's probably my most negative prediction for the coming year on the geopolitical side.

Bertrand Schmitt
Without going into detail, I would mostly agree with what you shared. At least that makes sense. But, as we know, it's not always about what makes sense, but about what might happen. I can tell you 100% I would not have guessed this operation against Maduro.
This was so well done, well executed, and shocking at the same time that it's… I think it shows that it's hard to guess some of this stuff, because there are certainly some new ways to wage limited war, for instance. So it's certainly interesting, and we certainly need to get used to pretty bombastic statements. But for Venezuela, I don't think it can be worse than what it was before. I'm probably more optimistic that gradually it can get better.

Nuno Goncalves Pedro
Just to put in perspective why we're not making predictions on some of these elements, I think this is a funny story: I was in Madeira. Actually, it was the first time I was in Madeira, although I'm originally from Portugal; I'd never been to the islands. Obviously, as you guys know, or some of you might know, there's a lot of connection between Madeira and Venezuela. There's a lot of emigration from the Madeira Islands to Venezuela. One of my Uber or Bolt drivers there in Madeira was Venezuelan: born in Venezuela, but of Portuguese descent, et cetera. He was telling me, and this was still last year, late last year, because I told him I lived in the US, et cetera, "Oh, hopefully, Trump will get Maduro out of there." In my mind, I was like, "Dude." No disrespect to the gentleman, but it's like, "Okay, man, your perspective on geopolitics is maybe a little bit exaggerated." And a couple of days later, we know what happened. When geopolitical decisions are better predicted by some probably very astute Uber drivers, you're like, "Maybe I shouldn't make a bet. I have no clue what's going to happen, no clue what's going to happen in Greenland, et cetera." Anyway, a couple of predictions on that element.

Bertrand Schmitt
That's why you're so right.
You have to be careful with predictions, but that doesn't remove the fact that nations and companies that have to play a global game must understand, in some way, what the game is, what powers are in place, and what could potentially happen, but also be realistic. Not about wishes and dreams, but about: what is the power relationship? Who has the money? Who has the means? Who has the capacity to do this or that? Because if you start that way, the scope of what's possible and reasonable becomes clear more quickly. Some things, like what happened with Maduro, I would never have predicted, but for sure, if there's one country that can do this sort of thing, it's the US. I'm not sure anyone else has the technology and the means, in terms of support infrastructure, to do something like this. It's tough to predict what will happen a year from now for any specific country, but the more you try to understand the forces in play and their capacity, and the more you understand and accept that at some point it's all about realpolitik and relationships of power, the more your eyes will be wide open about what's possible versus simple wishful thinking. Nuno Goncalves Pedro Fintech, Crypto and Frontier Tech Moving maybe to our last section, around fintech, crypto, and frontier tech. For me, just two very quick predictions, views of the world. On the frontier tech side, I won't make a prediction. I will just tell you all to go and listen to our episodes: the one on infrastructure, which is immediately prior to this one, and the episodes we've had on a couple of other topics, including AI and what's the future of your children, because I think they illustrate a lot of the points we see manifesting themselves over the next year and over the next 2 or 3 years beyond that. I feel those episodes are complete in and of themselves, so you can just go and listen to them. Then my second comment is on crypto.
I feel crypto has become of the essence and, particularly under the current administration in the US, very favored. Obviously, we are now in a world where crypto is just part of the economic system, and I think we'll see more and more of that emerging; in some ways, crypto is becoming mainstream. The question is: which blockchains will be the blockchains of the future? Obviously, there are a bunch of bets out there. We ourselves, as Chamaeleon, have one investment in one of the significant bets in the space. But beyond who's going to win or not, we feel that we're past the crypto winter. It's now mainstream days, and we'll see a lot more activity there. Bertrand Schmitt I must say, with crypto, I'm a bit confused. As you say, we are past the crypto winter. There is much less uncertainty in regul
Iran's most dangerous weapon is a threat we can't ignore. Also, the Labor government is finally checking its grants to mosques mourning the late ayatollah. Barnaby Joyce joins the program to discuss expensive cultural burns. See omnystudio.com/listener for privacy information.
Bolt has become the first major e-hailing platform to officially register under South Africa’s new transport regulations, receiving its Certificate of Registration from the National Public Transport Regulator. But what does this mean for drivers and riders? Africa Melane speaks to Simo Kalajdzic, Senior Operations Manager for Bolt South Africa.
Early Breakfast with Africa Melane is 702’s and CapeTalk’s early morning talk show. Experienced broadcaster Africa Melane brings you the early morning news, sports and business, and interviews politicians and analysts to help make sense of the world. He also enjoys chatting to guests in the lifestyle sphere and the arts. All the interviews are podcasted for you to catch up and listen. Thank you for listening to this podcast from Early Breakfast with Africa Melane. For more about the show click https://buff.ly/XHry7eQ and find all the catch-up podcasts here https://buff.ly/XJ10LBU Listen live on weekdays between 04:00 and 06:00 (SA Time) to the Early Breakfast with Africa Melane broadcast on 702 https://buff.ly/gk3y0Kj and CapeTalk https://buff.ly/NnFM3N Subscribe to the 702 and CapeTalk daily and weekly newsletters https://buff.ly/v5mfetc
Follow us on social media:
702 on Facebook: https://www.facebook.com/TalkRadio702
702 on TikTok: https://www.tiktok.com/@talkradio702
702 on Instagram: https://www.instagram.com/talkradio702/
702 on X: https://x.com/Radio702
702 on YouTube: https://www.youtube.com/@radio702
CapeTalk on Facebook: https://www.facebook.com/CapeTalk
CapeTalk on TikTok: https://www.tiktok.com/@capetalk
CapeTalk on Instagram: https://www.instagram.com/
CapeTalk on X: https://x.com/CapeTalk
CapeTalk on YouTube: https://www.youtube.com/@CapeTalk567
Trump blasts Keir Starmer for hindering the war effort. Also, Iran has a new supreme leader - is he even worse than his assassinated father?
The taxi industry is struggling: fewer and fewer rides, and more and more competition, above all from ride-hailing platforms such as Uber or Bolt. Jana Niehoff on an industry in upheaval. By Jana Niehoff.
For Amsterdam, the Bolt app launches the option to book a female taxi driver, giant chess is back at a trial location on Frederiksplein, and the growing international electorate could affect the outcome of the approaching Amsterdam municipal elections. A short news round-up out of Amsterdam from 4 March 2026. Remember: the local municipal elections take place on Wednesday 18 March 2026. You can find out more at amsterdam.nl/verkiezingen or amsterdam.nl/en/elections. Podcast audio produced by Broadcast Amsterdam for BRAM RADIO, the online radio station for Amsterdam: broadcastamsterdam.nl
Producer and newsreader: Cathy Leung
Music bed: We Are OK
Links to news stories and sources are shared in the News section on our website and on the Broadcast Amsterdam Pinterest feed.
Safety is a core concern for e-hailing operators, as it ensures that platforms engender trust among drivers, passengers and the general public. Bolt recently commissioned market research firm Ipsos to conduct research into perceptions of rider safety in South Africa's e-hailing market. In this episode of TechCentral's TCS+, Simo Kalajdzic, senior operations manager at Bolt South Africa, discusses findings from the report and how Bolt has used them to inform decision-making regarding its approach to safety on its platform. Kalajdzic delves into:
* The rationale behind Bolt's commissioning of the report;
* Why market research firm Ipsos was chosen to conduct the research;
* Key findings from the report and the products Bolt has developed using those insights;
* The key drivers fuelling e-hailing adoption in South Africa and where safety ranks compared to other factors like reliability and cost;
* Scenarios that lead to South Africans choosing e-hailing over other transport types;
* How e-hailing compares to other modes of transport in terms of safety perception;
* What survey respondents said about e-hailing's impact on drunk driving in their respective cities;
* Those features of e-hailing apps that make users feel safer compared to other types of transportation; and
* What users can do to maximise their safety levels when using the platform.
TechCentral
Kyle Crooks sits down with head coach Will Bolt to recap the weekend series at Auburn, and look ahead to this week at home.
A little girl in Kenya was declared dead for over 40 minutes — and after prayer, she began breathing again. Later, she was seen jumping rope at school, healed from lifelong medical issues. In this episode of the I Like Birds Podcast, Pastor Brian Bolt shares the powerful testimony behind his new book Revival Fire Now and everything God did on that trip to Kenya. Brian walks through:
• How God called him during global shutdowns to gather large crowds and preach the gospel
• Organizing crusades when vendors refused to deliver essentials like chairs and porta potties
• Seeing thousands of people give their lives to Jesus across multiple nations
• Miracle testimonies from Africa, Indonesia, Central and South America
• The prophetic word: “You'll see the dead raised” — and what happened in Kenya
• What “now faith” really means and why revival requires fire
• His personal testimony of surviving a gunshot wound and giving his life to Christ
• How to stay focused on Jesus in a culture full of distractions
This conversation covers revival, obedience, faith, evangelism, miracles, integrity in ministry, and what it means to say “yes” to God even when you don't know how it will work.
The Roadrunners have earned their first-ever regular-season Top 25 ranking after knocking off Ohio State, #9 Coastal Carolina, and Baylor at the Bruce Bolt Classic in Daikin Park. Dan recaps the three victories, highlighting some very impressive performances from both the pitching staff and the lineup. Next up for the Roadrunners is a mid-week road trip to Corpus Christi, where UTSA has traditionally struggled on the island. Can UTSA avoid the post-ranking letdown?
This archived episode of the Texas Predator Hunting Podcast comes from a live Instagram Q&A session. Wade answers two straight hours of listener questions covering:
• 6 ARC gas system tuning
• 22 ARC vs 22 Creedmoor
• Adjustable gas blocks & bleed-off mode
• Suppressor setups (flow-through vs traditional)
• AR buffer weights & tuning philosophy
• Barrel life (22-250, 22 Creed, 6 ARC)
• Proof Research barrels
• Bolt & BCG myths
• Load development basics
• Cleaning procedures for bolt guns & ARs
• Titanium vs stainless suppressors
• Trigger recommendations
• Factory ammo performance insights
This one goes deep into real-world rifle setup, tuning for cold weather, and maximizing AR platform performance for predator hunting. If you're running 6 ARC, 22 ARC, 22 Creed, or tuning a suppressed AR, this episode is packed.
allymunitions.com
The Albanese government pleads with Australians – stop panic buying petrol. But has this war exposed how vulnerable we are? Plus, why has the government given $670,000 to Muslim groups now mourning the death of Iran's leader?
Episode number 147. How do you dominate a sport? How do you become one of those athletes capable of winning, then winning again, until you become the reference point for your discipline? How do you handle the pressure, the anxiety, the loneliness of always being the one to beat? In this episode of Linee, we try to understand it by looking at the stories and the different styles of dominance among athletes. From Phelps, who always won in swimming, to Bolt and his overwhelming power in the sprints, champions whose main rivals were themselves and the stopwatch; dominators like Serena Williams and Steffi Graf, capable of staying at the top for long stretches; Djokovic, always driven by the will to confirm he is number one; and finally Federer, the man who marked an era through his talent and class. And then champions like Eddy Merckx and the Cuban Mijaín López, winners who know no age, down to the leaders of today's sport, such as Armand Duplantis, Katie Ledecky, Tadej Pogacar and Johannes Klaebo. There is no single way to dominate, but one thing unites all dominators: the will to keep improving themselves to satisfy their constant thirst for success. And then the other stories: the consequences for sport of the war in Iran and the Middle East; hockey in the USA and the controversies around the soccer World Cup; a basketball player who risked the death penalty; and an update on what the LIV Tour is doing in golf. ------------------------------------------------------------------------ Follow Linee on Instagram and TikTok! This is the official website. This is the YouTube channel. The link to sign up for the newsletter is THIS ONE. HERE is the link to the survey to help Linee improve. Learn more about your ad choices. Visit megaphone.fm/adchoices
The global logistics industry has experienced extremes in recent years: from pandemic-driven overheating and exploding freight rates to geopolitical volatility and the subsequent normalization. In this complex environment, M&A is far more than a pure transaction business; it is a central instrument for portfolio transformation and for securing the long-term competitiveness of a global market leader. My guest is Katharina Schaffhauser, Global Head of M&A at Kuehne+Nagel. We talk about her path from transaction advisory via private equity to the top of the M&A agenda of a logistics giant with €25 billion in revenue. Katharina explains how the strategic roadmap “Vision 2030” steers global acquisitions, and how to lead an M&A team that drives the transformation of global trade routes. In this episode we look at:
her career path from PwC to Kuehne+Nagel,
building and leading a global M&A team,
M&A as a tool for portfolio transformation,
future trends such as autonomous driving and digital maturity,
challenges in owner-led succession arrangements,
and much more...
Enjoy listening!
***
Timestamps:
(00:00:00) Intro
(00:01:12) Welcome and shared history at PwC
(00:02:51) Learning curve in transaction advisory at the Big Four
(00:04:20) Moving from Transaction Services into private equity
(00:08:13) Experience as Investment Director at Capvis
(00:09:37) Personal development and strategy focus
(00:11:21) Joining Kuehne+Nagel and building the strategy
(00:12:46) The M&A agenda and the Vision 2030 roadmap
(00:14:24) Introducing Kuehne+Nagel: 135 years of logistics history
(00:16:54) The importance of broad diversification for stability
(00:17:47) The global M&A team: structure and worldwide footprint
(00:19:58) Bolt-on strategy and focus on specialty segments
(00:21:06) Deal sourcing and market knowledge as a competitive advantage
(00:24:30) The role of business units as sponsors for M&A targets
(00:25:48) Pipeline management and Katharina's global M&A day-to-day
(00:27:50) The relevance of deal sizes and enterprise value
(00:29:21) Integrating owner-managed companies into a group
(00:32:37) Market outlook: the current state of the global logistics industry
(00:35:48) Technology trends: autonomous driving on the road
(00:46:42) Sustainability and ESG as drivers of group strategy
***
All links for this episode:
Kai Hesselmann on LinkedIn: https://www.linkedin.com/in/kai-hesselmann-dealcircle/
CLOSE THE DEAL on LinkedIn: https://www.linkedin.com/company/closethedeal-podcast
Katharina Schaffhauser on LinkedIn: https://www.linkedin.com/in/katharina-schaffhauser-32758137/
Kuehne+Nagel on LinkedIn: https://www.linkedin.com/company/kuehne-nagel/
Website CLOSE THE DEAL: https://dealcircle.com/ClosetheDeal/
***
AMBER and DUB.de are the platforms for secure business successions.
Stop by if you want to list your company for sale quickly, securely and free of charge, or if, as a buyer, you are looking for suitable deals:
www.amber.deals
www.dub.de
***
Are you an M&A advisor in the small- or mid-cap segment looking for an overview of all relevant deals? Now quickly the
Former US army general Jack Keane joins the program to provide his expert analysis on the situation in Iran. Plus, the ABC is slammed for its disgraceful coverage of what's happening in the Middle East.
Bolt Bros break down EVERY major takeaway from Joe Hortiz's 2026 NFL Combine press conference! The Chargers GM laid it all out: massive focus on fixing the offensive line after Bradley Bozeman's retirement and heavy pressure on Justin Herbert (54 sacks in '25
In this episode, Rory speaks with Nick Pasquarosa, Founder and CEO of Bookkeeper360, about building a modern accounting firm from the ground up and why innovation starts with listening to small business owners. Nick shares how a door-to-door side hustle in high school evolved into a nationwide cloud accounting firm serving nearly 1,000 clients with a fully remote team across 26 states. He explains how his role shifted from boots-on-the-ground bookkeeper to strategic CEO, and why leadership communities like EO, YPO, and Hampton helped accelerate that growth. The conversation dives into the firm's technology roadmap, including the development of its proprietary AI tool BOLT, designed to deliver CFO-level insights at scale while preserving the human relationship. Nick also unpacks how AI is being used internally to surface advisory insights, streamline month-end analysis, and reduce burnout without sacrificing value. Want to know how firms can leverage technology without losing their human edge? Curious how AI can enhance advisory conversations rather than replace them? Find out the answers to these questions and more in this forward-looking conversation with Bookkeeper360's CEO Nick Pasquarosa.
The Southeastern 16 crew predicts outcomes for each of the 16 SEC weekend series. Texas and Ole Miss each face Coastal Carolina, Baylor and Ohio State at the Bruce Bolt Classic in Houston, Texas. Alabama faces Iowa, Oregon State and Houston in the Frisco (Texas) College Baseball Classic. Mississippi State, Tennessee and Texas A&M take on Arizona State, Virginia Tech and UCLA in the Amergy Bank College Baseball Series in Arlington, Texas. Vanderbilt faces UC Irvine, Arizona and Oregon in the Las Vegas Classic. Florida travels to Miami for a huge rivalry series. South Carolina plays a home, away and neutral-site game with Clemson. Meanwhile, the rest of the league plays at home including Arkansas (hosting UT Arlington), Auburn (Nebraska), Georgia (Oakland) and Kentucky (St. John's), Missouri (North Dakota State), LSU (hosting Northeastern and Dartmouth) and Oklahoma (Gonzaga). Southeastern 16 Merch: https://se16.printify.me/ &COLLAR Stretchy. Wrinkle-proof. Built to look sharp. Welcome to Workleisure. Use promo code SEC16 for 16% off! https://andcollar.com/ HOMEFIELD https://www.homefieldapparel.com/ ICON WALLETS Use promo code SEC16 for 20% off! https://icon-wallets.com/ ROKFORM Use promo code SEC25 for 25% off! The world's strongest magnetic phone case! https://www.rokform.com/ JOIN OUR MEMBERSHIP Join the "It Just Means More" tier for bonus videos and live streams! Join Link: https://www.youtube.com/channel/UCv1w_TRbiB0yHCEb7r2IrBg/join FOLLOW US ON SOCIAL MEDIA Twitter: https://twitter.com/16Southeastern ADVERTISE WITH SOUTHEASTERN 16 Reach out to se16.caroline@gmail.com to find out how your product or service can be seen by over 200,000 unique viewers each month! Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Grace Tame shuts down the PM's apology to her, evidence of IS-inspired attacks against gay people, and Mike Newman talks about Australia's bloated public service.
25 Feb 2025. Dubai Taxi Company (DTC) is preparing to enter the Abu Dhabi market through the Bolt e-hailing app. CEO Mansoor Alfalasi joins us to discuss the expansion and the company’s full-year results. Plus, with Ramadan and the weather on side, we ask whether there’s still strong demand for Ramadan tents. And the Dubai Duty Free Tennis Championships is expanding its stadium capacity, we hear from Ramesh Cidambi on the strategy behind the move.
Prime Minister Albanese and President Trump both heckled while delivering speeches, can you guess who handled it better? Plus, a bomb scare at Albanese's residence.
Colin and I just spent two nights at Bolt Farm Treehouse and I'll be honest… I can't turn the hospitality side of my brain off. If you're in the short-term rental or boutique hotel world, you already know their story. It's talked about a lot. But this episode isn't about the hype. It's about what it actually feels like to check in as a guest at a high-end brand everyone admires. What surprised me. What impressed me. What made me pause. Because here's the truth: not everything was perfect. There was a last-minute room switch. An "all-inclusive" experience that wasn't fully inclusive for dietary needs. A few small design and communication hiccups. And yet… it still worked. We still left feeling connected. Rested. Thought about. Why? Because the welcome was strong. The intention was clear. The storytelling was emotional. From the curated in-room upgrades to the cocktail hour experience to the way they invited guests into something bigger than just a place to sleep, there were so many moments that reminded me what hospitality is actually about. In this episode, I break down: ➡️What they did exceptionally well ➡️Where clarity and alignment could have been stronger ➡️What "all-inclusive" really needs to mean ➡️How guest flow impacts shared spaces like wellness areas ➡️Why appealing to all five senses changes everything And the biggest takeaway? Your property does not have to be flawless to be unforgettable. Excellence isn't perfection. It's thoughtfulness. It's alignment. It's making sure the experience you promise is the experience your guest actually walks into. If you're building a micro-resort, refining your STR, or dreaming about something bigger, this one will challenge you in the best way. Let's get into it. Connect with Steph: @theweberco Apply to work with us: theweberco.com
The Albanese government's Royal Commission into antisemitism opened today. Plus, the Epstein scandal has cost Britain a prince and an ambassador, but who in America has paid the price?
Thomas and Wolfgang talk about a topic that is massively underestimated in the training and therapy scene: business. Why professional competence alone is not enough, and why trainers and therapists absolutely should engage with entrepreneurship, investments and private retirement planning. They discuss managing shift work, especially night shifts for bakers, why a fixed sleep routine is crucial for the immune system, performance and well-being, and how strongly the circadian rhythm shapes our everyday life. They also settle whether Kylian Mbappé really sprints as fast as Usain Bolt, and what this comparison teaches us about training, talent and specialization. An episode about responsibility, performance and long-term thinking, in business as in training.
During Herman Bavinck's life and for many years afterward, he was well known mainly in the Netherlands, where he was born. But today, people around the world are discovering his writings and realizing their importance. Why is that? In part, it's because Bavinck faced new challenges with honesty and humility, without compromising his Christian beliefs. Today, as we face many new challenges, we can learn a lot from Bavinck. Join Linus, Leia, and Sean as they share their excitement about this great theologian with Dr. John Bolt, professor emeritus of Systematic Theology at Calvin Theological Seminary in Grand Rapids, Michigan. Thanks to the generosity of Reformed Fellowship, we are pleased to offer two copies of Herman Bavinck by Simonetta Carr. Enter here to win. Show Notes: Dr. Bolt had some additional notes about Bavinck to share with our listeners: Bavinck frequently spoke of the gospel as a "pearl of great price" (or treasure) and as a "leaven." The gospel is the most important thing in the world; it brings us into fellowship with God in Christ. But, secondarily, it is also a leaven because it changes individuals and societies. Bavinck also frequently quotes James 1:17: "Every good and perfect gift is from above, coming down from the Father of lights in whom there is no changing."
Where is Sarah Ferguson hiding? Andrew Lownie delves into possible reasons why she is hiding. Plus, a Muslim senator tells Pauline Hanson to leave Australia.
Use promo code BOLTBROS on Sleeper and get 100% match up to $100! https://Sleeper.com/promo/BOLTBROS. Terms and conditions apply. #Sleeper
In this video, we dive deep into snap counts, sacks allowed, pressure rates, PFF grades, and overall league rankings to see who truly stands out at the center position in 2025. Tyler Linderbaum logged 1,007 snaps with an 80.3 overall PFF grade (5th out of 40 centers) and an elite 83.7 run-blocking grade. His 3.85% cumulative pressure rate since 2023 shows consistent pass protection efficiency. Connor McGovern played 1,037 snaps and allowed zero sacks for the second straight season, finishing with a 2.8% pressure rate — a major improvement from 2024. His 73.4 pass-block grade highlights strong protection reliability. Bradley Bozeman played 1,058 snaps but ranked 40th out of 40 centers with a 51.7 overall grade. Despite similar snap volume, his pressure rate and blocking grades lagged behind the competition.
AFC West Roundtable: https://www.youtube.com/@AFCWestRoundtable
Links: https://www.Beacons.ai/boltbros https://www.riverslake.org/
Merch! https://nflshop.k77v.net/Ry9ymX https://www.boltbros.live/merch
#lachargers #chargers #nfl #boltup #shorts #memes #meme #justinherbert #jimharbaugh #nflfootball #TylerLinderbaum #ConnorMcGovern #BradleyBozeman #NFL #NFL2025 #OffensiveLine #CenterPosition #NFLAnalysis #PFFGrades #FootballBreakdown #NFLRankings #Oline #FootballAnalytics #NFLComparison #SportsDebate
Rich sits down in Boulder with movement expert Lawrence Van Lingan to break down why most running “fixes” miss the point—and how crawling patterns, breathing, and nervous system health can unlock better form, fewer injuries, and more sustainable HYROX performance.
00:00 — Why this convo can change how you train for HYROX
07:45 — The “missing link” in PT + rehab: relationships through the body
18:30 — Stop over-cueing: crawling patterns that clean up running naturally
33:10 — Vagus nerve + HRV: the hidden driver of sustainable performance
47:40 — Breathing truth bombs: CO₂ tolerance, Bolt score, and nose-breathing (without the hype)
Today on Galway Talks with John Morley:
9am-10am: HSE says the elective hospital at Merlin Park is proceeding – but denies 200 beds were ever part of the plan. We'll be speaking to the Minister for Education and Galway West FG TD Hildegarde Naughten for further clarity. Anger in Connemara as a bus park in Kylemore goes ahead while other plans were rejected on environmental grounds.
10am-11am: The CCPC urges Ireland to open the taxi market to Uber and Bolt – but what do taxi drivers think? We'll be finding out. Searches continue after the former Prince Andrew's release from custody in England; we speak to a reporter in London. Finbar Wright is coming to perform on a Galway stage – he'll join us live in studio this morning.
11am-12pm: Galway Thoughts Panel – Deputy Pete Roche and Cllr Alan Curran discuss what's been making the headlines this week. We'll also look ahead to all the weekend's sporting action with Darren Kelly.
How SaaS CEOs Should Navigate AI-Native, AI-Augmented, and Bolt-On AI Strategies to Protect Revenue and Reduce Churn. Guest: Ken Lempit, President & Chief Strategist at Austin Lawrence Group.
AI is not just another feature cycle — it's an inflection point for SaaS. In this episode of SaaS Backwards, Ken Lempit steps into the guest seat to break down what AI really means for SaaS companies, especially mid-market and enterprise software vendors trying to protect revenue while planning their next product evolution. Ken draws a powerful parallel between today's AI shift and the early 2000s transition from client-server to cloud — arguing that this AI cycle is moving faster and carries even greater competitive risk. He explains the critical differences between:
• AI-native SaaS products
• AI-augmented platforms
• Bolt-on AI features
And why the wrong strategy could quietly increase churn, shrink pipeline, and erode relevance. You'll also hear:
• How to diagnose whether you have a GTM problem or a product relevance problem
• Why “vibe coding” poses real risk to mid-market SaaS vendors
• Short-term product and pricing moves to survive the next 12–18 months
• Lessons from BackEngine's pivot from conversation mining to revenue enablement
• Why your AI narrative may matter more than your marketing spend
If you're a SaaS CEO, founder, or go-to-market leader wondering how aggressive your AI roadmap needs to be, this episode is your strategic wake-up call. Get a free SaaS GTM Checkup: https://info.austinlawrence.com/saas-gtm-checkup
---
Not Getting Enough Demos? Your messaging could be turning buyers away before you even get a chance to pitch.
In this episode, we dive into the BOLT Score, a widely recommended test in the world of breathing training. We discuss what the BOLT Score measures, its relevance for hikers and mountaineers, and whether and how it should be used. == Want to get fit, strong and resilient for your hiking adventures? Check out the Online Summit Program: https://www.summitstrength.com.au/online.html
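For context on the test discussed above: a BOLT (Body Oxygen Level Test) score is simply a stopwatch measurement — after a normal exhale, you time the comfortable breath-hold until the first distinct urge to breathe. As a toy illustration only (the thresholds below follow commonly cited breathing-training guidance, not anything stated in this episode, and are not medical advice), a minimal sketch of bucketing a measured score:

```python
def interpret_bolt_score(seconds: float) -> str:
    """Bucket a BOLT score (seconds of comfortable breath-hold after a
    normal exhale, stopped at the first distinct urge to breathe) into
    rough bands. Thresholds are illustrative, not medical advice."""
    if seconds < 0:
        raise ValueError("a breath-hold time cannot be negative")
    if seconds < 10:
        return "low"            # breathing pattern likely needs attention
    if seconds < 20:
        return "below average"
    if seconds < 30:
        return "moderate"
    return "good"               # ~40s is often cited as a long-term target

print(interpret_bolt_score(8))   # low
print(interpret_bolt_score(25))  # moderate
```

The exact cut-offs vary by source; the point is only that the score is a single repeatable number you can track over time.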
From a sequence starting in 2025. You can join, live, each Tuesday, 7.30 p.m. Ireland time (the same as UK time)! Information about the sequence can be found here: https://first164.blogspot.com/p/zoom164.html
The ISIS brides scandal deepens, the US prepares for war against Iran, and Jacinta Allan loses her cool.
The Irish taxi market should be opened up to facilitate ride-hailing platforms such as Uber or Bolt. That's the call from the Competition and Consumer Protection Commission, whose Chair Brian McHugh joined Anton this morning. Also joining the discussion was David Mitchell, spokesperson for the All-Ireland Taxi Representatives Association.
We're keeping the AI Tools series rolling with Adir Traitel, entrepreneur, product leader, and early adopter of just about every vibe coding tool out there. Adir joins Matt and Moshe to share hard‑won lessons from building real apps with v0, Bolt, Replit, Figma Make, and more, all while running his own startup and consulting on product builds across industries. From his early days in project management and mobile app startups, through work with companies like Moovit and across FinTech, AgTech, and credit scoring, Adir has consistently been the “try it first” person for new build tools. In this episode, he breaks down what these platforms actually do well, where they fall short, and how product managers can use them responsibly for experiments, prototypes, and beyond. Join Matt, Moshe, and Adir as they explore:
• Adir's journey from PM and founder to heavy user of vibe coding tools in his current startup
• His 3-layer view of the ecosystem: AI dev assistants (Cursor, Antigravity, Claude Code), front-end mockup tools (v0, Figma Make), and full‑product builders (Lovable, Base44, Bolt, Replit)
• v0: where it shines for quickly building functional UIs (like his electricity consumption app) and where it starts to crack
• Lovable: great for sites and simple flows, but not ideal for complex SaaS or CRM‑like products
• Bolt: fun and fast for concepts, but why it never got him close to production
• Replit: stronger agents and capabilities, but weaker UI output and surprising backend defaults that can get very expensive very quickly
• Figma Make and Google Stitch: when design quality trumps everything else, especially for SaaS interfaces
• The real costs of vibe coding: AI token spend, hosting/pricing traps, and why production economics matter as much as build speed
• What his “dream product” would look like, including multi‑agent environments, better security/privacy, and built‑in QA and CI/CD
• How all this is reshaping the product management role, and why curiosity and tool fluency are becoming must‑have skills
And much more!
Want to connect with Adir or learn more?
LinkedIn: https://www.linkedin.com/in/adirtraitel/
Website: https://adirtraitel.com/
You can also connect with us and find more episodes:
Product for Product Podcast: http://linkedin.com/company/product-for-product-podcast
Matt Green: https://www.linkedin.com/in/mattgreenproduct/
Moshe Mikanovsky: http://www.linkedin.com/in/mikanovsky
Note: Any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way.
Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️
A new book could prove Bolt was right regarding an activist's Indigenous heritage claim, Jim Chalmers is starting to crack as he handles the economy poorly, and the government continues its weak stance against the ISIS brides. See omnystudio.com/listener for privacy information.
It's incredible how a tiny USB adapter/receiver can make someone so happy. Basically, with this gadget there is no longer any waiting when my Logi MX Keys Mini keyboard connects to the computer. I invite you to discuss this topic in the TuPodcast Community Forum: https://foro.tupodcast.com Other ways to contact me can be found at: https://ernestoacosta.me/contacto.html All the places where I publish content are listed at: https://ernestoacosta.me/ If you want to buy RØDE products, this is my affiliate link: https://brandstore.rode.com/?sca_ref=5066237.YwvTR4eCu1
The Coalition has picked its new frontbench team to take on the government, former General Jack Keane gives an analysis of Trump's potential war plans, and Drew Pavlou talks about why he was kicked out of the US. See omnystudio.com/listener for privacy information.
Angus Taylor is facing pressure from his party, a sociologist discusses the stark difference between pro-Palestinian and Iranian protests, and Drew Pavlou talks about why he thinks he was deported from the US. See omnystudio.com/listener for privacy information.
Season two of the Husker247 Nebraska Baseball Podcast kicks off with a quick look at the start of Nebraska's 2026 baseball season. The Huskers are set to get underway this weekend in Arizona and the pod takes a look at a couple of keys for the Big Red as the season gets rolling. In the second half of the podcast, Nebraska head coach Will Bolt joins to discuss the team ahead of the start of the season. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod
On YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction, quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to really challenge the models: how well do these models generalize? And we've seen in some of the latest CASP competitions that while we've become really, really good at proteins, especially monomeric proteins, other modalities still remain pretty difficult. So it's really essential that the field has these efforts to gather challenging benchmarks; it keeps us honest about what the models can and cannot do.

Gabriel [00:06:26]: Yeah, it's interesting you say that. In some sense, at CASP 14 a problem was solved, and pretty comprehensively, right? But at the same time, it was really only the beginning. So what was the specific problem you would argue was solved, and what remains, which is probably quite open?

RJ [00:06:48]: I think we'll steer away from the term "solved," because we have many friends in the community who get pretty upset at that word, and I think fairly so. But the problem a lot of progress was made on was the ability to predict the structure of single-chain proteins. Proteins can be composed of many chains, and single-chain proteins are just a single sequence of amino acids. One of the reasons we've been able to make such progress is that we take a lot of hints from evolution. The way the models work is that they decode a lot of hints that come from evolutionary landscapes.
So if you have some protein in an animal, and you go find the similar protein across different organisms, you might find different mutations in them. And as it turns out, if you take a lot of those sequences together and analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions, a sort of correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the breakthrough has been our ability to decode that very, very effectively. But what it also implies is that in the absence of that co-evolutionary landscape, the models don't perform as well. So when that information is available, maybe one could say the problem is somewhat solved from the perspective of structure prediction; when it isn't, it's much more challenging. And it's worth differentiating two things we sometimes conflate: structure prediction and folding. Folding is the more complex process of actually understanding how the protein goes from a disordered state into a structured state, and I don't think we've made that much progress on it. But the idea of going straight to the answer, we've become pretty good at.

Brandon [00:08:49]: So there's this protein that is just a long chain, and it folds up. And we're good at getting from that long chain, in whatever form it was originally, to the final thing. But we don't know how it necessarily gets to that state, and there might be intermediate states that it's in sometimes that we're not aware of.

RJ [00:09:10]: That's right. And that relates to our general ability to model the fact that proteins are not static. They move; they take different shapes based on their energy states.
And I think we are also not that good at understanding the different states a protein can be in, and at what frequency, what probability. So the two problems are quite related in some ways. Still a lot to solve. But it was very surprising at the time that even with these evolutionary hints we were able to make such dramatic progress.

Brandon [00:09:45]: So I want to ask why the intermediate states matter. But first, I want to understand: why do we care what proteins are shaped like?

Gabriel [00:09:54]: Yeah. Proteins are the machines of our body. The way all the processes in our cells work is typically through proteins, sometimes other molecules, and their interactions, and through those interactions we get all sorts of cell functions. So when we try to understand a lot of biology, how our body works, how diseases work, we often try to boil it down to: what is going right in the case of normal biological function, and what is going wrong in the disease state? And we boil that down to proteins and other molecules and their interactions. So when we predict the structure of proteins, it's critical to have an understanding of those interactions. It's a bit like the difference between having a list of parts you would put into a car and seeing the car in its final form: seeing the car really helps you understand what it does. On the other hand, going to your question of why we care how the protein folds, or how the car is made: sometimes something goes wrong. There are cases of proteins misfolding.
In some diseases and so on. If we don't understand this folding process, we don't really know how to intervene.

RJ [00:11:30]: There's this nice line, I think in the AlphaFold 2 manuscript, where they discuss why we were even hopeful that we could target the problem in the first place. The notion is that, well, for proteins that fold, the folding process is almost instantaneous, which is a strong signal that we might be able to predict this very constrained thing that the protein does so quickly. Of course that's not the case for all proteins, and there are a lot of really interesting mechanisms in the cell, but I remember reading that and thinking it was an insightful point.

Gabriel [00:12:10]: One of the interesting things about the protein folding problem, and part of the reason why people thought it was impossible, is that it used to be studied as a classical example of an NP problem. There are so many different shapes these amino acids could take, and the number grows combinatorially with the length of the sequence. So there used to be a lot of theoretical computer science thinking about and studying protein folding as an NP problem. And it was very surprising, also from that perspective, to see
machine learning work. Clearly there is some signal in those sequences, through evolution but also through other things that we as humans are probably not able to understand, but that the models have learned.

Brandon [00:13:07]: We were talking to Andrew White a few weeks ago, and he said he was following the development of this, and that there were actually ASICs developed just to solve this problem. So there were many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing you mentioned was this co-evolution of mutations that you see again and again in different species. Explain why that gives us a good hint that the positions are close to each other.

RJ [00:13:41]: Think of it this way: if I have some amino acid that mutates, it's going to impact everything around it in three dimensions. So it's almost as if the protein, through probably random mutations and evolution, ends up figuring out that this other amino acid needs to change as well for the structure to be conserved. The whole principle is that the structure is probably largely conserved, because there's a function associated with it. So it's really different positions compensating for each other.

Brandon [00:14:17]: I see. Those hints in aggregate give us a lot. So you can start to look at what is close to each other, and then at what kinds of folds are possible given the structure, and then what the end state is.

RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right.
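The co-evolution signal RJ describes can be made concrete with a toy mutual-information score between alignment columns: positions that mutate together score high, hinting at 3D contact. This is a deliberate simplification (real pipelines correct for phylogenetic bias and use stronger methods such as direct coupling analysis), and all names here are illustrative:

```python
from collections import Counter
from math import log2

def column_mi(msa, i, j):
    """Mutual information between columns i and j of an alignment.
    High MI means the two positions tend to mutate together, the
    classic (simplified) hint that they are close in 3D."""
    n = len(msa)
    pi = Counter(seq[i] for seq in msa)        # marginal counts, column i
    pj = Counter(seq[j] for seq in msa)        # marginal counts, column j
    pij = Counter((seq[i], seq[j]) for seq in msa)  # joint counts
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Toy alignment: columns 0 and 2 mutate in lockstep (A<->D, G<->E),
# while column 1 varies independently.
msa = ["ACD", "ACD", "GCE", "GCE", "ACD", "GTE"]
print(column_mi(msa, 0, 2) > column_mi(msa, 0, 1))  # True: the coupled pair scores higher
```

In a real MSA the high-MI pairs are the "hints" fed into the pairwise representation the speakers discuss later.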
It's almost like you have this big three-dimensional valley where you're trying to find these low-energy states, and there's so much to search through that it's almost overwhelming. But these hints put you in an area of the space that's already kind of close to the solution, maybe not quite there yet. And there's always this question of how much physics these models are learning versus just pure statistics. One thing I at least believe is that once you're in that approximate area of the solution space, the models have some understanding of how to get you to the lower-energy state. So maybe they have some light understanding of physics, but maybe not quite enough to navigate the whole space.

Brandon [00:15:25]: So we need to give it these hints to get into the right valley, and then it finds the minimum, or something like that.

Gabriel [00:15:31]: One interesting explanation of how AlphaFold works, which I think is quite insightful, though of course it doesn't cover everything AlphaFold does, is one I'll borrow from Sergey Ovchinnikov at MIT. The interesting thing about AlphaFold is that it has this very peculiar architecture, and this architecture operates on the pairwise context between amino acids. The idea is that the MSA, the multiple sequence alignment, this evolutionary information, gives you a first hint about which amino acids are potentially close to each other. And from this evolutionary information about potential contacts, it's almost as if the model is sort
of running a kind of Dijkstra algorithm, decoding: okay, these two have to be close; then if these are close, and this one is connected to that one, then this other pair has to be somewhat close. Decoding that gives you basically a pairwise distance matrix, and from this rough pairwise distance matrix you decode the actual potential structure.

Brandon [00:16:42]: Interesting. So there are kind of two different things going on, the coarse-grained and then the fine-grained optimization. Very cool.

Gabriel [00:16:53]: Yeah. You mentioned AlphaFold 3, so maybe this is a good time to move on to that. AlphaFold 2 came out and was fairly groundbreaking for this field; everyone got very excited. A few years later AlphaFold 3 came out. For some more history: what were the advancements in AlphaFold 3, and how does it connect to Boltz? So, after AlphaFold 2 came out, Jeremy and I, along with many others, got into the field, and the clear problem after that was: now we can do individual chains, can we do interactions? Interactions between different proteins, between proteins and small molecules, between proteins and other molecules. Why are interactions important? Because to some extent that's how these machines, these proteins, have a function: the function comes from the way they interact with other proteins and other molecules. In the first place, the individual machines are often, as Jeremy was mentioning, not made of a single chain but of multiple chains, and then these multiple chains interact with other molecules to give them their function.
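The "Dijkstra-like" contact decoding Gabriel attributes to Ovchinnikov a few turns back can be sketched as constraint propagation: chain adjacency and predicted contacts give distance upper bounds, and the triangle inequality (an all-pairs shortest path, here Floyd-Warshall) spreads them to every residue pair. This is only a toy illustration of the intuition, not the actual Evoformer; the ~3.8 Å chain step and 8 Å contact threshold are conventional ballpark figures:

```python
def distance_bounds(n, contacts, contact_dist=8.0, chain_dist=3.8):
    """Upper-bound an n x n residue distance matrix from sparse hints.
    Adjacent residues sit roughly 3.8 Angstroms apart along the chain;
    each predicted contact caps a pair at roughly 8 Angstroms. Shortest
    paths then propagate these caps via the triangle inequality."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i in range(n - 1):                      # chain connectivity
        d[i][i + 1] = d[i + 1][i] = chain_dist
    for i, j in contacts:                       # co-evolution contact hints
        d[i][j] = d[j][i] = min(d[i][j], contact_dist)
    for k in range(n):                          # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

bounds = distance_bounds(6, [(0, 5)])
print(bounds[1][5])  # 11.8: pulled tight via the 0-5 contact, vs 15.2 along the chain alone
```

A rough distance matrix like this is what a structure module would then refine into 3D coordinates.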
And on the other hand, when we try to intervene in these interactions, think of a disease, or a biosensor, or many other cases, we are trying to design molecules or proteins that interact in a particular way with what we would call a target protein, or target. After AlphaFold 2, this became clearly one of the biggest problems in the field to solve, and many groups, including ours and others, started making contributions to modeling these interactions. AlphaFold 3 was a significant advancement on that problem. One of the interesting things they were able to do, while much of the rest of the field tried to model different interactions separately (how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA take their structure), was to put everything together and train very large models, with a lot of advances including changes to some of the key architectural choices, and get a single model that set new state-of-the-art performance across all of these modalities: protein-small molecule, which is critical to developing new drugs; protein-protein; interactions of proteins with RNA and DNA; and so on.

Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?

Gabriel [00:19:48]: Yeah. One critical change, not unique to AlphaFold 3, as a few other teams in the field, including ours, proposed it too, was moving from modeling structure prediction as a regression problem.
So: from a regression problem, where there is a single answer you shoot for, to a generative modeling problem, where you have a posterior distribution of possible structures that you sample from. This achieves two things. First, it starts to let us model more dynamic systems: as we said, some of these proteins can actually take multiple structures, and you can now capture that by modeling the entire distribution. Second, from a more core modeling perspective, moving from regression to generative modeling changes how you handle uncertainty. If a regression model is undecided between different answers, it will try to output an average of them. A generative model instead samples all of those different answers, and you can then use separate models to analyze the samples and pick the best one. So that was one of the critical improvements. The other is that they significantly simplified the architecture, especially of the final module that takes the pairwise representations and turns them into an actual structure. That now looks much more like a traditional transformer than the very specialized equivariant architecture it was in AlphaFold 2.

Brandon [00:21:41]: So this is the bitter lesson, a little bit.

Gabriel [00:21:45]: There is some aspect of the bitter lesson, but the interesting thing is that it's still very far from being a simple transformer. This field is one of the very few fields in applied machine learning, I'd argue, where we still have architectures that are very specialized.
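The averaging-versus-sampling point above can be illustrated with a one-dimensional toy. Suppose the "true" coordinate of an ambiguous structure is equally often -1 or +1 (two conformations); an MSE-trained regressor converges to the mean, a physically meaningless in-between answer, while a generative model draws samples that land on real modes. Everything here is illustrative, with no real model involved:

```python
import random

random.seed(0)  # deterministic toy data

# Bimodal "ground truth": two conformations at -1.0 and +1.0 plus small noise.
truth = [random.choice([-1.0, 1.0]) + random.gauss(0.0, 0.05) for _ in range(1000)]

# An MSE-optimal regression model converges to the conditional mean...
regression_answer = sum(truth) / len(truth)   # near 0.0: neither conformation

# ...while a generative model samples the posterior, recovering both modes.
generative_answers = [random.choice(truth) for _ in range(5)]

print(round(regression_answer, 2))            # close to 0
print([round(x) for x in generative_answers]) # each entry is -1 or 1
```

Separate scoring or confidence models can then rank the sampled answers, which is the workflow Gabriel describes.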
And there are many people who have tried to replace these architectures with simple transformers. There is a lot of debate in the field, but I think the consensus is mostly that the performance we get from the specialized architectures is vastly superior to what we get from a plain transformer. Another interesting thing, staying on the machine learning side, which is somewhat counterintuitive coming from other fields and applications, is that scaling hasn't really worked the same way in this field. Models like AlphaFold 2 and AlphaFold 3 are, you know, still very large models.

RJ [00:29:14]: ...in a place, I think, where we had some experience working with the data and with this type of model, and that already put us in a good place to produce it quickly. I would even say we could have done it quicker; the problem was that for a while we didn't really have the compute, so we couldn't train the model. We actually only trained the big model once. That's how much compute we had: we could only train it once. So while the model was training, we were finding bugs left and right, a lot of them ones that I wrote. I remember doing surgery in the middle of the run: stopping it, making the fix, relaunching. We never actually went back to the start; we just kept training it with the bug fixes along the way, which would be impossible to reproduce now. That model has gone through such a curriculum that it learned some weird stuff.
But yeah, somehow, by some miracle, it worked out.

Gabriel [00:30:13]: The other funny thing is that we trained most of that model on a cluster from the Department of Energy. It's a shared cluster that many groups use, so we would basically train the model for two days, and then it would go back into the queue and sit there for a week. It was pretty painful. Towards the end I was talking with Evan, the CEO of Genesis, telling him a bit about the project and about this frustration with the compute, and luckily he offered to help. So we got help from Genesis to finish up the model; otherwise it probably would have taken a couple of extra weeks.

Brandon [00:30:57]: Yeah, yeah.

Brandon [00:31:02]: And then there's some progression from there.

Gabriel [00:31:06]: Yeah. I would say that Boltz-1, but also the other models that came out around the same time, were a big leap from the previous open-source models, really approaching the level of AlphaFold 3. But I would still say, even to this day, there are some specific instances where AlphaFold 3 works better. One common example is antibody-antigen prediction, where AlphaFold 3 still seems to have an edge in many situations. Obviously these are somewhat different models; you run them, you obtain different results, so it's not always the case that one model is better than the other. But in aggregate, especially at the time, AlphaFold 3 still had a bit of an edge.

Brandon [00:32:00]:
We should talk about this more when we get to BoltzGen, but how do you know one model is better than the other? I make a prediction, you make a prediction; how do you know?

Gabriel [00:32:11]: Yeah. The great thing about structure prediction (once we get into the design space of designing new small molecules and new proteins, this becomes a lot more complex) is that, a bit like CASP was doing, you can evaluate models with a time split. One thing we haven't talked about yet that was really critical in all this development is the PDB, the Protein Data Bank. It's this common resource, basically a common database where every structural biologist publishes their structures. So we can train on all the structures deposited in the PDB up to a certain date, and then look for recent structures that look pretty different from anything published before, because we really want to measure generalization.

Brandon [00:33:13]: And then on these new structures you evaluate all the different models. And since you know when AlphaFold 3 was trained, you intentionally train to the same date, or something like that. Right.

Gabriel [00:33:24]: Exactly. And so this is how you can fairly easily compare these models; obviously, that assumes the training cutoffs line up.

Brandon: You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past.
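The temporal-split evaluation Gabriel describes amounts to two filters: train on everything released before a cutoff date, and test only on later structures dissimilar to the training set. A minimal sketch, where all record IDs, dates, and similarity scores are made up for illustration:

```python
from datetime import date

# Hypothetical PDB-style records: (structure id, release date, similarity to training set)
entries = [
    ("old_fold_a", date(2020, 3, 1), 0.95),
    ("old_fold_b", date(2021, 7, 9), 0.88),
    ("new_fold_x", date(2023, 5, 2), 0.21),
    ("new_fold_y", date(2024, 1, 15), 0.30),
]

CUTOFF = date(2022, 1, 1)   # illustrative training-date cutoff, matched across models
NOVELTY = 0.40              # keep only test structures unlike anything trained on

train = [sid for sid, released, _ in entries if released < CUTOFF]
test = [sid for sid, released, sim in entries if released >= CUTOFF and sim < NOVELTY]

print(train)  # ['old_fold_a', 'old_fold_b']
print(test)   # ['new_fold_x', 'new_fold_y']
```

The novelty filter is what separates measuring generalization from rewarding memorization of near-duplicate folds.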
Actually, I think DockGen is a really funny story; I don't know if you want to talk about that.

Gabriel: Yeah. One of the amazing things about putting things open source is that we get a ton of feedback from the field. Sometimes we get great feedback from people who really like it, but honestly, most of the time the most useful feedback, to be honest, is people telling us where it doesn't work. At the end of the day, and this is true across other fields of machine learning, to make progress you have to set clear benchmarks, and as you make progress on those benchmarks you need to improve them and make them harder and harder. That's how the field operates. So, the DockGen example: we published an initial model called DiffDock in the first year of my PhD, one of the early models to try to predict interactions between proteins and small molecules, which we put out a year after AlphaFold 2 was published. On the one hand, on the benchmarks we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to biologists (one example was a collaboration with the group of Nick Polizzi at Harvard), we started noticing a clear pattern: for proteins that were very different from the ones it was trained on, the model was struggling. So it seemed clear that this was probably where we should put our focus.
And so we first developed, with Nick and his group, a new benchmark, and then went after it: what can we change about the current architecture to improve this kind of generalization? And that's the same thing we're still doing today: find where the model doesn't work, build the benchmark, and then throw every idea we have at the problem.

RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. It's very clear there's a ton of things the models don't work well on, but one thing that's probably undeniable is the pace of progress, how much better we're getting every year. If you assume any constant rate of progress moving forward, things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago.

RJ [00:36:45]: Yeah, it's wild, right? It's one of those things: being in the field, you don't see it coming. Hopefully we'll continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: This is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source. My question is partly: okay, if you open source, everyone can copy what you did. But it's also about balancing priorities, right? The community says, "I want this, there are all these problems with the model," but my customers don't care, right? So how do you think about that?
Yeah.
Gabriel [00:37:26]: So I would say a couple of things. One is that part of our goal with Boltz, and this is also established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, and couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. So a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. That goes into building the right workflows, ones that take in the data and directly answer the questions that chemists and biologists are asking, and also into building the infrastructure. All this to say that even with the models fully open, we see a ton of potential for products in this space. The critical part about a product is that even with an open source model, running the model is not free. As we were saying, these are pretty expensive models, and especially, maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But then you get to a point where compute, and compute cost, becomes a critical factor.
And so putting a lot of work into building the right infrastructure, building the optimizations, and so on, really allows us to provide a much better service than the open source models alone. That said, even though we can provide a much better service with a product, I do still think, and we will continue, to put a lot of our models out open source, because the critical role of open source models is helping the community make progress on the research, from which we all benefit. So on the one hand we'll continue to open source some of our base models so the field can build on top of them, and, as we discussed earlier, we learn a ton from the way the field uses and builds on our models; on the other hand, we'll try to build a product that gives the best possible experience to scientists. A chemist or a biologist shouldn't need to spin up a GPU and set up our open source model in a particular way. Even though I'm a computer scientist, a machine learning scientist, I don't necessarily take an open source LLM and spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same experience on this front.
Brandon [00:40:40]: I heard a good analogy yesterday: a surgeon doesn't want the hospital to design a scalpel, right?
Brandon [00:40:48]: So just buy the scalpel.
RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, who would reach out just for us to run AlphaFold3 for them, or things like that.
Just because, in our case with Boltz, it's not that easy to do if you're not a computational person. And part of the goal here is that we obviously continue to build the interface for computational folks, but also that the models are accessible to a larger, broader audience. That comes from good interfaces and things like that.
Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release, you didn't just release a model, you created a community, and that community grew very quickly. Did that surprise you? What has the evolution of that community been, and how has it fed into Boltz?
RJ [00:41:43]: If you look at its growth, it's very much tied to releases: when we release a new model, there's a big jump. But yeah, it's been great. We have a Slack community with thousands of people in it, and it's actually self-sustaining now, which is the really nice part, because with the few people we were, it's almost overwhelming to answer everyone's questions and help; it's really difficult. But it ended up that people would answer each other's questions and help one another. So the Slack has been kind of self-sustaining, and that's been really cool to see.
RJ [00:42:21]: That's the Slack part, but we've also had a nice community on GitHub. I think we aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But.
Yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and seeing people converge on ours was really cool. I think it also speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, which is something we thought a lot about when we released the code base. It's far from perfect, but still.
Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, the focus on making it easy to use and accessible?
RJ [00:43:14]: I think so, yeah. And we've heard it from a few people over the years now. Some people still think it should be a lot nicer, and they're right. But I think at the time it was maybe a little bit easier to use than other things.
Gabriel [00:43:29]: The other part that I think led to the community, and to some extent to the trust in what we put out, is the fact that it hasn't really been just one model. Maybe we'll talk about it, but after Boltz-1 there were another couple of models released, open sourced, soon after. We continued that open source journey with Boltz-2, where we not only improved structure prediction but also started to do affinity prediction, understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs.
And then more recently also a protein design model. So we've been building this suite of models that come together and interact with one another, where there's almost an expectation, which we take very much to heart, of always having the best model out there across the entire suite of different tasks, so that our open source tools can be the go-to models for everybody in the industry.

I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that?
RJ [00:45:01]: I mean, we've had many contributions. One of the interesting ones: we had this one individual who wrote a complex GPU kernel for part of the architecture. The funny thing is that that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but it was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do something, like cyclic peptides. I don't know if any other interesting ones come to mind.
Gabriel [00:45:41]: One cool one, which was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck in their predictions for the antibodies. In this model you can condition the prediction; basically, you can give hints. So he gave the model hints: okay, you should bind to the first residue; or you should bind to the 11th residue; or the 21st residue; every 10 residues, scanning the entire antigen.
Brandon [00:46:33]: Residues are the...
Gabriel [00:46:34]: The amino acids, yeah. The first amino acid, the 11th amino acid, and so on. So it's doing a scan, conditioning the model to predict each of them, then looking at the confidence of the model in each of those cases and taking the top one. It's a very crude way of doing inference-time search, but surprisingly, for antibody-antigen prediction, it actually helped quite a bit. So there are some interesting ideas where, as the people developing the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking: okay, how can I do this not with brute force, but in a smarter way?
RJ [00:47:22]: And so we've also done a lot of work in that direction. It speaks to the power of scoring. We're seeing that a lot, and I'm sure we'll talk about it more when we talk about BoltzGen. Our ability to take a structure and determine that that structure is good, somewhat accurate, whether it's a single chain or an interaction, is a really powerful way of improving the models.
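The scan-and-rank trick described above can be sketched in a few lines. Everything here is a stand-in: `predict_with_pocket_hint` is a hypothetical function that fakes what a real conditioned Boltz call would return, just to show the control flow of hinting every 10 residues and keeping the most confident prediction.

```python
def predict_with_pocket_hint(antigen_seq, binder_seq, hint_residue):
    """Stand-in for a conditioned structure prediction.

    A real pipeline would run Boltz with a binding-site hint on
    `hint_residue` and return (structure, confidence). Here the
    confidence is faked (peaking at the middle of the antigen) so the
    scanning logic can be exercised without a model.
    """
    fake_confidence = 1.0 / (1 + abs(hint_residue - len(antigen_seq) // 2))
    structure = f"structure_hinted_at_{hint_residue}"
    return structure, fake_confidence

def epitope_scan(antigen_seq, binder_seq, stride=10):
    # Try a hint at residue 0, 10, 20, ... and rank by model confidence.
    candidates = []
    for hint in range(0, len(antigen_seq), stride):
        structure, conf = predict_with_pocket_hint(antigen_seq, binder_seq, hint)
        candidates.append((conf, hint, structure))
    candidates.sort(reverse=True)  # highest-confidence prediction first
    return candidates[0]

best_conf, best_hint, best_structure = epitope_scan("A" * 100, "G" * 20)
```

With the fake confidence above, the scan settles on the hint at residue 50; with a real model, the winning hint is whatever epitope the model is most confident about.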
If you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. Part of the inference-time scaling that Gabri was talking about is very much that: the more we sample, the more often the ranking model ends up finding something it really likes. So I think our ability to get better at ranking is also what's going to enable the next big breakthroughs.
Brandon [00:48:17]: Interesting. But, my understanding is, there's a diffusion model, you generate some stuff, and then, I guess it's just what you said, you rank it using a score, and then you finally... Can you talk about those different parts?
Gabriel [00:48:34]: So, first of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models: they learn how proteins and other molecules interact, and we can leverage that learning to do all sorts of other things. With Boltz-2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight that interaction is. For BoltzGen, what we did was take that foundation model and fine-tune it to predict entirely new proteins. The way that works, basically, is that for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein.
And also what the different amino acids of that protein are. So the way BoltzGen operates is that you feed in a target protein that you may want to bind to, or DNA, or RNA, and then you feed in a high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language, or? It's basically prompting: we have a spec that you write, you feed that spec to the model, and the model translates it into a set of conditioning tokens and a set of blank tokens. Then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. We take that and, as Jeremy was saying, we try to score it: how good a binder is it to the original target?
Brandon [00:50:51]: You're basically using Boltz to predict the folding and the affinity to that molecule, and that gives you a score?
Gabriel [00:51:03]: Exactly. You use this model to predict the folding, and then you do two things. One is that you re-predict the structure of the designed sequence with something like Boltz-2, and then you compare that structure with what the design model predicted. In the field this is called consistency: you want to make sure that the structure you're predicting is actually what you were trying to design, which gives you much better confidence that it's a good design. So that's the first filtering.
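A minimal sketch of that consistency filter, assuming a `refold` callable that stands in for a Boltz-2 structure prediction, with structures compared by RMSD; the real pipeline's exact metric and threshold may differ.

```python
import math

def rmsd(coords_a, coords_b):
    # Root-mean-square deviation between two aligned coordinate lists (Å).
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

def consistency_filter(designs, refold, threshold=2.0):
    """Keep designs whose refolded structure lands within `threshold` Å
    of the structure the design model proposed."""
    kept = []
    for seq, designed_coords in designs:
        if rmsd(designed_coords, refold(seq)) < threshold:
            kept.append(seq)
    return kept

# Toy demo: one design refolds onto itself, the other refolds 5 Å away.
designs = [
    ("GOODSEQ", [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]),
    ("BADSEQ",  [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]),
]

def fake_refold(seq):
    # Stand-in for Boltz-2: agrees with the design for GOODSEQ,
    # returns a shifted structure for BADSEQ.
    base = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
    if seq == "BADSEQ":
        return [(x + 5.0, y, z) for x, y, z in base]
    return base

kept = consistency_filter(designs, fake_refold)
```

In practice the designs would be full atomic structures and the refold call an actual structure predictor; the filtering logic stays the same.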
And the second filtering we did as part of the BoltzGen pipeline that was released is that we look at the confidence the model has in the structure. Now, unfortunately, going to your question about predicting affinity, confidence is not a very good predictor of affinity. So one of the things where we've actually made a ton of progress since we released Boltz-2, and we have some new results that we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly try to predict the affinity of the interaction.
Brandon [00:52:03]: Okay, just backing up a minute. So your diffusion model predicts not only the protein sequence but also the folding of it?
Gabriel [00:52:32]: Exactly. One of the big things we did differently from other models in the space (there were some papers that had done this before, but we really scaled it up) was basically merging structure prediction and sequence prediction into almost the same task. The way BoltzGen works is that the only thing you're doing is predicting the structure. So the only supervision we give is supervision on the structure; but because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure it wanted but also the identity of the amino acid the model believed was there. So, instead of having these two supervision signals, one discrete and one continuous, which somewhat don't interact well together...
we built an encoding of sequences in structures that lets us use exactly the same supervision signal we were using for Boltz-2, largely similar to what AlphaFold3 proposed, which is very scalable, and we can use that to design new proteins. Oh, interesting.
RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work.
Gabriel [00:54:04]: Yeah, that was a really cool idea. Looking at the paper, there's this encoding where you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's a unique way of doing this. It was such a cool, fun idea.
RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.
Gabriel [00:54:33]: Yeah, a couple of papers had proposed this, and Hannes really took it to large scale.
Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, and basically everyone we talk to feels this, real-world validation, in the wet lab or whatever is appropriate, is the whole problem, or not the whole problem, but a big, giant part of it. So can you talk a little bit about the highlights? Because to me the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team.
Gabriel [00:55:18]: First of all, I should start by saying that, both when we were at MIT in Tommi Jaakkola and Regina Barzilay's lab, and at Boltz, we are not a bio lab, and we are not a therapeutics company.
And so to some extent we were forced from the start to look outside our group, our team, to do the experimental validation. One of the things Hannes on the team really pioneered was the idea: okay, can we go not only to one specific group, find one specific system, maybe overfit a bit to that system, and try to validate there, but instead test this model across a very wide variety of settings? Protein design is such a wide task, with all sorts of applications, from therapeutics to biosensors and many others. So can we get a validation that goes across many different tasks? He basically put together something like 25 different academic and industry labs that committed to testing some of the designs from the model (some of this testing is still ongoing) and giving the results back to us, in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper I think we shared results from eight to ten different labs: designing peptides targeting ordered proteins, peptides targeting disordered proteins, proteins that bind to small molecules, and nanobodies, across a wide variety of targets. That gave the paper a lot of validation of the model, and validation that was broad.
Brandon [00:57:39]: And would those be therapeutics for those animals, or are they relevant to humans as well?
Gabriel [00:57:45]: They're relevant to humans as well. Obviously, you need to do some work to, quote unquote, humanize them, making sure they have the right characteristics so they're not toxic to humans, and so on.
RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern of trying to design things that are smaller: they're easier to manufacture, but at the same time that comes with potentially other challenges, maybe a little bit less selectivity than something that has more hands. But yeah, there's this big desire to design mini proteins, nanobodies, small peptides, modalities that just make great drugs.
Brandon [00:58:27]: Okay, I think we left off talking about validation in the lab, and I was very excited to see all the diverse validations you've done. Can you go into more detail about some specific ones?
RJ [00:58:43]: The nanobody one. I think we did, what was it, 15 targets? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands, then we rank them and pick the top, in this case 15 for each target, and then we measure the success rates: both how many targets we were able to get a binder for, and also, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones involved, for example, a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, as Gabri mentioned, biosensing and things like that, which is pretty cool.
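The two success rates described (per-target: did any design for a target bind; per-design: what fraction of all tested designs bound) are simple bookkeeping. A sketch with made-up lab outcomes:

```python
def success_rates(lab_results):
    """lab_results: {target: [True/False per tested design]}.

    Returns (per-target hit rate: fraction of targets with >= 1 binder,
             per-design hit rate: fraction of all designs that bound)."""
    targets_hit = sum(1 for outcomes in lab_results.values() if any(outcomes))
    total_designs = sum(len(o) for o in lab_results.values())
    total_hits = sum(sum(o) for o in lab_results.values())
    return targets_hit / len(lab_results), total_hits / total_designs

# Hypothetical outcomes for 3 targets, 4 tested designs each:
results = {
    "targetA": [True, False, False, True],
    "targetB": [False, False, False, False],
    "targetC": [True, False, False, False],
}
target_rate, design_rate = success_rates(results)
```

Here two of three targets got at least one binder (per-target rate 2/3) while only 3 of 12 individual designs bound (per-design rate 0.25), which is why the conversation treats the two rates as distinct metrics.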
We had a disordered protein too, I think you mentioned. Those were some of the highlights.
Gabriel [00:59:44]: So I would say the way we structured those validations was: on one end, we had validations across a whole set of problems that the biologists we were working with came to us with. For example, in some of the experiments we were designing peptides that would target RACC, a target involved in metabolism, and we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets, and we designed some proteins to bind small molecules. Then some of the other testing was really about getting a broader sense: how does the model do, especially when tested on generalization? One of the things we found in the field was that a lot of the validation, especially outside of validation on specific problems, was done on targets that have a lot of known interactions in the training data. So it's always a bit hard to understand how much these models are really just regurgitating, or imitating, what they've seen in the training data, versus really being able to design new proteins. So one of the experiments we did was to take nine targets from the PDB, filtering to proteins with no known interaction in the PDB. The model has never seen this particular protein, or a similar protein, bound to another protein, so there's no way the model can just tweak something from its training set and imitate a particular known interaction. We took those nine proteins.
We worked with Adaptyv, a CRO, and basically tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing we saw was that on two thirds of those targets we were able, from those 15 designs, to get nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is, and a nanomolar binder has approximately the binding strength you need for a therapeutic.

So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is, and what you hope people take away from it?
RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. There are largely two categories there; actually, I'll split it into three. The first: it's one thing to predict a single interaction, for example a single structure; it's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and you need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target, so we can design something against it? There are all sorts of tricks you can use to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, for example, you design many things and then you rank them. For small molecules the process is a little more complicated: we also need to make sure the molecules are synthesizable. The way we do that is with a generative model that learns to use appropriate building blocks, so that it designs within a space we know is synthesizable. So there's really a whole pipeline of different models involved in being able to design a molecule. That's the first part; we call these agents. We have a protein design agent and a small molecule design agent, and that's really the core of what powers the Boltz Lab platform.
Brandon [01:04:22]: These agents, are they a language model wrapper, or are they just your models and you're calling them agents?
RJ [01:04:33]: They're more of a recipe, if you wish. They sort of perform a function on your behalf, and I think we use the term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's running a design campaign. Say you're designing a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. So ideally you want to do that in parallel, otherwise it's going to take you weeks.
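A quick back-of-envelope on that, assuming 3 seconds of GPU time per small-molecule design (the conversation only says "a few seconds", so the exact figure is an assumption):

```python
# 100,000 candidate designs at ~3 GPU-seconds each.
designs = 100_000
secs_per_design = 3
total_gpu_seconds = designs * secs_per_design  # 300,000 GPU-seconds

def wall_clock_hours(n_gpus):
    # The total compute (and roughly the cost) is fixed;
    # only the wall-clock time changes with fleet size.
    return total_gpu_seconds / n_gpus / 3600

serial_hours = wall_clock_hours(1)     # one GPU: a multi-day job
fleet_hours = wall_clock_hours(1_000)  # 1,000 GPUs: minutes
```

With these numbers, one GPU takes about 83 hours while a 1,000-GPU fleet finishes in about 5 minutes, which is the "10,000 GPUs for a minute costs the same as one GPU forever" point made below.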
And so we've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search.
Brandon [01:05:23]: So you're amortizing the cost over your users.
RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost, right? So you might as well parallelize if you can. A lot of work has gone into that, and into making it very robust, so that we can have a lot of people on the platform doing that at the same time. The third part is the interface, and the interface comes in two shapes. One is an API, which is really suited for companies that want to integrate these pipelines, these agents.
RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second shape is the user interface, and we've put a lot of thought into that as well. This is what I mentioned earlier about broadening the audience; that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration. When you have multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for them to each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, which we try to address with that part of the platform.
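The consensus-building idea can be illustrated with a simple Borda count, where each chemist's ranking awards points by position and the totals decide the merged order. This is just one plausible scheme for merging rankings, not necessarily what the platform implements.

```python
from collections import defaultdict

def borda_consensus(rankings):
    """rankings: list of lists, each a best-to-worst ordering of the
    same candidate molecules. Returns the consensus order, best first."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position  # top spot earns the most points
    return sorted(scores, key=scores.get, reverse=True)

# Three chemists rank the same three (hypothetical) molecules:
chemist_votes = [
    ["mol_A", "mol_B", "mol_C"],
    ["mol_B", "mol_A", "mol_C"],
    ["mol_A", "mol_C", "mol_B"],
]
consensus = borda_consensus(chemist_votes)
```

Here mol_A tops two of the three individual rankings and wins the consensus, even though no single ranking is treated as authoritative.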
So Boltz Lab is a combination of these three objectives in one cohesive platform. Who is it accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you're an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you're a startup or a biotech, you can also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And of course, with larger companies we can deploy the platform in a more secure environment, and those are more customized deals that we make with partners. That's the ethos of Boltz: this idea of serving everyone, not just going after the really large enterprises. That starts with the open source, but it's also a key design principle of the product itself.
Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Is it possible to exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for any one person to roll their own system?
RJ [01:08:08]: A hundred percent. I mean, we're already there. Running Boltz on our platform, especially at large scale, is considerably cheaper than it would be for anyone to take the open source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source, and that's also part of building a product, building something that scales really well. We really wanted to get to a point where we could keep prices low enough that it would be a no-brainer to use Boltz through our platform.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now suddenly the whole point of this is to design something which doesn't have co-evolution data, something which is really novel. So now you're basically leaving the domain that you know you're good at. So how do you validate that?

RJ [01:09:22]: Yeah. There's obviously, you know, a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: okay, with method A and method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. And there's really no way around that. We've really ramped up the amount of experimental validation that we do, so that we track progress in as scientifically sound a way as possible.

Gabriel [01:10:00]: Yeah, no, I think one thing that is unique about us, and maybe companies like us, is that we're not working on just a couple of therapeutic pipelines where our validation would be focused on those.
When we do an experimental validation, we try to test it across tens of targets, so that on the one hand we can get a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
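The multi-target validation idea above — comparing two methods across tens of targets rather than on one system — can be sketched as a per-target hit-rate comparison. All numbers, names, and the aggregation scheme here are illustrative assumptions, not the speakers' actual protocol.

```python
# Illustrative sketch: compare hit rates of two methods across many
# targets, so no single system dominates the conclusion.

def hit_rate(hits, tested):
    """Fraction of tested molecules that were experimental hits."""
    return hits / tested

def compare_methods(results):
    """results: dict of target -> (hits_a, hits_b, n_tested).
    Returns per-target hit-rate deltas (B minus A) and the fraction
    of targets on which method B beats method A."""
    deltas = {}
    b_wins = 0
    for target, (hits_a, hits_b, n) in results.items():
        delta = hit_rate(hits_b, n) - hit_rate(hits_a, n)
        deltas[target] = delta
        if delta > 0:
            b_wins += 1
    return deltas, b_wins / len(results)

# Synthetic data standing in for "tens of targets".
results = {
    "target_%d" % i: (2 + i % 3, 4 + i % 2, 50)
    for i in range(10)
}
deltas, win_frac = compare_methods(results)
print(round(win_frac, 2))
# → 0.8
```

In practice one would pair this with a proper paired statistical test across targets, but the win-fraction view already shows why aggregating over many systems guards against overfitting to any one of them.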